
status quoi? part iii and more...




status quoi? part iii


i wanted to write a series of blog posts featuring a few of the people i've met who are challenging the conventional wisdom and inspiring others.  but instead of telling you what i think of them, i wanted to give them a chance to share their insights in their own words.  i contacted a few early career researchers i've had the chance to get to know who have impressed me, and who are not affiliated with me or my lab (though i am collaborating on group projects with some of them).  there are many more role models than those featured here, and i encourage you to join me in amplifying them and their messages, however you can.

i asked each of these people "what are the blind spots in your field - what issues should we be tackling that we aren't paying enough attention to?"  here are their answers, in three parts, which i will post in three separate blog posts this week.

find part i of the series here (with a longer introduction)

find part ii of the series here

 

Part 3: Jackie Thompson, Joe Hilgard, and Sophia Crüwell


Jackie Thompson

To me, failures of communication are the biggest blind spot in science. 
One aspect, notably, is science's focus on an outdated publishing model designed for an age when communication was slow and relied on isolated scientists disseminating their findings on "sheets of pulped-up tree guts" (https://thehardestscience.com/2019/07/). We need to envision new, more flexible ways of sharing scientific inquiry -- for instance, making academic papers interactive and dynamic, or doing away with the idea of a "paper" altogether and instead publishing individual elements of the research process (see the Octopus model by Alex Freeman; https://t.co/BPCarBGhZZ).

Another massive blind spot (almost literally) is a kind of self-focused myopia -- we spend loads of energy trying to reinvent the wheel, not communicating between fields and sub-fields, when some have already solved problems that other fields struggle massively with. (How many decades were preprints popular in physics before they started catching on in other fields?) Psychology put a name to the fundamental attribution error, but as a field we still fall prey to it every day. Many psychologists (myself included) scoff when we see non-academic ventures that fail to implement basic rules of good experimental practice -- e.g., businesses that send out poorly written surveys, or government departments that claim to show their interventions worked despite not including any control groups. Yet we turn around and tell our scientists to run their labs without any training in management; we try to change cultures and incentives without calling on any knowledge from sociology or economics; we try to map the future of science without delving into the history of science. We have so much expertise and wisdom at our fingertips from colleagues just beyond the fences of other fields, yet we don't think to look over those fences for help from our neighbors; we happily restrict ourselves to the gardening tools we already have in our own backyards. Call this myopia, call it hypocrisy -- whatever the label, it's clear that this mindset results in a huge waste of time and effort. Interdisciplinary collaborations are incredibly valuable, but not valued (at least not by the insular academic ingroups they try to bridge). The academic community needs to embrace the value of collaboration, and learn the humility to ask "who might know more about this than I do?"

 
Joe Hilgard

I study aggression, and I feel like my area is five years behind everybody else regarding the replication crisis. In 2011, psychology realized that its Type I error rate was not actually 5%, that a string of p-values between .025 and .050 is a bad sign, and that we shouldn't see 95% of papers reporting statistically significant results. Yet when I read aggression studies, I tend to see a lot of statistical significance, often with just-significant p-values. I worry sometimes that some of our research doesn't ask "is my hypothesis true?" but rather "how hard do we have to put our thumb on the scales to get the anticipated result?".
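To make the intuition behind that warning concrete, here is a minimal simulation sketch (illustrative only; the sample size and effect size are assumptions): when a real effect is studied with reasonable power, significant p-values should mostly fall well below .025 rather than piling up just under .05.

```python
# Illustrative sketch (assumed numbers): where should significant p-values fall
# when a true effect is studied with reasonable power?
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_per_group, effect_size, n_sims = 50, 0.5, 20_000  # assumed values

p_values = np.empty(n_sims)
for i in range(n_sims):
    control = rng.normal(0.0, 1.0, n_per_group)
    treatment = rng.normal(effect_size, 1.0, n_per_group)
    p_values[i] = stats.ttest_ind(treatment, control).pvalue

significant = p_values[p_values < 0.05]
power = np.mean(p_values < 0.05)
just_significant = np.mean((significant > 0.025) & (significant < 0.05))
print(f"approximate power: {power:.2f}")
print(f"share of significant p-values between .025 and .05: {just_significant:.2f}")
# With these assumed numbers, most significant p-values land well below .025,
# so a literature full of p-values just under .05 is a red flag.
```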

While we're still catching up with the last crisis, I think the next crisis will be measurement. We know very little about the reliability and validity of many popular measures in experiments about aggression. We've assumed our measurements are good because, in the past, we've usually gotten the statistical significance we expected -- maybe due to our elevated Type I error rates. Research from other fields indicates that the reliability of task measures is much too poor for between-subjects work. I think we've assumed a lot of our nomological network, and when we test those assumptions I don't think we'll like what we find.
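One way to see why unreliable task measures are such a problem for between-subjects work is Spearman's classic attenuation formula: the correlation you can observe is the true correlation shrunk by the square root of the product of the two measures' reliabilities. A quick illustrative calculation (the specific reliability numbers below are assumptions, not estimates from the aggression literature):

```python
# Spearman's attenuation formula; the reliability values below are assumed,
# illustrative numbers, not estimates from any particular literature.
def attenuated_r(true_r: float, reliability_x: float, reliability_y: float) -> float:
    """Observable correlation given a true correlation and two reliabilities."""
    return true_r * (reliability_x * reliability_y) ** 0.5

# A true r = .30 measured with a task of reliability .40 and an outcome measure
# of reliability .80 can only show up as roughly r = .17 in the data -- and the
# sample size needed to detect it grows accordingly.
print(round(attenuated_r(0.30, 0.40, 0.80), 2))  # 0.17
```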


Sophia Crüwell

I think we need to stop pushing the boundaries of our collective and individual ideals. We also need to stop thinking and arguing that some deviation from those ideals is acceptable or even good in certain circumstances, such as getting a career advantage – whether for ourselves or for others. Treating our ideals as optional in this way is the beginning of losing sight of why we (are being paid to) do research in the first place: to get closer to the truth and/or to improve people's lives. This goes for any scientist, really, but metascientists are in a particularly privileged position here: at least we know that potential future academic employers will be more likely to value e.g. openness and rigour over simple publication count. I believe that we have a responsibility to use this privilege and change the conversation with every decision we make and every criticism we can reasonably offer.

However, we also really need to look into incentive structures and the lived experiences of actual scientists to figure out the specific underlying flaws in the system that we have to address in order to make any of the fabulous symptomatic solutions (e.g. preregistration, data sharing, publishing open access) worth each scientist's while. Sticking to your scientific and ethical ideals is incredibly difficult if it means having to fear for your livelihood. But just waving at "the incentives" and carrying on cannot be the solution either – we need to investigate the problems we have to deal with, and we need to try to deal with them.

Therefore, my appeals (to senior people in particular) are: please stick to your ideals to make it easier for everyone else to stick to them too. And if you are in a position to materially make it easier for people to stick to our ideals, please dare to make those decisions and have those conflicts.

 

status quoi? - part ii


i wanted to write a series of blog posts featuring a few of the people i've met who are challenging the conventional wisdom and inspiring others.  but instead of telling you what i think of them, i wanted to give them a chance to share their insights in their own words.  i contacted a few early career researchers i've had the chance to get to know who have impressed me, and who are not affiliated with me or my lab (though i am collaborating on group projects with some of them).  there are many more role models than those featured here, and i encourage you to join me in amplifying them and their messages, however you can.

i asked each of these people "what are the blind spots in your field - what issues should we be tackling that we aren't paying enough attention to?"  here are their answers, in three parts, which i will post in three separate blog posts this week.

find part i of the series here (with a longer introduction)

find part iii of the series here


Part II: Emma Henderson, Anne Israel, Ruben Arslan, and Hannah Moshontz

Emma Henderson

I feel safer than most in embracing open research because I’m not set on staying in academia. However, most ECRs don't share my lack of trepidation: There’s a constant background radiation of work-based anxiety amongst those researchers who would, in an ideal world, be uncompromisingly bold in their choices. But they’re hampered by a “publish or perish” culture and a lack of sustainability and security in their jobs (if they have jobs in the first place).

The decision to embrace open research shouldn’t leave us vulnerable and looking for career opportunities elsewhere - taking our skills and enthusiasm with us. Academic freedom is the scaffolding for best research practice: unrestrained exploration, and a loyalty to data, regardless of whether it yields “desired” or “exciting” outcomes.  

As a community we have the tools, the talent, and the tenacity to change things for the better, but to sustain these changes we need fundamental reform in both employment and research evaluation. Here we need established, tenured academics to educate publishers, employers, funders, and policy makers, pointing them towards research that prizes integrity over performance. People higher up need to aim higher.


Anne Israel

I am mainly struggling with the fact that collaborative research projects across different psychological (sub-)disciplines are still rare and difficult to implement, even though there is often substantial overlap between key research questions. When I entered my PhD program, I thought the essence of research was to try to ask and answer smart questions in multidisciplinary teams in order to understand complex phenomena from different perspectives and to make our research understandable to the public as well as to different neighboring fields. Instead, I often feel that researchers nowadays spend a large amount of time fighting over limited resources by trying to prove that their way of doing research is the right one, that their questions are the better ones, and that their ideas are superior to those of other colleagues.

Don’t get me wrong - I am aware that collaborating with others can be quite a challenge: we learn different research practices, we speak different research dialects, and in order to work together productively we need to invest a lot of chronically scarce resources (such as money, time, and patience). However, in my opinion, not investing these resources cannot be an option, because one good integrative research project can be worth a lot more than ten isolated ones. Moreover, complex research questions require complex methods and as much competence as we can get to answer them adequately. Thus, it is about time that we overcome the constraints currently discouraging interdisciplinary work, such as outdated incentive structures that value first-authorships over teamwork, or unequal pay across different subdisciplines, to name just a few examples. We shouldn’t forget that the major goal of research is gaining a deeper understanding of important phenomena and providing the public with relevant new insights. In the end, we are the people who build the system. I hypothesize it’s worth it - let’s collect data.


Ruben Arslan

Have you ever decided not to read a friend's paper too closely (or even not at all)? 

I have. We need more public criticism of each other's work. I won't pretend I love getting reviews as an author, but I like reading others' reviews when I'm a reviewer. I often learn a great deal. Many colleagues know how to identify critical flaws in papers really well, but all that work is hidden away. The lack of criticism makes it too easy to get away with bad science. No matter how useful the tools we make, how convincing the arguments we spin, and how welcoming the communities we build, good, transparent, reproducible, open science requires more time to publish fewer papers. We cannot work only on the benefits side. There need to be bigger downsides to p-hacking, to ignoring genetic confounding, to salami-slicing your data, to overselling, and to continuing to publish candidate gene studies in 2019, to name a few.

Maybe these problems are called out in peer review more often now, but what do you do about people who immediately submit elsewhere without revisions? Two to three reviewers pulled from a decreasingly select pool. Roll the dice a few times and you will get through.

So, how do we change this? The main problem I see with unilaterally moving to a post-publication peer review system (flipping yourself) is that it will feel cruel and unusual to those who happen to be singled out in the beginning. It certainly felt a bit unfair to the two teams whose work I happened to read in a week when I was procrastinating on something else. I also had mixed feelings because their open data let me write a post-publication peer review with a critical re-analysis. I do not want to deter data sharing, but then again open data loses all credibility as a signal of good intentions if nobody looks.

I thought unilaterally declaring that we want the criticism might be safer and would match critics with those receptive to criticism, so I put up an anonymous submission form and set bug bounties. I have gotten only two submissions through the form so far and no takers on the bug bounties.

So, I think we really need to just get going. Please don't go easy on early-career researchers either. I'm going on the job market with at least two critical commentaries on my work published, three corrections, and one erratum. Despite earning a wombat for the most pessimistic prediction at this year's SIPS study prediction workshop, I don't feel entirely gloomy about my prospects.

I'd feel even less gloomy if receiving criticism and self-correction became more normal. Simine plans to focus on preprints that have not yet received a lot of attention, but I think there is a strong case for focusing on "big" publications too. If publication in a glamorous* journal reliably led to more scrutiny, a lot more people would change their behaviour.

* I'd love it if hiring criteria put less weight on glam and more on quality, but realistically there will not be metrics reform any time soon, and we cannot build judgments of quality into our metrics if reviews are locked away.

Hannah Moshontz

I think that being new to the field doesn't necessarily give me any insight into truly new issues that others haven't identified or started to address, but it does give me some insight into the abstract importance of issues independent of their history or causes. I also think that being somewhat naive to the history and causes of issues in the field helps me see solutions with more clarity (or naivete, depending on your perspective!). There are two issues that I think people don't pay enough attention to or that they see as too difficult to tackle, and that I see as both critical and solvable.

The first issue is that most research is not accessible to the public. We spend money, researcher time, and participant time conducting research only to have the products of that research be impossible or hard to access for other scholars, students, treatment practitioners, journalists, and the general public. In addition to the more radical steps that people can take to fundamentally change the publication system, there are simple but effective ways that people can support public access to research. For example, individual researchers can post versions of their manuscripts online. Almost all publication agreements (even those made with for-profit publishers) allow self-archiving in some form, and there are a lot of wonderful resources for researchers interested in responsibly self-archiving (e.g., SHERPA/RoMEO). If each researcher self-archived every paper they'd published by uploading it to a free repository (a process that takes just a few minutes per paper), almost all research would be accessible to the public.

The second issue is that there are known (or know-able) errors in published research. I think that having correct numbers in published scientific papers is a basic and important standard. To meet this standard, we need systems that make retroactive error correction easy and common, and that don't harm people who make or discover mistakes. There have been many exciting reforms that help prevent errors from being published, but there are still errors in the existing scientific record. Like with public access to research, there are small ways to address this big, complicated issue. For example, rather than taking an approach where errors are individually reported and individually corrected, people who edit or work for journals could adopt a systematic approach where they use automated error detection tools like statcheck to search a journal's entire archive and flag possible errors. There are many more ways that people can tackle this issue, whether they are involved in publishing or not (e.g., requesting corrections for minor errors in their own published work, financially supporting groups that work actively on this issue, like Retraction Watch).
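statcheck itself is an R package, but the kind of check it automates is simple to sketch: pull APA-style test statistics out of a text and recompute the p-values. The Python toy below is only an illustration of that idea, not the actual statcheck implementation, and it handles just one reporting format.

```python
# Toy illustration of a statcheck-style consistency check (not the actual
# statcheck R package): find APA-style t-test reports and recompute p-values.
import re
from scipy import stats

APA_T_TEST = re.compile(
    r"t\((?P<df>\d+(?:\.\d+)?)\)\s*=\s*(?P<t>-?\d+(?:\.\d+)?),\s*p\s*=\s*(?P<p>\d?\.\d+)"
)

def flag_inconsistencies(text: str, tolerance: float = 0.005):
    """Yield reported results whose p-value doesn't match the recomputed one."""
    for match in APA_T_TEST.finditer(text):
        df, t, reported_p = (float(match.group(g)) for g in ("df", "t", "p"))
        recomputed_p = 2 * stats.t.sf(abs(t), df)  # two-tailed p from t and df
        if abs(recomputed_p - reported_p) > tolerance:
            yield match.group(0), round(recomputed_p, 4)

sample = "The effect was significant, t(28) = 2.10, p = .001."
for reported, recomputed in flag_inconsistencies(sample):
    print(f"possible error in '{reported}': recomputed p = {recomputed}")
```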

 

status quoi? - part i

[image of a lion, captioned "tired of lionization"]


"she's too old for breaking and too young to tame"
-kris kristofferson, from the song sister sinead

the older i get the less i understand reverence of authority or eminence.  when i was a student, i assumed that those who rose to eminence must have some special wisdom - they often acted as if they did, and others seemed to hang on their every word, so i gave them the benefit of the doubt even if i couldn't see it.  now i'm pretty convinced that there's nothing to this.  some eminent people are wise, and some are full of shit.  just like everyone else.

there are so many messages out there reinforcing the idea that high status people have so much wisdom to offer.  every time a conference stops all parallel programming for a famous person's keynote, every special issue with only invited submissions by senior people sharing their wisdom, every collection of intellectual leaders' opinions on random questions at The Edge - they all send this message.   

it's not that eminent people never have useful advice to give, or important experiences we can all learn from.  it's just that we should judge this on a case by case basis, rather than assuming it.  the knee-jerk assumption that eminent people should be listened to, detached from the actual value of what they're saying, is the problem.  if eminent people are just using their eminence to reinforce existing incentives and hierarchies (even if they do so in ways that seem benevolent and generous), rather than challenging them, then maybe we should listen to them less.  those of us who are more senior and regularly find ourselves being given too much deference should find ways to challenge this dynamic. 

i'm shocked at how often i hear grown-up people with tenured jobs and fancy titles say things like "i know he's a terrible choice but how could we have said no to Mr. Bigshot?" in contrast, i've now met dozens of early career researchers who have actually stood up to the Mr. Bigshots of their fields, or to the power structures more generally, and pointed out glaring flaws in the system.  the fact that people in precarious positions are more willing to do this than are the leaders in our field should be a wake-up call.

i'm embarrassed to say that it took me a long time to trust my own judgment and not just assume that eminent people earned their eminence.  which is why i'm so impressed by the early career researchers i meet these days who have the courage to question this.  those who trust their own perception of things, who see the problems with the status quo, and who decide for themselves who deserves their respect.  i have no idea where they got the wisdom and courage to see these things and point them out, but i am in awe.

i wanted to write a series of blog posts featuring a few of the people i've met who are challenging the conventional wisdom and inspiring others.  but instead of telling you what i think of them, i wanted to give them a chance to share their insights in their own words.  i contacted a few early career researchers i've had the chance to get to know who have impressed me, and who are not affiliated with me or my lab (though i am collaborating on group projects with some of them).  there are many more role models than those featured here, and i encourage you to join me in amplifying them and their messages, however you can.

i asked each of these people "what are the blind spots in your field - what issues should we be tackling that we aren't paying enough attention to?"  here are their answers, in three parts, which i will post in three separate blog posts this week.


Part 1: Ivy Onyeador, Hannah Fraser, Anne Scheel, and Felix Cheung

Ivy Onyeador

There are a number of issues we need to be tackling, and for many of them, we’re paying plenty of attention, or at least engaging in lots of discussion. I think what is missing sometimes is a big picture strategy or goal that we could collectively put our efforts toward that would address the multitude of issues we’re facing.

I think we should be figuring out how to create abundance. As academics in the US, we operate in a context marked by scarcity. Some people are tenured and have lots of resources, but too many people feel they have to be hyper-focused on trying to secure resources constantly. Operating in this way breeds insecurity and pettiness, narrows our vision, and pulls us away from our values and ultimate purpose. At core, I think a lot of the issues we need to tackle (e.g., inadequate pay for graduate students, adjuncts, and even some professors; the unnecessarily steep competition for publications, funding, tenure-track jobs, etc.; how unhappy way too many people are; many diversity and inclusion issues) have a common cause: the denominator is too small. Our initial impulse is to figure out how to operate with limited resources, and we do, but to truly address any of these issues, we need more investment. I think working to secure more resources, for instance by organizing and lobbying to increase state support for higher education, is something more academics should consider.

Hannah Fraser

Ecology is a fascinating field inhabited by passionate people who are genuinely doing their best to understand the world around us and how to preserve it. However, there is a disconnect between the way this research is conducted and how it is described and interpreted. The vast majority of research in ecology is conducted in an exploratory manner; it's very rare that hypotheses are overtly described, and when asked what their hypotheses are, ecologists often insist that they have none. However, the resultant articles are written in a way that implies that the research confirms well-justified expectations, and, despite the preliminary nature of the results of exploratory work, direct replications are virtually unheard of and deliberate conceptual replications are rare. Published research is treated as truth, and any contradictions in the literature are attributed to environmental variation rather than the increased false discovery rate that accompanies exploratory research.
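The "increased false discovery rate" point can be made concrete with a little arithmetic. The numbers below are assumptions chosen for illustration, not estimates for ecology, but they show how quickly exploratory "discoveries" fill up with false positives when most of the relationships being probed are not real.

```python
# Illustrative arithmetic (assumed inputs): the expected false discovery rate
# when only a small share of the hypotheses being probed are actually true.
def false_discovery_rate(prior_true: float, power: float, alpha: float = 0.05) -> float:
    true_positives = prior_true * power
    false_positives = (1.0 - prior_true) * alpha
    return false_positives / (true_positives + false_positives)

# If 10% of probed relationships are real and tests have 50% power, nearly half
# of the statistically significant "findings" are false positives.
print(round(false_discovery_rate(prior_true=0.10, power=0.50), 2))  # 0.47
```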

Anne Scheel

In psychology we've been used to having our cake and eating it: Discover a phenomenon and confirm your theoretical explanation for it in just one JPSP paper with a total N of less than 200! We've since learned that the cake wasn't really there, that we need larger N, and that most manipulations are not as clever as we thought. But I think the full implications of the message haven't sunk in yet: Way more of what we've been taught needs to be burnt to the ground -- or at least questioned and potentially rebuilt. We learned to slap layers of ill-defined, implausible, internally inconsistent (but quantifiable!) concepts onto each other, moving so far away from the real world that we fail to recognise that cookie-depleting our ego probably doesn't cause our marriages to break and that a 6-month-old infant probably doesn't have a concept of ‘good’ and ‘evil’. We invent EPFSEA* for phenomena we haven't bothered to describe in any reasonable detail, or even to establish that they're real**!

Let's go back to empiricism. Let's look at those phenomena that made us want to do science in the first place. What's going on? Is it real? Can we describe it? Can we identify necessary and sufficient conditions for it to occur? Can we manipulate it? Each of these questions is a step in a research programme that might take a lot of effort and time -- and require tools that often aren't taught in quantitative psych programmes (qualitative methods, concept formation, ...). They'll feel like baby steps that we prefer to ignore or treat as a dull check-box exercise before we can do real science, testing hypotheses. I think that most of our 'real' science is futile without the baby steps. And I worry that we're not willing to really embrace baby-step science and its consequences for our everyday research -- a system fed on a diet of illusory cake won't switch to bread crumbs easily.

PS Many others have made similar and better points before (and I’m a cake offender myself!). But I think more of us need to pay more attention to the problem.

* Extremely Premature but Fancy-Sounding Explanations with Acronyms

** Go-to example: newborn imitation
 

Felix Cheung

White hat bias; causal inference; and global/real world relevance.
 
1. White hat bias refers to the tendency to misrepresent information in ways that support a righteous goal (in the authors' mind). I think this can be seen in research on income inequality and related fields. In daily speech, income inequality carries a heavy negative connotation of unfairness, and it can seem like the 'right' thing to do to keep saying how bad income inequality is. But we need to keep in mind that the common operationalization of income inequality in research is not a measure of unfairness, but a measure of income differences (Gini). I am willing to say that some income differences can be fair and just (an astronaut with years of specialized training should make more than a clerk). Of course, there are also income differences that are driven by economic injustice. The problem is that the common measure (e.g., Gini) is a measure of income differences, not of income unfairness. In short, income inequality in research is not exactly the same as income inequality in daily speech.
 
Prior research has found mixed results on the link between income inequality and well-being. However, it is not hard to find papers on income inequality whose introductions cite only studies that found negative effects. I have heard anecdotes of researchers saying something along the lines of "if I find that income inequality is good, there's no way I am publishing that". If we want to tackle important real-world problems based on data, we must let the data speak. This is why pre-registration is so important, especially in areas that can be controversial.
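To make the operationalization point above concrete: the Gini coefficient is purely a measure of dispersion in incomes, so nothing in the calculation reflects whether those differences arose fairly. A small illustrative sketch (the income figures are made up):

```python
# Illustrative sketch (made-up incomes): the Gini coefficient summarizes income
# dispersion only; nothing in the calculation reflects *why* incomes differ.
import numpy as np

def gini(incomes) -> float:
    """Gini coefficient via the mean absolute difference between all pairs."""
    x = np.asarray(incomes, dtype=float)
    mean_abs_diff = np.abs(x[:, None] - x[None, :]).mean()
    return mean_abs_diff / (2.0 * x.mean())

# The same dispersion -- and therefore the same Gini -- whether the differences
# reflect years of specialized training or arbitrary favoritism.
incomes = [30_000, 45_000, 60_000, 90_000, 150_000]
print(round(gini(incomes), 3))
```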
 
2. Causal inference. I think sometimes, when we use observational studies, we can be too comfortable studying 'associations' rather than causal relations. There are powerful designs within observational studies that, when applied appropriately, can get us closer to causal inference. Methods such as natural experiments, regression discontinuity designs, Mendelian randomization, and convergent cross mapping all hold promise for improving causal inference. Of course, some of these methods have already been used in psychological studies, but I would love to see more of them.
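As a toy illustration of one of these designs, here is a regression discontinuity sketch on simulated data (the cutoff, effect size, and bandwidth are all assumptions chosen for illustration): treatment is assigned by a threshold on a running variable, and the causal effect is estimated as the jump in the outcome right at that threshold.

```python
# Toy regression discontinuity on simulated data (illustration only).
import numpy as np

rng = np.random.default_rng(1)
n, cutoff, true_effect = 2_000, 0.0, 0.8  # assumed values

running = rng.uniform(-2.0, 2.0, n)              # e.g., an eligibility score
treated = (running >= cutoff).astype(float)      # deterministic assignment rule
outcome = 0.5 * running + true_effect * treated + rng.normal(0.0, 1.0, n)

# Local linear fit within a bandwidth of the cutoff, allowing separate slopes
# on each side; the coefficient on `treated` estimates the jump at the cutoff.
bandwidth = 0.5
near = np.abs(running - cutoff) <= bandwidth
design = np.column_stack([
    np.ones(near.sum()),
    treated[near],
    running[near] - cutoff,
    treated[near] * (running[near] - cutoff),
])
coefficients, *_ = np.linalg.lstsq(design, outcome[near], rcond=None)
print(f"estimated effect at the cutoff: {coefficients[1]:.2f} (simulated truth: {true_effect})")
```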
 
3. Psychology has strong real-world relevance, and this is partly reflected in the media attention that psychological studies can get. Many of our studies already have strong real-world applications (e.g., clinical psych). However, I think we can do more. I currently work in a public health department, and I have heard multiple stories of how the entire field was mobilized to tackle major health issues, such as tobacco control, vaccination, and epidemic outbreaks. These efforts have saved many lives. If we want to elevate the real-world relevance of our field, I believe we can do so by mobilizing our field to focus on major global issues that are having a heavy impact on people's thoughts and behaviors (e.g., the refugee crisis, social unrest around the globe, violations of basic human rights, denial of science [e.g., in the form of anti-vaccination or climate change denial]). It is not going to be easy to study these topics (e.g., you cannot use college student samples to study them), and it would mean building strong collaborative partnerships with local governments and international institutions.
 
 
 

 

 

flip yourself - part ii


 

[for flip yourself - part i see here]

we’ve recently seen a big push to make the scientific process more transparent. bringing this process out in the open can bring out the best in everyone – when we know our work will be seen by others, we’re more careful.  when others know they can check the work, they trust us more.  most of our focus has been on bringing transparency to how the research is done, but we also need transparency about how it’s evaluated – peer review has become a core part of the scientific process, and of how scientific claims get their credibility.  but peer review is far from transparent and accountable.

we can help bring more transparency and accountability to peer review by ‘flipping’ ourselves.  just like journals can flip from closed (behind a paywall) to open (open access), we can flip our reviews by spending more of our time doing reviews that everyone can see.

one way we can do this is through journals that offer open review, but we don’t need to limit ourselves to that.  thanks to preprint servers like PsyArXiv, authors can post manuscripts that anyone can access, and get feedback from anyone who takes the time to read and comment on their papers.  best of all, if the feedback is posted directly on the preprint (using a tool like hypothes.is), anyone reading the paper can also benefit from the reviewers’ comments.

closed review might have been necessary in the past, but technology has made open review really simple.*  sign up for a hypothes.is account, search your field’s preprint server or some open access journals in your field, and start commenting using hypothes.is.  this approach to peer review is ‘open’ in multiple senses of the word.  anyone can read the reviews, but also, anyone can participate as a reviewer.  evaluation is taken out of the hands of a few gatekeepers and their selected advisors, and out from behind an impenetrable wall. 
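if you'd rather script some of this than click around, hypothes.is also has a public API.  the sketch below is mine, not an official example, and it assumes the public search endpoint at https://api.hypothes.is/api/search accepts a uri parameter and returns JSON with a rows list -- check the current API docs before relying on it.  it just lists any open annotations already posted on a given preprint.

```python
# sketch: list public hypothes.is annotations on a preprint.
# assumption: https://api.hypothes.is/api/search accepts a `uri` parameter and
# returns JSON containing a `rows` list -- verify against the current API docs.
import requests

def open_reviews(preprint_url: str, limit: int = 20):
    response = requests.get(
        "https://api.hypothes.is/api/search",
        params={"uri": preprint_url, "limit": limit},
        timeout=10,
    )
    response.raise_for_status()
    for annotation in response.json().get("rows", []):
        yield annotation.get("user", "unknown"), annotation.get("text", "")

# hypothetical preprint URL, for illustration only
for user, text in open_reviews("https://psyarxiv.com/abcde"):
    print(user, "->", text[:100])
```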

there are several advantages and risks of open review, many of which have been discussed at length.  i’ll summarize some of the big ones here.

advantages:

  1. less waste: valuable information from reviewers is shared with all readers who want to see it. reviewers don’t have to review the same manuscript multiple times for different journals.  everyone gets to benefit from the insights contained in reviews.  a few recent examples of public, post-publication reviews vividly illustrate this point: these three blog posts are full of valuable information i wouldn’t have thought of without these reviews (even though all three papers are in my areas of specialization). 
  2. more inclusive: more people with diverse viewpoints and areas of expertise can participate, including early career researchers who are often excluded from formal peer review. personal connections matter less.  this will allow for more comprehensive evaluations distributed across a team of diverse reviewers with complementary expertise and interests.
  3. more credit, better incentives: it’s easier to get recognition for public reviews, and getting recognition for one’s reviews can create more incentives to do (good) reviews.
  4. better calibration of trust in findings: when a paper is published that’s exactly in my area of expertise, i might catch most of the issues other reviewers would catch (though let’s be honest, probably not). but when we need to evaluate papers even a bit outside our areas of expertise, knowing what others in that area think of the paper can be extremely useful.  think of the professor evaluating her junior colleague for tenure based on publications in a different subfield.  or the science journalist figuring out what to make of a new paper.  or policy makers.  instead of relying just on the crude information that a paper made it through peer review, all of us can form a more graded judgment of how trustworthy the findings are, even if the paper is a bit outside our expertise.

risks:

  1. filtering out abusive comments: one benefit of the moderation provided by traditional peer review is that unprofessional reviews – abuse, harassment, bullying – can be filtered out. they sometimes aren’t, if you believe the stories you hear on social media, but there is at least the threat of an editor catching bad actors and curbing their behavior.  there are solutions to this problem in other open online communities (e.g., up- and down-voting comments, and having moderators review flagged comments).  perhaps having more eyes on the reviews will lead to a more effective system.
  2. protecting vulnerable reviewers: many people who may want to participate as reviewers in an open review system could be taking big risks – those with precarious employment, those still in training, or anyone criticizing someone much higher up in the hierarchy. the traditional peer review system allows reviewers to remain unidentified (known only to the editor), which provides more safety for these vulnerable reviewers (if they get invited to review).  open review systems should also find a way to allow reviewers to post comments without revealing their identity.  this is in tension with the desire to keep out trolls and bullies, though.  once again, i think we can look to other communities online to learn what has worked best there.  in the meantime, perhaps allowing researchers to post reviews on behalf of unidentified reviewers (much like an editor passes on comments from unidentified reviewers) may be a good stopgap.
  3. conflicts of interest: the open review system could easily be abused by authors who ask their friends to post favorable comments. conflicts of interest may be more common than we’d like to believe in the traditional system, too, and it would be a shame to exacerbate that problem.  in my opinion, all open reviews should begin with a declaration of anything that could be perceived as a conflict of interest, and there should be sanctions (e.g., downvotes or flagging) against reviewers who repeatedly fail to accurately disclose their conflicts.
  4. unequal attention: if open review is a free-for-all, some papers will get much more attention than others, and that will almost certainly be correlated with the authors’ status, among other things. one advantage of the traditional peer review system is that it guarantees at least one pair of eyeballs on every submission (though some desk-rejection form letters leave wide open the possibility that those eyeballs were not aimed directly at the manuscript).  of course, it’s not as if status doesn’t affect the way a paper is treated at traditional journals (it almost certainly does). the rich-get-richer “matthew effect” is everywhere, and it’ll be a challenge for open review.  perhaps open review will push the scientific community to more fully acknowledge this problem and develop a mechanism to deal with it.

 

what now? 

i’ve attended many journal clubs where someone, usually a graduate student, asks “how did this paper get into Journal X?”  we then speculate, and the rest of journal club is essentially a discussion of what we would’ve said had we been reviewers.  often the group comes up with flaws that seem to not have been dealt with during peer review, or ideas that could have greatly improved the paper.  the fact that we can’t know the paper’s peer review history, and can’t contribute our own insights as reviewers, is a huge waste.  open review can remedy this.

open review has some challenges to overcome.  it will not be a perfect system.  but neither is the traditional peer review system.  it is not uncommon to hold proposals for reform to much higher standards than we hold current policies to, often because we forget to ask whether the status quo has any of the problems the new system might have (or bigger ones).  one advantage of an open review system is that we can better track these potential problems, and identify patterns and potential solutions.  those of us who want open review to succeed will need to be vigilant, dedicated, and responsive.  open review will have to be experimental at first, subject to empirical investigation, and flexible. 

to start with, i think we need to do what that cheesy saying tells us: be the change you wish to see in the world.  here’s what i plan to do, and i invite others to join me in whatever way works for them:
-search for preprints (and possibly open access publications in journals) that are within my areas of expertise.
-prioritize manuscripts that have gotten little or no attention.
-try to keep myself blind to the authors’ identities as i read the manuscript and write my review (this will be hard, but i have years of practice holding my hand up to exactly the right place on my screen as i download papers and scroll past the title page).
-write my review: be professional – no abuse, harassment, or bullying.  stick to the science, don’t make it personal. not knowing who the authors are helps with this.
-review as much or as little as i feel qualified to evaluate and have the skills and time to do well.  i’ll still try to contribute something even if i can’t evaluate everything.
-after writing my review but before posting it, check the authors’ identities, then declare my conflicts of interest at the beginning of all reviews (if i have serious conflicts, don’t post the review).
-post my review using hypothes.is
-contact the author(s) to let them know i’ve posted a review.
-i may also use plaudit to give it a thumbs up to make it easier to aggregate my evaluation with others’
-post a curated list of my favorite papers every few months.

this might sound like a lot of work.  it’s not much different from what you’re probably already doing for free for commercial publishers, who take your hard work, hide it from everyone else, give you no credit and little possibility of recognition, use your input to curate a collection of their favorite articles, and then sell access to those articles back to your university library, which pays with money that could otherwise go to things you need. 

the beauty of open review is that you can do just the bits that are fun or easy for you.  if you want to go through and only comment on the validity of the measures used in each study, go for it.  if you just want to look at whether the authors made appropriate claims about the external validity of their findings, knock yourself out.  if you just want to comment “IN MICE” on the titles of papers that failed to mention this aspect of their study, well, that’s already taken care of.  by splitting up the work and doing the parts we’re best at, we can do what closed peer review will rarely accomplish – vet many different aspects of the paper from many different perspectives.  and we can help shatter the illusion that there is a final, static outcome of peer review, after which all flaws have been identified.

you’re probably already doing a lot of this work.  when you read a paper for journal club, you’re probably jotting down a few notes about what you liked or found problematic.  when you read a paper for your own research, you might think of feedback that would be useful to the authors, or to other readers.  why not take a few minutes, get a hypothes.is account, and let the rest of the world benefit from your little nuggets of insight?  or, if you want to start off easy, stick to flagging papers you especially liked with plaudit.  every little bit helps.

 

want to have your paper reviewed? 

by posting a preprint on PsyArXiv, you’re signaling that the paper is ready for feedback and fair game to be commented on.  but there are a lot of papers on PsyArXiv, so we could prioritize papers by authors who especially want feedback.  if you’d like to indicate that you would particularly like open reviews on a paper you’re an author on, sign up for a hypothes.is account and add a page note to the first page of your manuscript with the hashtag "#PlzReviewMe".
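and if authors also add "PlzReviewMe" as a tag on that page note, reviewers could find opted-in papers with a small script.  the sketch below assumes the same public search endpoint supports a tag parameter (an assumption -- check the API docs before using it).

```python
# sketch: find documents whose hypothes.is page notes carry a "PlzReviewMe" tag.
# assumption: the public search endpoint supports a `tag` parameter.
import requests

response = requests.get(
    "https://api.hypothes.is/api/search",
    params={"tag": "PlzReviewMe", "limit": 50},
    timeout=10,
)
response.raise_for_status()
for row in response.json().get("rows", []):
    print(row.get("uri"))
```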

 

constraints on generality

do i think this will be a magic solution?  no.  it might not even work at all – i really don’t know what to expect.  but after many years on editorial teams, executive committees of professional societies, task forces, etc., i’m done waiting for top-down change.  i believe if we start trying new things, take a bottom-up, experimental approach, and learn from our efforts, we can discover better ways to do peer review.  i don’t think change will come from above - there are too many forces holding the status quo in place.  and to be clear, i’m not abandoning the traditional system – like most people, i don’t feel i can afford to.  i’ll keep working with journals, societies, etc., especially the ones taking steps towards greater transparency and accountability.  but i’m going to spend part of my newfound free time trying out alternatives to the traditional system, and i hope others do, too.  there are many different ways to experiment with open review, and if each of us tries what we’re most comfortable with, hopefully some of us will stumble on some things that work well.

 

* if you have a complicated relationship with technology like i do, this guide to annotating preprints will be helpful.

 

flip yourself - part i

[image of a bison, captioned "a bison who recently finished her editorial term"]

my mom asked me a few months ago what i was going to do once i was no longer editor in chief of a journal.  she was worried about my well-being.  “you love being an editor,” she said.  when i told her i didn’t know, she said “could you just keep reading other people’s papers and sending them your comments?”

it’s not the first time my mom’s innocent suggestion, preposterous on its face, turned out to be the answer. in high school when i wanted to quit the basketball team but both of us still wanted me to have an after school sport, my mom suggested i join the wrestling team.  after laughing at her for a day or two, i realized it was the perfect solution.  there are perks to having unconventional parents.*

for the last few months i’ve been thinking about what i’ve learned from being an editor, what i loved about it, and what i didn’t love about it.  i loved the day-to-day work.  the intellectual challenge, and the challenge of using my power for good.  i am sure i made mistakes, and i apologize to authors, readers, and reviewers who were affected by those mistakes.  but my mom is right – i loved it, and i am going to miss it.  i applied for another editor in chief position but didn’t get it (my vision statement for my application is here).  but at the same time that i’ve enjoyed it and would have liked to do it more, a part of me felt conflicted.  the more i saw of journals, societies, and formal peer review, the less i believed in the current system.  i believed it was worth working within the system to make it better, and i still think this is a laudable approach and am grateful to those who continue to do this kind of work.  but i’m ready to flip.  yesterday, i was working for a traditional journal.  today, i’m flipping myself: i’ll spend some of the time i was spending as editor in chief working for open, journal-independent peer review.

what does ‘flip yourself’ mean?

open science is about making the entire research process transparent.  one part of that is open access – making published articles open for everyone to read, without paying or having a subscription.  in the open access movement, ‘flipping’ a journal is when an editorial board walks away from an existing subscription journal whose papers are behind a paywall, and starts a new journal where all the papers are openly available.

the same principles of openness, transparency, and accountability should apply to the peer review part of the research process, too.  so i started to wonder – can we flip peer review?

we can, and we don’t even have to wait for a journal or society to flip to open review.  we can all flip ourselves, by spending part of our time doing open reviews.  tools like hypothes.is and plaudit let you post comments or evaluations on papers and preprints.  instead of, or in addition to, doing reviews for commercial publishers that can’t be read by others, we can volunteer our time as open reviewers to make the content and process of peer review open for everyone to see.  in my next blog post i’ll describe in greater detail what i am imagining, and what’s in it for science.  for today, i want to talk about how my experience as editor in chief of a traditional journal led me here, to flipping myself.

lessons from my time as EiC

first, i want to say that i am extremely grateful to have had the opportunity to edit a journal.  i am lucky to have been trusted with such an important responsibility.  i admire many of the editors i’ve worked with, and i admire the ones who are going out on a limb to change the incentive structures of science.  i have tremendous respect for those who are working within the system, and i’d still be doing it, too, if i could.  but since i’m not (at least not as editor in chief), i’m going to get some things off my chest that aren’t so great about the traditional journal system. 

these reflections are based on my subjective experience – i don’t have hard evidence to back up these impressions, so these stories should be taken for what they are, a data point filtered through the lens of my quirky mind.  i would love empirical data on these issues, but it’s hard to do empirical studies of the peer review process, because of #2. 

  1. concentration of power

in thinking about what i would do on july 1, when my four-year term as editor in chief at SPPS ended, it became very vivid to me how arbitrary our evaluation system is.  we treat lines on CVs as if the journal name is some kind of objective seal of approval.  but all it means is that one editor, after consulting with a few reviewers, decided they liked your paper.  given what we know about the unreliability of peer review, this is a pretty terrible way to confer a reward that can make or break a career.

the arbitrariness is especially vivid to me today.  yesterday, if i liked your paper, that could mean a new line on your CV in a pretty-well-respected journal.  today, if i like your paper, that and $3.95 will get you a scoop of ice cream.  the problem isn’t that no one cares what i think today, the problem is that what editors think matters too much, especially given how we treat publications on CVs.  we should care what many people think, and we shouldn’t give so much weight to one person’s idiosyncratic preferences.

editors have a lot of power.  i think most of them want to use it for good, but 1) not all of them have good intentions (there are corrupt editors), 2) well-meaning editors may lack an understanding of how their actions affect the incentive structure, and 3) well-meaning and well-informed editors make mistakes.  this would be ok if there was a mechanism for correcting editors’ errors, but because editors control some of the most valuable rewards in the scientific community (journal acceptances), people are understandably reluctant to challenge them. 

as one example: think about the reactions to the Reproducibility Project: Psychology (OSC, 2015).  the articles targeted for replication were from three specific journals.  yet very little of the discussion i saw about these results asked any questions about what these three journals are doing to fix the problems that the RP:P revealed.  it’s been almost four years since the publication of the RP:P – where are the calls for accountability?  there are a few exceptions, and Steve Lindsay was proactive about implementing new policies at Psych Science, but for the most part there was little attention to how the specific journals sampled have responded.  it’s hard to speak truth to power, especially when power is so concentrated in a few prestige journals. 

  2. lack of transparency

which leads to the second problem – scientists couldn’t easily hold journals and editors accountable even if they had the self-destructive will to do it.  scientists can’t see which papers were rejected and why, they can’t even see the reviews for the accepted papers at most journals, they can’t evaluate whether the review process was thorough or fair.  so how would we know if there are systematic problems with a journal’s peer review?  i know of cases where it’s an open secret that a particular editor is terrible, because enough researchers in a small community have compared notes and noticed a pattern.  it shouldn’t work that way.  when you control some of the most valuable rewards in your professional world, you should be accountable for your decisions and your process.

some journals have given authors the choice to post their reviews publicly if the paper is accepted.  very few journals require this, and very, very few journals post the reviews even for rejected papers.  an editor once told me that i shouldn’t judge his journal for rejecting a replication study of its own papers, because i didn’t see the review process.  show me the review process, then.  otherwise, what you do and don’t publish is literally the only thing i have to go on.  i agree that journals don’t give readers enough information to put them in a good position to judge the journal – that seems to be part of the point.

  3. bias

it’s hard to know what counts as bias in the peer review process, because many journals and editors are pretty open about favoring papers by famous authors.  some journals actively solicit submissions from eminent people, and they don’t always indicate which articles were solicited/invited.  some editors openly state (brag?) that they take the authors’ reputation into account when evaluating submissions.  are you an ‘unproven author’?  good luck with that. 

maybe journal peer review isn’t supposed to be fair. maybe journals should be allowed to weigh factors other than the value of the manuscript.  fine – then let’s stop evaluating job candidates and promotion cases on the basis of where articles were published.  we can’t simultaneously argue that journals shouldn’t be expected to provide a level playing field, and then turn around and pretend that counting journal publications is a fair evaluation system. 

every system is biased, so accusing traditional peer review of being biased is hardly original.  the problem is that 1) we often treat a journal publication as if it very closely tracks merit, and 2) the bias in the journal system is compounded by lack of transparency and the concentration of power.  not only do some researchers get a leg up, but we can’t examine the process to look for signs of bias, and even if we could, calling out a top journal or editor would be pretty masochistic.

conclusion

it’s been a joy to be an editor.  there were a few unpleasant experiences (a story for another day), but overall i found it to be comfortable – too comfortable.  editors, journals, and societies are protected from accountability by the power they hold to make or break people’s careers, by the fact that most of their process is kept secret, and by norms that tolerate, or even encourage, a system in which the rich get richer.  surely if we were going to design a system from scratch, we could do much better.

why did i stay in the system for so long if it has so many problems?  as long as they let me have a go, i was going to try to do what i could from within, and i encourage others to do the same if they want to and have the chance.  but did i spend some of my free time fantasizing about what i would do when that option was no longer available?  absolutely.**  in part ii, i’ll describe my plans. 

* there are also downsides.  it turns out taking your cat camping is not a good idea.
** thanks, mom.