
living in a powder keg

(image: cc-by me)


every now and then, something happens in my twitter bubble that captures so much attention that it feels like everyone is wondering where everyone else stands on it. sometimes i am silent.  sometimes i am wondering why others are silent.  i've been on both sides of this. i've felt the disappointment when others don't speak out about something i feel strongly about, or am personally affected by.  the idea that sometimes silence can be quite loud is familiar to me, like it is to probably anyone who has been mistreated publicly, or taken a risk in speaking out about private mistreatment, or provided support to others who were taking that often difficult, always deliberate, step. when we hope that someone will be supportive, the palpable lack of support can be disillusioning.

i know that i have sometimes been that disappointing person to others. someone that people expect to speak up, take a position, express their values, who says nothing. i believe it is fair to judge people based on their actions, including what they don't do or say, especially when it is a pattern.  indeed, i hope that people judge me based on my actions (rather than, say, a rumor or stereotype they have about me or a group i belong to). i hope that someone who looks over my public track record of behavior will form an accurate impression - if they decide they don't like me or don't share my values, the best i can hope for is that this decision is based on a real difference, not a misunderstanding.  so it matters a lot to me that my behavior is an accurate representation of my values.

so then why do i sometimes stay silent?  here are some of the reasons, with some examples here and there.

  1. sometimes i am consumed by something else. like everyone, i have stuff going on in my life outside of twitter.  sometimes i don't make the effort to follow breaking news on academic twitter. (ok, real talk: this one is rare.  but i am really good at ignoring something when i intentionally decide to do so, so it's possible i decided to ignore a budding controversy, and missed when it turned into a full-blown shitshow. my friends can only be counted on to text me about it 82% of the time.)
  1. sometimes i don't think the thing that happened is worth commenting on. for example, when chris chambers made a joke about a paper around halloween in 2019, it didn't strike me as something i needed to have a position on.  the paper had a title that seemed intended to provoke, and chris took the bait.  would i have tweeted what he tweeted?  no.  but did i see anything wrong with what he tweeted?  also no.  on a scale from 1 (i could imagine saying that thing myself) to 10 (i would cut all ties with this person for what they said, and lobby to remove them from any position of influence over others), i would call this a 2.  why not say so publicly?  because chris is high status in our community so doesn't need anyone to defend him (he also expressed remorse later).  no one benefits from me saying that i didn't think chris's tweet was problematic, and saying so could upset people who did find it problematic, which i didn't want to do, because maybe they're right!  i don't know, i wasn't the target of his tweet and it's likely that if i was, i would have seen it differently.
  1. i think it's slightly problematic (e.g., a 3 or a 4 on the scale described above), and there is a highly visible campaign of opposition that expresses a position much more extreme than mine. in these situations, i often spend quite a while drafting and deleting drafts of tweets trying to express my position in a way that won't sound like i'm saying the behavior wasn't problematic at all, but doesn't make it easy for people to mistake my position for the more extreme and more popular position. so i usually end up saying nothing, because i don't trust my ability to walk this tightrope. also i often feel that many of the reactions to the initial event are at least as problematic as the initial event itself, so i feel like if i call out the initial event, i'd have to spend lots of time evaluating what else in the same discussion rises to the same level and ought to be called out.  that doesn't seem like a good use of my time, when none of the actions rise above a 3 or 4 on my subjective scale.  a recent example of this was daniel lakens's tweet playing off of an MLK quote, and some of the comments made in the discussions of that tweet.  another was ej's analogy to a father who finds out his family members are trump supporters, to describe his emotions when his friends wrote a paper he felt was misguided and didn't consult him. definitely not things i would have written and i cringed when i read them, but i don't find them as problematic as some people do.  on the other hand, i wasn't in the groups most likely to feel the impact of either of those missteps, so my personal reaction doesn't seem especially worth expressing.
  1. sometimes the action is super problematic, but the person doing the action is someone i've already written off.  sometimes i have a personal history with them, other times it's obvious from their public behavior that interacting with them is basically masochism, so as a rule i don't engage with them.  in these cases, my decision not to call them out is selfish - i am protecting my own finely-tuned mental harmony by ignoring them.  if i felt i had to pay attention to them, i would probably just quit twitter instead.  i know some don't have the luxury to ignore them, and for that reason i do everything i can in my leadership roles to limit the opportunities given to demonstrably toxic people to have a platform/influence. (if this seems draconian, i'm talking about opportunities like colloquium or symposium speakers, service on committees, awards, etc. - i'm not trying to get those people fired, i'm trying to not give them extra opportunities to treat people like shit.) another added layer of complexity in these cases is that i think some of these people's problems likely rise to the level of serious mental health challenges.  of course this is a continuum and everyone deserves some level of mercy, so we each have to consider our own principles and conscience when deciding how much shunning is the right balance to strike.  i make my decisions on this privately, and i respect that others may draw the line in different places (e.g., people who have been close friends with the person, or who know more about their mental state, etc.).
  1. sometimes i don't think the evidence is clear about what exactly happened.  in cases where the action that people are upset about is public, that's not an issue. but in other situations, things can get tricky.  we all have different priors about whom we are more likely to believe, different kinds of reports we are more likely to give weight to, different levels of confidence we need before publicly speaking out.  if you see me staying silent about an issue where the facts are in question, it's possible i have a different assessment of the facts, or a different certainty threshold for speaking out, than you. having a lot of followers comes with added responsibility to call out some kinds of behaviors, and it also comes with added responsibility to not fuck that up.  just because i don't say anything doesn't mean i don't believe the bad thing happened, but it might mean i'm not sure enough. this is not to say i never speak up when it requires trusting someone's word - i think my track record shows that i do. it's a case-by-case thing for me.

why can't i just say which of these reasons explains my silence in any given case?  in some cases (#1 and #4) i am unaware of or intentionally ignoring the issue or person.  this (sometimes deliberate and precious) state of ignorance makes it hard to explain my silence.  in other cases (#3 and #5, and sometimes #2), my position is more mixed or mild than others' and i very often respect people who have stronger positions.  in those situations, i don't think anyone needs to hear that my position is less clear-cut or less certain, and in fact i think it could do harm to say so.

i suspect that in cases of #2 and #3, the people who are disappointed in my silence would continue to be disappointed if they knew my exact position on the issue (that is, i don't perceive the issue the way they would like me to).  i am sorry to be a disappointment, but also i accept that i am not who everyone i respect wants me to be.  in the other cases (#1, #4, and #5), i know that i am risking losing esteem in the eyes of people i respect when in fact my private belief is close to what they would hope from me, and i am sad about that.  i hope that all the times that i do state my positions publicly will be enough to outweigh the misperceptions that might stem from when i am silent for one of these reasons.  i hope that overall, my behavior makes it clear where i stand on many issues, such as sexual harassment, bullying, advisors abusing their power over graduate students, ECRs not getting credit for their work, and many more. 

ideally, i think we should judge each other based on our long-run track records of behavior, and show each other some grace when someone's voice is absent from a particular discussion.  if it becomes a pattern, or if any given instance is a dealbreaker for us, then i think it makes sense to update our perception of the person, but it won't always be the case that we can draw a clear conclusion from someone's silence in a given instance. indeed, i am sure the list above is incomplete, and people have many other reasons for not always expressing their views.  i will try to practice this myself - to be gracious and forgiving if people i hope will stand up for me or my students, colleagues, or groups i care about, are silent in any given instance.  and i will continue to strive to make my pattern of behavior a clear reflection of who i am and what is important to me.


(image: cc-by alex holcombe)


status quoi? - part iii


i wanted to write a series of blog posts featuring a few of the people i've met who are challenging the conventional wisdom and inspiring others.  but instead of telling you what i think of them, i wanted to give them a chance to share their insights in their own words.  i contacted a few early career researchers i've had the chance to get to know who have impressed me, and who are not affiliated with me or my lab (though i am collaborating on group projects with some of them).  there are many more role models than those featured here, and i encourage you to join me in amplifying them and their messages, however you can.

i asked each of these people "what are the blind spots in your field - what issues should we be tackling that we aren't paying enough attention to?"  here are their answers, in three parts, which i will post in three separate blog posts this week.

find part i of the series here (with a longer introduction)

find part ii of the series here


Part 3: Jackie Thompson, Joe Hilgard, Sophia Crüwell

Jackie Thompson

To me, failures of communication are the biggest blind spot in science. 
One aspect is science's reliance on an outdated publishing model, designed for an age when communication was slow and relied on isolated scientists disseminating their findings on "sheets of pulped-up tree guts". We need to focus on new, more flexible ways to envision the sharing of scientific inquiry -- for instance, making academic papers interactive and dynamic, or doing away with the idea of a "paper" altogether and instead publishing individual elements of the research process (see the Octopus model by Alex Freeman).

Another massive blind spot (almost literally) is a kind of self-focused myopia -- we spend loads of energy trying to reinvent the wheel, not communicating between fields and sub-fields, when some have already solved problems that other fields struggle massively with. (How many decades were preprints popular in physics before they started catching on in other fields?) Psychology put a name to the fundamental attribution error, but as a field we still fall prey to it every day. Many psychologists (myself included) scoff when we see non-academic ventures that fail to implement basic rules of good experimental practice -- e.g., businesses that send out poorly written surveys, or government departments that claim to show their interventions worked, despite not including any control groups. Yet, we turn around and tell our scientists to run their labs without any training in management; we try to change cultures and incentives without calling on any knowledge from sociology or economics; we try to map the future of science without delving into the history of science. We have so much expertise and wisdom at our fingertips from our colleagues just beyond the fences of other fields, yet we don't think to look beyond the fences for help from our neighbors; we happily restrict ourselves to the gardening tools we already have in our own backyards. Call this myopia, call it hypocrisy -- whatever the label, it's clear that this mindset results in a huge waste of time and effort. Interdisciplinary collaborations are incredibly valuable, but not valued (at least not by the insular academic ingroups they try to bridge). The academic community needs to embrace the value of collaboration, and learn the humility to ask "who might know more about this than I do?"

Joe Hilgard

I study aggression, and I feel like my area is five years behind everybody else regarding the replication crisis. In 2011, psychology realized that its Type I error rate was not actually 5%, that several p values between .025 and .050 are a bad sign, and that we shouldn't see 95% of papers reporting statistically significant results. Yet when I read aggression studies, I tend to see a lot of statistical significance, often with just-significant p values. I worry sometimes that some of our research doesn't ask "is my hypothesis true?" but rather "how hard do we have to put our thumb on the scales to get the anticipated result?".
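Hilgard's "thumb on the scales" worry can be made concrete with a toy simulation (my sketch, not drawn from his work): even when the null hypothesis is true, "peeking" at the data and stopping as soon as p < .05 pushes the false positive rate well above the nominal 5%, and the extra false positives land disproportionately just under the threshold.

```python
# A minimal simulation of optional stopping under the null hypothesis.
# Illustrative only: z-tests on batches of standard-normal "subjects".
import math
import random

def p_value(sample):
    """Two-sided p for a z-test of mean 0, known sd = 1."""
    n = len(sample)
    z = (sum(sample) / n) * math.sqrt(n)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def one_study(rng, hack=False, batch=20, max_batches=5):
    """Return the final p value of one null study.
    If hack=True, peek after every batch and stop as soon as p < .05."""
    sample = []
    for _ in range(max_batches):
        sample += [rng.gauss(0, 1) for _ in range(batch)]
        p = p_value(sample)
        if hack and p < 0.05:
            return p  # stop early and report the "significant" result
        if not hack:
            break  # honest study: a single fixed-n look
    return p

rng = random.Random(1)
n_sims = 2000
honest = sum(one_study(rng) < 0.05 for _ in range(n_sims)) / n_sims
hacked = sum(one_study(rng, hack=True) < 0.05 for _ in range(n_sims)) / n_sims
print(f"false positive rate, one fixed look:  {honest:.3f}")  # ~0.05
print(f"false positive rate, peeking 5 times: {hacked:.3f}")  # well above 0.05
```

Because a hacked study reports the first p that dips below .05, its reported p values also cluster just under .05, which is exactly the pattern Hilgard describes seeing.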

While we're still catching up with the last crisis, I think the next crisis will be measurement. We know very little about the reliability and validity of many popular measures in experiments about aggression. We've assumed our measurements are good because, in the past, we've usually gotten the statistical significance we expected -- maybe due to our elevated Type I error rates. Research from other fields indicates that the reliability of task measures is much too poor for between-subjects work. I think we've assumed a lot of our nomological network, and when we test those assumptions I don't think we'll like what we find.
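The measurement point has a textbook arithmetic consequence (illustrative numbers of my choosing, not from any aggression study): by Spearman's classic attenuation formula, unreliable measures shrink observed correlations, which in turn explodes the sample sizes needed to detect them.

```python
# Spearman's attenuation formula plus a Fisher-z power approximation,
# to show how poor task reliability inflates required sample sizes.
import math

def observed_r(true_r, rel_x, rel_y):
    """Expected observed correlation given each measure's reliability."""
    return true_r * math.sqrt(rel_x * rel_y)

def n_for_power(r, z_alpha=1.96, z_beta=0.84):
    """Approximate n to detect correlation r with 80% power at alpha = .05."""
    return math.ceil(((z_alpha + z_beta) / math.atanh(r)) ** 2 + 3)

true_r = 0.30
# assume a task measure with reliability .45 and a survey measure with .80
r_attenuated = observed_r(true_r, rel_x=0.45, rel_y=0.80)
print(f"true r = {true_r}, expected observed r = {r_attenuated:.2f}")  # 0.18
print(f"n needed for true r:     {n_for_power(true_r)}")
print(f"n needed for observed r: {n_for_power(r_attenuated)}")
```

With these made-up but plausible reliabilities, the required sample size roughly triples, which is one way "the reliability of task measures is much too poor for between-subjects work" bites in practice.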

Sophia Crüwell

I think we need to stop pushing the boundaries of our collective and individual ideals. We also need to stop thinking and arguing that some deviation from those ideals is acceptable or even good in certain circumstances, such as getting a career advantage – whether for ourselves or for others. Treating our ideals as optional in this way is the beginning of losing sight of why we (are being paid to) do research in the first place: to get closer to the truth and/or to improve people's lives. This goes for any scientist, really, but metascientists are in a particularly privileged position here: at least we know that potential future academic employers will be more likely to value e.g. openness and rigour over simple publication count. I believe that we have a responsibility to use this privilege and change the conversation with every decision and in all criticisms we can reasonably make.

However, we also really need to look into incentive structures and the lived experiences of actual scientists to figure out the specific underlying flaws in the system that we have to address in order to make any of the fabulous symptomatic solutions (e.g. preregistration, data sharing, publishing open access) worth each scientist's while. Sticking to your scientific and ethical ideals is incredibly difficult if it means having to fear for your livelihood. But just waving at "the incentives" and carrying on cannot be the solution either – we need to investigate the problems we have to deal with, and we need to try to deal with them.

Therefore, my appeals (to senior people in particular) are: please stick to your ideals to make it easier for everyone else to stick to them too. And if you are in a position to materially make it easier for people to stick to our ideals, please dare to make those decisions and have those conflicts.


status quoi? - part ii


i wanted to write a series of blog posts featuring a few of the people i've met who are challenging the conventional wisdom and inspiring others.  but instead of telling you what i think of them, i wanted to give them a chance to share their insights in their own words.  i contacted a few early career researchers i've had the chance to get to know who have impressed me, and who are not affiliated with me or my lab (though i am collaborating on group projects with some of them).  there are many more role models than those featured here, and i encourage you to join me in amplifying them and their messages, however you can.

i asked each of these people "what are the blind spots in your field - what issues should we be tackling that we aren't paying enough attention to?"  here are their answers, in three parts, which i will post in three separate blog posts this week.

find part i of the series here (with a longer introduction)

find part iii of the series here

Part II: Emma Henderson, Anne Israel, Ruben Arslan, and Hannah Moshontz

Emma Henderson

I feel safer than most to embrace open research because I’m not set on staying in academia. However, most ECRs don't share that lack of trepidation: There’s a constant background radiation of work-based anxiety amongst those researchers who would, in an ideal world, be uncompromisingly bold in their choices. But they’re hampered by a “publish or perish” culture and a lack of sustainability and security in their jobs (if they have jobs in the first place).

The decision to embrace open research shouldn’t leave us vulnerable and looking for career opportunities elsewhere - taking our skills and enthusiasm with us. Academic freedom is the scaffolding for best research practice: unrestrained exploration, and a loyalty to data, regardless of whether it yields “desired” or “exciting” outcomes.  

As a community we have the tools, the talent, and the tenacity to change things for the better, but to sustain these changes we need fundamental reform in both employment and research evaluation. Here we need established, tenured academics to educate publishers, employers, funders, and policy makers, pointing them towards research that prizes integrity over performance. People higher up need to aim higher.

Anne Israel

I am mainly struggling with the experience that collaborative research projects across different psychological (sub-)disciplines are still rare and difficult to implement, even though there often is a substantial overlap between key research questions. When I entered my PhD program, I thought the essence of research was to try to ask and answer smart questions in multidisciplinary teams in order to understand complex phenomena from different perspectives and to make our research understandable to the public as well as to different neighboring fields. Instead, I often feel that researchers nowadays spend a large amount of time fighting over limited resources by trying to prove that their way of doing research is the right one, that their questions are the better ones, and that their ideas are superior to those of other colleagues.

Don’t get me wrong - I am aware that collaborating with others can be quite a challenge: we learn different research practices, we speak different research dialects, and in order to work together productively we need to invest a lot of chronically scarce resources (such as money, time, and patience). However, in my opinion, not investing these resources cannot be an option, because one good integrative research project can be worth a lot more than ten isolated ones. Moreover, complex research questions require complex methods and as much competence as we can get to answer them adequately. Thus, it is about time that we overcome the constraints currently discouraging interdisciplinary work, such as outdated incentive structures that value first-authorships over teamwork, or unequal pay across different subdisciplines, to name just a few examples. We shouldn’t forget that the major goal of research is gaining a deeper understanding of important phenomena and providing the public with relevant new insights. In the end, we are the people who build the system. I hypothesize it’s worth it - let’s collect data.

Ruben Arslan

Have you ever decided not to read a friend's paper too closely (or even not at all)? 

I have. We need more public criticism of each other's work. I won't pretend I love getting reviews as an author, but I like reading others' reviews when I'm a reviewer. I often learn a great deal. Many colleagues know how to identify critical flaws in papers really well, but all that work is hidden away. The lack of criticism makes it too easy to get away with bad science. No matter how useful the tools we make, how convincing the arguments we spin and how welcoming the communities we build, good, transparent, reproducible, open science requires more time to publish fewer papers. We cannot only work on the benefits side. There need to be bigger downsides to p hacking, to ignoring genetic confounding, to salami slicing your data, to overselling, and to continuing to publish candidate gene studies in 2019 to name a few. 

Maybe these problems are called out in peer review more often now, but what do you do about people who immediately submit elsewhere without revisions? Two to three reviewers pulled from a decreasingly select pool. Roll the dice a few times and you will get through.

So, how do we change this? The main problem I see with unilaterally moving to a post-publication peer review system (flipping yourself) is that it will feel cruel and unusual to those who happen to be singled out in the beginning. It certainly felt a bit unfair to the two teams whose work I happened to read in a week when I was procrastinating on something else. I also had mixed feelings because their open data let me write a post-publication peer review with a critical re-analysis. I do not want to deter data sharing, but then again open data loses all credibility as a signal of good intentions if nobody looks.

I thought unilaterally declaring that we want the criticism might be safer and would match critics with those receptive to criticism, so I put up an anonymous submission form and set bug bounties. I got only two submissions in the form so far and no takers on the bug bounties. 

So, I think we really need to just get going. Please don't go easy on early-career researchers either. I'm going on the job market with at least two critical commentaries on my work published, three corrections and one erratum. Despite earning a wombat for the most pessimistic prediction at this year's SIPS study prediction workshop I don't feel entirely gloomy about my prospects.

I'd feel even less gloomy if receiving criticism and self correction became more normal. Simine plans to focus on preprints that have not yet received a lot of attention, but I think there is a strong case for focusing on "big" publications too. If publication in a glamorous* journal reliably led to more scrutiny, a lot more people would change their behaviour. 

* I'd love it if hiring criteria put less weight on glam and more on quality, but realistically there will not be a metrics reform any time soon and we cannot build judgments of quality into our metrics if reviews are locked away.

Hannah Moshontz

I think that being new to the field doesn't necessarily give me any insight into truly new issues that others haven't identified or started to address, but it does give me some insight into the abstract importance of issues independent of their history or causes. I also think that being somewhat naive to the history and causes of issues in the field helps me see solutions with more clarity (or naivete, depending on your perspective!). There are two issues that I think people don't pay enough attention to or that they see as too difficult to tackle, and that I see as both critical and solvable.

The first issue is that most research is not accessible to the public. We spend money, researcher time, and participant time conducting research only to have the products of that research be impossible or hard to access for other scholars, students, treatment practitioners, journalists, and the general public. In addition to the more radical steps that people can take to fundamentally change the publication system, there are simple but effective ways that people can support public access to research. For example, individual researchers can post versions of their manuscripts online. Almost all publication agreements (even those made with for-profit publishers) allow self-archiving in some form, and there are a lot of wonderful resources for researchers interested in responsibly self-archiving (e.g., SHERPA/RoMEO). If each researcher self-archived every paper they'd published by uploading it to a free repository (a process that takes just a few minutes per paper), almost all research would be accessible to the public.

The second issue is that there are known (or know-able) errors in published research. I think that having correct numbers in published scientific papers is a basic and important standard. To meet this standard, we need systems that make retroactive error correction easy and common, and that don't harm people who make or discover mistakes. There have been many exciting reforms that help prevent errors from being published, but there are still errors in the existing scientific record. Like with public access to research, there are small ways to address this big, complicated issue. For example, rather than taking an approach where errors are individually reported and individually corrected, people who edit or work for journals could adopt a systematic approach where they use automated error detection tools like statcheck to search a journal's entire archive and flag possible errors. There are many more ways that people can tackle this issue, whether they are involved in publishing or not (e.g., requesting corrections for minor errors in their own published work, financially supporting groups that work actively on this issue, like Retraction Watch).
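The statcheck approach Moshontz mentions boils down to recomputing p values from the reported test statistics and flagging inconsistencies. Here is a toy version of that idea (my Python sketch; the real statcheck is an R package that handles t, F, chi-square, correlation, and z statistics reported in APA style), restricted to z-tests so it needs only the standard library:

```python
# A toy consistency checker: find reported z-tests in text, recompute the
# two-sided p value from the z statistic, and flag mismatches.
import math
import re

def p_from_z(z):
    """Two-sided p value for a standard-normal test statistic."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Matches strings like "z = 2.10, p = .036" (a simplification of APA style).
PATTERN = re.compile(r"z\s*=\s*(-?\d+\.?\d*)\s*,\s*p\s*[=<]\s*(\.\d+)")

def check(text, tolerance=0.005):
    """Yield (z, reported_p, recomputed_p, consistent) for each reported test."""
    for m in PATTERN.finditer(text):
        z, reported = float(m.group(1)), float(m.group(2))
        recomputed = p_from_z(z)
        yield z, reported, round(recomputed, 3), abs(recomputed - reported) <= tolerance

article = "Group A scored higher (z = 2.10, p = .036); Group B did not (z = 1.20, p = .04)."
for z, rep, rec, ok in check(article):
    print(f"z = {z}: reported p = {rep}, recomputed p = {rec}, {'ok' if ok else 'FLAG'}")
```

Run over a journal's whole archive, exactly this kind of mechanical recomputation is what turns error correction from one-off reports into a systematic sweep.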


status quoi? - part i

tired of lionization

"she's too old for breaking and too young to tame"
-kris kristofferson, from the song sister sinead

the older i get the less i understand reverence of authority or eminence.  when i was a student, i assumed that those who rose to eminence must have some special wisdom - they often acted as if they did, and others seemed to hang on their every word, so i gave them the benefit of the doubt even if i couldn't see it.  now i'm pretty convinced that there's nothing to this.  some eminent people are wise, and some are full of shit.  just like everyone else.

there are so many messages out there reinforcing the idea that high status people have so much wisdom to offer.  every time a conference stops all parallel programming for a famous person's keynote, every special issue with only invited submissions by senior people sharing their wisdom, every collection of intellectual leaders' opinions on random questions at The Edge - they all send this message.   

it's not that eminent people never have useful advice to give, or important experiences we can all learn from.  it's just that we should judge this on a case by case basis, rather than assuming it.  the knee-jerk assumption that eminent people should be listened to, detached from the actual value of what they're saying, is the problem.  if eminent people are just using their eminence to reinforce existing incentives and hierarchies (even if they do so in ways that seem benevolent and generous), rather than challenging them, then maybe we should listen to them less.  those of us who are more senior and regularly find ourselves being given too much deference should find ways to challenge this dynamic. 

i'm shocked at how often i hear grown-up people with tenured jobs and fancy titles say things like "i know he's a terrible choice but how could we have said no to Mr. Bigshot?" in contrast, i've now met dozens of early career researchers who have actually stood up to the Mr. Bigshots of their fields, or to the power structures more generally, and pointed out glaring flaws in the system.  the fact that people in precarious positions are more willing to do this than are the leaders in our field should be a wake-up call.

i'm embarrassed to say that it took me a long time to trust my own judgment and not just assume that eminent people earned their eminence.  which is why i'm so impressed by the early career researchers i meet these days who have the courage to question this.  those who trust their own perception of things, who see the problems with the status quo, and who decide for themselves who deserves their respect.  i have no idea where they got the wisdom and courage to see these things and point them out, but i am in awe.

i wanted to write a series of blog posts featuring a few of the people i've met who are challenging the conventional wisdom and inspiring others.  but instead of telling you what i think of them, i wanted to give them a chance to share their insights in their own words.  i contacted a few early career researchers i've had the chance to get to know who have impressed me, and who are not affiliated with me or my lab (though i am collaborating on group projects with some of them).  there are many more role models than those featured here, and i encourage you to join me in amplifying them and their messages, however you can.

i asked each of these people "what are the blind spots in your field - what issues should we be tackling that we aren't paying enough attention to?"  here are their answers, in three parts, which i will post in three separate blog posts this week.

Part 1: Ivy Onyeador, Hannah Fraser, Anne Scheel, and Felix Cheung

Ivy Onyeador

There are a number of issues we need to be tackling, and for many of them, we’re paying plenty of attention, or at least engaging in lots of discussion. I think what is missing sometimes is a big picture strategy or goal that we could collectively put our efforts toward that would address the multitude of issues we’re facing.

I think we should be figuring out how to create abundance. As academics in the US, we operate in a context marked by scarcity. Some people are tenured and have lots of resources, but too many people feel they have to be hyper focused on trying to secure resources constantly. Operating in this way breeds insecurity and pettiness, narrows our vision, and pulls us away from our values and ultimate purpose. At core, I think a lot of the issues we need to tackle (e.g., inadequate pay for graduate students, adjuncts and even some professors; the unnecessarily steep competition for publications, funding, tenure track jobs, etc.; how unhappy way too many people are; many diversity and inclusion issues) have a common cause, the denominator is too small. Our initial impulse is to figure out how to operate with limited resources, and we do, but to truly address any of these issues, we need more investment. I think working to secure more resources, for instance, organizing and lobbying to increase state support in higher education, is something more academics should consider.

Hannah Fraser

Ecology is a fascinating field inhabited by passionate people who are genuinely doing their best to understand the world around us and how to preserve it. However, there is a disconnect between the way this research is conducted and how it is described and interpreted. The vast majority of research in ecology is conducted in an exploratory manner; it's very rare that hypotheses are overtly described, and when asked what their hypotheses are, ecologists often insist that they have none. However, the resultant articles are written in a way that implies that the research confirms well-justified expectations, and, despite the preliminary nature of the results of exploratory work, direct replications are virtually unheard of and deliberate conceptual replications are rare. Published research is treated as truth, and any contradictions in the literature are attributed to environmental variation rather than the increased false discovery rate that accompanies exploratory research.

Anne Scheel

In psychology we've been used to having our cake and eating it: Discover a phenomenon and confirm your theoretical explanation for it in just one JPSP paper with a total N of less than 200! We've since learned that the cake wasn't really there, that we need larger N, and that most manipulations are not as clever as we thought. But I think the full implications of the message haven't sunk in yet: Way more of what we've been taught needs to be burnt to the ground, questioned, and potentially rebuilt. We learned to slap layers of ill-defined, implausible, internally inconsistent (but quantifiable!) concepts onto each other, moving so far away from the real world that we fail to recognise that cookie-depleting our ego probably doesn't cause our marriages to break down and that a 6-month-old infant probably doesn't have a concept of ‘good’ and ‘evil’. We invent EPFSEA* for phenomena we haven't bothered to describe in any reasonable detail, or even to establish that they're real**!

Let's go back to empiricism. Let's look at those phenomena that made us want to do science in the first place. What's going on? Is it real? Can we describe it? Can we identify necessary and sufficient conditions for it to occur? Can we manipulate it? Each of these questions is a step in a research programme that might take a lot of effort and time -- and require tools that often aren't taught in quantitative psych programmes (qualitative methods, concept formation, ...). They'll feel like baby steps that we prefer to ignore or treat as a dull check-box exercise before we can get to the real science of testing hypotheses. But I think that most of our 'real' science is futile without those baby steps. And I worry that we're not willing to really embrace baby-step science and its consequences for our everyday research -- a system fed on a diet of illusory cake won't switch to bread crumbs easily.

PS Many others have made similar and better points before (and I’m a cake offender myself!). But I think more of us need to pay more attention to the problem.

* Extremely Premature but Fancy-Sounding Explanations with Acronyms

** Go-to example: newborn imitation

Felix Cheung

White hat bias; causal inference; and global/real world relevance.
1. White hat bias refers to the tendency to misrepresent information in ways that support a righteous goal (in the authors' minds). I think this can be seen in research on income inequality and related fields. In daily speech, income inequality carries a heavy negative connotation of unfairness, and it can seem like the 'right' thing to do to keep saying how bad income inequality is. But we need to keep in mind that the common operationalization of income inequality in research is not a measure of unfairness, but a measure of income differences (e.g., the Gini coefficient). I am willing to say that some income differences can be fair and just (an astronaut with years of specialized training should make more than a clerk). Of course, there are also income differences that are driven by economic injustice. The problem is that the common measure (e.g., Gini) captures income differences but not income unfairness. In short, income inequality in research is not exactly the same as income inequality in daily speech.
Prior research has found mixed results on the link between income inequality and well-being. However, it is not hard to find papers on income inequality whose introductions cite only the papers that found negative effects. I have heard anecdotes of researchers saying something along the lines of "if I find that income inequality is good, there's no way I am publishing that". If we want to tackle important real world problems based on data, we must let the data speak. This is why pre-registration is so important, especially in areas that can be controversial.
2. Causal inference. I think sometimes, when we use observational studies, we can be too comfortable studying 'associations' rather than causal relations. There are powerful designs within observational studies that, when applied appropriately, can get us closer to causal inference. Methods such as natural experiments, regression discontinuity designs, Mendelian randomization, and convergent cross mapping all hold promise for improving causal inference. Of course, some of these methods have already been used in psychological studies, but I would love to see more of them.
3. Psychology has strong real world relevance, and this is partly reflected in the media attention that psychological studies can get. Many of our studies already have strong real world applications (e.g., clinical psych). However, I think we can do more. I currently work in a public health department, and I have heard multiple stories of how the entire field was mobilized to tackle major health issues, such as tobacco control, vaccination, and epidemic outbreaks. These efforts have saved many lives. If we want to elevate the real world relevance of our field, I believe we can do so by mobilizing our field to focus on major global issues that are having a heavy impact on people's thoughts and behaviors (e.g., the refugee crisis, social unrest around the globe, violations of basic human rights, denial of science [e.g., in the form of anti-vaccination or climate change denial]). It is not going to be easy to study these topics (e.g., you cannot use college student samples to study them), and it will mean building strong collaborative partnerships with local governments and international institutions.



flip yourself - part ii



[for flip yourself - part i see here]

we’ve recently seen a big push to make the scientific process more transparent. bringing this process out in the open can bring out the best in everyone – when we know our work will be seen by others, we’re more careful.  when others know they can check the work, they trust us more.  most of our focus has been on bringing transparency to how the research is done, but we also need transparency about how it’s evaluated – peer review has become a core part of the scientific process, and of how scientific claims get their credibility.  but peer review is far from transparent and accountable.

we can help bring more transparency and accountability to peer review by ‘flipping’ ourselves.  just like journals can flip from closed (behind a paywall) to open (open access), we can flip our reviews by spending more of our time doing reviews that everyone can see.

one way we can do this is through journals that offer open review, but we don’t need to limit ourselves to that.  thanks to preprint servers like PsyArXiv, authors can post manuscripts that anyone can access, and get feedback from anyone who takes the time to read and comment on their papers.  best of all, if the feedback is posted directly on the preprint using a web annotation tool, anyone reading the paper can also benefit from the reviewers’ comments.

closed review might have been necessary in the past, but technology has made open review really simple.*  sign up for an account with a web annotation tool, search your field’s preprint server or some open access journals, and start commenting.  this approach to peer review is ‘open’ in multiple senses of the word.  anyone can read the reviews, but also, anyone can participate as a reviewer.  evaluation is taken out of the hands of a few gatekeepers and their selected advisors, and out from behind an impenetrable wall.

there are several advantages and risks of open review, many of which have been discussed at length.  i’ll summarize some of the big ones here.

the advantages:
  1. less waste: valuable information from reviewers is shared with all readers who want to see it. reviewers don’t have to review the same manuscript multiple times for different journals.  everyone gets to benefit from the insights contained in reviews.  a few recent examples of public, post-publication reviews vividly illustrate this point: these three blog posts are full of valuable information i wouldn’t have thought of without these reviews (even though all three papers are in my areas of specialization). 
  2. more inclusive: more people with diverse viewpoints and areas of expertise can participate, including early career researchers who are often excluded from formal peer review. personal connections matter less.  this will allow for more comprehensive evaluations distributed across a team of diverse reviewers with complementary expertise and interests.
  3. more credit, better incentives: it’s easier to get recognition for public reviews, and getting recognition for one’s reviews can create more incentives to do (good) reviews.
  4. better calibration of trust in findings: when a paper is published that’s exactly in my area of expertise, i might catch most of the issues other reviewers would catch (though let’s be honest, probably not). but when we need to evaluate papers even a bit outside our areas of expertise, knowing what others in that area think of the paper can be extremely useful.  think of the professor evaluating her junior colleague for tenure based on publications in a different subfield.  or the science journalist figuring out what to make of a new paper.  or policy makers.  instead of relying just on the crude information that a paper made it through peer review, all of us can form a more graded judgment of how trustworthy the findings are, even if the paper is a bit outside our expertise.

the risks:
  1. filtering out abusive comments: one benefit of the moderation provided by traditional peer review is that unprofessional reviews – abuse, harassment, bullying – can be filtered out. they sometimes aren’t, if you believe the stories you hear on social media, but there is at least the threat of an editor catching bad actors and curbing their behavior.  there are solutions to this problem in other open online communities (e.g., up- and down-voting comments, and having moderators review flagged comments).  perhaps having more eyes on the reviews will lead to a more effective system.
  2. protecting vulnerable reviewers: many people who may want to participate as reviewers in an open review system could be taking big risks – those with precarious employment, those still in training, or anyone criticizing someone much higher up in the hierarchy. the traditional peer review system allows reviewers to remain unidentified (known only to the editor), which provides more safety for these vulnerable reviewers (if they get invited to review).  open review systems should also find a way to allow reviewers to post comments without revealing their identity.  this is in tension with the desire to keep out trolls and bullies, though.  once again, i think we can look to other communities online to learn what has worked best there.  in the meantime, perhaps allowing researchers to post reviews on behalf of unidentified reviewers (much like an editor passes on comments from unidentified reviewers) may be a good stopgap.
  3. conflicts of interest: the open review system could easily be abused by authors who ask their friends to post favorable comments. conflicts of interest may be more common than we’d like to believe in the traditional system, too, and it would be a shame to exacerbate that problem.  in my opinion, all open reviews should begin with a declaration of anything that could be perceived as a conflict of interest, and there should be sanctions (e.g., downvotes or flagging) against reviewers who repeatedly fail to accurately disclose their conflicts.
  4. unequal attention: if open review is a free-for-all, some papers will get much more attention than others, and that will almost certainly be correlated with the authors’ status, among other things. one advantage of the traditional peer review system is that it guarantees at least one pair of eyeballs on every submission (though some desk-rejection form letters leave wide open the possibility that those eyeballs were not aimed directly at the manuscript).  of course, status almost certainly affects the way a paper is treated at journals in the traditional system, too. the rich-get-richer “matthew effect” is everywhere, and it’ll be a challenge for open review.  perhaps open review will push the scientific community to more fully acknowledge this problem and develop a mechanism to deal with it.


what now? 

i’ve attended many journal clubs where someone, usually a graduate student, asks “how did this paper get into Journal X?”  we then speculate, and the rest of journal club is essentially a discussion of what we would’ve said had we been reviewers.  often the group comes up with flaws that seem to not have been dealt with during peer review, or ideas that could have greatly improved the paper.  the fact that we can’t know the paper’s peer review history, and can’t contribute our own insights as reviewers, is a huge waste.  open review can remedy this.

open review has some challenges to overcome.  it will not be a perfect system.  but neither is the traditional peer review system.  it is not uncommon to hold proposals for reform to much higher standards than current policies are held, often because we forget to ask if the status quo has any of the problems the new system might (or bigger ones).  one advantage of an open review system is that we can better track these potential problems, and identify patterns and potential solutions.  those of us who want open review to succeed will need to be vigilant, dedicated, and responsive.  open review will have to be experimental at first, subject to empirical investigation, and flexible. 

to start with, i think we need to do what that cheesy saying tells us: be the change you wish to see in the world.  here’s what i plan to do, and i invite others to join me in whatever way works for them:
-search for preprints (and possibly open access publications in journals) that are within my areas of expertise.
-prioritize manuscripts that have gotten little or no attention.
-try to keep myself blind to the authors’ identities as i read the manuscript and write my review (this will be hard, but i have years of practice holding my hand up to exactly the right place on my screen as i download papers and scroll past the title page).
-write my review: be professional – no abuse, harassment, or bullying.  stick to the science, don’t make it personal. not knowing who the authors are helps with this.
-review as much or as little as i feel qualified to evaluate and have the skills and time to do well.  i’ll still try to contribute something even if i can’t evaluate everything.
-after writing my review but before posting it, check the authors’ identities, then declare my conflicts of interest at the beginning of all reviews (if i have serious conflicts, don’t post the review).
-post my review using a web annotation tool.
-contact the author(s) to let them know i’ve posted a review.
-i may also use plaudit to give it a thumbs up, to make it easier to aggregate my evaluation with others’.
-post a curated list of my favorite papers every few months.

this might sound like a lot of work.  but it’s not much different from what you’re probably already doing for free for commercial publishers, who take your hard work, hide it from everyone else, give you little credit or possibility of recognition, use your input to curate a collection of their favorite articles, and then sell access to those articles back to your university library, which uses money that could otherwise go to things you need.

the beauty of open review is that you can do just the bits that are fun or easy for you.  if you want to go through and only comment on the validity of the measures used in each study, go for it.  if you just want to look at whether the authors made appropriate claims about the external validity of their findings, knock yourself out.  if you just want to comment “IN MICE” on the titles of papers that failed to mention this aspect of their study, well, that’s already taken care of.  by splitting up the work and doing the parts we’re best at, we can do what closed peer review will rarely accomplish – vet many different aspects of the paper from many different perspectives.  and we can help shatter the illusion that there is a final, static outcome of peer review, after which all flaws have been identified.

you’re probably already doing a lot of this work.  when you read a paper for journal club, you’re probably jotting down a few notes about what you liked or found problematic.  when you read a paper for your own research, you might think of feedback that would be useful to the authors, or to other readers.  why not take a few minutes, sign up for an account with a web annotation tool, and let the rest of the world benefit from your little nuggets of insight?  or, if you want to start off easy, stick to flagging papers you especially liked with plaudit.  every little bit helps.


want to have your paper reviewed? 

by posting a preprint on PsyArXiv, you’re signaling that the paper is ready for feedback and fair game to be commented on.  but there are a lot of papers on PsyArXiv, so we could prioritize papers by authors who especially want feedback.  if you’d like to indicate that you would particularly like open reviews on a paper you’re an author on, sign up for an account with a web annotation tool and add a page note to the first page of your manuscript with the hashtag “#PlzReviewMe”.


constraints on generality

do i think this will be a magic solution?  no.  it might not even work at all – i really don’t know what to expect.  but after many years on editorial teams, executive committees of professional societies, task forces, etc., i’m done waiting for top-down change.  i believe that if we start trying new things, take a bottom-up, experimental approach, and learn from our efforts, we can discover better ways to do peer review.  i don’t think change will come from above - there are too many forces holding the status quo in place.  and to be clear, i’m not abandoning the traditional system – like most people, i don’t feel i can afford to.  i’ll keep working with journals, societies, etc., especially the ones taking steps towards greater transparency and accountability.  but i’m going to spend part of my newfound free time trying out alternatives to the traditional system, and i hope others do, too.  there are many different ways to experiment with open review, and if each of us tries what we’re most comfortable with, hopefully some of us will stumble on something that works well.


* if you have a complicated relationship with technology like i do, this guide to annotating preprints will be helpful.