



my next chapter


i've been selected as the incoming editor in chief of Psychological Science.  i am thrilled, humbled, and excited for this next chapter, and very grateful for all the people who invited, accompanied, and mentored me through all the previous chapters. 

i want to share the vision statement i submitted with my application for the position - it's linked below.  i wrote this a little while ago, before i knew much about the inner workings of the journal, so some of my thinking about these topics has evolved and will continue to evolve. 

in the next few months, i'll likely publish a column in the APS Observer, as well as an opening editorial, with more concrete information about what will change and what will stay the same at the journal.  the one thing i can promise is that authors will now be required to put two spaces after every period.*  

as always, this blog reflects my personal views only, and not the views of the journal, APS, my university, or any other group to which i belong.

link to vision statement [pdf]

* sadly, this is not within my powers.  also my students would find every instance of two spaces in every manuscript and correct it.

some animals i met on magnetic island

     
 

unearned prestige

i was supposed to give a talk at the Metascience 2023 conference, but instead i am a block away in a hotel room with a (very mild, so far) case of covid. i know, so 2022.  so i typed up my talk and here it is.

very special thanks to the MetaMelb lab group for their "behind the livestream" text commentary from the conference, and to them and alex holcombe for comments on an earlier version of this talk ♥ (and also to my dinner group last night who i inadvertently exposed, and who let me take two whole leftover pizzas back to the hotel (that i intended to share with my lab, i swear!).  and to the Center for Open Science for paying for the pizzas (and also for creating a world where this blog, and my career, are possible).  if you ever get stuck in a hotel room with covid, try to make sure you have supportive friends and colleagues nearby, and two whole pizzas in your fridge.)

 



paths to prestige

i’ve been known to be critical of eminence and prestige, but it’s really not prestige itself that i’m against. i’m not totally naïve.  one way to think about prestige is just having a good reputation.  i know we’re never going to get rid of reputations, and anyways some research is better than other research, and so deserves a better reputation. the goal shouldn’t be to eliminate that – we want good science to have a good reputation. the problem is unearned prestige. i propose there are at least two wide open paths to unearned prestige, and today i’m going to talk about how transparency can help close those.


 
 
the first is that a specific research output, like a manuscript, is mistakenly evaluated as being good science even though it's not.  in this scenario, the journal sincerely wants and tries to select the best science, but some manuscripts that are not in fact great science are mistakenly evaluated positively.  this could happen because of active misrepresentation on the authors' part, of course, but that's not the only way.  in fact, i believe most authors sincerely believe their work is very good, and they try to present it as such and convince journals of that because they believe it.  the problem is it's often very hard for reviewers and editors to tell what's genuinely good and what just looks good.  and that's because we often don't have all the information we need.  as Claerbout is famously quoted as saying, "An article about a computational result is advertising, not scholarship.  The actual scholarship is the full software environment, code and data, that produced the result."   so of course when we have to judge research based only on the article, we're going to be wrong a lot.
 
but that's not the only way prestige can be unearned.  the other way is when there is a mismatch between a journal's reputation and its actual practices.  if a journal has a reputation for selecting for good science, but what they actually select for is something else, then plenty of research will get a reputation for being good science -- because it's published in that journal -- even when it isn't.  and this can happen even when a close reading of the article actually would have been enough to see that this isn't exemplary science.  
 
in many ways, this is a bigger loophole than the first -- the unearned prestige is bestowed on every article that makes it into the journal.  all you need to do to get this unearned prestige is figure out what these journals actually are selecting for, and be good at that.  of course that might not be easy or possible for everyone, for example if a journal is selecting for famous authors or fancy institutions.  but for some people, that will be a lot easier than producing high quality science.
 
hopefully it's clear how transparency can help close these paths to unearned prestige.  at least that first path where articles reporting shoddy science can pass for good science. requiring basic transparency -- by which i just mean the information needed to verify a researcher's claims, to poke and prod at them and see if they stand up to scrutiny -- this will help reduce the chances that individual outputs will be given a better reputation than they deserve.  
 
but what may be less obvious is that the second path to unearned prestige, through journals that have an unearned reputation for selecting good science, can also be addressed by more transparency.  but a different kind of transparency: transparency in journals' evaluation processes.
 
transparency: not just for researchers anymore
 
the efforts towards open science and transparency have been focused mostly on individual researchers' practices, and little attention has been paid to the transparency of journals' evaluation processes.  but we are often outsourcing evaluations of prestige to journals -- they get to decide which research and which researchers get a good reputation.  we hand over prestige to journals, and let them distribute prestige to researchers.  but we expect hardly any transparency from journals.  this is a really big problem, because it means we don't really know if journals are indeed selecting the best science, or what they are selecting for.
 
in fact, many of the "top" journals don't even claim to be selecting the best science, at least not if we mean the most accurate and careful science.  many top journals openly admit that they are selecting for impact or novelty, and few do much to really ensure the accuracy or reliability of what they publish.  if journals really cared about publishing the best science, we would see many more journals invest in things like reproducibility checks, or tests of generalizability or robustness.  but most journals, even the pickiest and richest ones, don't do this.  what's worse, when other researchers check the accuracy of the papers these journals publish, the journals often don't care, don't want to publish the checks, or are even annoyed.  that is who we are outsourcing our evaluation of prestige to.  that's not good enough.
 
what do i mean by transparency from journals?  first, i mean that their policies should be clear about what they are selecting for and these policies need to match actual practice at the journal.  there are a lot of unwritten - or at least un-shared with the public - practices that journals encourage their editors and reviewers to follow.  this was made explicit to me when i became editor in chief of a journal in my field, and within a year was accused by the publication committee of stepping on important people's toes because i was desk rejecting some powerful people's papers.  i was naive enough to be shocked, and didn't realize they were just saying the quiet part out loud.  when i stubbornly refused to believe that they were telling me not to desk reject famous people's papers, one of the members of the publication committee made it very clear.  she sent me an email saying "Just today I received a complaint from a colleague, who is a senior, highly respected, award-winning social psychologist.  He says: "About a month ago I had an experience with Simine that I found extremely distasteful.  The end result is that I will not submit another paper to SPPS as long as she is associated with the journal...""  when i offered to discuss the paper in question and my decision, it became clear that the merit of my decision was not the point.  
 
my view is that famous people should not get special treatment from journal editors, but if they do, this should be stated explicitly.  in this way, i guess PNAS's 'contributed' track, in which NAS members can arrange their own peer review process and have a 98% acceptance rate, is perhaps just a more transparent way of doing what most fancy journals are probably doing secretly.  i hadn't thought of it this way before, but i suppose that's a step in the right direction - their policy is transparent enough that people like me can shout about it on twitter, and ask them to abolish it, and that's surely better than doing it in secret.
 
at a minimum, journals should make their practices transparent to authors, and ensure that what they say they're selecting for is actually what submissions are evaluated on.  at Collabra: Psychology, the journal that i'm editor of, i try to do this by making the informal guide that i share with editors available to anyone, publicly.  
 
second, journals should make the content of peer reviews and editors' decision letters public, at least for the articles they publish ('transparent peer review').  this would give us better information to assess what journals are actually evaluating during peer review, and how thoroughly they're doing it.  
 
third, journals should not only be more welcoming of metaresearch that examines how strong their published papers are, like replications and reproducibility checks, they should actively solicit, fund, and publish such audits.  that would show a commitment to making sure they're living up to their stated aims and values.  if they want a reputation for selecting the best science, invest in the evidence necessary to back up that claim.
 
prestige: the hidden curriculum
 
taken together, what these two paths to unearned prestige mean is that there is effectively a 'hidden curriculum' for prestige.  if we think about it, who is going to be better at using these loopholes?  it is much easier to create manuscripts that look, superficially, like good science, or that look like whatever journals are actually selecting for, if you have privileged information.  to benefit from these paths, you have to understand the discrepancy between what is advertised and what actually matters.  this favors those with better connections, more status.  
 
to oversimplify a bit, this creates two classes of researchers.  the everyday researchers and the well-connected.  while the everyday researcher has to play by the official rules, and work to get a good reputation the hard way, by doing good science, the well-connected researchers get a free ride.  
 
 
what's worse, the free ride that some researchers are getting is invisible.  the lack of transparency in individual research outputs means that we can't see if some research is incomplete or misleading, if some shortcuts have been taken, inadvertently or not.  and the fact that journals' evaluation processes aren't transparent means that we can't see if some authors are getting special treatment, or if the evaluations are emphasizing something other than what the journal claims to be selecting for.
 
indeed, this special track, the gondola ride, was made clear to me earlier in my career, in the early days of psychology's replication crisis, when norms were changing fast and goal posts kept moving, and a professor at a prestigious university said to me, in an exasperated tone, "just tell me what to do so i won't get criticized." that's when i realized that there were two tracks - one treacherous and unpredictable track for most researchers, where you have to do your best and hope it's what journals want, and one for the more privileged, where you follow a formula and things generally work out in your favor.
 
of course i'm oversimplifying, but the point i want to make is that there are unwritten rules and hidden advantages, and that open science, and open evaluation, can help bring those to light.
 
how can transparency help?
 
so, if the goal is to reduce unearned prestige, how can transparency help? i see a few ways.  first, as i've already talked about, transparency can make it easier to tell the good research from the bad, and to tell the journals that are doing this well from those that are doing it badly - in other words, closing the two loopholes to unearned prestige.  but transparency can also help strengthen the good path - the association between good research and a good reputation.  the more information we have about what was done, the more calibrated our evaluations can be.  in other words, transparency doesn't guarantee credibility, transparency guarantees* we'll get the credibility we deserve (* void where not paired with scrutiny).  but this is only a possibility that transparency opens up - it doesn't actually guarantee anything.  we have to actually use the transparency to poke and prod at the research to see how solid it is.
 
of course there are a lot of messy details, such as what we mean by 'good science', what we want to reward, and so on.  and one important caveat is that selecting for the best individuals, or individual outputs, may not be what's best for science as a whole.  i don't have time to go into this, but google "chickens" and "group selection", or Richard McElreath's talk "Science is like a chicken coop", or Leo Tiokhin and colleagues' paper "Shifting the level of selection in science".
 
an aggressive chicken
 
 
transparency is scary
 
now i want to shift gears and talk about how people react to the shifting norms towards greater transparency in science, and i want to distinguish between two superficially similar reactions.
 
first, any change in norms can lead to uncertainty during the transition, and to startup costs related to adjusting to the new norms.  this is true even if the change is welcome and good.  we've all gotten used to playing by one set of rules, and we've invested resources into learning how to play that game, and now the rules are changing.  in the short run, this makes things harder.
 
and this short-term cost is greatest for the everyday researchers, especially those with fewer resources and weaker connections.  it's harder to know what the new rules are, and to spend the time and money needed to develop the new skills.  and the costs already sunk into the old system will be harder to bear.  but i also want to emphasize that this is a short-term problem, and i believe it's one that can be addressed by making accessible and usable tools, templates, training materials, and so on.  and as much as changes to the system are painful, the old system is worse, especially, in my opinion, for lower status researchers.  it's just hugely unfair.
 
second, another reason the shift towards transparency is scary is that making things more transparent will shake up the status quo.  to the extent that some prestige is unearned, that some people have been benefiting from hidden rules and advantages, greater transparency means that some people and findings will stand to lose prestige.  and of course this fear is greatest among those who have been doing very well in the old system - especially those who came by that prestige unfairly.
 
so, when we hear "transparency is scary", we should ask which of these two fears is likely behind that feeling.  is it the very legitimate fear of the short-term costs and uncertainty that less well-resourced researchers are most vulnerable to?  or is it the more existential fear of those with unearned prestige afraid of losing it?  those who don't want the rules to change, or even to be said out loud, because they've been benefiting from all the opaqueness?
 
and one of the most frustrating things i've seen is when the prestigious group uses the most vulnerable groups as cover to protect their advantage.  there are very legitimate fears about changing the rules of the game, but those fears are addressable, and, i think, much less treacherous for the average researcher than the costs of the old system, and they should not be used as a shield to perpetuate inequities and unfair practices.  
 
what can we do?
 
so what can we do to make the change towards more transparency less painful for everyday researchers?  we can make the new rules explicit and transparent, to make things more predictable and reduce the hidden curriculum.  and of course, we want to make those new rules fair and equitable.  in addition, we should make new rules for increased transparency not just for researchers, but also for journals.  indeed, i think that hidden practices in journal peer review are a huge source of inequity.  given how much power we give to journals to decide who gets a good reputation (and a job, and a grant, and a raise, and awards), we should expect a lot more transparency from them.
 
these points are perhaps obvious, so i want to end on a few maybe less obvious things we can do to increase transparency, and do so in a way that is as painless as possible for researchers who want to do the right thing.  
 
first, perhaps counterintuitively, we should get to a point where transparency is required, rather than optional, as fast as possible.  i agree with brian nosek's approach, illustrated in this pyramid he made, that we need to go through all of these other steps before we make transparency required.  but let's hurry up.  because optional transparency is inequitable, and unsustainable.
 
 
let me illustrate with an example.  imagine you're a journal editor and you're evaluating two papers.  paper 1 is transparently reported, and you can see that the conclusions are not well-supported by the evidence.  maybe you can tell that the results aren't robust to other reasonable ways of analyzing the data, or that they made an error in their statistical analysis code, or you can tell that what the authors say was planned, or interpret as a confirmatory test, was not actually what was planned.  in other words, the transparency allows you to see that the conclusions are on shaky ground.  then there's paper 2, which opted out of transparency, and just tells you a neat and tidy story where everything works out.

as an editor, you're stuck.  of course you're going to tell the authors of paper 1 that their conclusions should be better calibrated to the evidence, but what do you say to the authors of paper 2?  you can give them the benefit of the doubt and take their claims at face value, which will have the effect of punishing the authors of paper 1 for their transparency, and of selecting transparent research, and researchers, out of the literature and the field.  or you can refuse to give the authors of paper 2 the benefit of the doubt, and tell them that without the information needed to verify their claims, you won't believe them.  first of all, that'll make you a really unpopular editor (and possibly not an editor for much longer, if the authors of paper 2 are powerful), but more importantly, if that's what you believe, then you are de facto mandating transparency, and it's not great to do that only in unwritten practice.  you should just make that official policy.
 
a system where transparency is optional cannot function, not for very long.  and it will always favor those who choose to be less transparent, who know how to sell their work.  the most equitable system is one in which you can't opt out of having your work judged on its merits, poked and prodded at, scrutinized.  transparency levels the playing field because it forces everyone to give their critics ammunition.  but only if it's required.
 
also relevant, i recently read tom hostler's paper "the invisible workload of open research", which argues that "open research practices present a novel type of academic labour with high potential to be mismeasured or made invisible by workload models, raising expectations to even more unrealistic levels."  i agree, and this is especially true when researchers who engage in open practices have to compete with researchers who don't.  if everyone has to be transparent (again, not sharing everything all the time, but sharing to the extent necessary for others to verify/scrutinize your claims), these problems are greatly reduced.
 
until transparency is required, though, there is something each of us can do, and it's to demand a minimum level of transparency when reviewing for journals.  this is the idea behind the Peer Reviewers' Openness (PRO) initiative - it capitalizes on the power reviewers have to change the system.  if a journal wants you to donate your time, it is completely reasonable to ask for the information you need to evaluate the paper.  of course the authors of this initiative have thought about the exceptions and grey areas.  i encourage you to read it and consider signing up.
 
finally, another more radical thing we can do to make science more transparent and more equitable is to take back control of how we allocate prestige.  we don't need to wait around for journals to reform themselves and open themselves up to scrutiny and accountability.  in fact, there's no reason to think journals will ever do that - most of them are in the business of chasing impact and profits, so why would we expect them to change?  it's time to move on.
 
sorry metamelb, i couldn't part with the gondola
     
 

living in a powder keg

pelicans (cc-by me)

 

every now and then, something happens in my twitter bubble that captures so much attention that it feels like everyone is wondering where everyone else stands on it. sometimes i am silent.  sometimes i am wondering why others are silent.  i've been on both sides of this. i've felt the disappointment when others don't speak out about something i feel strongly about, or am personally affected by.  the idea that sometimes silence can be quite loud is familiar to me, as it probably is to anyone who has been mistreated publicly, or taken a risk in speaking out about private mistreatment, or provided support to others who were taking that often difficult, always deliberate, step. when we hope that someone will be supportive, the palpable lack of support can be disillusioning.

i know that i have sometimes been that disappointing person to others. someone that people expect to speak up, take a position, express their values, who says nothing. i believe it is fair to judge people based on their actions, including what they don't do or say, especially when it is a pattern.  indeed, i hope that people judge me based on my actions (rather than, say, a rumor or stereotype they have about me or a group i belong to). i hope that someone who looks over my public track record of behavior will form an accurate impression - if they decide they don't like me or don't share my values, the best i can hope for is that this decision is based on a real difference, not a misunderstanding.  so it matters a lot to me that my behavior is an accurate representation of my values.

so then why do i sometimes stay silent?  here are some of the reasons, with some examples here and there.

  1. sometimes i am consumed by something else. like everyone, i have stuff going on in my life outside of twitter.  sometimes i don't make the effort to follow breaking news on academic twitter. (ok, real talk: this one is rare.  but i am really good at ignoring something when i intentionally decide to do so, so it's possible i decided to ignore a budding controversy, and missed when it turned into a full-blown shitshow. my friends can only be counted on to text me about it 82% of the time.)
  2. sometimes i don't think the thing that happened is worth commenting on. for example, when chris chambers made a joke about a paper around halloween in 2019, it didn't strike me as something i needed to have a position on.  the paper had a title that seemed intended to provoke, and chris took the bait.  would i have tweeted what he tweeted?  no.  but did i see anything wrong with what he tweeted?  also no.  on a scale from 1 (i could imagine saying that thing myself) to 10 (i would cut all ties with this person for what they said, and lobby to remove them from any position of influence over others), i would call this a 2.  why not say so publicly?  because chris is high status in our community so doesn't need anyone to defend him (he also expressed remorse later).  no one benefits from me saying that i didn't think chris's tweet was problematic, and saying so could upset people who did find it problematic, which i didn't want to do, because maybe they're right!  i don't know, i wasn't the target of his tweet and it's likely that if i was, i would have seen it differently.
  3. i think it's slightly problematic (e.g., a 3 or a 4 on the scale described above), and there is a highly visible campaign of opposition that expresses a position much more extreme than mine. in these situations, i often spend quite a while drafting and deleting drafts of tweets trying to express my position in a way that won't sound like i'm saying the behavior wasn't problematic at all, but doesn't make it easy for people to mistake my position for the more extreme and more popular position. so i usually end up saying nothing, because i don't trust my ability to walk this tightrope. also i often feel that many of the reactions to the initial event are at least as problematic as the initial event itself, so i feel like if i call out the initial event, i'd have to spend lots of time evaluating what else in the same discussion rises to the same level and ought to be called out.  that doesn't seem like a good use of my time, when none of the actions rise above a 3 or 4 on my subjective scale.  a recent example of this was daniel lakens's tweet playing off of an MLK quote, and some of the comments made in the discussions of that tweet.  another was ej's analogy to a father who finds out his family members are trump supporters, to describe his emotions when his friends wrote a paper he felt was misguided and didn't consult him. definitely not things i would have written and i cringed when i read them, but i don't find them as problematic as some people do.  on the other hand, i wasn't in the groups most likely to feel the impact of either of those missteps, so my personal reaction doesn't seem especially worth expressing.
  4. sometimes the action is super problematic, but the person doing the action is someone i've already written off.  sometimes i have a personal history with them, other times it's obvious from their public behavior that interacting with them is basically masochism, so as a rule i don't engage with them.  in these cases, my decision not to call them out is selfish - i am protecting my own finely-tuned mental harmony by ignoring them.  if i felt i had to pay attention to them, i would probably just quit twitter instead.  i know some don't have the luxury to ignore them, and for that reason i do everything i can in my leadership roles to limit the opportunities given to demonstrably toxic people to have a platform/influence. (if this seems draconian, i'm talking about opportunities like colloquium or symposium speakers, service on committees, awards, etc. - i'm not trying to get those people fired, i'm trying to not give them extra opportunities to treat people like shit.) another added layer of complexity in these cases is that i think some of these people's problems likely rise to the level of serious mental health challenges.  of course this is a continuum and everyone deserves some level of mercy, so we each have to consider our own principles and conscience when deciding how much shunning is the right balance to strike.  i make my decisions on this privately, and i respect that others may draw the line in different places (e.g., people who have been close friends with the person, or who know more about their mental state, etc.).
  5. sometimes i don't think the evidence is clear about what exactly happened.  in cases where the action that people are upset about is public, that's not an issue. but in other situations, things can get tricky.  we all have different priors about whom we are more likely to believe, different kinds of reports we are more likely to give weight to, different levels of confidence we need before publicly speaking out.  if you see me staying silent about an issue where the facts are in question, it's possible i have a different assessment of the facts, or a different certainty threshold for speaking out, than you. having a lot of followers comes with added responsibility to call out some kinds of behaviors, and it also comes with added responsibility to not fuck that up.  just because i don't say anything doesn't mean i don't believe the bad thing happened, but it might mean i'm not sure enough. this is not to say i never speak up when it requires trusting someone's word - i think my track record shows that i do. it's a case-by-case thing for me.

why can't i just say which of these reasons explains my silence in any given case?  in some cases (#1 and #4) i am unaware of or intentionally ignoring the issue or person.  this (sometimes deliberate and precious) state of ignorance makes it hard to explain my silence.  in other cases (#3 and #5, and sometimes #2), my position is more mixed or mild than others' and i very often respect people who have stronger positions.  in those situations, i don't think anyone needs to hear that my position is less clear-cut or less certain, and in fact i think it could do harm to say so.  

i suspect that in cases of #2 and #3, the people who are disappointed in my silence would continue to be disappointed if they knew my exact position on the issue (that is, i don't perceive the issue the way they would like me to).  i am sorry to be a disappointment, but also i accept that i am not who everyone i respect wants me to be.  in the other cases (#1, #4, and #5), i know that i am risking losing esteem in the eyes of people i respect when in fact my private belief is close to what they would hope from me, and i am sad about that.  i hope that all the times that i do state my positions publicly will be enough to outweigh the misperceptions that might stem from when i am silent for one of these reasons.  i hope that overall, my behavior makes it clear where i stand on many issues, such as sexual harassment, bullying, advisors abusing their power over graduate students, ECRs not getting credit for their work, and many more. 

ideally, i think we should judge each other based on our long-run track records of behavior, and show each other some grace when someone's voice is absent from a particular discussion.  if it becomes a pattern, or if any given instance is a dealbreaker for us, then i think it makes sense to update our perception of the person, but it won't always be the case that we can draw a clear conclusion from someone's silence in a given instance. indeed, i am sure the list above is incomplete, and people have many other reasons for not always expressing their views.  i will try to practice this myself - to be gracious and forgiving if people i hope will stand up for me or my students, colleagues, or groups i care about, are silent in any given instance.  and i will continue to strive to make my pattern of behavior a clear reflection of who i am and what is important to me.

 

kangaroo (cc-by alex holcombe)

     
 

status quoi? part iii


i wanted to write a series of blog posts featuring a few of the people i've met who are challenging the conventional wisdom and inspiring others.  but instead of telling you what i think of them, i wanted to give them a chance to share their insights in their own words.  i contacted a few early career researchers i've had the chance to get to know who have impressed me, and who are not affiliated with me or my lab (though i am collaborating on group projects with some of them).  there are many more role models than those featured here, and i encourage you to join me in amplifying them and their messages, however you can.

i asked each of these people "what are the blind spots in your field - what issues should we be tackling that we aren't paying enough attention to?"  here are their answers, in three parts, which i will post in three separate blog posts this week.

find part i of the series here (with a longer introduction)

find part ii of the series here

 

Part III: Jackie Thompson, Joe Hilgard, Sophia Crüwell


Jackie Thompson

To me, failures of communication are the biggest blind spot in science. 
One aspect, notably, is science's focus on an outdated publishing model designed for an age when communication was slow, and relied on isolated scientists disseminating their findings on "sheets of pulped-up tree guts" (https://thehardestscience.com/2019/07/). We need to focus on new, more flexible ways to envision sharing of scientific inquiries -- for instance, making academic papers interactive and dynamic, or doing away with the idea of a "paper" altogether, and instead publishing individual elements of the research process (see the Octopus model by Alex Freeman; https://t.co/BPCarBGhZZ).

Another massive blind spot (almost literally) is a kind of self-focused myopia -- we spend loads of energy trying to reinvent the wheel, not communicating between fields and sub-fields, when some have already solved problems that other fields struggle massively with. (How many decades were preprints popular in physics before they started catching on in other fields?) Psychology put a name to the fundamental attribution error, but as a field we still fall prey to it every day. Many psychologists (myself included) scoff when we see non-academic ventures that fail to implement basic rules of good experimental practice -- e.g., businesses that send out poorly written surveys, or government departments that claim to show their interventions worked, despite not including any control groups. Yet, we turn around and tell our scientists to run their labs without any training in management; we try to change cultures and incentives without calling on any knowledge from sociology or economics; we try to map the future of science without delving into the history of science. We have so much expertise and wisdom at our fingertips from our colleagues just beyond the fences of other fields, yet we don't think to look beyond the fences for help from our neighbors; we happily restrict ourselves to the gardening tools we already have in our own backyards. Call this myopia, call it hypocrisy -- whatever the label, it's clear that this mindset results in a huge waste of time and effort. Interdisciplinary collaborations are incredibly valuable, but not valued (at least not by the insular academic ingroups they try to bridge). The academic community needs to embrace the value of collaboration, and learn the humility to ask "who might know more about this than I do?"

 
Joe Hilgard

I study aggression, and I feel like my area is five years behind everybody else regarding the replication crisis. In 2011, psychology realized that its Type I error rate was not actually 5%, that several p values between .025 and .050 are a bad sign, and that we shouldn't see 95% of papers reporting statistically significant results. Yet when I read aggression studies, I tend to see a lot of statistical significance, often with just-significant p-values. I worry sometimes that some of our research doesn't ask "is my hypothesis true?" but rather "how hard do we have to put our thumb on the scales to get the anticipated result?".
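
A minimal simulation sketch of that intuition (illustrative, hypothetical settings; not from this post or any particular study): when a real effect is studied with reasonable power, significant p values should mostly fall well below .05, so a literature where most significant results sit just under .05 looks odd.

```python
import numpy as np
from scipy import stats

# hypothetical settings: a true effect of d = 0.5, n = 50 per group (~70% power)
rng = np.random.default_rng(1)
n_sims, n_per_group, effect = 10_000, 50, 0.5

p_values = []
for _ in range(n_sims):
    control = rng.normal(0.0, 1.0, n_per_group)
    treatment = rng.normal(effect, 1.0, n_per_group)
    p_values.append(stats.ttest_ind(treatment, control).pvalue)

p = np.array(p_values)
significant = p[p < .05]
just_significant = np.mean((significant > .025) & (significant < .05))
# with these settings only ~15% of significant results are "just significant",
# so a literature dominated by p = .03-.049 looks nothing like well-powered, honestly reported research
print(f"share of significant p values between .025 and .05: {just_significant:.2f}")
```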

While we're still catching up with the last crisis, I think the next crisis will be measurement. We know very little about the reliability and validity of many popular measures in experiments about aggression. We've assumed our measurements are good because, in the past, we've usually gotten the statistical significance we expected -- maybe due to our elevated Type I error rates. Research from other fields indicates that the reliability of task measures is much too poor for between-subjects work. I think we've assumed a lot of our nomological network, and when we test those assumptions I don't think we'll like what we find.
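
One way to see why poor reliability matters for between-subjects work is Spearman's classic attenuation formula: the correlation you can observe between two noisy measures is capped by their reliabilities. A tiny illustration (hypothetical numbers, not from this post):

```python
import math

def max_observable_r(true_r, reliability_x, reliability_y):
    # Spearman's attenuation formula: r_observed = r_true * sqrt(rel_x * rel_y)
    return true_r * math.sqrt(reliability_x * reliability_y)

# a true between-subjects effect of r = .30, measured with task reliabilities
# of .40 and .70 (hypothetical values), shrinks to roughly r = .16 in the data
print(round(max_observable_r(0.30, 0.40, 0.70), 2))  # ~0.16
```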


Sophia Crüwell

I think we need to stop pushing the boundaries of our collective and individual ideals. We also need to stop thinking and arguing that some deviation from those ideals is acceptable or even good in certain circumstances, such as getting a career advantage – whether for ourselves or for others. Treating our ideals as optional in this way is the beginning of losing sight of why we (are being paid to) do research in the first place: to get closer to the truth and/or to improve people's lives. This goes for any scientist, really, but metascientists are in a particularly privileged position here: at least we know that potential future academic employers will be more likely to value e.g. openness and rigour over simple publication count. I believe that we have a responsibility to use this privilege and change the conversation with every decision and in all criticisms we can reasonably make.

However, we also really need to look into incentive structures and the lived experiences of actual scientists to figure out the specific underlying flaws in the system that we have to address in order to make any of the fabulous symptomatic solutions (e.g. preregistration, data sharing, publishing open access) worth each scientist's while. Sticking to your scientific and ethical ideals is incredibly difficult if it means having to fear for your livelihood. But just waving at "the incentives" and carrying on cannot be the solution either – we need to investigate the problems we have to deal with, and we need to try to deal with them.

Therefore, my appeals (to senior people in particular) are: please stick to your ideals to make it easier for everyone else to stick to them too. And if you are in a position to materially make it easier for people to stick to our ideals, please dare to make those decisions and have those conflicts.

 

status quoi? - part ii


i wanted to write a series of blog posts featuring a few of the people i've met who are challenging the conventional wisdom and inspiring others.  but instead of telling you what i think of them, i wanted to give them a chance to share their insights in their own words.  i contacted a few early career researchers i've had the chance to get to know who have impressed me, and who are not affiliated with me or my lab (though i am collaborating on group projects with some of them).  there are many more role models than those featured here, and i encourage you to join me in amplifying them and their messages, however you can.

i asked each of these people "what are the blind spots in your field - what issues should we be tackling that we aren't paying enough attention to?"  here are their answers, in three parts, which i will post in three separate blog posts this week.

find part i of the series here (with a longer introduction)

find part iii of the series here


Part II: Emma Henderson, Anne Israel, Ruben Arslan, and Hannah Moshontz

Emma Henderson

I feel safer than most to embrace open research because I’m not set on staying in academia. However, my lack of trepidation is not shared by most ECRs: There’s a constant background radiation of work-based anxiety amongst those researchers who would, in an ideal world, be uncompromisingly bold in their choices. But they’re hampered by a “publish or perish” culture and a lack of sustainability and security in their jobs (if they have jobs in the first place).

The decision to embrace open research shouldn’t leave us vulnerable and looking for career opportunities elsewhere - taking our skills and enthusiasm with us. Academic freedom is the scaffolding for best research practice: unrestrained exploration, and a loyalty to data, regardless of whether it yields “desired” or “exciting” outcomes.  

As a community we have the tools, the talent, and the tenacity to change things for the better, but to sustain these changes we need fundamental reform in both employment and research evaluation. Here we need established, tenured academics to educate publishers, employers, funders, and policy makers, pointing them towards research that prizes integrity over performance. People higher up need to aim higher.


Anne Israel

I am mainly struggling with the experience that collaborative research projects across different psychological (sub-)disciplines are still rare and difficult to implement, even though there often is a substantial overlap between key research questions. When I entered my PhD program, I thought the essence of research was to try to ask and answer smart questions in multidisciplinary teams in order to understand complex phenomena from different perspectives and to make our research understandable to the public as well as to different neighboring fields. Instead, I often feel that researchers nowadays spend a large amount of time fighting over limited resources by trying to prove that their way of doing research is the right one, that their questions are the better ones, and that their ideas are superior to those of other colleagues.

Don’t get me wrong - I am aware that collaborating with others can be quite a challenge: we learn different research practices, we speak different research dialects, and in order to work together productively we need to invest a lot of chronically scarce resources (such as money, time, and patience). However, in my opinion, not investing these resources cannot be an option, because one good integrative research project can be worth a lot more than ten isolated ones. Moreover, complex research questions require complex methods and as much competence as we can get to answer them adequately. Thus, it is about time that we overcome the constraints currently discouraging interdisciplinary work, such as outdated incentive structures that value first-authorships over teamwork, or unequal pay across different subdisciplines, to name just a few examples. We shouldn’t forget that the major goal of research is gaining a deeper understanding of important phenomena and providing the public with relevant new insights. In the end, we are the people who build the system. I hypothesize it’s worth it - let’s collect data.


Ruben Arslan

Have you ever decided not to read a friend's paper too closely (or even not at all)? 

I have. We need more public criticism of each other's work. I won't pretend I love getting reviews as an author, but I like reading others' reviews when I'm a reviewer. I often learn a great deal. Many colleagues know how to identify critical flaws in papers really well, but all that work is hidden away. The lack of criticism makes it too easy to get away with bad science. No matter how useful the tools we make, how convincing the arguments we spin and how welcoming the communities we build, good, transparent, reproducible, open science requires more time to publish fewer papers. We cannot only work on the benefits side. There need to be bigger downsides to p hacking, to ignoring genetic confounding, to salami slicing your data, to overselling, and to continuing to publish candidate gene studies in 2019 to name a few. 

Maybe these problems are called out in peer review more often now, but what do you do about people who immediately submit elsewhere without revisions? Two to three reviewers pulled from a decreasingly select pool. Roll the dice a few times and you will get through.

So, how do we change this? The main problem I see with unilaterally moving to a post-publication peer review system (flipping yourself) is that it will feel cruel and unusual to those who happen to be singled out in the beginning. I certainly felt it was a bit unfair to the two teams whose work I happened to read in a week when I was procrastinating on something else. I also had mixed feelings because their open data let me write a post-publication peer review with a critical re-analysis. I do not want to deter data sharing, but then again open data loses all credibility as a signal of good intentions if nobody looks.

I thought unilaterally declaring that we want the criticism might be safer and would match critics with those receptive to criticism, so I put up an anonymous submission form and set bug bounties. I got only two submissions in the form so far and no takers on the bug bounties. 

So, I think we really need to just get going. Please don't go easy on early-career researchers either. I'm going on the job market with at least two critical commentaries on my work published, three corrections, and one erratum. Despite earning a wombat for the most pessimistic prediction at this year's SIPS study prediction workshop, I don't feel entirely gloomy about my prospects.

I'd feel even less gloomy if receiving criticism and self-correction became more normal. Simine plans to focus on preprints that have not yet received a lot of attention, but I think there is a strong case for focusing on "big" publications too. If publication in a glamorous* journal reliably led to more scrutiny, a lot more people would change their behaviour. 

* I'd love it if hiring criteria put less weight on glam and more on quality, but realistically there will not be a metrics reform any time soon and we cannot build judgments of quality into our metrics if reviews are locked away.

Hannah Moshontz

I think that being new to the field doesn't necessarily give me any insight into truly new issues that others haven't identified or started to address, but it does give me some insight into the abstract importance of issues independent of their history or causes. I also think that being somewhat naive to the history and causes of issues in the field helps me see solutions with more clarity (or naivete, depending on your perspective!). There are two issues that I think people don't pay enough attention to or that they see as too difficult to tackle, and that I see as both critical and solvable.

The first issue is that most research is not accessible to the public. We spend money, researcher time, and participant time conducting research only to have the products of that research be impossible or hard to access for other scholars, students, treatment practitioners, journalists, and the general public. In addition to the more radical steps that people can take to fundamentally change the publication system, there are simple but effective ways that people can support public access to research. For example, individual researchers can post versions of their manuscripts online. Almost all publication agreements (even those made with for-profit publishers) allow self-archiving in some form, and there are a lot of wonderful resources for researchers interested in responsibly self-archiving (e.g., SHERPA/RoMEO). If each researcher self-archived every paper they'd published by uploading it to a free repository (a process that takes just a few minutes per paper), almost all research would be accessible to the public.

The second issue is that there are known (or know-able) errors in published research. I think that having correct numbers in published scientific papers is a basic and important standard. To meet this standard, we need systems that make retroactive error correction easy and common, and that don't harm people who make or discover mistakes. There have been many exciting reforms that help prevent errors from being published, but there are still errors in the existing scientific record. Like with public access to research, there are small ways to address this big, complicated issue. For example, rather than taking an approach where errors are individually reported and individually corrected, people who edit or work for journals could adopt a systematic approach where they use automated error detection tools like statcheck to search a journal's entire archive and flag possible errors. There are many more ways that people can tackle this issue, whether they are involved in publishing or not (e.g., requesting corrections for minor errors in their own published work, financially supporting groups that work actively on this issue, like Retraction Watch).
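
To make the automated-checking idea concrete, here is a rough sketch of that kind of consistency check (statcheck itself is an R package; this Python snippet is only an illustrative approximation of the approach): parse APA-style t-test reports and recompute the p value from the reported statistic and degrees of freedom.

```python
import re
from scipy import stats

# matches reports like "t(28) = 2.20, p = .036"
PATTERN = re.compile(r"t\((\d+)\)\s*=\s*(-?\d+\.?\d*),\s*p\s*=\s*(\.\d+)")

def flag_inconsistent_t_tests(text, tolerance=0.005):
    """Return reported t-test results whose p value doesn't match the statistic."""
    flags = []
    for df, t, p_reported in PATTERN.findall(text):
        p_recomputed = 2 * stats.t.sf(abs(float(t)), int(df))  # two-sided p
        if abs(p_recomputed - float(p_reported)) > tolerance:
            flags.append((df, t, p_reported, round(p_recomputed, 4)))
    return flags

# hypothetical archive text: the first result is internally consistent, the second is not
sample = "Group A outperformed group B, t(28) = 2.20, p = .036; we also found t(40) = 1.10, p = .02."
print(flag_inconsistent_t_tests(sample))  # flags the second result (recomputed p is about .278)
```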