The pernicious problem of the perfect paper
This was originally written on 7 July 2020.
Insisting that studies should not have errors or weaknesses handicaps scientific progress in organizational research. Let me explain.
Within this field, it seems bad form to point out the errors or limitations in prior studies. This is especially so when it comes to problems with the operationalization of the theory or weaknesses in the study design. You find rare instances of papers pointing out a previous paper's mistakes, and rarer still papers citing a previous paper's empirical limitations. Yet this second kind of truth-telling is more important, because empirical limitations are more common than out-and-out mistakes.
In practical terms, you rarely read a paper by Carlson that says, "I am trying to do what Alicedottir & Bobson did but with a better research design." Saying this implies that Alicedottir & Bobson's paper wasn't perfect. If they had discussed their study's limitations in their paper, then it would be uncontroversial for Carlson to say that he was addressing those limitations. Indeed this would look like a contribution--Carlson figured out how to do something that had bedeviled earlier researchers! But because Alicedottir & Bobson didn't discuss their study's limitations, Carlson cannot just say that his paper addresses them. He has to first call attention to the limitation, along the way implying that Alicedottir & Bobson were either too thick to notice it or too mercenary to acknowledge it.
For Carlson, what should be a banal advancement of the science requires calling someone else out on their scientific practice. That's a very different rhetorical move. It feels all the riskier the higher-status Alicedottir and/or Bobson are, and the more accepted their study is. Normal-science progress gets tangled up with critiques of scientists. Junior academics are too cautious about calling out their seniors, but you can understand their reticence.
If said call-outs are so rare, what does "progress" look like? I think this problem helps drive the "no one has looked at x" motivations of so many papers. This is a cardinal rather than an ordinal critique of the literature. It lets the author make a contribution without having to say anything bad about what came before. Carlson can write, "There's all this previous work out there, such as Alicedottir & Bobson, and it's of course amazing, but they didn't look at this other thing. Not that there's anything wrong with their not having looked at it. There's an infinite number of things to look at, after all."
I can imagine a counter-argument, namely that people actually frame their papers this way because of the field's unhealthy stress on theoretical innovation rather than on theoretical testing or strengthening. Perhaps--but whence that stress? There's a chicken-and-egg debate there that doesn't much interest me. Even if the habitual emphasis on innovation explains the general tendency, I don't think it explains why people hesitate in individual cases to criticize how prior theories were tested.
The frontier of knowledge
This approach ramifies through and taints other sections of the research paper. To see how, I need to reference a hoary metaphor: the frontier of knowledge.
I think of scientific progress like a line on a map, moving across and revealing once-unknown territories. (It'd be better to call it a high-dimensional manifold in a yet-higher-dimensional space, but come on.) Studies are like individual expeditions, advancing beyond that line (and thus redrawing it) along part of its length. In aggregate, that line advances. Advance comes in fits and starts, and weird salients can develop. Think of the state of physics versus the state of medicine in 1800! Any given advance fills in part of the map, but vast unexplored territories loom beyond the frontier. Contrast the first gringo to see the Grand Canyon to the first gringo to navigate its length. (And take my use of "gringo" as shorthand for all of the issues pertaining to the social structure of knowledge I'm eliding here.) It's a big deal to report back to people, "My dudes, there's a GIANT FREAKING CANYON out there!" That advances the frontier. And it poses the obvious follow-up: "Someone needs to explore that canyon!"
I like this metaphor because it underlines how all studies have limitations, and why it's fine to acknowledge them. "We have zero idea where this canyon ends. We weren't expecting it, and we didn't have enough supplies to explore it further." "We were actually studying migrating birds, and stumbled on this canyon while following them. We're not competent to do geology." And so on. The key point here is, do we publish this explorer's report? The answer to that question does not hinge on these limitations; it hinges on the importance of the contribution relative to these limitations. If we think the discovery of that canyon is a big deal, we're happy to publish it, even if the explorer can't tell us how the canyon was formed or where it leads. A major reason to publish the discovery is to call attention to this spot where they've advanced the frontier and rally people to explore the land beyond. But we rally people by calling attention to the unexplored frontier. "Who can say where this canyon leads, or what lies within?"
This is what a research paper is supposed to do:
- Define the frontier of knowledge. What have people discovered thus far? Notice that you're defining the frontier roughly where you're advancing it, not the entire frontier.
- Describe your advance. Where did you go, and what did you find?
- Describe the territory ahead. Where couldn't you go, and why? Where do you think it would be best to explore next?
This metaphor reinforces the idea that science is, or is supposed to be, a collective, cumulative endeavor. We celebrate the advances and we implicitly recognize that each advance is partial. That partiality is not a mistake; it was the best people could do, and doing more is what keeps us going.
When papers have to be perfect, this all breaks down. The literature review of a "no one has looked at _x_" paper flails when trying to define the frontier. The author wants simultaneously to anchor their idea in an existing literature and to not bad-mouth that literature. Thus they cannot point to a few studies (i.e., a point on the map) and say, "Here. I am setting off from here." Instead they have to sift through a ton of related ideas and explain why their idea is slightly different. Often--and this is where the high dimensionality of the actual frontier becomes important--they have to explain why the work of an entirely separate population of explorers also didn't discover their idea. You would think this exercise would carve out a small niche, but the poor author then has to explain why this niche, hemmed in by so many similar and more famous discoveries, is nevertheless a big deal. It's a thankless task, and it helps to explain why so many literature reviews feel like the author fronting that they have Read the Right Things. Graduate students can be forgiven for scratching their heads about the point of literature reviews.
This is a shame, because literature reviews aren't pointless. (If you'd told me, seventeen years ago, that I'd write that sentence...) A good literature review tells the reader where you're setting out from. A good literature review explains where the frontier is, and why people think it's important to advance it. And a good literature review is short. I can't be the only person who found literature reviews the hardest part of any paper to write, because it felt like I was trying to distill all of my graduate education into six or seven pages. But that was never the right tack. I just needed to tell people what I was riffing off of, and why.
Thus literature reviews; but I actually think the problem of the perfect paper is most evident in the discussion section. Here is where you may balk: why am I complaining that papers do not acknowledge their limitations? We're all but required to discuss limitations! But the dirty, open secret is that most papers' limitations sections are total bullshit.
I use "bullshit" here in the sense that the philosopher Harry G. Frankfurt defined: an utterance that is unconnected to a concern with the truth. As an example, Frankfurt recounts Wittgenstein talking to a friend, Fania Pascal, who has had her tonsils removed. Pascal says she feels like a dog that's been run over, to Wittgenstein says, "You don't know what a dog that has been run over feels like." Assume Wittgenstein isn't just a coot; assume he thinks Pascal is bullshitting him.
> [Why] does it strike him that way? It does so, I believe, because he perceives what Pascal says as being...unconnected to a concern with the truth. Her statement is not germane to the enterprise of describing reality. She does not even think she knows, except in the vaguest way, how a run-over dog feels. Her description of her own feeling is, accordingly, something that she is merely making up....
>
> It is just this lack of connection to a concern with truth--this indifference to how things really are--that I regard as the essence of bullshit.
I posit that the vast majority of limitations that authors mention in published studies are this type of bullshit. They do not care whether and how these stated limitations affect their results; they care whether the reader (more specifically, the reviewer) is persuaded that they have devoted sufficient attention to limitations, as the norms of the field obligate them to.
How many times have you read a paper where the authors list as a limitation that they only looked at one organization, or one country? This is a bit like saying you only did one research project. Literally any study could be said to have this limitation, so listing it smacks of ceremonial compliance. Or how about the limitation that they haven't demonstrated the "mechanisms" that cause the effect? If the contribution is sufficient then that isn't a limitation; that's grounds for future research. Citing that as a limitation is the research equivalent of saying you're too nice or you work too hard in a job interview when you're asked about your weaknesses.
If limitations are often bullshit, the "future research" section is usually bullshit. To see what I mean, select ten articles, at least ten years old, by reasonably productive authors. Note the future research projects they propose in those articles. Search their CVs for those projects. I wager you'll largely search in vain. All future research needn't be done by the researcher who proposes it, of course. It is still striking how rarely researchers seem to care whether research ideas they propose are borne out. It is as though they are indifferent to whether the ideas they propose are really true. It is as though they are bullshitting us.
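If you want to make that exercise concrete, here is a minimal sketch in Python. Everything in it is a placeholder: the proposed projects and CV entries are invented, and crude string similarity (via the standard library's difflib) stands in for actually reading the documents. The point is only that the check is mechanical once you've gathered the texts.

```python
# A crude sketch of the CV-checking exercise. The proposed projects and CV
# entries below are invented placeholders; string similarity is a rough
# stand-in for actually reading the documents.
from difflib import SequenceMatcher

# "Future research" proposals harvested from decade-old articles (hypothetical).
proposed_projects = [
    "extend the analysis to multinational firms",
    "test the mechanism with longitudinal data on founding teams",
]

# Paper titles from the same author's current CV (hypothetical).
cv_entries = [
    "Status signals and the market for corporate control",
    "Why organizational identities persist",
]

def best_match(project: str, entries: list[str]) -> tuple[str, float]:
    """Return the CV entry most similar to the proposed project, with a 0-1 score."""
    scored = [(entry, SequenceMatcher(None, project.lower(), entry.lower()).ratio())
              for entry in entries]
    return max(scored, key=lambda pair: pair[1])

for project in proposed_projects:
    entry, score = best_match(project, cv_entries)
    print(f"{project!r}\n  closest CV entry: {entry!r} (similarity {score:.2f})")
```

With real inputs, low similarity scores across the board are exactly the pattern I'm wagering you'll find.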
I don't think this is malicious bullshit. I've spewed bullshit myself, mostly in early papers. I didn't know what else to do! I was supposed to write the paper as though my study had no flaws or limitations, so obviously I couldn't suggest the most obvious types of future research: redo the study with better data, formally justify a way to measure something instead of relying on an ad hoc technique, and so on. Instead I was encouraged to think about applications of the idea or "approach" I'd developed in the paper. By definition, these could no more advance the frontiers of knowledge than the paper itself had done.
It's actually quite striking that theoretical approaches in organizational research aren't overturned so much as they're exhausted. Neoinstitutionalism, organizational ecology, much ado about status...they seem eventually to collapse under the weight of their encrusted scope conditions, boundary cases, elaborations, extensions, and theoretical curlicues. Yet along the way, the theories' proponents rarely addressed limitations that the theories had at the start. Once, after reading a mountain of organizational ecology, a graduate student asked me, "How would you measure structural inertia? Like, is there a way to rank organizations by how inertial they are?" It seems like a blindingly obvious thing to work on, but nearly no one has.
(I'm picking on org ecology here because I think its adherents do a better job of testing, critiquing, and building on its theories than most others. In grad school I heard a neoinstitutionalist sassing org ecology by noting how "all their studies ask the same question and get the same result." Sixteen years later I think, isn't that what we call replication?)
The key point I want to make here is that if you don't describe your advancing of the frontier of knowledge--the second of the three things a research paper should do--then the first and third things become, if not impossible, then ceremonial. The first time I understood one of my papers as trying to correct and build on a specific set of prior studies, I found that the lit review and discussion sections just...fell into place.
Where do we go from here?
This rant would be bitter and pointless without some recommendations. As it happens, I have several. Rather, I have one, but it applies to two different groups.
First, as researchers, we need to be honest about the limitations of our studies. We need to state what we can know and what we can't. When we point these things out, people can see how far they should trust our findings and where they could work on them. The ground truth is that all studies have limitations, just as all exploration is finite. Explaining where we did not go is useful in its own right, because it focuses our successors' attention on where they should explore next. Anyone who thinks that building on prior work devalues the original needs to go back to science school. When Newton wrote that "If I have seen further, it is by standing on the shoulders of giants," no one said, "That Galileo--what an idiot!"
Second, as reviewers, we need to evaluate manuscripts on the magnitude of their contributions, and assess the relative value of their contributions and limitations. This just amounts to not punishing authors for their honesty. I find myself writing a lot of reviews of this type lately. I often point out an empirical weakness in the paper, but not one that I think should block publication. Rather, I think the authors should be explicit about the limitation, both to qualify how we treat the contribution and to suggest how we could improve on it in the future.
Perhaps you think this would open the floodgates to mediocre or flawed research. I disagree. Right now, it seems that a lot of articles are evaluated on whether they contain flaws. If the reviewers do not see too many (and remember that authors have incentives to couch their studies so as to minimize the apparent limitations!), then acceptances often follow. In such a regime, my suggestion would indeed open the floodgates. But I am also calling for us to identify the contribution, and to weigh contribution and flaws against each other. I have read many papers that make no real contribution, which I would thus have rejected despite their lack of glaring flaws. Too many of the "no one has looked at x" papers fit this bill.
I don't want to see more mediocre papers; I want to see fewer perfect papers. Because real science isn't perfect.