What an unusual Monday I had this week… It began with a normal trek to Starbucks to get some work done, only to find that my paper for the Journal of Experimental Social Psychology’s (JESP) special issue on replicable methods had finally appeared in press online (somewhat expected). Dr. Roy Baumeister had apparently enjoyed my paper so much that he wrote a paper of his own, expanding on my proposal (very much not expected). My weird Monday ended with a surprise visit from a stray dog, and having it keep me awake all night until I could drop it off at animal control (it was not tagged or microchipped) the next day, but that is another story. Anyways, back to Dr. Baumeister…
I am a Nobody–with a capital “N”–in the community of Social Psychologists. My H-Index is 6, and I don’t yet have even 100 citations to my work (but I am so close now, y’all! :P). Baumeister, by comparison, is a HUGE Somebody; he has an H-Index of 141, and over 100,000 citations to his work–the first page of his Google Scholar profile has articles all cited over 1000 times!!! Suffice it to say, when Somebody writes the following of a Nobody’s paper, Nobody takes notice:
I will particularly elaborate Sakaluk’s (in this issue) proposal that the optimal model is to explore small, confirm big. (p. 1)
But after my feeling of surprise passed, I read his paper, and I realized that Dr. Baumeister had misunderstood–and therefore misconstrued–some of the more important points of my proposal. I’m therefore using this blog post to set the record straight about my vision for the Exploring Small, Confirming Big approach, and which parts of Dr. Baumeister’s construal of Exploring Small, Confirming Big I disagree with.
First Things First: Civil Disclosures and Disclaimers
I recently made a big hullabaloo about the need for civility in the replicability discourse, so I think it is only fitting that I begin my commentary here with some civil disclosures and disclaimers:
- I have a deep respect for Dr. Baumeister as a scholar. Is his ego-depletion research looking a bit shaky these days? Sure. Even so, Dr. Baumeister has spent his career as, in my opinion, one of the–if not the–most talented writers in psychological science, and his scholarly contributions have influenced numerous domains of study. Take away ego-depletion, and you are still left with a scholar who co-wrote and first-authored The Need To Belong review (cited > 10,000 times!!!), the Bad Is Stronger Than Good review (cited > 3,000 times), single-handedly wrote the book Meanings Of Life (cited > 1,700 times), and coauthored the chapter introducing Sociometer Theory (cited > 1,400 times). Dr. Baumeister’s considerable influence, in other words, reaches far beyond the single domain of self-regulation, and as such, I think people should not be so hasty to drag his name through the mud.
- I am flattered that Dr. Baumeister thought well-enough of my Exploring Small, Confirming Big approach, to elaborate on it in a paper of his own.
- After reading his paper, and despite points 1. and 2., I suspect that Dr. Baumeister and I have very large differences in opinion about how replicable research ought to proceed.
- Dr. Baumeister’s misconstrual of my position re: Exploring Small, Confirming Big is probably attributable to a number of factors. For example, I think it is likely both that there were portions of the paper I could have written better, and that there were portions of the paper Dr. Baumeister could have read more carefully (or reached out to contact me for clarification about). What I’m trying to say here is that I accept a portion of the responsibility for Dr. Baumeister’s misunderstanding. No hard feelings; quite the opposite, really–I probably owe him a “thank you”, because I suspect my paper will be more widely read now…
Anyways, with that aside, let’s dive into it.
What Is Exploring Small, Confirming Big?
The Exploring Small, Confirming Big approach is an excruciatingly simple idea. Here it is:
Step 1: Collect some data using as small of a sample as you want, and analyze it however the heck you want. Play around with “outlier” criteria; run your analyses while trying a hundred different covariates; omit any conditions you suspect may not have “worked”. Simply stated: p-hack to your heart’s content–I don’t really care. Just keep track of what “worked”, report it all, and be prepared to replicate exactly what you settled on, in a pre-registered larger study in…
Step 2: After pre-registering the exploratory model from Step 1, collect/obtain a REALLY large n dataset. Either collect it yourself or collaborate with folks (in which case, only collect the variables necessary for your pre-registered model), or analyze some pre-existing large n source of data that you don’t control, like a meta-analytic dataset, social media data, or a national survey–just choose an approach that will allow you to test your exact pre-registered model. Then run the exact pre-registered model that you developed in Step 1.
Step 3: Write up the results of both samples from Step 1 and Step 2 in the same paper. Submit for publication. Bob’s your uncle.
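If you want to see why Step 2 has to gate Step 3, here is a minimal simulation sketch (purely illustrative–the sample sizes, number of candidate predictors, and the normal approximation to the p-value are all my arbitrary choices, not anything from the paper). It builds a “null world” where every candidate predictor is pure noise, p-hacks a small exploratory sample by keeping whichever predictor “worked” best, and then subjects that single pre-registered predictor to a large confirmatory test:

```python
import math
import numpy as np

def corr_pvalue(x, y):
    """Two-sided p-value for a Pearson correlation, using a normal
    approximation to the t distribution (rough, but fine to illustrate)."""
    n = len(x)
    r = float(np.corrcoef(x, y)[0, 1])
    t = r * math.sqrt((n - 2) / (1 - r**2))
    return math.erfc(abs(t) / math.sqrt(2))

def explore_small(rng, n=20, n_predictors=50):
    """Step 1: in a small sample where every predictor is pure noise,
    test many candidate predictors and keep whichever one 'worked' best."""
    y = rng.standard_normal(n)
    pvals = [corr_pvalue(rng.standard_normal(n), y)
             for _ in range(n_predictors)]
    return min(pvals)  # the p-hacked exploratory 'finding'

def confirm_big(rng, n=2000, alpha=0.05):
    """Step 2: pre-register that single chosen predictor and test only it
    in a much larger sample. In this null world it is still just noise."""
    p = corr_pvalue(rng.standard_normal(n), rng.standard_normal(n))
    return p < alpha

rng = np.random.default_rng(2024)
n_sims, alpha = 500, 0.05
discovered = replicated = 0
for _ in range(n_sims):
    if explore_small(rng) < alpha:   # exploratory 'effect' found
        discovered += 1
        replicated += confirm_big(rng)

print(f"exploratory 'findings': {discovered}/{n_sims}")
print(f"of those, confirmed big: {replicated}/{max(discovered, 1)}")
```

With 50 candidate predictors and an alpha of .05, the Exploring Small stage “finds” an effect in the vast majority of these simulated null worlds, but only around 5% of those findings survive the pre-registered Confirming Big test–which is exactly the gamble I describe below, and exactly why the confirmatory step is non-negotiable.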
What About Dr. Baumeister’s Construal of Exploring Small, Confirming Big Do I Object To?
Dr. Baumeister misconstrues two important but distinct aspects of my Exploring Small, Confirming Big proposal:
(1) Why Exploring Small is Important in The First Place
Dr. Baumeister writes:
A series of small studies can build and refine a hypothesis much more thoroughly than a single large study. That is Sakaluk’s point, and I agree. The field will be better off to continue using that method to let ideas emerge and improve over the course of the data collection process. (p. 2)
This is not my point–I don’t discuss hypothesis refinement in the paper at all. If anything, I think large studies would be much more useful for hypothesis refinement, as, all else being equal, you are much more likely to detect nuanced effects (e.g., indirect effects, moderated effects, conditional indirect effects, etc.) with larger samples than with smaller ones. Instead, what I wrote of exploring with small studies was this:
Exploring Small, Confirming Big will therefore leave room for serendipity and exploration in the research process, through the flexibility allowed by the Exploring Small stage in which psychologists can explore their data, and reveal unexpected effects.
In other words, sometimes you don’t exactly know what you’re expecting from your data. Perhaps you simply think a given phenomenon is interesting and don’t yet know where you ought to start. In such a circumstance, it might be foolish to throw a ton of resources (money, time, person-power) to accumulate a large sample, or to put yourself in a corner by pre-registering analyses when you aren’t yet familiar with the type of data you’ll be working with, or what effect(s) you will even test. Or perhaps you hail from a discipline where research is driven by exploratory questions, rather than explicit hypotheses (as is often the case in my interdisciplinary home of sexuality research). In cases such as these, it’s nice to have some flexibility with what data you look at, and how, so that you can narrow in on an effect that you think is interesting, important, and/or promising.
Implicit in my encouragement of other researchers to explore their data, and then pre-register a larger confirmatory study, is a respect for others’ scholarly autonomy in balancing the risks of non-replicability against the rewards of increased efficiency/cost-effectiveness. Exploratory research is a gamble; you might think you’ve found an effect in a conveniently small sample, with a particular analytic approach, only to have it evaporate when you attempt to replicate it. In order to maximize the probability that an exploratory effect will replicate, researchers will therefore need to police themselves from crossing the line between using researcher degrees of freedom and abusing them. There is a lot of “grey” area between those extremes, and I think, with my approach, it has to be a personal choice how much someone cares to gamble with the possibility of tricking themselves into pursuing a non-replicable effect. Because in the end, it’s that person’s obligation to attempt to replicate their effect with a more restrictive confirmatory approach.
And that brings me to…
(2) Why It Is Important for the Same People to Explore Small and Confirm Big, and Report Both in the Same Paper
Dr. Baumeister concludes his paper with the following recommendation:
My main recommendation is that social psychology as a field should embrace Sakaluk’s (in this issue) suggestion to explore small and confirm big — and should separate those into different parts of the grand research process, probably to be done by separate research groups and laboratories, and published separately. (Emphasis added, p. 6).
Nope. Nopenopenope. NOPE! To follow this suggestion would be, in my opinion, a disastrous mistake. Having one group of scholars conduct exploratory research (“The Explorers”), and another group conduct confirmatory research (“The Confirmers”), and each publishing separately would have a number of unfortunate consequences.
For starters, I think most are acquainted with the saying:
“The amount of energy necessary to refute bullshit is an order of magnitude bigger than to produce it.”–Alberto Brandolini
I think that deliberately and explicitly separating folks into Explorers and Confirmers will create a factory of psychological bullshit-generation, and the Confirmers will be left with the more considerable task of sifting through it. And let’s be clear: in today’s psychology, the role of Confirmer appears to be–at least to me–an utterly thankless job. In order to do their job well, a Confirmer would need to invest many more resources than a given Explorer did, to accrue an adequate sample to provide a strong confirmatory test of the Explorer’s exploratory effect. If the Confirmer confirms the Explorer’s exploratory finding, all the glory will go to the Explorer for having uncovered such a clever phenomenon–what a creative and brilliant mind! But if the Confirmer fails to confirm the Explorer’s exploratory effect? They will get called names like “second-stringer”, “bad experimenter”, and “boring-Confirmer-face” (I made the last one up myself). Their motives for replicating the effect in the first place will be put under the microscope, and their confirmation study will suffer criticisms that the Explorer never had to face.
So in Dr. Baumeister’s vision, the Explorers get to have all the fun chasing their own ideas, using resource-friendly imprecise designs and methods, and reap all the glory when their effects replicate. And what happens if/when their effects don’t replicate? How do you distinguish between a thoughtful explorer and someone who just throws a hundred knives at a wall to see what sticks? The Confirmers, alternatively, have to take time away from their own ideas in order to invest a ton of their own resources replicating someone else’s effects. If they are successful, they might be lucky to get their replication published in a second-tier journal; if they are not, they win the prize of getting yelled at by all the prestige-winning Explorers who thought the Confirmers were lesser-than in the first place.
Sorry (not sorry), that arrangement just sounds plain exploitative, not to mention really bad for science.
Why do I think my proposed arrangement will work better? I will offer what I think is a relatively safe characterization of the exploring and confirming processes: exploring is fun, but mistake-prone, whereas confirming is responsible, but a logistical pain. My vision for Exploring Small, Confirming Big is that by requiring both processes to be carried out by the same people and reported in the same article, authors can only reap the professional rewards (publications, books, jobs) of having fun (exploring) if–and only if–they simultaneously accept the responsibility to do the less-exciting work of attempting to replicate their own exploratory findings. Everyone shares in the fun; everyone shares in the responsibility. And everyone shares in the risk. If you can’t replicate your own exploratory finding, someone else probably won’t be able to either (and you might be tempted to blame them, for any number of specious reasons, instead of your unreliable effect). Therefore your non-replicating effect probably shouldn’t make it into the literature, and somebody else shouldn’t waste their time and money attempting to replicate it.
Okay. I’ve droned on long enough. I may or may not post again later with some more thoughts on Dr. Baumeister’s paper, but this is probably more than sufficient to make clear what I intended with Exploring Small, Confirming Big (and why).