9. A Disagreement with Dr. Baumeister on his Construal of Exploring Small, Confirming Big

What an unusual Monday I had this week… It began with a normal trek to Starbucks to get some work done, only to find that my paper for the Journal of Experimental Social Psychology’s (JESP) special issue on replicable methods had finally appeared in press online (somewhat expected). Dr. Roy Baumeister had apparently enjoyed my paper so much that he wrote a paper of his own, expanding on my proposal (very much not expected). My weird Monday ended with a surprise visit from a stray dog, which kept me awake all night until I could drop it off at animal control the next day (it was not tagged or microchipped), but that is another story. Anyways, back to Dr. Baumeister…

I am a Nobody–with a capital “N”–in the community of Social Psychologists. My H-Index is 6, and I don’t yet have even 100 citations to my work (but I am so close now, y’all! :P). Baumeister, by comparison, is a HUGE Somebody; he has an H-Index of 141, and over 100,000 citations to his work–the first page of his Google Scholar profile consists entirely of articles cited over 1000 times!!! Suffice it to say, when Somebody writes the following of a Nobody’s paper, Nobody takes notice:

I will particularly elaborate Sakaluk’s (in this issue) proposal that the optimal model is to explore small, confirm big. (p. 1)

But after my feeling of surprise passed, I read his paper, and I realized that Dr. Baumeister had misunderstood–and therefore misconstrued–some of the more important points of my proposal. I’m therefore using this blog post to set the record straight about my vision for the Exploring Small, Confirming Big approach, and about which parts of Dr. Baumeister’s construal of it I disagree with.

First Things First: Civil Disclosures and Disclaimers

I recently made a big hullabaloo about the need for civility in the replicability discourse, so I think it is only fitting that I begin my commentary here with some civil disclosures and disclaimers:

  1. I have a deep respect for Dr. Baumeister as a scholar. Is his ego-depletion research looking a bit shaky these days? Sure. Even so, Dr. Baumeister has spent his career as, in my opinion, one of the–if not the–most talented writers in psychological science, and his scholarly contributions have influenced numerous domains of study. Take away ego-depletion, and you are still left with a scholar who co-wrote and first-authored The Need To Belong review (cited > 10,000 times!!!), the Bad Is Stronger Than Good review (cited > 3,000 times), single-handedly wrote the book Meanings Of Life (cited > 1,700 times), and coauthored the chapter introducing Sociometer Theory (cited > 1,400 times). Dr. Baumeister’s considerable influence, in other words, reaches far beyond the single domain of self-regulation, and as such, I think people should not be so hasty to drag his name through the mud.
  2. I am flattered that Dr. Baumeister thought well enough of my Exploring Small, Confirming Big approach to elaborate on it in a paper of his own.
  3. After reading his paper, and despite points 1. and 2., I suspect that Dr. Baumeister and I have very large differences in opinion about how replicable research ought to proceed.
  4. Dr. Baumeister’s misconstrual of my position re: Exploring Small, Confirming Big is probably attributable to a number of factors. For example, I think it is likely both that there were portions of the paper I could have written better, and that there were portions of the paper Dr. Baumeister could have read more carefully (or reached out to contact me about for clarification). What I’m trying to say here is that I accept a portion of the responsibility for Dr. Baumeister’s misunderstanding. No hard feelings; quite the opposite, really–I probably owe him a “thank you”, because I suspect my paper will be more widely read now…

Anyways, with that aside, let’s dive into it.

What Is Exploring Small, Confirming Big?

The Exploring Small, Confirming Big approach is an excruciatingly simple idea. Here it is:

Step 1: Collect some data using as small of a sample as you want, and analyze it however the heck you want. Play around with “outlier” criteria; run your analyses while trying a hundred different covariates; omit any conditions you suspect may not have “worked”. Simply stated: p-hack to your heart’s content–I don’t really care. Just keep track of what “worked”, report it all, and be prepared to replicate exactly what you settled on, in a pre-registered larger study in…

Step 2: After pre-registering the exploratory model from Step 1, collect/obtain a REALLY large-n dataset. Collect it yourself, collaborate with folks (in which case, collect only the variables necessary for your pre-registered model), or analyze some pre-existing large-n source of data that you don’t control, like a meta-analytic dataset, social media data, or a national survey–just choose an approach that will allow you to test your exact pre-registered model. Then run the exact model you developed and pre-registered in Step 1.

Step 3: Write up the results of both samples from Step 1 and Step 2 in the same paper. Submit for publication. Bob’s your uncle.
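For readers who like to see the moving parts, here is a minimal sketch of the whole workflow, using simulated data in Python. The specific numbers (an exploratory n of 30, ten candidate outcome measures, a confirmatory n of 1,000) and the two-group t-test design are my illustrative assumptions for this sketch, not prescriptions from the paper:

```python
# A minimal, simulated sketch of Exploring Small, Confirming Big.
# All numbers here are illustrative assumptions, not recommendations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2016)

# Step 1: Explore Small. One small two-group study with many candidate
# outcome measures (stand-ins for trying outlier rules, covariates, and
# dropped conditions). Note: there are NO true effects in these data.
n_small, n_outcomes = 30, 10
group = np.repeat([0, 1], n_small // 2)            # 15 control, 15 treatment
outcomes = rng.normal(size=(n_small, n_outcomes))  # pure noise

p_values = [
    stats.ttest_ind(outcomes[group == 1, j], outcomes[group == 0, j]).pvalue
    for j in range(n_outcomes)
]
winner = int(np.argmin(p_values))  # keep track of what "worked"
print(f"Exploratory 'winner': outcome {winner}, p = {p_values[winner]:.3f}")

# Step 2: Confirm Big. Pre-register exactly that comparison, then collect
# a much larger fresh sample, measuring only the pre-registered outcome.
n_big = 1000
group_big = np.repeat([0, 1], n_big // 2)
outcome_big = rng.normal(size=n_big)  # the null is still true
confirm = stats.ttest_ind(outcome_big[group_big == 1],
                          outcome_big[group_big == 0])
print(f"Pre-registered confirmatory test: p = {confirm.pvalue:.3f}")

# Step 3: write up BOTH results in the same paper, whatever Step 2 says.
```

Because there is no true effect anywhere in this simulation, the exploratory “winner” will often look tantalizing while the pre-registered confirmatory test comes back null. That asymmetry is the whole point: the big replication decides what enters the literature.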

What Do I Object To in Dr. Baumeister’s Construal of Exploring Small, Confirming Big?

Dr. Baumeister misconstrues two important, but different, aspects of my Exploring Small, Confirming Big proposal:

(1) Why Exploring Small is Important in The First Place

Dr. Baumeister writes:

A series of small studies can build and refine a hypothesis much more thoroughly than a single large study. That is Sakaluk’s point, and I agree. The field will be better off to continue using that method to let ideas emerge and improve over the course of the data collection process. (p. 2)

This is not my point–I don’t discuss hypothesis refinement in the paper at all. In fact, I think large studies would be much more useful for hypothesis refinement, because, all else being equal, you are much more likely to find more nuanced effects (e.g., indirect effects, moderated effects, conditional indirect effects) with larger samples than with smaller ones (I include a quick power simulation at the end of this section to make that point concrete). Instead, what I wrote of exploring with small studies was this:

Exploring Small, Confirming Big will therefore leave room for serendipity and exploration in the research process, through the flexibility allowed by the Exploring Small stage in which psychologists can explore their data, and reveal unexpected effects.

In other words, sometimes you don’t exactly know what you’re expecting from your data. Perhaps you simply think a given phenomenon is interesting and don’t yet know where you ought to start. In such a circumstance, it might be foolish to throw a ton of resources (money, time, person-power) at accumulating a large sample, or to box yourself into a corner by pre-registering analyses when you aren’t yet familiar with the type of data you’ll be working with, or what effect(s) you will even test. Or perhaps you hail from a discipline where research is driven by exploratory questions, rather than explicit hypotheses (as is often the case in my interdisciplinary home of sexuality research). In cases such as these, it’s nice to have some flexibility with what data you look at, and how, so that you can home in on an effect that you think is interesting, important, and/or promising.

Implicit in my encouragement of other researchers to explore their data, and then pre-register a larger confirmatory study, is a respect for others’ scholarly autonomy with respect to balancing the risks of non-replicability against the rewards of increased efficiency/cost-effectiveness. Exploratory research is a gamble; you might think you’ve found an effect in a conveniently small sample, with a particular analytic approach, only to have it evaporate when you attempt to replicate it. To maximize the probability that an exploratory effect will replicate, researchers will therefore need to police themselves so they don’t cross the line from using researcher degrees of freedom into abusing them. There is a lot of “grey” area between those extremes, and I think, under my approach, it has to be a personal choice how much someone cares to gamble with the possibility of tricking themselves into pursuing a non-replicable effect. Because in the end, it’s that person’s obligation to attempt to replicate their effect with a more restrictive confirmatory approach.
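As for my earlier claim that nuanced effects are a big-sample affair, here is the promised quick power simulation, in Python. The single moderated (interaction) effect, its standardized coefficient of 0.15, and the sample sizes below are illustrative assumptions of mine, not estimates from any real study:

```python
# Rough power simulation: how often does the x*m interaction term in the
# regression y ~ x + m + x*m reach p < .05, at different sample sizes?
# beta_int = 0.15 and the n values below are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def interaction_power(n, beta_int=0.15, n_sims=2000, alpha=0.05):
    """Share of simulated studies detecting the interaction effect."""
    hits = 0
    for _ in range(n_sims):
        x, m = rng.normal(size=n), rng.normal(size=n)
        y = 0.3 * x + 0.3 * m + beta_int * x * m + rng.normal(size=n)
        X = np.column_stack([np.ones(n), x, m, x * m])  # design matrix
        beta = np.linalg.solve(X.T @ X, X.T @ y)        # OLS estimates
        resid = y - X @ beta
        sigma2 = resid @ resid / (n - X.shape[1])       # error variance
        se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
        t_int = beta[3] / se[3]                         # interaction term
        p = 2 * stats.t.sf(abs(t_int), df=n - X.shape[1])
        hits += p < alpha
    return hits / n_sims

for n in (50, 200, 1000):
    print(f"n = {n:4d}: power to detect the interaction = {interaction_power(n):.2f}")
```

With these made-up numbers, power to detect the interaction climbs from roughly .20 at n = 50 to near-certainty at n = 1,000. That is why, if a nuanced effect is what you are after, exploring small is probably the wrong place to hunt for it.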

And that brings me to…

(2) Why It Is Important for the Same People to Explore Small and Confirm Big, and Report Both in the Same Paper

Dr. Baumeister concludes his paper with the following recommendation:

My main recommendation is that social psychology as a field should embrace Sakaluk’s (in this issue) suggestion to explore small and confirm big — and should separate those into different parts of the grand research process, probably to be done by separate research groups and laboratories, and published separately. (Emphasis added, p. 6).

Nope. Nopenopenope. NOPE! To follow this suggestion would be, in my opinion, a disastrous mistake. Having one group of scholars conduct exploratory research (“The Explorers”) and another group conduct confirmatory research (“The Confirmers”), with each publishing separately, would have a number of unfortunate consequences.

For starters, I think most are acquainted with the saying:

“The amount of energy necessary to refute bullshit is an order of magnitude bigger than to produce it.”–Alberto Brandolini

I think that deliberately and explicitly separating folks into Explorers and Confirmers will create a factory of psychological bullshit-generation, and the Confirmers will be left with the more considerable task of sifting through it. And let’s be clear: in today’s psychology, the role of Confirmer appears to be–at least to me–an utterly thankless job. In order to do their job well, a Confirmer would need to invest many more resources than a given Explorer did, in order to accrue an adequate sample to provide a strong confirmatory test of the Explorer’s exploratory effect. If the Confirmer confirms the Explorer’s exploratory finding, all the glory will go to the Explorer for having uncovered such a clever phenomenon–what a creative and brilliant mind! But if the Confirmer fails to confirm the Explorer’s exploratory effect? They will get called names like “second-stringer”, “bad experimenter”, and “boring-Confirmer-face” (I made the last one up myself). Their motives for replicating the effect in the first place will be put under the microscope, and their confirmation study will suffer criticisms that the Explorer never had to face.

So in Dr. Baumeister’s vision, the Explorers get to have all the fun chasing their own ideas, using resource-friendly but imprecise designs and methods, and reap all the glory when their effects replicate. And what happens if/when their effects don’t replicate? How do you distinguish between a thoughtful Explorer and someone who just throws a hundred knives at a wall to see what sticks? The Confirmers, alternatively, have to take time away from their own ideas in order to invest a ton of their own resources into replicating someone else’s effects. If they are successful, they might be lucky to get their replication published in a second-tier journal; if they are not, they win the prize of getting yelled at by all the prestige-winning Explorers who thought the Confirmers were lesser-than in the first place.

Sorry (not sorry), that arrangement just sounds plain exploitative, not to mention really bad for science.

Why do I think my proposed arrangement will work better? I will offer what I think is a relatively safe characterization of the exploring and confirming processes: exploring is fun, but mistake-prone, whereas confirming is responsible, but a logistical pain. My vision for Exploring Small, Confirming Big is that by requiring both processes to be carried out by the same people and reported in the same article, authors can reap the professional rewards (publications, books, jobs) of having fun (exploring) if–and only if–they simultaneously accept the responsibility of doing the less-exciting work of attempting to replicate their own exploratory findings. Everyone shares in the fun; everyone shares in the responsibility. And everyone shares in the risk. If you can’t replicate your own exploratory finding, someone else probably won’t be able to either (and you might be tempted to blame them, for any number of specious reasons, instead of your unreliable effect). Therefore your non-replicating effect probably shouldn’t make it into the literature, and somebody else shouldn’t waste their time and money attempting to replicate it.

Okay. I’ve droned on long enough. I may or may not post again later with some more thoughts on Dr. Baumeister’s paper, but this is probably more than sufficient to make clear what I intended with Exploring Small, Confirming Big (and why).


11 thoughts on “9. A Disagreement with Dr. Baumeister on his Construal of Exploring Small, Confirming Big”

  1. A question about exploring small, confirming big that I didn’t notice in the paper — what becomes of the Confirming Big effort when the original exploratory result does not hold up? Do the authors publish the exploratory result and its confirmatory failure-to-replicate? Do they mothball the whole thing? Researchers could rapidly lose interest in this approach if the large-sample results render the whole thing difficult to publish.

    I’m thinking in particular of Will Gervais’ post on Dr. Wide Net vs. Dr. Power: http://willgervais.com/blog/2016/2/10/casting-a-wide-net. Under certain conditions, it seems, a one-step Confirming Big strategy may outperform the two-step Explore Small, Confirm Big approach.


    • My preference would be to see it published, but of course, that would require journals to get on board with publishing papers with initially significant, but ultimately un-replicable effects. Yes, under certain circumstances Confirming Big right off the hop would be more efficient, but as I say in the post, I think it’s ultimately the prerogative of the individual researcher as to when and how they gamble with their own time and resources.


      • This can be solved using Registered Reports (https://osf.io/8mpji/wiki/home/). The researcher could conduct their exploratory studies (unregistered) and then submit them as pilot experiments as part of a Stage 1 RR proposing the larger confirmatory study. If it passes Stage 1 review, the authors can then conduct the larger study, and the journal commits to publishing the outcomes regardless of whether they confirm the hypotheses generated by the exploratory studies. If the question is important and the methods are sound, then getting an answer is important, regardless of what that answer is.


    • We have a paper in press where we explored small and attempted to confirm big: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2698221 . Now, we did not begin the research process with this idea in mind (explained in the paper), and our 3 “confirm big” attempts yielded a whole lot of nothing. Very eye-opening. We have another project we are running right now where we are recruiting a large number of dating couples, randomly sorting them into 2 data sets, and doing some “guided” exploring in one data set, and attempting to confirm in the second data set (in between we will report results of exploratory analyses and pre-reg the confirmatory hypotheses: https://osf.io/4xcpy/). So, overall I am on board with the message in John’s paper, and I think it is important to attempt to publish the final result (even if the confirmatory analyses fail to confirm).


  2. It is rare to see civility, as many fight this whole thing as a holy ideological war. Kudos.

    Who should do the confirmation?
    I see your point, and Baumeister’s.

    For Sakaluk’s point:
    1) The original researchers are most likely to be versed in exactly what is to be done: protocol, minor details, etc. Thus, they have an advantage.
    2) Procedural: it is hard enough to get anything done as it is, let alone with multiple people and papers.

    For Baumeister’s point of having different people do the confirmation:
    1) As in math and medicine, exploration and careful verification might be two different branches, requiring different kinds of expertise.
    One might even argue that the majority of psychologists do not have what it takes to do this nail-biting job of hard verification. My feeling is that most errors/p-hacking etc. are innocent: just a lack of full understanding of statistics.
    It is just easier to have people dedicated to hard-nosed verification. Besides stats, they will have to be amply aware of how meticulous a replication needs to be, an issue that is currently hotly debated.
    2) Those differences are not just of expertise; they are also of attitude, motivation, and career path.

    A middle ground would be to require special confirmation advisers, where the original team must somehow be supervised and advised when conducting the crucial confirmation study.

    Cheers,


  3. I wonder about the change of culture that might be required in order to get an exploratory study to actually have any predictive value for a larger confirmatory one, cf. your point about use vs abuse of researcher DFs. For one thing, it’s not clear where the use/abuse boundary lies: within a lab, with multiple junior researchers competing for a shot at the prestige of a big confirmatory study, the one who is prepared to hide the greatest number of failed statistical tests before they hit gold will have an advantage.

    I also wanted to check if I’ve read correctly your point about larger confirmatory studies allowing for the detection of nuanced effects (moderation, etc). It seems to me that if I do detect such an effect, in my study that was designed to confirm the non-nuanced effect, then the “nuanced” one is still only exploratory, and you need a new confirmatory cycle before you can publish it.


    • Yes, for such a nuanced hypothesis, if you explored in the confirmatory dataset to get the effect, you’d need yet another confirmatory data set to Confirm Big for that new effect. But actually, if you were interested in such an effect, I’d probably advocate starting with a super large confirmatory study in the first place.


  4. Most excellent and enlightening blog, thanks! I fully agree with you on your second point and disagree with you on your first point: I don’t feel Baumeister has misconstrued ‘starting small’ at all (unless I likewise misunderstood you :-)). He sees ‘hypothesis refinement’ as ‘letting ideas emerge and improve’, which I feel reflects your paper’s “serendipity and exploration”. In contrast, you claim in this blog that hypothesis refinement means ‘nuanced effects (e.g., indirect effects, moderated effects, conditional indirect effects)’, which is rather different. So does this boil down to a simple terminology difference?


  5. “I think that deliberately and explicitly separating folks into Explorers and Confirmers will create a factory of psychological bullshit-generation, and the Confirmers will be left with the more considerable task of sifting through it”

    I think this has already happened in the last few decades or so (bullshit generation) and only now are researchers attempting to replicate and publish findings (confirmers).

    Regardless of whether this has already happened, I reason it is important to prevent this scenario from happening (or being sustained). Thank you for your thoughts on exploring small, confirming big by the *same* researchers. I totally agree, and I haven’t heard it (enough) anywhere else before.

