Stack Exchange: My New Favorite Form of Open Science

This is my first post on a personal blog where I hope to occasionally share my thoughts on psychological and sexual science. My focus will often be on methods and statistics, but I may occasionally discuss specific articles that I find interesting. There are many reasons why one might start a blog such as this one. My motivation, more or less, is that I have found reading the blogs of other psychologists really educational, and I hope I might contribute to similar effect by sharing my own experiences. So with that…

Recently, I have been experiencing considerable Open Science ambivalence. Specifically, though I am a big believer in the principles of Open Science, I have found myself struggling with the practice of Open Science. For example, I have had an OSF account for nearly the whole academic year, but beyond the single project I have posted there, I just haven’t found a good time to “take the plunge”. Perhaps trying to change the organizational framework I use for my research during my dissertation and job-search year wasn’t the most well-thought-out plan. I’m hopeful that when I start my upcoming postdoctoral position, I will be able to start making the transition.

My sad, barren OSF account…

While I wait for the moment when I can reconcile my Open Science principles and practices, I have begun participating in Stack Exchange. In a nutshell, Stack Exchange is a network of community-based Q&A sites, each dedicated to a particular topic. Good questions and answers are up-voted by community members, and members receive “reputation” points and achievements for their participation. Cross Validated and Stack Overflow are two Stack Exchange sites for questions about statistics and programming, respectively.

So far, my experience, both asking and answering questions, has been great. I was initially drawn to Cross Validated to find out why my Twitter friend and I were getting different results when trying to reproduce a fixed-effect meta-analysis. He was using an inverse variance-weighted regression via the lm function in R, and I was using the metafor package:


# Example effect sizes (Zr) and their sampling variances (Var.Zr)
Zr <- c(0.40899751573207, 0.642375660599786, 0.422758893133794,
        0.189381484725166, 0.296383997626162, 1.13842812733114,
        0.378870633563455, 0.0993220992102355, 0.963032044806266,
        0.239365330127599, 0.0888299642229334, -0.0200304369445284,
        0.13575164548242)
Var.Zr <- c(0.0035055542000746, 0.0199503483709882, 0.00855309074487157,
            0.00855309074487157, 0.00175746924428822, 0.00428789253512213,
            0.00321861958555767, 0.0128715037777864, 0.0369214051243578,
            0.00226132045285203, 0.0020106142336006, 0.00861326442721792,
            0.00575705238917674)

# My friend's approach: inverse variance-weighted regression via lm
fixed.friend <- lm(Zr ~ 1, weights = 1/Var.Zr)

# My approach: fixed-effect meta-analysis via metafor
library(metafor)
fixed.john <- rma(yi = Zr, vi = Var.Zr, method = "FE")

# Compare results
summary(fixed.friend)
summary(fixed.john)

We’d get the same estimated effect but very different standard errors, and therefore very different p-values and confidence intervals. After posting on Cross Validated, I got an answer from Wolfgang Viechtbauer, creator of the metafor package (cool!!!), who thoroughly explained the source of the discrepancy: the way lm handles inverse variance-weighting is not congruent with typical meta-analytic models. In lm, the weights are treated as known only up to a proportionality constant, and a residual variance is still estimated from the data, whereas a fixed-effect meta-analysis treats the sampling variances as exactly known. Later, I was able to share this new knowledge with Daniël Lakens, who was using the lm method to demonstrate the effects of publication bias in a forthcoming meta-analysis on precognition.
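
To make that concrete, here is a minimal sketch of how the two outputs reconcile, building on the objects fit above (the variable names and the rescaling step are my own illustration of the explanation, not code from Wolfgang’s answer): dividing lm’s standard error by its estimated residual standard error should recover metafor’s standard error, which for an intercept-only fixed-effect model is just the square root of the inverse of the summed weights.

# Standard error of the intercept from the weighted lm fit
se.lm <- summary(fixed.friend)$coefficients[1, "Std. Error"]

# lm scales its weights by an estimated residual variance; undoing that
# scaling should recover the meta-analytic standard error
se.lm / summary(fixed.friend)$sigma

# For comparison: metafor's fixed-effect standard error, and the same
# quantity computed from first principles as sqrt(1 / sum(weights))
fixed.john$se
sqrt(1 / sum(1 / Var.Zr))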

Suffice it to say, I was pretty amazed at the rapid transmission of high-quality information: from me, to Cross Validated (and Wolfgang), back to me, and then out to people I know. Maybe I shouldn’t be that impressed (“this is how we learn!”, as Daniël put it), but I couldn’t help feeling that this was a particularly awesome way to learn from one another. And what I think I like most about the Stack Exchange system is that if anyone else wanders into meta-analysis and notices a disparity between the standard errors that lm and bona fide meta-analysis packages provide, they can do a simple Google search (e.g., “lm metafor different”) and find my question and Wolfgang’s answer (i.e., the second link). In this way, participation on Stack Exchange might be a longer-lasting way of helping one another than Twitter, where a good question and answer can quickly become buried by subsequent activity.

Since my initial foray on Cross Validated, I’ve tried to be active on the site every couple of days, answering questions here and there, or asking more of my own. I even recently posted a “bounty”, a way of sacrificing some of your reputation points to draw more attention to one of your questions. I’ve found answering questions on the site to be quite enjoyable, even therapeutic. Whether you are full of questions, answers, or both, I encourage you to check out the Stack Exchange sites if you’re looking for a new way to get Open Science-y.

My much busier Cross Validated profile