In 2009, a paper by Ed Vul, Christine Harris, Piotr Winkielman, and Harold Pashler sent shockwaves through the field of social neuroscience. Its provocative original title, "Voodoo Correlations in Social Neuroscience," pointed to a serious methodological flaw in many popular fMRI studies.

These studies had reported astonishingly high correlations (e.g., r = 0.85 or 0.90) between brain activity in a specific region and complex personality traits like social anxiety or empathy. To methodologists, these numbers seemed "puzzlingly high"—too good to be true.

The Core Problem: Non-Independent Analysis

The "voodoo" was not magic, but a statistical error called **non-independent analysis** (or "circular analysis").

Here's a simple analogy:

Imagine you measure the pinky-finger length and test score of 10,000 students to see whether the two correlate. You first look at all 10,000 data points and keep *only* the 50 students whose (length, score) pairs happen to line up best with a trend. Then, you calculate the correlation *only for those 50 students* and report that number.

Of course, the correlation will be extremely high! You've guaranteed it by pre-selecting for it. You've used the same data to *select* your group and to *test* your group. This is circular logic.
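The analogy is easy to demonstrate numerically. The sketch below (a hypothetical illustration, not from the paper) generates 10,000 completely independent "pinky length" and "test score" values, then applies the circular step: it keeps only the 50 points lying closest to a perfect positive trend before computing the correlation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
pinky = rng.normal(0, 1, n)   # standardized pinky lengths
score = rng.normal(0, 1, n)   # standardized test scores, generated independently

# Honest estimate: correlation over all 10,000 students (near zero by design).
r_all = np.corrcoef(pinky, score)[0, 1]

# Circular step: keep only the 50 students whose points lie closest
# to a perfect positive trend (pinky z-score ~ score z-score).
keep = np.argsort(np.abs(pinky - score))[:50]
r_sel = np.corrcoef(pinky[keep], score[keep])[0, 1]

print(f"correlation, all 10,000 students: {r_all:+.2f}")
print(f"correlation, 50 selected students: {r_sel:+.2f}")
```

Even though the two variables are unrelated by construction, the selected subset shows a correlation close to 1.0: the selection rule, not the data, produced the result.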

How This Applied to fMRI

In fMRI, a "voxel" is like a 3D pixel of the brain. A single scan contains hundreds of thousands of voxels. The flawed method was:

  1. Researchers would test all 100,000+ voxels to see which ones correlated with a personality score (e.g., empathy).
  2. They would set a threshold (e.g., p < 0.01) and select *only* the tiny fraction of voxels that passed this test.
  3. They would then calculate the correlation *only within those selected voxels* and report that (now artificially high) number.

This method inflates correlations, making random noise look like a powerful and significant finding. The correct way is to use one dataset to *find* your region of interest and a completely separate, independent dataset to *test* the correlation within that region.
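The three steps above, and the independent-data fix, can both be sketched in a toy simulation (illustrative only; the subject counts, voxel counts, and threshold are assumptions, and the "brain data" is pure noise). The circular route selects voxels and reports their correlation on the same subjects; the independent route selects voxels on one half of the subjects and tests them on the other half.

```python
import numpy as np

rng = np.random.default_rng(1)
n_subj, n_vox = 30, 50_000
score = rng.normal(size=n_subj)            # e.g. an empathy score per subject
brain = rng.normal(size=(n_subj, n_vox))   # pure-noise "voxel activations"

def voxel_corrs(data, y):
    """Pearson r between each voxel column and the behavioral score."""
    dz = (data - data.mean(0)) / data.std(0)
    yz = (y - y.mean()) / y.std()
    return dz.T @ yz / len(y)

# Circular analysis: select voxels using ALL subjects, then report
# the mean correlation within those same selected voxels.
r = voxel_corrs(brain, score)
selected = np.abs(r) > 0.5                 # a strict threshold at this sample size
r_circular = np.abs(r[selected]).mean()    # inflated: > 0.5 by construction

# Independent analysis: select on half the subjects, test on the other half.
half = n_subj // 2
sel = np.abs(voxel_corrs(brain[:half], score[:half])) > 0.5
r_independent = np.abs(voxel_corrs(brain[half:], score[half:])[sel]).mean()

print(f"circular estimate:    {r_circular:.2f}")
print(f"independent estimate: {r_independent:.2f}")
```

With nothing but noise, the circular estimate comes out above the selection threshold every time, while the independent estimate falls back toward the chance level, which is exactly the gap the paper was pointing at.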

The Impact and Legacy

The "Voodoo Correlations" paper was a crucial part of the broader "replication crisis" in psychology and science. It forced the field to become much more rigorous about its statistical methods. It championed practices that are now becoming standard, including:

  • Pre-registration: Committing to your analysis plan *before* you collect or look at the data.
  • Independent Analysis: Always using separate data for discovery and for testing.
  • Healthy Skepticism: Understanding that if a finding looks too good to be true, it probably is.

While the debate was contentious, it ultimately helped push neuroscience toward better, more reliable, and more transparent science.