In late 2012, a "cadre of shoemakers & artists in Berlin, Germany, who make ridiculously comfortable, Bauhaus-inspired shoes" decided to test a hypothesis. They'd noticed that some of their customers in the United States were experiencing delays and other problems receiving their shipments. They began to wonder why. Could it be the packing tape they used? The tape prominently featured the name of their brand: ATHEIST.
To test their hypothesis about the packing tape, the shoemakers sent two packages to each of 89 people in the United States: one sealed with the Atheist-branded packing tape, the other with generic tape. All packages were shipped on the same day. Yet the Atheist-branded packages took an average of three days longer to reach their destinations. And while only one of the generic-taped packages went missing, nine of the Atheist-branded packages did. These (statistically significant) results were recently released on their website.
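The significance claim for the missing-package counts can be sanity-checked with a one-sided Fisher's exact test. The shoemakers don't say which test they ran, so this is an illustrative sketch, not their analysis: given 10 losses among 178 packages total, how likely is it that 9 or more would land in the branded group of 89 by chance alone?

```python
from math import comb

def hypergeom_tail(k, K, n, N):
    """P(X >= k) for X ~ Hypergeometric(N, K, n): the probability of
    finding at least k 'marked' items in a random group of n drawn
    from a population of N that contains K marked items in total."""
    total = comb(N, n)
    return sum(
        comb(K, x) * comb(N - K, n - x) for x in range(k, min(K, n) + 1)
    ) / total

# 178 packages were shipped and 10 went missing; the branded group
# of 89 packages contained 9 of those 10 losses.
p_one_sided = hypergeom_tail(9, 10, 89, 178)
print(f"one-sided p = {p_one_sided:.4f}")
```

Under the null hypothesis that the tape makes no difference, each lost package is about equally likely to come from either group, so a 9-to-1 split is rare (p well below 0.05), which is consistent with the company's claim of statistical significance.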
Is this evidence of discrimination against atheists by U.S. postal workers? It wouldn't be too surprising. As I discussed in a post a few months back, atheists are among the most distrusted groups in America. Even if postal workers weren't deliberately chucking Atheist-branded packages into remote corners of sorting facilities or tossing them off delivery trucks, they may have unconsciously flagged them as more suspicious and therefore worthy of extra scrutiny.
Critics have pointed out that the study doesn't entirely rule out some reasonable alternatives (something Atheist discusses in the comments section of its post outlining the experiment). Perhaps the delays occurred in customs rather than in the hands of the U.S. Postal Service. Perhaps the problem resulted from simply having any text on the packing tape, and doesn't reflect atheism-specific prejudice. (It would be nice to see replications of the study with additional control groups, such as "faith"-branded packing tape.)
Atheist says it has now stopped using branded shipping tape on all orders sent to the U.S. and that "delivery times are already improving."
What I like best about the study isn't its methodological rigor (which falls a bit short of perfection), or the results (which support some unsettling conclusions); it's the fact that a group of non-scientists had a hypothesis and decided to test it in a systematic and scientific way. What a great example of citizen science.
Yet I worry when findings that haven't run the gauntlet of peer review find their way to non-specialists (say, through The Economist or Time.com). Can empirical results be adequately evaluated without a full, paper-length report? And are most readers in a position to evaluate them?
As another recent example of pre-publication science gone wild, consider a blog post by Rolf Zwaan reporting his failure to replicate a well-known study by Vohs and Schooler (2008), which found that reducing belief in free will led people to cheat more often on a subsequent task.
Zwaan found no such effect, and his unpublished findings and discussion were picked up by Jerry Coyne, whose readership extends well beyond psychology and academia. Like the Atheist Shoes study, Zwaan's blog-reported findings might leave people wondering what to believe, especially if they don't appreciate the expert scrutiny that unpublished studies have yet to undergo.
To their credit, the folks at Atheist have teamed up with an (unnamed) university professor and plan to pursue their research with the aim of publishing in a peer-reviewed journal. And there's a good chance Rolf Zwaan, a psychology professor himself, will similarly publish his failure to replicate Vohs and Schooler (2008).
But when and why should scientific claims be presented in forums that aren't peer-reviewed? And what role can and should the interested public play in the evaluation and dissemination of science, both pre- and post-peer review?
You can keep up with more of what Tania Lombrozo is thinking on Twitter: @TaniaLombrozo