The social and behavioral sciences, some say, have a “replication crisis.”

When researchers attempt to reproduce the results of published papers, sometimes the original conclusions do not hold up. A replication study follows the same methods and standards laid out in the published paper, but with new participants and often a larger sample size. Researchers have more confidence in studies that replicate. And studies that don't replicate? Not just their results, but the results of any studies that cite them, are now shaded in doubt. With a significant portion of studies failing to replicate in large-scale replication projects*, social and behavioral scientists have valid concerns about the future of their disciplines.

Is there any way to know which studies will replicate? Could clues be found…

  • in a paper’s abstract?
  • in the statement of the problem?
  • in the elicitation process?
  • in the methodology?
  • in the interpretation of the data?

Replication projects have asked their forecaster participants to assign a probability to a single question: will this study replicate?

You, critical reader, are wondering how this data is collected.

There are two main methods for this meta-research, both involving prediction and replication.

One method is a survey administered to experts, who read the papers and register their estimates of the probability that each will replicate.

The other method is a prediction market, which, a bit like sports betting, lets forecasters trade contracts tied to each study's replication; the trading price serves as the crowd's estimate of the chance that the study will replicate.
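To make the contrast concrete, here is a minimal sketch, in Python with invented numbers (none of these figures come from the projects discussed here), of how each method turns individual judgments into a single replication forecast: a survey is commonly summarized by averaging the experts' stated probabilities, while a market's forecast is read directly from its final trading price.

```python
# Hypothetical forecasts for one study; all numbers are invented for illustration.

# Survey method: each expert independently states a probability of replication.
expert_probabilities = [0.70, 0.55, 0.80, 0.40, 0.65]

# A common survey summary is the simple average of the stated probabilities.
survey_forecast = sum(expert_probabilities) / len(expert_probabilities)

# Prediction-market method: traders buy and sell a contract that pays out
# if the study replicates. The last traded price (on a 0-to-1 scale) is read
# directly as the crowd's probability of replication.
market_price_history = [0.50, 0.58, 0.63, 0.61, 0.67]
market_forecast = market_price_history[-1]

print(f"Survey forecast: {survey_forecast:.2f}")  # 0.62
print(f"Market forecast: {market_forecast:.2f}")  # 0.67
```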

It turns out that crowdsourced predictions have pretty good accuracy (with markets having the edge over surveys)*: at least two out of three times, they identify which studies will or will not replicate. This suggests there is something fundamental about the research papers themselves that could help identify which will successfully replicate. Our collective reaction to a scientific claim may tell us more about its reliability than the claim itself does.
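As a rough, hypothetical illustration of what "two out of three" accuracy means, the sketch below (again with invented data) scores a set of probability forecasts against replication outcomes by treating any forecast above 0.5 as a prediction that the study will replicate.

```python
# Invented example: forecast probabilities and actual replication outcomes
# for six studies (True = the study replicated).
forecasts = [0.80, 0.30, 0.65, 0.20, 0.55, 0.70]
replicated = [True, False, True, True, False, True]

# Treat a forecast above 0.5 as a prediction that the study will replicate,
# then count how often the prediction matches the outcome.
predictions = [p > 0.5 for p in forecasts]
hits = sum(pred == actual for pred, actual in zip(predictions, replicated))
accuracy = hits / len(forecasts)

print(f"Correct on {hits} of {len(forecasts)} studies ({accuracy:.0%})")  # 4 of 6 (67%)
```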

*For details and data, read this page from RM Team Member Domenico Viganola.