Common Questions

Replication Markets is one part of the larger DARPA SCORE program to assign confidence scores to social & behavioral science claims. (See also this Wired article.)

SCORE sponsors the claim selection, the replications, two crowd-forecasting teams, and (TBD) a machine-learning effort.

The claims were selected for SCORE by the Center for Open Science (COS). COS identified about 30,000 candidate studies from the target journals and time period (2009-2018), and narrowed those down to 3,000 eligible for forecasting. Criteria include whether a study reports at least one inferential test, contains quantitative measurements on humans, makes sufficiently identifiable claims, and whether its authors can be reached.

The Center for Open Science selected studies from 62 journals; the complete list is available on our website.

Science is about testing one’s ideas. A valid scientific finding should therefore be reproducible: if you do the same thing, you should get the same result, at least statistically, and ideally (but rarely) always. As we use the terms:

A “reproduction” is an attempt to get the same result with the original data; this can fail due to errors, lack of data, or missing steps.

A “data replication” is an attempt to test the original idea or analysis with other found data: for example, using newer GDP data for an economics study.

A “direct replication” is an attempt to test by collecting new data using the same procedures.

Finally, a successful replication is “simply” a statistically significant result in the same direction as the original claim. But “simply” hides a lot: see the FAQ entry and our post on high-quality replications.
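In the simplest reading of that criterion, it is short enough to write down. Here is a minimal sketch in Python; the function name, effect values, and 0.05 threshold are illustrative, and the actual SCORE criteria are more detailed:

```python
def replication_succeeds(original_effect, replication_effect, replication_p, alpha=0.05):
    """Illustrative criterion: a significant result in the same direction
    as the original claim. A sketch only; SCORE's criteria are more detailed."""
    same_direction = (original_effect > 0) == (replication_effect > 0)
    return same_direction and replication_p < alpha

# Hypothetical numbers: original effect positive, replication positive, p = 0.03.
print(replication_succeeds(0.42, 0.31, 0.03))  # True: counts as a success
```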

Take a look at our Recommended Reading to learn more about replication.

Short answer: A good-faith, high-power attempt to reproduce a previously-observed finding.

Slightly longer answer: A good-faith attempt to reproduce a previously-observed finding with a sample large enough to find the effect if it is there.

And if you really want to get into the weeds, read more about high quality replication!

The internet has made it possible for large numbers of people to contribute to a collective project in a timely manner. Wikipedia is one of the most famous examples; website product reviews, community news sites, and prediction markets are other types of crowdsourcing. The power of crowdsourcing builds on the wisdom of crowds: aggregated judgments from many people often beat those of a small elite group, especially when knowledge is widely distributed and hard to locate.
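As a toy illustration (all numbers invented), suppose each forecaster reports the true probability plus independent noise; averaging many such estimates shrinks the error roughly as 1/√N:

```python
import random

random.seed(1)
TRUE_PROB = 0.4   # invented "true" replication probability
NOISE = 0.25      # invented per-forecaster error (standard deviation)

def one_forecast():
    """A single noisy estimate, clipped to the [0, 1] probability range."""
    return min(1.0, max(0.0, random.gauss(TRUE_PROB, NOISE)))

for n in (1, 10, 100, 1000):
    crowd_mean = sum(one_forecast() for _ in range(n)) / n
    print(f"crowd of {n:4d}: estimate = {crowd_mean:.3f}")
# The averaged estimate typically lands closer to 0.4 as n grows.
```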

Yes! Our project is being conducted in the spirit of open science and transparency, and therefore de-identified data will be published when the project concludes. We are also pre-registering our work with the Center for Open Science. 

Our handling of personally identifying data is covered in the informed-consent details available on the registration page: https://predict.replicationmarkets.com/main/#!/users/register

Our forecasters bet on the chance that a research claim will replicate. They can read the original paper, discuss their analyses, and adjust their predictions until the market closes. The most accurate forecasters will earn money.

Prior to the markets, there are also private surveys on the same claims, with a separate prize pool.

Prediction markets are an alternative to surveys for predicting outcomes of events in politics, sports, research replication studies, and other domains.

In a prediction market, participants invest points to indicate how likely they think different outcomes are. Forecasters who predict the true outcome more accurately gain points, and those who are less accurate lose points. So participants have an incentive to speak up when they know, and stay quiet when they don’t.

Unlike surveys, markets need no extra averaging step: the current market value is the estimate. Markets tend to outperform simple averaging, and roughly match sophisticated averaging.
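To make the incentive concrete, here is a hypothetical example (prices and beliefs invented): a share that pays 1 point if the claim replicates is worth buying at price p exactly when your own probability q exceeds p.

```python
def expected_profit_per_share(price, belief):
    """Expected profit from one YES share that pays 1 point if the claim replicates.
    price:  current market price (the crowd's probability)
    belief: your own probability. Illustrative numbers only."""
    return belief - price

print(expected_profit_per_share(0.55, 0.70))  # +0.15: buying is attractive
print(expected_profit_per_share(0.55, 0.40))  # -0.15: buying loses on average
```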

To learn more about prediction markets for science, see our publications listed here.

Markets divide a Round’s total prize (about $9,000) among the winning shares of all that Round’s resolved claims. We expect 100 claims to resolve, about 10 per Round, in mid-2020. In total, about $150,000 in prizes is expected. Surveys pay out multiple prizes of $20–$80, roughly monthly.

For more details, read the Explanation of Payouts.
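For a rough sense of the arithmetic, here is a hypothetical sketch of the proportional market payout, using the $900-per-market figure from the rules summary below (names and share counts invented; the official Explanation of Payouts governs):

```python
def market_payout(winning_shares, prize=900.0):
    """Split a resolving market's prize proportionally among winning shares.
    winning_shares: forecaster -> shares held on the correct side.
    A sketch only; the official Explanation of Payouts governs."""
    total = sum(winning_shares.values())
    return {user: prize * s / total for user, s in winning_shares.items()}

# Invented holdings: three forecasters hold 60, 30, and 10 winning shares.
print(market_payout({"ann": 60, "bo": 30, "cy": 10}))
# {'ann': 540.0, 'bo': 270.0, 'cy': 90.0}
```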

In this project:

A claim is an assertion in social science, such as “Imagining eating M&Ms 30 times makes people eat fewer M&Ms.” Typically, a claim is disputed or in doubt, and we want to know if it will replicate when tested. Participants forecast replications using both surveys and markets.

Each claim has a market where forecasters invest points to move the market estimate (usually a probability) up or down. That is equivalent to buying and selling shares that “pay 1 point if the replication succeeds” or “pay 1 point if it fails”. The share price is usually interpreted as the crowd’s belief about the probability that the claim will replicate, and we show it as a probability.
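For the mechanically curious, here is a minimal sketch of a logarithmic market scoring rule (LMSR), a common automated market maker for binary prediction markets; it shows how buying shares raises the displayed probability. The liquidity parameter and trade size are illustrative, and we are not asserting that RM’s production market maker works exactly this way.

```python
import math

B = 100.0  # illustrative liquidity parameter: larger B = prices move less per trade

def cost(q_yes, q_no):
    """LMSR cost function: C(q) = B * ln(exp(q_yes/B) + exp(q_no/B))."""
    return B * math.log(math.exp(q_yes / B) + math.exp(q_no / B))

def price_yes(q_yes, q_no):
    """Instantaneous YES price, read as the implied probability of replication."""
    e_yes = math.exp(q_yes / B)
    return e_yes / (e_yes + math.exp(q_no / B))

q_yes = q_no = 0.0
print(round(price_yes(q_yes, q_no), 3))    # 0.5 in a fresh market

trade = 50.0                               # buy 50 YES shares
points_paid = cost(q_yes + trade, q_no) - cost(q_yes, q_no)
q_yes += trade
print(round(points_paid, 2))               # ~28.1 points paid for the trade
print(round(price_yes(q_yes, q_no), 3))    # ~0.622: the displayed probability rose
```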

We also use “the market” (or “the survey”) to mean the collection of all market (or survey) claims, as in “this claim is now on the market”. We will also have markets (and surveys) on other kinds of claims, such as effect sizes or overall replication rates.

Prediction markets have historically performed better than surveys for crowdsourced information. 

At RM, we also run a survey of forecasts for each claim, and use it as a baseline against which to assess the accuracy of the corresponding market.
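Accuracy comparisons like this typically use a proper scoring rule; the Brier score is a common choice (our assumption for illustration, since this FAQ does not name RM’s exact metric):

```python
def brier(forecast, outcome):
    """Brier score: squared error between a probability and a 0/1 outcome.
    Lower is better; always guessing 0.5 scores 0.25."""
    return (forecast - outcome) ** 2

# Invented (forecast, replicated?) pairs for three resolved claims.
claims = [(0.8, 1), (0.3, 0), (0.6, 0)]
print(sum(brier(f, o) for f, o in claims) / len(claims))  # mean score ~0.163
```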

In Round 0, you will start with 100 points to play. In all subsequent rounds, you will start with 300 points.

To learn how to earn more points to play with, see Points Distribution.

SUMMARY FOR HUMANS:

  • No purchase necessary to win.
  • You must be at least 18 years old and not affiliated with the project.
  • You may participate even if you decline prizes or are ineligible for them.
  • One user, one account, except by written agreement.
  • Follow the rules. Play nice. Don’t cheat. Don’t try to lose.
  • Contest begins 12-AUG-2019 and runs approximately through 1-JUL-2020.
  • Contest will forecast 3,000 claims from social & behavioral science.
  • Forecasts will be in Rounds of 300 claims, roughly every 4 weeks.
  • Each Round has:
    • about a week for private surveys in Batches of 10 claims
    • about two weeks for public markets on all 300 together
  • Only about 250 of the 3,000 claims (roughly 8%) will resolve:
    • 100 will resolve by direct replication.
    • 150 will resolve by data replication.
    • Markets only pay out for direct replications.
  • Neither RM nor forecasters will know in advance which claims will resolve.
  • A total of about US $150,000 in prizes will be awarded.
    • $900 per resolving market, paid on resolution, proportional to winning shares in that Round. Total ~US$90,000.
    • $160 per survey Batch, paid per Round to the Top 4. Total ~US$48,000.
    • Remainder reserved for additional incentives, and adjustments.
  • Prizes are paid via Google Pay.
  • Recipients may be required to verify identity.
  • Taxes and fees are paid by recipients.
  • Usual liability limitations, and final authority with Sponsor.
