Control, treat, repeat: Replicating an experiment on uncertainty



How does ambiguity over payoffs affect cooperation in allocation games? And whatever we find, is statistical significance alone sufficient to deem the finding valid?

Luigi Butera and John List’s paper titled “An Economic Approach to Alleviate the Crises of Confidence in Science: With An Application to the Public Goods Game” addresses the first question through an experiment with four repeated public goods games, and then proposes a way to incentivize replications of such experiments to increase their robustness.

Because the statistical significance assigned to the results of a single experiment can mislead, replication is often preached in economics; nonetheless, it is rarely practiced. The authors attribute this gap between precept and practice to the lack of incentives for both original experimenters and prospective replicators. They also blame the lopsided view of replication as a threat to a study’s credibility rather than as a potential reinforcement of it.

Having diagnosed the causes, they prescribe a remedy: an incentive-based mechanism to encourage replications. Bucking the trend in the field, they put their suggested mechanism into practice: they make their manuscript available online as a working paper, commit to never submitting it to any journal, and offer co-authorship on a second paper to other scholars in exchange for replicating their experiment.

The experiment involves one of the most studied domains in the literature, the provision of public goods, while introducing the less studied element of uncertainty. Although uncertainty is acknowledged as a given in this domain, very few studies incorporate uncertainty into stakeholders’ payoffs from investing in the provision of public goods. Even those that do retain an element of certainty, such as known probabilities over payoffs, deviating from real-world conditions. The authors do not reveal the marginal per capita return (MPCR) from investment to the participants, thus incorporating Knightian uncertainty, which more closely resembles reality.
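For context, the payoff structure at stake can be written as a standard linear public goods game; the notation below is assumed here for illustration and is not taken verbatim from the paper:

```latex
% Standard linear public goods game payoff for player i (illustrative notation):
% e = endowment, g_i = player i's contribution, m = marginal per capita return (MPCR).
\pi_i = (e - g_i) + m \sum_{j=1}^{n} g_j
% Free riding is individually optimal when m < 1, while full contribution is
% socially optimal when nm > 1. Withholding m from participants, as the
% authors do, leaves the return to cooperation Knightianly uncertain.
```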

Overall, the paper bridges two gaps at once: one in experimentation and one in its reproduction. It leads its readers into a largely unexplored avenue, where they may well find themselves wanting to explore further.

Contributions: Noise-proof?

The experimental setup is a multiple-round game in which participants allocate tokens between a private and a public account. Players receive a noisy signal about the returns to their contribution. The authors find that cooperation is higher under private uncertainty, especially when the quality of the public good is unknown. Cooperation breaks down over time under complete certainty, whereas the decline is slower under higher uncertainty. Upward biases in signals increase contributions, in contrast to downward biases; the authors express surprise at this counterintuitive result, since downward biases are more representative of the true values. Introducing uncertainty increases contributions under both directions of bias, and it also increases contributions both within and across levels of returns.
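To make the setup concrete, here is a minimal, hypothetical simulation of one round with noisy signals about returns. The parameter values, signal model, and contribution rule are all illustrative assumptions, not Butera and List’s actual design:

```python
import random

# Hypothetical single round of a public goods game with noisy return signals.
# All numbers below are illustrative assumptions, not the paper's parameters.
ENDOWMENT = 20          # tokens per player
TRUE_MPCR = 0.6         # marginal per capita return, hidden from players
NOISE_SD = 0.2          # spread of the noisy signal
BIAS = 0.1              # > 0 simulates an upward-biased signal, < 0 downward

def noisy_signal():
    """The signal a player sees about the return, instead of the true MPCR."""
    return TRUE_MPCR + BIAS + random.gauss(0, NOISE_SD)

def play_round(contributions):
    """Each player's payoff: kept tokens plus the shared return on the pot."""
    pot_return = TRUE_MPCR * sum(contributions)
    return [(ENDOWMENT - g) + pot_return for g in contributions]

random.seed(1)
signals = [noisy_signal() for _ in range(4)]
# Toy behavioral rule: contribute more when the perceived return looks higher.
contributions = [min(ENDOWMENT, max(0, round(ENDOWMENT * s))) for s in signals]
print("signals:", [round(s, 2) for s in signals])
print("contributions:", contributions)
print("payoffs:", [round(p, 1) for p in play_round(contributions)])
```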

The study corrects for differences among groups and for observations censored by the experiment’s design. It also rules out the possibility that participants’ confusion drives the results. The narrative is compelling, given the comprehensive list of scenarios considered and potential roadblocks averted.

Take it from the top

Despite its well-established importance, replication is discouraged on two fronts: replicators face lower expected returns because replications lack novelty, and original experimenters risk lowered priors on their findings if a replication fails. The authors recognize the multiple forms replication can take, from reanalyzing the same datasets to exploring the same question with a different methodology, but choose to focus on replications employing the same design.

They draw on Maniadis et al.’s (2014) Bayesian approach to compute the updated probability that a finding is true after a failed replication and after a successful one, illustrating the perceived hindrances and the often-ignored merits, respectively. They then list the steps they take to secure these merits by calling upon replicators.
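A minimal sketch of that Bayesian updating logic, assuming illustrative values for the significance level, statistical power, and prior (none of these numbers come from the paper):

```python
# Illustrative sketch (not the authors' code) of the Bayesian updating behind
# Maniadis et al. (2014): how the probability that a finding is true changes
# after successful or failed replications.
# alpha = significance level (false-positive rate); power = 1 - beta.

def update(prior, success, alpha=0.05, power=0.8):
    """Posterior probability that the finding is true after one study."""
    if success:  # the study rejects the null (a "positive" result)
        true_pos = power * prior            # true effect, detected
        false_pos = alpha * (1 - prior)     # no effect, false positive
        return true_pos / (true_pos + false_pos)
    else:        # the study fails to reject the null
        false_neg = (1 - power) * prior     # true effect, missed
        true_neg = (1 - alpha) * (1 - prior)
        return false_neg / (false_neg + true_neg)

p = 0.10                                 # skeptical prior on the hypothesis
p = update(p, success=True)              # original significant result: ~0.64
for _ in range(2):                       # two successful replications: ~1.00
    p = update(p, success=True)
print(round(p, 2))
print(round(update(0.64, success=False), 2))  # a failed replication: ~0.27
```

The asymmetry is the point: a single significant result leaves substantial doubt, successful replications compound belief quickly, and a failed replication sharply deflates it.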

Whether replicators will echo or debunk the study’s findings remains to be seen, but the authors certainly hit the bull’s eye in encouraging their involvement.

Conclusion

That uncertainty over outcomes increases cooperation may come across as striking. However, in the event of a loss, those wary of free riders can now blame uncertain outcomes as well as free riders, which partly mitigates their wariness and encourages their participation. One would hardly expect a framework assuming certainty to survive scrutiny; as such, unlike previous research, the study does not dodge uncertainty but embraces it. In doing so, it opens doors to more authentic simulations of reality.

The study puts the veracity of its experimental results on the line by offering them on a platter to replicators, in exchange for the opportunity to enhance their reliability. After all, a second take helps avoid underselling facts and overselling falsehoods. Moreover, it endeavors to serve the greater good, in that it hopes the practice of replication among economists is, well, replicated.

Related reading:

Guidelines for analytical method validation: How to avoid irreproducible results and retractions

Why are replication studies so rarely published?

Published on: Jul 04, 2017
