University of Limerick Institutional Repository

Crowdsourcing hypothesis tests: making transparent how design choices shape research results

dc.contributor.author Landy, Justin F.; Jia, Miaolei (Liam); Ding, Isabel L.; Viganola, Domenico; Tierney, Warren; Dreber, Anna; Johannesson, Magnus; Pfeiffer, Thomas; Ebersole, Charles R.; Gronau, Quentin F.; Ly, Alexander; van den Bergh, Don; Marsman, Maarten; Derks, Koen; Wagenmakers, Eric-Jan; Proctor, Andrew; Bartels, Daniel M.; Bauman, Christopher W.; Brady, William J.; Cheung, Felix; Cimpian, Andrei; Dohle, Simone; Donnellan, M. Brent; Hahn, Adam; Hall, Michael P.; Jiménez-Leal, William; Johnson, David J.; Lucas, Richard E.; Monin, Benoît; Montealegre, Andres; Mullen, Elizabeth; Pang, Jun; Ray, Jennifer; Reinero, Diego A.; Reynolds, Jesse; Sowden, Walter; Storage, Daniel; Su, Runkun; Tworek, Christina M.; Van Bavel, Jay J.; Walco, Daniel; Wills, Julian; Xu, Xiaobing; Yam, Kai Chi; Yang, Xiaoyu; Cunningham, William A.; Schweinsberg, Martin; Urwitz, Molly; Uhlmann, Eric Luis
dc.date.accessioned 2020-02-04T19:48:48Z
dc.date.available 2020-02-04T19:48:48Z
dc.date.issued 2019
dc.description peer-reviewed en_US
dc.description.abstract To what extent are research results influenced by subjective decisions that scientists make as they design studies? Fifteen research teams independently designed studies to answer five original research questions related to moral judgments, negotiations, and implicit cognition. Participants from two separate large samples (total N > 15,000) were then randomly assigned to complete one version of each study. Effect sizes varied dramatically across different sets of materials designed to test the same hypothesis: materials from different teams rendered statistically significant effects in opposite directions for four out of five hypotheses, with the narrowest range in estimates being d = -0.37 to +0.26. Meta-analysis and a Bayesian perspective on the results revealed overall support for two hypotheses, and a lack of support for three hypotheses. Overall, practically none of the variability in effect sizes was attributable to the skill of the research team in designing materials, while considerable variability was attributable to the hypothesis being tested. In a forecasting survey, predictions of other scientists were significantly correlated with study results, both across and within hypotheses. Crowdsourced testing of research hypotheses helps reveal the true consistency of empirical support for a scientific claim. en_US
dc.language.iso eng en_US
dc.publisher American Psychological Association en_US
dc.relation.ispartofseries Psychological Bulletin; 146 (5), pp. 451-479
dc.rights © American Psychological Association, 2019. This paper is not the copy of record and may not exactly replicate the authoritative document published in the APA journal. Please do not copy or cite without author's permission en_US
dc.subject crowdsourcing en_US
dc.subject scientific transparency en_US
dc.subject stimulus sampling en_US
dc.subject forecasting en_US
dc.subject conceptual replications en_US
dc.subject research robustness en_US
dc.title Crowdsourcing hypothesis tests: making transparent how design choices shape research results en_US
dc.type info:eu-repo/semantics/article en_US
dc.type.supercollection all_ul_research en_US
dc.type.supercollection ul_published_reviewed en_US
dc.identifier.doi 10.1037/bul0000220
dc.rights.accessrights info:eu-repo/semantics/openAccess en_US
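The dc.* fields above form a flattened Dublin Core record, which DSpace repositories such as this one typically expose for machine harvesting over the OAI-PMH protocol. As a minimal sketch, the Python snippet below fetches and prints a record's Dublin Core elements using only the standard library; the OAI base URL and item identifier are assumptions for illustration, not confirmed ULIR values.

```python
# Minimal sketch: harvesting one Dublin Core record over OAI-PMH,
# the interface DSpace repositories usually expose.
import urllib.request
import xml.etree.ElementTree as ET

OAI_BASE = "https://ulir.ul.ie/oai/request"    # assumed endpoint, not verified
IDENTIFIER = "oai:ulir.ul.ie:10344/0000"       # hypothetical item identifier

url = (f"{OAI_BASE}?verb=GetRecord&metadataPrefix=oai_dc"
       f"&identifier={IDENTIFIER}")

DC_NS = "{http://purl.org/dc/elements/1.1/}"   # standard Dublin Core namespace

with urllib.request.urlopen(url) as resp:
    root = ET.parse(resp).getroot()

# Each dc.* field in the record above maps to one Dublin Core element
# (dc.title, dc.publisher, dc.identifier, and so on).
for elem in root.iter():
    if elem.tag.startswith(DC_NS):
        print(f"dc.{elem.tag[len(DC_NS):]}: {elem.text}")
```

The GetRecord verb and the oai_dc metadata prefix are part of the OAI-PMH 2.0 standard; only the base URL and identifier vary from repository to repository.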
