Subjective Evidence Evaluation Survey For Many-Analysts Studies
Alexandra Sarafoglou, Suzanne Hoogeveen, Don van den Bergh, Balazs Aczel, Casper J Albers, Tim Althoff, Rotem Botvinik-Nezer, Niko Busch, Andrea Michael Cataldo, Berna Devezer, Noah N'Djaye Nikolai van Dongen, Anna Dreber, Eiko I Fried, Rink Hoekstra, Sabine Hoffmann, Felix Holzmeister, Juergen Huber, Nick Huntington-Klein, John P.A. Ioannidis, Magnus Johannesson, Michael Kirchler, Eric Loken, Jan-Francois Mangin, Dora Matzke, Albert J. Menkveld, Gustav Nilsonne, Don van Ravenzwaaij, Martin Schweinsberg, Hannah Schulz-Kümpel, David Shanks, Daniel J. Simons, Barbara A. Spellman, Andrea Helena Stoevenbelt, Barnabas Szaszi, Darinka Trübutschek, Francis Tuerlinckx, Eric Luis Uhlmann, Wolf Vanpaemel, Jelte M. Wicherts, Eric-Jan Wagenmakers
January 2024
Abstract
Many-analysts studies explore how well an empirical claim withstands plausible alternative analyses of the same data set by multiple, independent analysis teams. Conclusions from these studies typically rely on a single outcome metric (e.g., effect size) provided by each analysis team. Although informative about the range of plausible effects in a data set, a single effect size from each team does not provide a complete, nuanced understanding of how analysis choices relate to the outcome. We used the Delphi consensus technique with input from 37 experts to develop an 18-item Subjective Evidence Evaluation Survey (SEES) that evaluates how each analysis team views the methodological appropriateness of the research design and the strength of evidence for the hypothesis. We illustrate the usefulness of the SEES in providing a richer evidence assessment with pilot data from a previous many-analysts study.
Publication
Royal Society Open Science