Online Data Quality

Platforms (such as MTurk, Prolific, and CloudResearch) and panels (such as Lucid, Dynata, and Qualtrics) have become the leading methods of participant recruitment for online research, especially in the social and behavioral sciences. However, because study participants are not monitored, data quality may be suboptimal, which can hamper the reliability and validity of research findings. In this project, I examine key aspects of data quality across different platforms and panels, and explore the factors that affect it, to understand what researchers can do ex-ante to ensure the quality of their data.

Research papers

Peer, E., Rothschild, D., Gordon, A., Evernden, Z., & Damer, E. (2022). Data quality of platforms and panels for online behavioral research. Behavior Research Methods, 54, 1643–1662.

Peer, E., Brandimarte, L., Samat, S., & Acquisti, A. (2017). Beyond the Turk: Alternative platforms for crowdsourcing behavioral research. Journal of Experimental Social Psychology, 70, 153–163.

Chandler, J., Paolacci, G., Peer, E., Mueller, P., & Ratliff, K. A. (2015). Using nonnaive participants can reduce effect sizes. Psychological Science, 26(7), 1131–1139.

Working papers

Peer, E., Rothschild, D., & Gordon, A. (2023). Platform over procedure: Online platforms that pre-vet participants yield higher data quality without sacrificing diversity. https://osf.io/preprints/psyarxiv/buzwn