Peer, E. & Babad, E., 2014.
The Doctor Fox research (1973) revisited: "Educational seduction" ruled out. Journal of Educational Psychology, 106(1), pp.36-45.
Abstract
In their study about the Dr. Fox lecture, Naftulin, Ware, and Donnelly (1973) claimed that an expressive speaker who delivered an attractive lecture devoid of any content could seduce students into believing that they had learned something significant. Over the decades, the study has been (and still is) cited hundreds of times and used by opponents of the measurement of student evaluations of teachers (SET) as empirical proof for the lack of validity of SET. In an attempt to formulate an alternative explanation of the findings, we replicated the 1973 study, using the original video of the lecture and following the exact methodology of the original study. The alternative explanations tested on several samples of students included (a) acquiescence bias (via a reversed questionnaire and a cognitive remedy); (b) ignorance bias (participants' lack of familiarity with the lecture content); (c) status/prestige bias (presentation of the speaker as a world authority); and (d) a direct measurement of students' reports about their presumed learning. The Dr. Fox effect was indeed consistently replicated in all samples. However, the originally proposed notion of educational seduction leading to presumed (illusory) student learning was ruled out by the empirical findings: Students indeed enjoyed the entertaining lecture, but they had not been seduced into believing they had learned. We discuss the relevance of metacognitive considerations to the inclusion of self-reported learning in this study, and to the wider issue of the incorporation of student learning in the contemporary measurement of SET.
Peer, E., Vosgerau, J. & Acquisti, A., 2014.
Reputation as a sufficient condition for data quality on Amazon Mechanical Turk.
Behavior Research Methods, 46(4), pp.1023-1031.
Abstract
Data quality is one of the major concerns of using crowdsourcing websites such as Amazon Mechanical Turk (MTurk) to recruit participants for online behavioral studies. We compared two methods for ensuring data quality on MTurk: attention check questions (ACQs) and restricting participation to MTurk workers with high reputation (above 95% approval ratings). In Experiment 1, we found that high-reputation workers rarely failed ACQs and provided higher-quality data than did low-reputation workers; ACQs improved data quality only for low-reputation workers, and only in some cases. Experiment 2 corroborated these findings and also showed that more productive high-reputation workers produce the highest-quality data. We conclude that sampling high-reputation workers can ensure high-quality data without having to resort to ACQs, which may introduce selection bias if participants who fail them are excluded post hoc.
Peer, E., Acquisti, A. & Shalvi, S., 2014.
"I cheated, but only a little": Partial confessions to unethical behavior. Journal of Personality and Social Psychology, 106(2), p.202.
Abstract
Confessions are people's way of coming clean, sharing unethical acts with others. Although confessions are traditionally viewed as categorical—one either comes clean or not—people often confess to only part of their transgression. Such partial confessions may seem attractive, because they offer an opportunity to relieve one's guilt without having to own up to the full consequences of the transgression. In this article, we explored the occurrence, antecedents, consequences, and everyday prevalence of partial confessions. Using a novel experimental design, we found a high frequency of partial confessions, especially among people cheating to the full extent possible. People found partial confessions attractive because they (correctly) expected partial confessions to be more believable than not confessing. People failed, however, to anticipate the emotional costs associated with partially confessing. In fact, partial confessions made people feel worse than not confessing or fully confessing, a finding corroborated in a laboratory setting as well as in a study assessing people's everyday confessions. It seems that although partial confessions seem attractive, they come at an emotional cost.
Bar-Hillel, M., Peer, E. & Acquisti, A., 2014.
"Heads or Tails?" - A Reachability Bias in Binary Choice. Journal of Experimental Psychology: Learning, Memory and Cognition, 40(6), pp.1656-1663. Available at: .
Abstract
When asked to mentally simulate coin tosses, people generate sequences that differ systematically from those generated by fair coins. It has been rarely noted that this divergence is apparent already in the very first mental toss. Analysis of several existing data sets reveals that about 80% of respondents start their sequence with Heads. We attributed this to the linguistic convention describing coin toss outcomes as "Heads or Tails," not vice versa. However, our subsequent experiments found the first-toss bias reversible under minor changes in the experimental setup, such as mentioning Tails before Heads in the instructions. We offer a comprehensive account in terms of a novel response bias, which we call reachability. It is more general than the first-toss bias, and it reflects the relative ease of reaching one option compared to its alternative in any binary choice context. When faced with a choice between two options (e.g., Heads and Tails, when "tossing" mental coins), whichever of th