Gamliel, E. & Peer, E., 2017.
The Average Fuel-Efficiency Fallacy: Overestimation of Average Fuel Efficiency and How It Can Lead to Biased Decisions. Journal of Behavioral Decision Making, (2), p.435.
Abstract: People falsely believe that equal increases in vehicles' fuel efficiency (e.g., miles per gallon, MPG) will result in equal fuel savings. Whereas previous research on this "MPG illusion" has focused on people's biased choices when upgrading vehicle models, it has not examined a more common situation: estimating a given vehicle's fuel efficiency from the average of two efficiency values (e.g., in the city and on highways). In such situations, we find an additional bias in people's judgment and choice, the average fuel-efficiency fallacy, in which people falsely believe that the combined fuel efficiency (e.g., of city and highway MPG) is a simple, rather than a harmonic, mean of the two values. Owing to the curvilinear relationship between fuel efficiency and fuel consumption, the combined fuel-efficiency value is always lower than the simple average, resulting in consistent overestimation of actual fuel efficiency. In a series of studies, we demonstrate how this fallacy of overestimating combined fuel efficiency leads to suboptimal choices between vehicles. In addition, we find that the solution prescribed for the MPG illusion, using gallons per 100 miles, reduces but does not eliminate the average fuel-efficiency fallacy, and that comprehension of the gallons-per-100-miles measure is a precondition for this nudge to have any effect.
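The harmonic-versus-simple-mean point in the abstract can be illustrated with a small arithmetic sketch. The specific MPG figures below are hypothetical, chosen only for illustration: over equal distances in the city and on the highway, combined fuel efficiency is total miles divided by total gallons, i.e., the harmonic mean of the two MPG values, which always falls below their simple average.

```python
# Sketch of the average fuel-efficiency fallacy (hypothetical MPG figures).
# Over equal distances, combined MPG is the harmonic mean of the segment
# MPG values: total miles driven / total gallons consumed.

def simple_mean(a, b):
    return (a + b) / 2

def harmonic_mean(a, b):
    return 2 / (1 / a + 1 / b)

city_mpg, highway_mpg = 20, 30

naive = simple_mean(city_mpg, highway_mpg)     # 25.0 -- what people estimate
actual = harmonic_mean(city_mpg, highway_mpg)  # ~24.0 -- true combined MPG

# Gallons per 100 miles (GPM), by contrast, averages linearly over equal
# distances, which is why it is the debiasing measure the abstract refers to.
city_gpm, highway_gpm = 100 / city_mpg, 100 / highway_mpg
combined_gpm = simple_mean(city_gpm, highway_gpm)  # ~4.17 gal per 100 mi
assert abs(100 / combined_gpm - actual) < 1e-9     # agrees with harmonic mean
```

The simple mean overstates the true combined figure here by a full MPG; the gap widens as the two values diverge, which is the mechanism behind the consistent overestimation the paper documents.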
Peer, E. et al., 2017.
Beyond the Turk: Alternative platforms for crowdsourcing behavioral research. Journal of Experimental Social Psychology, p.153.
Abstract: The success of Amazon Mechanical Turk (MTurk) as an online research platform has come at a price: MTurk has suffered from slowing rates of population replenishment and growing participant non-naivety. Recently, a number of alternative platforms have emerged, offering capabilities similar to MTurk's but providing access to new and more naive populations. After surveying several options, we empirically examined two such platforms, CrowdFlower (CF) and Prolific Academic (ProA). In two studies, we found that participants on both platforms were more naive and less dishonest than MTurk participants. Across the three platforms, CF provided the best response rate, but CF participants failed more attention-check questions and did not reproduce known effects that replicated on ProA and MTurk. Moreover, ProA participants produced data quality higher than CF's and comparable to MTurk's. ProA and CF participants were also much more diverse than MTurk participants.