A common dilemma in regulation is determining how much trust authorities can place in people’s self-reports, especially in contexts where the incentive to cheat is high. In such contexts, regulators, who are typically risk averse, do not readily confer trust, resulting in excessive requirements worldwide when people apply for permits, licenses, and the like. Studies in behavioral ethics have suggested that asking people to pledge ex ante to behave ethically can reduce their dishonesty and noncompliance. However, pledges might also backfire by allowing more people to cheat without facing real sanctions. Additionally, pledges’ effects have only been studied in one-shot decisions, and they may have only a short-term effect that decays in the long run, leading to an overall erosion of trust. We explored how pledges interact with sanctions, and how their effects on honesty decay, by manipulating whether pledges were accompanied by sanctions (fines) and testing their impact on sequential, repeated ethical decisions. We found that pledges considerably and consistently reduced dishonesty, and this effect was not crowded out by the presence of fines. Furthermore, pledges seem to affect most people, including those who are relatively less inclined to follow rules and norms. We conclude that pledges could be an effective tool for the behavioral regulation of dishonesty, one that reduces the regulatory burden and builds a more trusting relationship between government and the public, even in areas where the incentives and opportunities to cheat are high.
Behaviorally informed policy interventions in choice architecture are increasingly used to nudge people towards socially desirable behaviors. While consumers are usually the targets of these nudges, businesses often serve as “nudging agents” on behalf of government regulation, or may themselves be the targets of governmental nudges. Businesses’ support of such behavioral policies might be critical for their implementation, but managers’ perceptions of nudges have never been directly assessed. We distinguish between government-to-business (G2B) nudges and government-to-business-to-consumer (G2B2C) nudges and provide the first evidence of business managers’ attitudes towards such interventions. We find an overall high level of support for nudges, particularly G2B nudges, with variation across types of nudges, the domains in which they operate, and whether they benefit the business or the consumer.
Nudges are simple and effective means of helping people make decisions that benefit themselves or society. However, the effects of nudges are limited to local maxima, because they are almost always designed with the “average” person in mind rather than customized to different individuals. Such “nudge personalization” has been advocated before, but its actual potency and feasibility have never been systematically investigated. Using the ubiquitous domain of online password nudges as our testbed, we present a novel approach that uses people’s decision-making styles to personalize the online nudges they receive. In two large-scale studies, we show how and when personalized nudges can lead to considerably stronger and more secure passwords than “one-size-fits-all” nudges. We discuss the implications of these findings and how researchers and policy-makers could and should do more to ensure that each individual is nudged in the way that suits them best.
Attitudes of public groups towards behavioral policy interventions (or nudges) are important both to the policy makers who design and deploy nudges and to researchers who try to understand when and why some nudges are supported while others are not. Until now, research on public attitudes towards nudges has focused either on state- or country-level comparisons or on correlations with individual-level traits, and has neglected how different social groups (such as minorities) might view nudges. Using a large and representative sample, we tested the attitudes of two distinct minority groups in Israel (Israeli Arabs and Ultra-Orthodox Jews) and found that nudges promoting general societal goals that conflicted with a minority group’s social norms were often less supported by that group. Contrary to expectations, these differences could not be explained by differences in trust in the government applying the nudges. We discuss implications for public policy and for the research and application of behavioral interventions.
Malleability of preferences is a central tenet of behavioral decision theory. How malleable preferences really are, however, is a topic of debate. Do preference reversals imply preference construction? We argue that claiming preferences are constructed requires demonstrating more extreme preference malleability than simple preference reversals: absolute preference sign changes within participants. If respondents value a prospect positively in one condition but negatively in a different condition, preferences cannot be considered stable. Such absolute preference sign changes are possible under uncertainty. In two incentive-compatible experiments, we found that participants were willing to pay to take part in a gamble and also demanded to be compensated to take part in a subsequent gamble with identical outcomes and probabilities. Such absolute preference sign changes within participants led to simultaneous risk aversion and risk seeking for the same risky prospect, suggesting that, at least in the domain of risky decisions, consumers' preferences are indeed malleable and constructed.
Self-images are among the most prevalent forms of content shared on social media streams. Face-morphs are images digitally created by combining facial pictures of different individuals; in the case of self-morphs, a person’s own picture is combined with that of another individual. Prior research has shown that even when individuals do not recognize themselves in self-morphs, they tend to trust self-morphed faces more and judge them more favorably. Thus, self-morphs may be used online as covert forms of targeted marketing: for instance, using consumers’ pictures from social media streams to create self-morphs and inserting the resulting self-morphs in promotional campaigns targeted at those consumers. The use of this type of personal data for highly targeted influence without individuals' awareness, and the opaque effects such artifacts may have on individuals' attitudes and behaviors, raise potential issues of consumer privacy and autonomy. However, no research to date has examined the feasibility of using self-morphs for such applications. Research on self-morphs has focused on artificial laboratory settings, raising questions about the practical, in-the-wild applicability of reported self-morph effects. In three experiments, we examine whether self-morphs can affect individuals' attitudes or even promote products and services, using a combination of experimental designs and dependent variables. Across the experiments, we test both designs and variables used in previous research in this area and new ones. Questioning prior research, however, we find no evidence that end-users react more …
The article presents a study of both the objective and relative risks involved in privacy decision making. Topics include the impact of changes in the objective risk of disclosure, and of changes in the relative perception of that risk, on hypothetical and actual consumer privacy choices; a decrease in the impact of objective risk when moving from hypothetical to actual choice settings; and an increase in the impact of relative risk when moving from hypothetical to actual choice settings.
With the worldwide implementation of students' evaluation of teaching (SET), faculty attitudes and trust in students' feedback, as well as possible defensive (i.e., self-protective) motivations, seem most relevant to facilitating the primary organizational goal of SET, namely, teaching improvement. A questionnaire, administered to 2,241 faculty members of all ranks in two dozen varied institutions, measured positive attitudes and trust on the one hand, and beliefs in salient negative faculty SET myths on the other. The most widely held negative attitudes concerned student fallibilities: vindictiveness, lack of maturity, and negative evaluations by low-achieving students. Despite believing in myths, more than half of the respondents reported trusting SET, thought that it accurately reflected their teaching performance, and considered SET-based feedback useful. A derived index comparing self-evaluations to reported students' evaluations demonstrated that more than a third of the …
People falsely believe that equal increases in vehicles' fuel efficiency (e.g., miles per gallon (MPG)) will result in equal fuel savings. Whereas previous research on this “MPG illusion” has focused on people's biased choices when upgrading vehicle models, it has not examined a more common situation: estimating a given vehicle's fuel efficiency based on the average of two efficiency values (e.g., in the city and on highways). In such situations, we find an additional bias in people's judgment and choice, the average fuel-efficiency fallacy, in which people falsely believe that the combined fuel efficiency (e.g., of city and highway MPG) is a simple, rather than a harmonic, mean of the two values. Owing to the curvilinear relationship between fuel efficiency and fuel consumption, the combined fuel-efficiency value is always lower than the simple average, resulting in consistent overestimation of the actual fuel efficiency. In a series of studies, we demonstrate how this fallacy of overestimating combined fuel efficiency leads to suboptimal choices between vehicles. In addition, we find that the solution prescribed for the MPG illusion, using gallons per 100 miles, does reduce, but does not eliminate, the average fuel-efficiency fallacy, and that comprehension of the gallons-per-100-miles measure is a necessary precondition for this nudge to have any effect.
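To see why the simple average overstates combined efficiency, consider a hypothetical vehicle rated at 30 MPG in the city and 40 MPG on the highway, driven equal distances in each setting (the figures are illustrative, not taken from the studies):

```latex
\underbrace{\frac{2}{\frac{1}{30}+\frac{1}{40}}}_{\text{harmonic mean}} = \frac{240}{7} \approx 34.3\ \text{MPG}
\qquad \text{vs.} \qquad
\underbrace{\frac{30+40}{2}}_{\text{simple mean}} = 35\ \text{MPG}
```

Over 120 miles split evenly, the vehicle consumes 60/30 + 60/40 = 3.5 gallons, i.e., 120/3.5 ≈ 34.3 MPG; the simple mean overstates this true combined value, and the gap widens as the two ratings diverge.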
The success of Amazon Mechanical Turk (MTurk) as an online research platform has come at a price: MTurk has suffered from slowing rates of population replenishment and growing participant non-naivety. Recently, a number of alternative platforms have emerged, offering capabilities similar to MTurk's but providing access to new and more naive populations. After surveying several options, we empirically examined two such platforms, CrowdFlower (CF) and Prolific Academic (ProA). In two studies, we found that participants on both platforms were more naive and less dishonest than MTurk participants. Across the three platforms, CF provided the best response rate, but CF participants failed more attention-check questions and did not reproduce known effects that were replicated on ProA and MTurk. Moreover, ProA participants produced data quality that was higher than CF's and comparable to MTurk's. ProA and CF participants were also much more diverse than participants from MTurk.
The Security Behavior Intentions Scale (SeBIS) measures the computer security attitudes of end-users. Because intentions are a prerequisite for planned behavior, the scale could be useful for predicting users' computer security behaviors. We performed three experiments to identify correlations between each of SeBIS's four sub-scales and relevant computer security behaviors. We found that scoring high on the awareness sub-scale correlated with correctly identifying a phishing website; scoring high on the passwords sub-scale correlated with creating passwords that could not be quickly cracked; scoring high on the updating sub-scale correlated with applying software updates; and scoring high on the securement sub-scale correlated with smartphone lock screen usage (e.g., PINs). Our results indicate that SeBIS predicts certain computer security behaviors and that it is a reliable and valid tool that should be used in future research.
This paper aims to examine how reversibility in disclosing personal information – that is, having (vs. not having) the option to later revise or retract personal information – can impact consumers' willingness to divulge personal information. Three studies examined how informing consumers that they may (reversible condition) or may not (irreversible condition) revise their personal information in the future affected their propensity to disclose personal information, compared to a control condition. Study 1 (which included three experiments with different time intervals between initial and revised disclosure) showed that consumers disclose less in both the reversible and irreversible conditions than in the control condition. Studies 2 and 3 showed that this is because consumers treat reversibility as a cue to the sensitivity of the information they are asked to divulge, which leads them to disclose less when reversibility or irreversibility is made explicitly salient beforehand. As many marketers are interested in hoarding consumers' personal information, privacy advocates call for methods that would ensure careful and well-informed disclosure. Offering reversibility for a decision to disclose personal information, or merely pointing out the irreversibility of that decision, can make consumers reevaluate the sensitivity of the situation, leading to more careful disclosures. Although previous research on reversibility in consumer behavior has focused on product return policies and showed that reversibility increases purchases, none has studied how reversibility affects self-disclosure and how it can decrease it.
While individual differences in decision-making have been examined within the social sciences for several decades, such differences have only recently begun to be used by computer scientists to examine privacy and security attitudes (and ultimately behaviors). Specifically, several researchers have shown how different online privacy decisions are correlated with the "Big Five" personality traits. In this paper, we show that the five-factor model is actually a weak predictor of privacy attitudes and that other well-studied individual differences in the psychology literature are much stronger predictors. Based on this result, we introduce the new paradigm of psychographic targeting of privacy and security mitigations: we believe that the next frontier in privacy and security research will be to tailor mitigations to users' individual differences. We explore the extensive work on choice architecture and "nudges," and discuss the possible ways it could be leveraged to improve security outcomes by personalizing privacy and security mitigations to specific user traits.
Although researchers often assume their participants are naive to experimental materials, this is not always the case. We investigated how prior exposure to a task affects subsequent experimental results. Participants in this study completed the same set of 12 experimental tasks at two points in time, first as a part of the Many Labs replication project and again a few days, a week, or a month later. Effect sizes were markedly lower in the second wave than in the first. The reduction was most pronounced when participants were assigned to a different condition in the second wave. We discuss the methodological implications of these findings.
While individual differences in decision-making have been examined within the social sciences for several decades, this research has only recently begun to be applied by computer scientists to examine privacy and security attitudes (and ultimately behaviors). Specifically, several researchers have shown how different online privacy decisions are correlated with the "Big Five" personality traits. However, in our own research, we show that the five-factor model is actually a weak predictor of privacy preferences and behaviors, and that other well-studied individual differences in the psychology literature are much stronger predictors. We describe the results of several experiments that showed how decision-making style and risk-taking attitudes are strong predictors of privacy attitudes, as well as a new scale that we developed to measure security behavior intentions. Finally, we show that privacy and security attitudes are correlated, but distinct.
Despite the plethora of security advice and online education materials offered to end-users, there exists no standard measurement tool for end-user security behaviors. We present the creation of such a tool. We surveyed the most common computer security advice that experts offer to end-users in order to construct a set of Likert scale questions to probe the extent to which respondents claim to follow this advice. Using these questions, we iteratively surveyed a pool of 3,619 computer users to refine our question set such that each question was applicable to a large percentage of the population, exhibited adequate variance between respondents, and had high reliability (i.e., desirable psychometric properties). After performing both exploratory and confirmatory factor analysis, we identified a 16-item scale consisting of four sub-scales that measures attitudes towards choosing passwords, device securement, staying up-to-date, and proactive awareness.
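As a rough illustration of the kind of item-refinement analysis described above, the sketch below runs a variance filter, an exploratory factor analysis, and a reliability check on hypothetical Likert responses (the data, column names, and thresholds are invented for illustration; this is not the authors' actual pipeline):

```python
# Illustrative sketch of iterative scale refinement on Likert items.
# All data and cutoffs are hypothetical, not the authors' pipeline.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer  # pip install factor-analyzer

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Internal-consistency reliability of a set of Likert items."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses: 500 respondents x 20 candidate items on a 1-5 scale.
rng = np.random.default_rng(42)
responses = pd.DataFrame(
    rng.integers(1, 6, size=(500, 20)),
    columns=[f"item_{i:02d}" for i in range(1, 21)],
)

# Step 1: drop items with too little variance between respondents.
responses = responses.loc[:, responses.var() > 0.5]

# Step 2: exploratory factor analysis, looking for the four sub-scales
# (passwords, device securement, updating, proactive awareness).
efa = FactorAnalyzer(n_factors=4, rotation="promax")
efa.fit(responses)
loadings = pd.DataFrame(efa.loadings_, index=responses.columns)
print(loadings.round(2))

# Step 3: check reliability of the retained item set.
print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
```

In practice, items with low variance or weak loadings would be dropped and the survey re-administered, iterating until a stable four-factor structure emerges; a confirmatory factor analysis on a fresh sample would then test that structure, as the abstract describes.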
In their study of the Dr. Fox lecture, Naftulin, Ware, and Donnelly (1973) claimed that an expressive speaker delivering an attractive lecture devoid of any content could seduce students into believing that they had learned something significant. Over the decades, the study has been (and still is) cited hundreds of times and used by opponents of student evaluations of teaching (SET) as empirical proof of SET's lack of validity. In an attempt to formulate an alternative explanation of the findings, we replicated the 1973 study, using the original video of the lecture and following the exact methodology of the original study. The alternative explanations tested on several samples of students included (a) acquiescence bias (via a reversed questionnaire and a cognitive remedy); (b) ignorance bias (participants' lack of familiarity with the lecture content); (c) status/prestige bias (presentation of the speaker as a world authority); and (d) a direct measurement of students' reports of their presumed learning. The Dr. Fox effect was indeed consistently replicated in all samples. However, the originally proposed notion of educational seduction leading to presumed (illusory) student learning was ruled out by the empirical findings: students enjoyed the entertaining lecture, but they had not been seduced into believing they had learned. We discuss the relevance of metacognitive considerations to the inclusion of self-reported learning in this study, and to the wider issue of incorporating student learning in the contemporary measurement of SET.
Data quality is one of the major concerns of using crowdsourcing websites such as Amazon Mechanical Turk (MTurk) to recruit participants for online behavioral studies. We compared two methods for ensuring data quality on MTurk: attention check questions (ACQs) and restricting participation to MTurk workers with high reputation (above 95% approval ratings). In Experiment 1, we found that high-reputation workers rarely failed ACQs and provided higher-quality data than did low-reputation workers; ACQs improved data quality only for low-reputation workers, and only in some cases. Experiment 2 corroborated these findings and also showed that more productive high-reputation workers produce the highest-quality data. We concluded that sampling high-reputation workers can ensure high-quality data without having to resort to using ACQs, which may lead to selection bias if participants who fail ACQs are excluded post-hoc.