Native advertising of online content, such as articles embedded within news websites, is a covert attempt by marketers to affect consumer attitudes and behavior. Because such marketing can have detrimental consequences for consumers, regulators worldwide have begun mandating that disclosures accompany marketing content. Despite these mandated disclosures, studies repeatedly find that consumers still fail to detect native ads even when they include various disclosure labels. We argue that the failure of these and other such disclosures (e.g., software licensing) results from consumers becoming so habituated to these notices that they fail to recognize or use them effectively. We propose an improved form of disclosure for native ads that requires explicit identification of the company or marketing agent paying for the non-original content. Identified disclosure can be more effective because it is more salient and can vary between ads and platforms. In two studies, we show how adding identified disclosures to native advertising increases detection rates significantly and consistently. We also discuss important implications arising from using smart disclosures for consumer protection.
Researchers are increasingly using online audiences to conduct studies and surveys, but there is still considerable uncertainty about the quality of the available audience options. We first engage with the research community to explore which attributes of data quality are most critical to researchers: comprehension, attention, honesty, and reliability. We then explore differences in these data quality aspects between online platforms (where participants choose tasks) and online panels (where respondents are assigned surveys). We find that most audiences suffer from concerning deficits, especially in comprehension and attention, except for Prolific, which appears superior to the other options on these and other parameters. The disappointing results from the other platforms and panels should alarm researchers, whose results may be aggregating serious noise. Additionally, we provide a framework for ongoing investigation into the ever-changing nature of which aspects of data quality are important to researchers, and how the evolving set of audiences performs on these key aspects.
A common dilemma in regulation is determining how much trust authorities can place in people’s self-reports, especially in regulatory contexts where the incentive to cheat is very high. In such contexts, regulators, who are typically risk averse, do not readily confer trust, resulting in excessive requirements worldwide when applying for permits, licenses, and the like. Studies in behavioral ethics have suggested that asking people to pledge ex ante to behave ethically can reduce their level of dishonesty and noncompliance. However, pledges might also backfire by allowing more people to cheat with no real sanctions. Additionally, pledges’ effects have only been studied in one-shot decision making, and they may only have a short-term effect that could decay in the long run, leading to an overall erosion of trust. We explored the interaction of pledges with sanctions and the decay of their effects on people’s honesty by manipulating whether pledges were accompanied by sanctions (fines) and testing their impact on sequential, repeated ethical decisions. We found that pledges considerably and consistently reduced dishonesty, and this effect was not crowded out by the presence of fines. Furthermore, pledges seem to exert an effect on most people, including those who are relatively less inclined to follow rules and norms. We conclude that pledges could be an effective tool for the behavioral regulation of dishonesty, reduce the regulatory burden, and build a more trusting relationship between government and the public, even in areas where incentives and opportunities to cheat are high.
The MPG illusion and the time-saving bias both show that people misjudge the gains from increases in efficiency or speed, because people falsely believe that efficiency and speed are linearly related to consumption (e.g., gallons of fuel or journey time). This efficiency-consumption gap (ECG) has been demonstrated consistently in various situations. In parallel, people have also been found to show a diminished sensitivity to increases in magnitudes when judged under separate vs. joint evaluation modes (SE vs. JE). We show that these “two wrongs can make a right”: when people judge efficiency upgrades under SE mode, their subjective judgments follow a concave curve that closely resembles the curvilinear pattern of efficiency upgrades, making their preferences (artificially) less biased than they are under JE. In two studies, we show that when people are asked for their willingness to pay (WTP) for upgrading products or services under two upgrade options (a smaller vs. a larger one), WTPs differ less under SE than under JE. This means that people exhibit lower sensitivity to the upgrade size under SE, which leads to a de-biasing effect. We show that because JE follows a linear trend, it yields biased preferences for efficiency measures, but not for consumption measures. In contrast, SE yields biased preferences for consumption measures, but not for efficiency measures.
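The curvilinear (non-linear) relation between efficiency and consumption described above can be illustrated with a minimal arithmetic sketch. The 10,000-mile driving distance and the specific MPG values below are illustrative assumptions, not figures from the studies:

```python
def gallons_used(mpg, miles=10_000):
    """Fuel consumed over a fixed distance: consumption is miles / MPG,
    so it falls along a convex curve as MPG rises, not a straight line."""
    return miles / mpg

# Two equal +10 MPG upgrades produce very unequal fuel savings:
saving_low = gallons_used(10) - gallons_used(20)   # 1000 - 500 = 500 gallons saved
saving_high = gallons_used(40) - gallons_used(50)  # 250 - 200 = 50 gallons saved
```

A linear intuition treats both upgrades as equivalent, but the low-efficiency upgrade saves ten times more fuel, which is the gap the ECG literature documents.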
Behaviorally informed policies of interventions in choice architecture are increasingly used to nudge people towards socially desirable behaviors. While consumers are usually the target of those nudges, businesses often serve as “nudging agents” on behalf of government regulation, or may be the target of governmental nudges themselves. Businesses’ support of such behavioral policies might be critical for their implementation, but the perceptions of managers towards nudges have never been directly assessed. We distinguish between government-to-business (G2B) nudges and government-to-business-to-consumer (G2B2C) nudges and provide first evidence of business managers’ attitudes towards such interventions. We discover an overall high level of support for nudges, and in particular for G2B nudges, with variations between types of nudges, the domain in which they operate, and whether they benefit the business or the consumer.
Nudges are simple and effective means to help people make decisions that could benefit themselves or society. However, effects of nudges are limited to local maxima, as they are almost always designed with the “average” person in mind, instead of being customized to different individuals. Such “nudge personalization” has been advocated before, but its actual potency and feasibility have never been systematically investigated. Using the ubiquitous domain of online password nudges as our testbed, we present a novel approach that utilizes people’s decision-making style to personalize the online nudge they receive. In two large-scale studies, we show how and when personalized nudges can lead to considerably stronger and more secure passwords, compared to administering “one-size-fits-all” nudges. We discuss the implications of these findings and how more efforts by researchers and policy-makers should and could be made to guarantee that each individual is nudged in the way most right for them.
Attitudes of public groups towards behavioral policy interventions (or nudges) can be important both to the policy makers who design and deploy nudges, and to researchers who try to understand when and why some nudges are supported while others are not. Until now, research on public attitudes towards nudges has focused on either state- or country-level comparisons, or on correlations with individual-level traits, and has neglected to study how different social groups (such as minorities) might view nudges. Using a large and representative sample, we tested the attitudes of two distinct minority groups in Israel (Israeli Arabs and Ultra-Orthodox Jews), and discovered that nudges that operated against a minority group’s held social norms, promoting a more general societal goal not aligned with the group’s norms, were often less supported by minorities. Contrary to expectations, these differences could not be explained by differences in trust in the government applying these nudges. We discuss implications for public policy and for the research and applications of behavioral interventions.
Malleability of preferences is a central tenet of behavioral decision theory. How malleable preferences really are, however, is a topic of debate. Do preference reversals imply preference construction? We argue that to claim preferences are constructed, a demonstration of more extreme preference malleability than simple preference reversals is required: absolute preference sign changes within participants. If respondents value a prospect positively in 1 condition but negatively in a different condition, preferences cannot be considered stable. Such absolute preference sign changes are possible under uncertainty. In 2 incentive‐compatible experiments, we found participants were willing to pay to take part in a gamble and also demanded to be compensated to take part in a subsequent gamble with identical outcomes and probabilities. Such absolute preference sign changes within participants led to simultaneous risk aversion and risk seeking for the same risky prospect, suggesting that, at least in the domain of risky decisions, consumers' preferences are indeed malleable and constructed.
Self-images are among the most prevalent forms of content shared on social media streams. Face-morphs are images digitally created by combining facial pictures of different individuals. In the case of self-morphs, a person’s own picture is combined with that of another individual. Prior research has shown that even when individuals do not recognize themselves in self-morphs, they tend to trust self-morphed faces more, and judge them more favorably. Thus, self-morphs may be used online as covert forms of targeted marketing: for instance, using consumers’ pictures from social media streams to create self-morphs, and inserting the resulting self-morphs in promotional campaigns targeted at those consumers. The use of this type of personal data for highly targeted influence without individuals' awareness, and the opaque effect such artifacts may have on individuals' attitudes and behaviors, raise potential issues of consumer privacy and autonomy. However, no research to date has examined the feasibility of using self-morphs for such applications. Research on self-morphs has focused on artificial laboratory settings, raising questions regarding the practical, in-the-wild applicability of reported self-morph effects. In three experiments, we examine whether self-morphs could affect individuals' attitudes or even promote products/services, using a combination of experimental designs and dependent variables. Across the experiments, we test both designs and variables that had been used in previous research in this area and new ones that had not. Questioning prior research, however, we find no evidence that end-users react more …
The article presents a study on both the objective and relative risks involved in privacy decision making. Topics include the impact of changes in the objective risk of disclosure and the impact of changes in the relative perceptions of risk of disclosure on hypothetical and actual consumer privacy choices, a decrease in objective risk going from hypothetical to actual choice settings, and an increase in relative risk going from hypothetical to actual choice settings.
With the worldwide implementation of students' evaluation of teaching (SET), faculty attitudes and trust in students' feedback, as well as possible defensive (i.e., self-protective) motivations, seem most relevant to the facilitation of the primary organizational goal of SET, namely, teaching improvement. A questionnaire, administered to 2,241 faculty members of all ranks in two dozen varied institutions, measured positive attitudes and trust, on the one hand, and beliefs in salient negative faculty SET myths, on the other hand. The most widely-held negative attitudes concerned student fallibilities: vindictiveness; lack of maturity; and negative evaluations of low-achieving students. Despite believing in myths, more than half of the respondents reported trusting SET, thought that it accurately reflected their teaching performance, and considered SET-based feedback useful. A derived index comparing self-evaluations to reported students' evaluations demonstrated that more than a third of th
People falsely believe that equal increases in vehicles' fuel efficiency (e.g., miles per gallon (MPG)) will result in equal fuel savings. Whereas previous research on this “MPG illusion” has focused on people's biased choices of upgrading vehicle models, it has not examined a more common situation, namely, estimating a given vehicle's fuel efficiency based on the average of two efficiency values (e.g., in the city and on highways). In such situations, we find an additional bias in people's judgment and choice, the average fuel‐efficiency fallacy, in which people falsely believe that the combined fuel efficiency (e.g., of city and highway MPG) is a simple—instead of a harmonic—mean of the two values. Owing to the curvilinear relationship between fuel efficiency and fuel consumption, the combined fuel‐efficiency value will always be lower than the simple average, resulting in consistent overestimations of the actual fuel efficiency. In a series of studies, we demonstrate how this fallacy of overestimating combined fuel efficiency leads to suboptimal choices between vehicles. In addition, we find that the solution prescribed for the MPG illusion—using gallons per 100 miles—reduces, but does not eliminate, the average fuel‐efficiency fallacy, and that comprehension of the gallons per 100 miles measure is a necessary precondition for this nudge to have any effect.
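The harmonic-vs-simple-mean arithmetic behind the average fuel-efficiency fallacy can be sketched in a few lines. The city and highway MPG values below are illustrative assumptions (equal distances driven in each mode), not figures from the studies:

```python
def combined_mpg(city_mpg, highway_mpg):
    """Combined efficiency over equal city and highway distances.
    Fuel consumed is proportional to 1/MPG, so the correct combination
    is the harmonic mean of the two values, not the simple average."""
    return 2 / (1 / city_mpg + 1 / highway_mpg)

city, highway = 20, 30
simple = (city + highway) / 2          # intuitive but wrong: 25.0 MPG
harmonic = combined_mpg(city, highway)  # actual combined value: 24.0 MPG
```

The harmonic mean is always at or below the simple mean, which is why intuitive averaging systematically overestimates the vehicle's true combined efficiency.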
The success of Amazon Mechanical Turk (MTurk) as an online research platform has come at a price: MTurk has suffered from slowing rates of population replenishment and growing participant non-naivety. Recently, a number of alternative platforms have emerged, offering capabilities similar to MTurk but providing access to new and more naive populations. After surveying several options, we empirically examined two such platforms, CrowdFlower (CF) and Prolific Academic (ProA). In two studies, we found that participants on both platforms were more naive and less dishonest than MTurk participants. Across the three platforms, CF provided the best response rate, but CF participants failed more attention-check questions and did not reproduce known effects replicated on ProA and MTurk. Moreover, ProA participants produced data quality that was higher than CF's and comparable to MTurk's. ProA and CF participants were also much more diverse than participants from MTurk.
The Security Behavior Intentions Scale (SeBIS) measures the computer security attitudes of end-users. Because intentions are a prerequisite for planned behavior, the scale could be useful for predicting users' computer security behaviors. We performed three experiments to identify correlations between each of SeBIS's four sub-scales and relevant computer security behaviors. We found that testing high on the awareness sub-scale correlated with correctly identifying a phishing website; testing high on the passwords sub-scale correlated with creating passwords that could not be quickly cracked; testing high on the updating sub-scale correlated with applying software updates; and testing high on the securement sub-scale correlated with smartphone lock screen usage (e.g., PINs). Our results indicate that SeBIS predicts certain computer security behaviors and that it is a reliable and valid tool that should be used in future research.
This paper aims to examine how reversibility in disclosing personal information – that is, having (vs. not having) the option to later revise or retract personal information – can impact consumers’ willingness to divulge personal information. Three studies examined how informing consumers they may (reversible condition) or may not (irreversible condition) revise their personal information in the future affected their propensity to disclose personal information, compared to a control condition. Study 1 (which included three experiments with different time intervals between initial and revised disclosure) showed that consumers disclose less in both the reversible and irreversible conditions, compared to the control condition. Studies 2 and 3 showed that this is because consumers treat reversibility as a cue to the sensitivity of the information they are asked to divulge, and that this leads them to disclose less when reversibility or irreversibility is made explicitly salient beforehand. As many marketers are interested in hoarding consumers’ personal information, privacy advocates call for methods that would ensure careful and well-informed disclosure. Offering reversibility to a decision to disclose personal information, or merely pointing out the irreversibility of that decision, can make consumers reevaluate the sensitivity of the situation, leading to more careful disclosures. Although previous research on reversibility in consumer behavior focused on product return policies and showed that reversibility increases purchases, none has studied how reversibility affects self-disclosure and how it can decrease it.
While individual differences in decision-making have been examined within the social sciences for several decades, they have only recently begun to be applied by computer scientists to examine privacy and security attitudes (and ultimately behaviors). Specifically, several researchers have shown how different online privacy decisions are correlated with the "Big Five" personality traits. In this paper, we show that the five factor model is actually a weak predictor of privacy attitudes, and that other well-studied individual differences in the psychology literature are much stronger predictors. Based on this result, we introduce the new paradigm of psychographic targeting of privacy and security mitigations: we believe that the next frontier in privacy and security research will be to tailor mitigations to users' individual differences. We explore the extensive work on choice architecture and "nudges," and discuss the possible ways it could be leveraged to improve security outcomes by personalizing privacy and security mitigations to specific user traits.
Although researchers often assume their participants are naive to experimental materials, this is not always the case. We investigated how prior exposure to a task affects subsequent experimental results. Participants in this study completed the same set of 12 experimental tasks at two points in time, first as a part of the Many Labs replication project and again a few days, a week, or a month later. Effect sizes were markedly lower in the second wave than in the first. The reduction was most pronounced when participants were assigned to a different condition in the second wave. We discuss the methodological implications of these findings.
While individual differences in decision-making have been examined within the social sciences for several decades, this research has only recently begun to be applied by computer scientists to examine privacy and security attitudes (and ultimately behaviors). Specifically, several researchers have shown how different online privacy decisions are correlated with the "Big Five" personality traits. However, in our own research, we show that the five factor model is actually a weak predictor of privacy preferences and behaviors, and that other well-studied individual differences in the psychology literature are much stronger predictors. We describe the results of several experiments that showed how decision-making style and risk-taking attitudes are strong predictors of privacy attitudes, as well as a new scale that we developed to measure security behavior intentions. Finally, we show that privacy and security attitudes are correlated, but orthogonal.