"I Know I'm Being Observed:" Video Interventions to Educate Users about Targeted Advertising on Facebook

Recent work explores how to educate and encourage users to protect their online privacy. We tested the efficacy of short videos for educating users about targeted advertising on Facebook. We designed a video that utilized an emotional appeal to explain risks associated with targeted advertising (fear appeal), and which demonstrated how to use the associated ad privacy settings (digital literacy). We also designed a version of this video which additionally showed the viewer their personal Facebook ad profile, facilitating personal reflection on how they are currently being profiled (reflective learning). We conducted an experiment (n = 127) in which participants watched a randomly assigned video, and we measured the impact over the following 10 weeks. We found that these videos significantly increased user engagement with Facebook advertising preferences, especially for those who viewed the reflective learning content. However, those who only watched the fear appeal content were more likely to disengage with Facebook as a whole.


INTRODUCTION
Social media platforms have been widely adopted and are used by 72% of adults in the United States [6], including 84% of those aged 18-29 years [6]. Users leverage these platforms to facilitate their social, professional, and civil interactions, with roughly half of U.S. adults even using social media to get their news [82]. Through their own and friends' social media activities, users (often implicitly) reveal their interests, personal traits, and even their state of health [24].
In recent years, backlash [75] has erupted around social media platforms profiling users, using their data to train algorithms that infer fine-grained interests and characteristics such as race, religion, or socioeconomic status. For example, some have expressed concern over how artificial intelligence (AI) techniques (e.g., deep learning) allow platforms to profile users to show targeted advertising [69]. Privacy threats such as these are directly due to fine-grained, automated, AI-driven, and unpredictable user profiling enabled by data collection [12]. Such targeted advertising can cause harms such as discrimination [43, 77], emotional manipulation [76], and political persuasion [71]. In fact, research shows that people who are already isolated can feel even more so by seeing certain types of content (which could be manipulated purposefully through profiling) [55]. Fears of influencing political outcomes have risen through public revelations of microtargeting used to sway political opinions [9]. Taken to an extreme, some have argued that microtargeting could be used as a tool to undermine democracy or engage in psychological warfare [59]. It is difficult for users to anticipate how their information will be used for targeting, which itself constitutes a privacy threat. Despite these concerns, AI-driven user profiling is crucial to providing value by creating a personalized experience on platforms like Facebook.
Privacy research emphasizes how being aware of threats and having control over one's information is essential for users to engage in online activity [47]. Users with low digital literacy may lack the knowledge of how to use technology in a way that would give them a sense of control over their privacy [62, 96]. Thus, we aim to create interventions that increase users' digital literacy, giving them the knowledge to manage the privacy threats resulting from user profiling. However, digital literacy is not enough to motivate behavior change: prior research suggests that digitally literate users may understand how to manage settings, but not necessarily have the motivation to do so [88].
Thus, users may also need to be convinced of the existence and importance of privacy threats.
There have been some promising avenues of research into motivating privacy-preserving behaviors, such as through storytelling. Presenting stories about peers who have been compromised has been shown to encourage users to take their privacy more seriously [87]. Other work has focused on explaining privacy in a more accessible medium, such as through privacy comics [38] or privacy labels [36] (akin to nutrition labels). In short, scholars are now examining how particular media's affordances and aesthetic qualities can be leveraged to influence people's privacy-preserving behaviors effectively.
Within the realm of videos, Stein et al. [72] examined different genres of online YouTube videos that seek to educate and persuade the public regarding Facebook and users' privacy. The authors found that videos utilize one of three approaches: 1) digital literacy: providing knowledge of how platforms work and how to modify settings; 2) fear appeal: making an emotional appeal to users by showing the ramifications of not protecting one's privacy; and 3) reflective learning: an educational approach where one reflects on their past undesirable behavior, which motivates improved behavior in the future. Several of these approaches tap into emotional responses in an attempt to leave an impression on the viewer. Although these YouTube videos may have many likes and views, it is unclear which approaches, if any, increase digital literacy or lead to behavioral change.
Extending this line of research, we designed and created two educational videos that encourage user engagement in privacy-protecting behaviors by incorporating persuasive elements. We focused on the three persuasive approaches of digital literacy, fear appeal, and reflective learning. Both videos showed viewers how to change their ad profiling privacy settings (digital literacy). Both videos motivated viewers to care about their privacy with regard to targeted advertising, using an emotional appeal to explain the risks associated with targeted advertising (fear appeal). Finally, to maximize the persuasive elements that motivate viewers to care about ad targeting privacy, one video additionally showed the viewer their personal Facebook ad profile, facilitating personal reflection on how they are currently being profiled (reflective learning). We refer to the video using digital literacy and fear appeal as the fear appeal or FA video. This video can be readily deployed to viewers with little to no overhead. We refer to the video utilizing digital literacy, fear appeal, and reflective learning as the fear appeal with reflective learning or FA+RL video. This video included personally tailored content generated from the viewer's online data. For example, the information Facebook has inferred about the viewer is directly presented in the video to help the viewer reflect on Facebook's pervasive tracking and inference activities. Such personalized videos encourage more engagement, but require more overhead to implement in the real world (i.e., gathering information inferred by platforms from past digital footprints and injecting it into a personalized video). Thus, we tested these two videos to see how much is gained by personalization. Our research questions are as follows.
RQ1: How does exposure to the fear appeal video intervention about targeted advertising on Facebook affect privacy-protective attitudes and behaviors?

RQ2: How does exposure to the reflective learning video intervention about targeted advertising on Facebook affect privacy-protective attitudes and behaviors?
We conducted an experiment in which N = 127 participants were randomly assigned to one of the two affective video interventions or a control video. Longitudinal behavioral and attitudinal data were collected from viewers to determine the impact of these videos. We found that exposure to the videos increased privacy-protective attitudes and behaviors. However, each of the videos led to slightly different behaviors. While a fear-appeal-only approach led viewers to be more concerned with avoiding unwanted incoming information, the fear appeal with reflective learning approach appeared to heighten awareness of being profiled and increased engagement with crucial advertising settings. These findings indicate that the videos target similar attitudes and concerns. However, personalization through the added reflective learning approach increases users' engagement with their preferences, while a fear-appeal-only approach tends to reduce their interaction with Facebook as a whole. This reduced engagement could potentially be a downside for the platform and users. The results of our research are an initial step towards understanding how affective appeals can shape human behavior and attitudes. Furthermore, our findings provide a more nuanced understanding of how different affective approaches can prime the user to be more aware of different types of privacy violations, motivating action to protect against them. We conclude by discussing the implications of this work and opportunities to improve online privacy education.

BACKGROUND AND RELATED WORK
In this section, we present prior work related to this research. First, we explore the privacy harms of targeted advertising on social media. Next, we summarize research encouraging online privacy protection, divided into two broad categories: Behavioral Nudges and Educational Interventions. We then examine research on the role of persuasion in privacy attitudes and behaviors, specifically in relation to Protection Motivation Theory (PMT). This presents an opportunity for us to explore how to incorporate those persuasive elements into an intervention.

Targeted Advertising in Social Media
Targeted advertising is a leading economic driver of the growth of the modern Web, including social media platforms [49]. Companies like Facebook use strategies to track their users' online activities across sites, and even devices [13, 28, 31, 68]. While online platforms tout targeted advertising as a net benefit for both advertisers and consumers, there is evidence that this pervasive tracking of user activities can also lead to discriminatory practices [15, 23, 95].
In light of these practices and the pervasive use of social media platforms, most people in the United States worry about the security of their personal and private data [7]. Yet, social media users rarely seem to act on those privacy concerns, a phenomenon often referred to as the privacy paradox [8].
This paradox may arise from a knowledge gap. Indeed, research shows that many users lack the knowledge needed to take control of their privacy [63, 97]. Existing tools and preferences have also been found to have significant usability issues [45, 46]. Due to these limitations and shortcomings, Johnson et al. found that those who actively manage their online advertising settings tend to be more technologically literate than the average internet user [33]. However, a key question remains: how can users be effectively educated about targeted advertising on social media to help them make informed decisions?

Privacy Education and Motivation
We summarize relevant prior research on helping people understand and be motivated to manage their privacy. We review the literature on privacy- and security-related educational video interventions. We also describe the work that has been done in interface-based education and motivation, often in the form of nudges or default settings.

Educational videos.
Previous work has investigated using educational video interventions as a long-term method for promoting secure behavior [2-4, 11, 21, 22]. Specifically, Albayram et al. found that effective risk communication and self-efficacy themes in videos were most likely to influence people's intention to adopt multi-factor authentication [2]. In more recent work, Albayram finds that using a fear appeal approach with educational instructions significantly increases the likelihood of viewers adopting secure smartphone authentication schemes and password managers [3, 4]. Das et al. also find that video-based interventions are effective methods for promoting secure behavior [21, 22]. Our work similarly applies a video-based approach, but to a different subject, namely targeted advertising on social media platforms.

2.2.2 Nudges embedded in the platform. Prior work has also investigated using "nudges" to address users' lack of motivation or knowledge gaps in managing their privacy. Nudges originated in behavioral economics and are essentially a change in the "choice architecture" that steers individuals to act in a particular direction while still allowing the autonomy to make other choices [79]. In the security and privacy field, nudges are typically employed as secure/private default settings or in-the-moment interventions [1, 40, 70, 74, 84-86]. While nudges have generally been shown to have some effectiveness, some approaches are more effective than others [84, 85]. Specifically, nudges that effectively communicate risks increase the likelihood that users take action to protect themselves [74]. While Facebook has implemented some privacy-related nudges, Kroll et al. found that these nudges do not lead to significant changes in user behavior [40]. They suggested improving the persuasiveness of nudges as one avenue of investigation. Our research investigates how to increase persuasiveness toward a more privacy-preserving experience on social media.

Persuasion and Protection Motivation Theory
Fear appeals have been applied to a variety of domains, including promoting health behavior [99] and combating climate change [16]. Investigations of their effectiveness have yielded mixed results [56], with some even suggesting that fear appeals may cause more harm than good [73] (such as inducing anxiety in the case of climate change [18]). However, meta-analyses have shown that when fear appeals are coupled with messages of self-efficacy, they can be effective at influencing behavior change without inducing anxiety [78, 99].
The persuasion elements of our video design were drawn from prior work identifying the persuasion tactics used in privacy education videos [72], but also from Protection Motivation Theory (PMT) [67, 98]. There are two primary components to PMT: Threat Appraisal and Coping Appraisal. Threat appraisal consists of evaluating the severity of a potential threat and one's vulnerability to it, while coping appraisal consists of evaluating the efficacy of a given response and the self-efficacy of enacting that response, weighed against the cost of the response. Users typically draw on two sources of information for threat and coping appraisal: Environmental and Intrapersonal. Environmental information includes external persuasion and observational learning, while intrapersonal information includes differing personality variables and prior experience. Both types of information are used by individuals in the appraisal process.
PMT originally emerged as a way to help individuals protect themselves against health hazards [67, 98]. More recently, PMT has been applied to other fields, including usable security [94] and privacy [51]. While not explicitly drawing from PMT, the use of privacy experts or peers fits within the environmental information model of threat appraisal. Much privacy research has examined how the type of information used affects the persuasiveness of an intervention, such as the credibility or trustworthiness of a source or message. While the expert status of a sender or agent is typically the most dominant consideration [5, 34], studies also cite the role of social trust and peer influence on privacy behavior [41, 89, 101]. Users are often influenced by peers in their privacy behaviors, such as when choosing appropriate privacy settings [41], and rely on social cues, particularly from privacy experts and friends [27]. Turning to experts or informed members of a user's social group may be natural, considering that many users may be unaware of how to remedy a privacy violation [101]. Emotional appeals, such as fear appeals, are not as well studied. Pfeffer et al. [64] found that scare tactics and real-life stories of security threats may be ineffective in changing behavior unless combined with actionable advice. In PMT terms, this suggests that a change in the threat appraisal (i.e., increased fear of a threat) without an increase in self- or response efficacy is unlikely to motivate change.
Recent work has investigated directly applying PMT in the creation of persuasive media to promote secure behavior [3, 32, 81]. In the privacy and social media domain, Meier et al. used PMT to create a persuasive tool to motivate users to be more privacy aware on social media [51]. Meier et al. specifically investigated using a fear appeal approach and a norm-based approach, finding that neither fear appeal nor normative appeal led to changes in privacy protection motivation; however, changes in threat perception and perceived effectiveness of privacy protection did lead to behavior change.
Our study contributes to this growing body of work by using PMT to strengthen the persuasiveness of video interventions for online privacy and security threats. In particular, we apply the PMT construct of threat perception by using a fear appeal. We increase the perceived effectiveness of privacy protections through digital literacy. In addition, to leverage the personal sources of information that PMT posits are useful for performing threat and coping appraisals, we used a reflective learning approach to further improve the persuasiveness of the videos. Reflective learning is an educational process that encourages users to reflect on their own experiences to inform their future behavior. This process may increase the user's self-efficacy and, in turn, the coping appraisal. Along these lines, prior work has investigated how personalizing persuasive games for individual users increases the likelihood of motivating behavior change [35, 57, 58]. Reflective learning approaches have also been investigated in the security and privacy space in the form of reflective writing [42]. These studies further confirm the effectiveness of reflective learning in persuasive media. Given the popular nature of privacy videos on social media platforms [72], our study seeks to examine whether a combined fear appeal, digital literacy, and reflective learning strategy may indeed improve the efficacy of video-based education. In particular, research has shown that the effectiveness of video-based education is limited by its passive nature [14]. Using fear appeals, we aim to trigger a more active response to the educational materials presented in our videos. Furthermore, reflective learning is meant to explicitly direct viewers to actively reflect upon the educational materials presented. Together, these elements aim to increase viewers' motivation and efficacy in actively consuming educational materials.
Thus, in this work, we explore how to effectively educate users about targeted advertising on social media and persuade them to take action. To that end, we applied the best practices identified in the health and security fields to test whether these are also effective methods for persuading users to act in more privacy-preserving ways. To do this, we designed interventions that focused on motivating behavior change using a fear appeal approach by explicitly informing users about the risks of targeted advertising. The videos also included themes of self-efficacy by giving step-by-step instructions on managing one's privacy settings regarding targeted advertising on Facebook.

VIDEO DESIGN
We designed two educational video interventions and used an additional control video. Below, we describe the design of the videos, including their persuasive elements.

Fear Appeal
The primary persuasive element in the video was a hypothetical, but relatable, fear appeal scenario used to activate viewers' privacy concerns. It featured an example of how Facebook advertising could be used to discriminate against people like the viewer. Specifically, employment advertisements were being shown to people associated with interest categories representing nearby universities, but not the user's own university. The fear appeal was designed to enhance users' perception that profiling could allow someone to be targeted or excluded. To ensure that viewers internalized the possibility that they could be vulnerable to ad targeting, we designed and tested different storylines. We developed three videos with different storylines and pilot-tested them (n = 11) to identify the most persuasive one. These pilot tests were conducted at a large private university whose student body is associated with a religious affiliation. Each video depicted some form of discrimination resulting from ad targeting, but with different reasons (and associated consequences). An initial video only explained how microtargeting could theoretically lead to discrimination, without stating a concrete example. A second video discussed a real-life example of gender discrimination in employment advertisements [26]. The third and final script included a hypothetical example of a student from their university who did not see relevant job opportunities and wonders if this may be due to the school's religious affiliation. The phenomenon of religious discrimination in employment is well documented [25, 30, 65, 80, 83]. While the scenario provided in this video was not based on a specific occurrence, given the reality of religious discrimination in employment and the university's religious affiliation, we theorized that students may be particularly persuaded that ad targeting could have privacy implications.
Pilot participants were shown one of the three videos, followed by a semi-structured interview that allowed researchers to gauge participants' engagement and the perceived likelihood that the video would trigger attitudinal or behavioral change. All four of the participants who were shown the video without any example scenario mentioned not being persuaded that microtargeting was a problem they should be concerned about. One participant who viewed the gender discrimination scenario could not imagine being in a similar scenario, while the remaining two did not feel that the specific type of job ads they would miss out on were harmful enough to warrant making changes to their settings. Finally, three of the four participants who viewed the narrative about religious discrimination based on university affiliation mentioned that the scenario made them want to view their ad settings and be more concerned about ad profiling in general. Results from this pilot and the full study (more fully described in subsequent sections) suggest that viewers did not perceive this fear appeal to be a real occurrence, but rather a hypothetical example of something that could occur. In the pilot and full study, participants were asked for suggestions on how to improve the video they watched. Many participants referred to the "hypothetical" (P110) story and had suggestions for the "example" (P124).
Our main takeaway from the pilot was that participants were more likely to want to change their Facebook usage if they felt it was possible that they could be the victim of the discrimination depicted in the video. The scenario of religious affiliation with the university was the most plausible. In essence, this scenario was an effective fear appeal: it increased users' perceived threat and vulnerability, and was more likely to motivate behavior change.

Digital Literacy
To complement the fear appeal component, our videos included information to increase digital literacy by informing participants about Facebook's data collection methods and how these data are used to categorize Facebook users for advertising purposes. The videos taught viewers where to find their advertising privacy settings and how to use them to limit ad profiling. The digital literacy portion of the videos focused primarily on Facebook's sensitive advertising topics page, profiling information such as education and relationship status, and the user's inferred interest categories. The goal of this section of the video was to increase users' sense of self-efficacy by demonstrating how Facebook's ad settings can be used to limit threats from ad profiling. The fear appeal content and digital literacy content were combined into a single video of 4 minutes and 8 seconds. We refer to this video as the FA video.

Reflective Learning
To further increase the persuasiveness of the video, we additionally created personalized content featuring the participant's own profile information and the inferred interests used to profile them on their Facebook account. This allowed the viewer to reflect on their own personal information and experiences. Namely, the video makes them aware of their past behavior (i.e., the inferred interests and information they have allowed to remain associated with their profile), allowing them to reflect on whether they would want to change their behavior in the future (e.g., to remove or change the interests and information on their profile). This content was combined with the fear appeal and digital literacy content to form a second video. We expect this content to provide additional information to users when performing the threat appraisal (reflecting on the likelihood of the threat affecting them) and the coping appraisal (considering whether changing their settings will have any real effect). We refer to this as the FA+RL video. Although this condition incorporates an additional element of persuasion, it requires personalization based on user data, which may be challenging to implement in the real world and requires consent. Thus, we are interested in whether adding personalization improves privacy outcomes sufficiently to justify this additional complexity.
The reflective learning component of the video specifically calls attention to potentially sensitive interest categories, namely political or religious categories, which are known to be sensitive [66]. We collected a dataset of over 100,000 interests from the Facebook advertising interface to identify political and religious interests. These interests allow advertisers to target custom audiences on Facebook. The custom audience selection field uses string matching to match available interests to the text an advertiser enters into the audience selection field. Using web automation scripts, we collected the dataset through this feature. We then accessed various online sources to create a dataset of political keywords (e.g., political terms, issues, parties, and figures) and religious keywords (e.g., religious groups, organizations, and beliefs). These datasets comprised entries from Wikipedia and the YouGov online encyclopedia, including religious and political organizations, figures, and beliefs [90-93, 100]. We used the Fuzzywuzzy Python library to match strings between the Facebook interest dataset and our religious and political keywords [19]. Fuzzywuzzy calculated the Levenshtein distance between each interest in our interest dataset and each religious or political keyword and provided a token score from 0 to 100, with 100 being a perfect match. Interests with a score greater than 90 were labeled religious or political according to the keyword to which they were closest. Two researchers manually validated these identified interests. The resulting dataset contained 415 political interests and 213 religious interests. The personalized video was created in real time using the viewer's own data from Facebook by categorizing the viewer's interests (using string matching against our labeled religious and political keyword datasets) and inserting the political or religious strings with a 90% similarity or higher into the video. The voice-over in the video explicitly mentions that the content being shown was personalized to them using data obtained from their own Facebook account. This video was 5 minutes and 1 second long (and the same length for all participants assigned to watch this video).
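To make the matching step concrete, the following is a minimal sketch of how such interest labeling could be implemented. The Fuzzywuzzy library, the 0-100 token score, and the 90-point cutoff come from the description above; the specific scorer (token_sort_ratio) and the sample inputs are illustrative assumptions, not our exact pipeline code.

```python
# Minimal sketch of the interest-labeling step; token_sort_ratio and the
# sample inputs are assumptions, while the library and >90 cutoff are
# taken from the description above.
from fuzzywuzzy import fuzz  # pip install fuzzywuzzy

THRESHOLD = 90  # interests scoring above this are labeled


def label_interests(interests, keywords, label):
    """Return {interest: (label, closest_keyword, score)} for matches."""
    labeled = {}
    for interest in interests:
        # Find the keyword with the highest token-based similarity score.
        best_kw = max(keywords, key=lambda kw: fuzz.token_sort_ratio(interest, kw))
        score = fuzz.token_sort_ratio(interest, best_kw)
        if score > THRESHOLD:
            labeled[interest] = (label, best_kw, score)
    return labeled


interests = ["Catholic Church", "Fly fishing", "Democratic Party (United States)"]
print(label_interests(interests, ["Catholicism", "Catholic Church"], "religious"))
print(label_interests(interests, ["Democratic Party"], "political"))
```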
The final scripts and screenshots from the videos are found in Appendix B and C.

METHODOLOGY
We conducted a user study to test the effectiveness of our video interventions. Participants came to our lab and were randomly assigned to watch one of the two designed videos (FA or FA+RL) or a control (C) video. We selected a control video to compare the effectiveness of our educational intervention videos against a baseline. We chose an emotionally neutral video of similar length to our interventions. It did not reference social networks, technology, or privacy, nor did it incorporate any persuasive video design elements. We ultimately chose a nature video about Antarctica produced by National Geographic [53]. This video was 5 minutes and 3 seconds long. After the lab study, participants were invited to participate in multiple follow-up surveys over the next 10 weeks. Below, we explain the detailed study design.

Independent and Dependent Variables. To answer our research questions, we assigned participants to one of three video conditions: control (C), fear appeal (FA), and fear appeal with reflective learning (FA+RL). To detect changes in Facebook users' privacy-protective attitudes and behaviors as a result of these interventions, we used a combination of interview questions, survey items, and data collection to measure attitudes and actual behavioral data.
From an attitudinal perspective, we wanted to understand participants' opinions about the video they watched, their intentions to use Facebook, and their concerns about Facebook's data collection practices. To capture quantitative changes in privacy behavior, we also used existing scales from the literature, such as the vaguebooking scale developed by Berryman et al. [10] and a self-disclosure scale [17]. The vaguebooking scale probed whether participants post vague or ambiguous content to allude to something else on social media, to attract attention from friends, or to speak their minds without direct reference. This scale was measured as an average across three items, each measured on a scale from 1 to 7. The use of vaguebooking was not mentioned in the video. However, we anticipated that this could potentially be a method users would employ to protect their privacy. Self-disclosure questions measured participants' proclivities to share information about themselves, their feelings, personal beliefs and opinions, their close relationships, concerns, and fears on Facebook.
To enable targeted advertising, Facebook infers a user's interests based on their activity on and off the platform. A user can view and opt out of a specific inferred interest. We captured the number of interest categories associated with the user's account and the number of interest categories the user had removed. We also recorded whether users enabled or disabled advertising preferences, such as whether they were targeted based on profile information (employer, job title, education, and relationship status) or whether they chose to opt out of viewing ads related to sensitive topics (alcohol, parenting, pets, and social issues, elections, or politics), as seen in Figures 2 and 3. In addition to changes in settings, we also collected behavioral data on Facebook to uncover changes to a user's interactions on Facebook after the intervention. These behavioral data included aggregate activity data, including the number of daily likes, comments, and posts for the 10 weeks prior to the intervention and the 10 weeks after the intervention.

Recruitment and Ethical Facebook Data Collection
We recruited active Facebook users through advertisements distributed throughout a large university campus in the United States from February to May 2022.Flyers were distributed in the student center and across campus to ensure we reached students from various disciplines.We also used university email lists and word of mouth to increase participation.To be eligible to participate, participants were required to currently be using Facebook at least once a week.Participants were compensated $20 for the initial in-person lab study and $10 for each of the three follow-up surveys they completed.Participants were not required to participate in follow-up surveys and could withdraw from the study at any time.Programmatic data collection.We designed a browser extension that (after obtaining meaningful written consent from the participants) collects data in a privacy-preserving way.The designed infrastructure enabled us to collect users' advertisement interests and aggregate statistics of Facebook activity (e.g., the number of reactions, comments, or posts made per day for the previous ten weeks).Collecting Facebook's inferred interests enabled us to programmatically create customized refective learning content for participants in the FA+RL condition.Due to the sensitive nature of the data displayed in the refective learning content, the generated videos were deleted from the server after participants viewed the videos.Additionally, none of the content injected into the videos was recorded for analysis.As such, we did not collect how many political or religious interests the users saw in the FA+RL condition.The aggregate statistics allowed us to analyze the temporal evolution of their Facebook activities in response to our intervention.Due to the type of data we were collecting (private Facebook data), in addition to a standard consent form, we provided and detailed in separate highlighted text the data we would and would not collect from the particpants' Facebook account for our study.If the participant wished to continue, they then downloaded our browser extension, which shared their Facebook session cookie with a server at our institution.Our servers used Selenium, a browser automation tool, to download aggregated statistics from participants' activity logs and their ad interests.We longitudinally collected these data (contacting users to retake surveys one week, one month, and ten weeks after the intervention).To protect our participants' Facebook accounts, after each data collection point concluded, the session cookie data was deleted from the server.Furthermore, participants were recommended to logout of their Facebook accounts after each data collection point to invalidate existing session cookies.Note that our data collection and refective video creation pipeline are fully programmatic, with no researcher ever seeing the raw data or having access to the participants' Facebook accounts.
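As an illustration of this collection step, the sketch below shows how a participant-supplied session cookie could drive an authenticated Selenium session. The cookie names, the ad-preferences URL, and the parsing step are illustrative assumptions, not our exact implementation.

```python
# Illustrative sketch of cookie-based Selenium collection (not the exact
# study code); cookie names and the ad-preferences URL are assumptions.
from selenium import webdriver


def collect_ad_interests(session_cookies: dict) -> str:
    opts = webdriver.ChromeOptions()
    opts.add_argument("--headless=new")
    driver = webdriver.Chrome(options=opts)
    try:
        # Selenium only accepts cookies for the currently loaded domain,
        # so load facebook.com once before injecting the session cookies.
        driver.get("https://www.facebook.com")
        for name, value in session_cookies.items():
            driver.add_cookie({"name": name, "value": value,
                               "domain": ".facebook.com"})
        # Reload as the authenticated user and fetch the ad interests page.
        driver.get("https://www.facebook.com/adpreferences/ad_topics")
        return driver.page_source  # parsed downstream into aggregate counts
    finally:
        driver.quit()  # session cookies are deleted after each collection
```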
Aggregate anonymized data collection. We only stored anonymized versions of the data, using one-way hashes for any unique identifiers (e.g., a user's numeric Facebook ID) that could be considered personally identifiable information (PII). We did not collect any photos or text of any post or comment. We did not collect the users' social network (i.e., identities of friends or the people a user interacts with via reactions, commenting, etc.). On our server, we only stored aggregate anonymized data. Overall, our infrastructure and research procedures aimed to minimize data collection while enabling us to provide personalized interventions to the users and measure their effect.
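A sketch of the pseudonymization step is shown below. Keying the hash with a per-study secret (HMAC) is our assumption, added for illustration because an unkeyed hash of a short numeric ID could be reversed by enumeration.

```python
# Sketch of one-way hashing for unique identifiers; the keyed HMAC is an
# illustrative assumption (a bare hash of a numeric ID is brute-forceable).
import hashlib
import hmac

STUDY_SECRET = b"replace-with-a-randomly-generated-study-secret"


def pseudonymize(facebook_id: str) -> str:
    """Return a stable, non-reversible token for a numeric Facebook ID."""
    return hmac.new(STUDY_SECRET, facebook_id.encode(), hashlib.sha256).hexdigest()


# The same ID always maps to the same token, so longitudinal records can
# be linked across collection points without storing the raw identifier.
print(pseudonymize("100004321"))
```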

In-Person Lab Study.
In preparation for the study, participants were informed about the tasks they would complete and were asked to bring their own laptop for the study; otherwise, one was provided for them to use. Before participants began the study, they were given informed consent forms to sign explaining the data collection processes, the exact data collected, and the experiment. They were also verbally informed about the study, including the data that would be collected from their Facebook accounts, the protocol that they would participate in during the in-person study, and the opportunities to participate in any follow-up surveys. Participants were verbally asked to reconfirm their consent to be recorded during the study to capture the interview portion. Once consent was given, participants were randomly assigned to one of the two video interventions or the control video. Throughout the lab study, participants used their own laptops (or the one provided by us). Participants were seated opposite any research staff so that the participant's laptop screen was visible only to the participant and not to research staff. Furthermore, no video of participants or their actions on laptops was recorded or collected.
In the study, participants were first asked to complete a short Qualtrics survey asking for demographic information, including how frequently they use Facebook and their reasons for using Facebook. The participants were then instructed to start the data collection process to collect their baseline ad settings and aggregate statistics before the intervention. If participants were assigned to the FA+RL condition, the data collection process would also generate the participant's personalized video. The assigned video condition was embedded into each participant's survey, and the participants were then instructed to watch their assigned video. They were also invited to pause the video at any time to provide feedback or ask questions. Following the video, the participants were instructed to complete a post-intervention survey to collect initial reactions to the video. The participants were then interviewed about their reactions to the video. During this interview, participants in all conditions were asked to navigate to the advertising settings on their own Facebook accounts. During this task, participants could refer to online resources or the video they had just watched, or ask the coordinators for help. The coordinators took note of what type of assistance was needed as each participant navigated to the ad settings page. Participants who needed extra help or who were unable to locate the settings were provided assistance by the research staff. This guaranteed that all participants, even those in the control condition, were exposed to Facebook's advertising settings. After participants completed the interview and survey, they again provided consent for the tool to collect data from their Facebook account to capture any changes participants made while or after watching their assigned video. Research staff were not able to see if or what changes any users made to their ad settings, unless participants explicitly mentioned it (e.g., in interviews). These sessions averaged 45 minutes; the shortest was 25 minutes and the longest was 90 minutes.

Longitudinal Study.
Following the in-person study, participants were contacted by email one week, one month, and ten weeks after the in-person session to invite them to participate in a follow-up survey. This survey captured quantitative and qualitative attitudinal data. The participants were then asked to use the data collection tool to collect information from their Facebook accounts. The data collected during the one-week and one-month responses were the same data collected during the in-person lab study, including advertising settings, number of interests, and number of removed interests. Participants who completed the 10-week survey provided the same data, but were informed that the tool would collect an additional set of aggregate usage statistics from the past ten weeks, including the number of daily likes, comments, and posts.

Analysis plan
4.3.1 Attitudinal Data. Using participants' survey responses from the study session and subsequent follow-ups, we used mixed-effects linear regression with a robust standard error estimator to determine the effectiveness of our interventions, including any attenuating effects that time may have on users' attitudes, using a participant-specific random intercept. These attitudinal variables include video memorability, concern about Facebook data collection, change in perceived Facebook usage, the vaguebooking scale, and the self-disclosure scale, each of which was measured using 5-point Likert scales. In each initial model, we included the fixed effects: assigned video condition (C, FA, FA+RL), the number of days after the intervention the data was collected, and an interaction between these two main effects. We also included demographics as covariates: age (18-24 vs. 25+), ethnicity (white vs. non-white), and gender (male vs. female). These broad buckets for age and ethnicity reflect the demographics of our sample: non-white participants were combined in order to have a large enough group for the analysis, and, given that we targeted college students, those above the typical college age were combined for the same reason. We also included the frequency of Facebook usage (constantly, multiple times a day, once a day, and less than once a day), the Internet Users' Information Privacy Concerns (IUIPC) scale [48], and a digital literacy scale [54] as covariates. For the IUIPC and digital literacy scales, we combined individual items into a sum score for each scale (after confirming their reliability; Cronbach's α: IUIPC = 0.846, digital literacy = 0.829) and then used these scores as covariates. Each of these covariates was tested for skewness and kurtosis, each being at acceptable levels, i.e., skewness < |3| and kurtosis < |10| [37]. We included interaction effects between each covariate and the two main effects to account for potential moderating effects. Using a fully saturated model, we calculated a mean Variance Inflation Factor (VIF) of 2.04, with no predictor having a VIF greater than 5 [52]. We then iteratively trimmed each model by eliminating covariate interactions that were not significant, and removing covariate main effects if neither the covariate nor its interactions were significant, as recommended by Field [29].

4.3.3 Facebook Activity Data. Using our Facebook data collection tool, we collected the number of daily reactions, comments, and posts for each participant, resulting in 10 weeks of data before and after the intervention. Due to the volatility of the daily data, we combined daily activity data into weekly counts of each activity. We also created a binary variable, postIntervention, to separate behavior before and after the intervention. Using mixed-effects linear regression with a robust standard error estimator, we included the assigned condition, postIntervention, and the week from -10 to +10 as main effects; an interaction effect between condition and postIntervention to evaluate the effectiveness of our conditions in changing user behavior; an interaction between condition and time to see how users' behavior changed over time by condition; and an interaction between week and postIntervention to see how user behavior changed over time before and after the intervention regardless of condition. Adding the postIntervention variable allowed us to control for any differences in activity between groups before the intervention. Finally, we added a three-way interaction between condition, time, and postIntervention to see if the effectiveness of our intervention changes over time. To control for differences in participant demographics, we also included age, ethnicity, gender, frequency of Facebook usage, IUIPC, and digital literacy as covariates. We again included interactions between each covariate and any 2nd- and 3rd-order interactions. We trimmed our models to include only significant effects, following the same methodology used in our attitudinal data analysis.
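For readers who want to reproduce this style of model, the sketch below shows a rough analogue of the attitudinal model in Python's statsmodels. The actual analyses were run in Stata 17; statsmodels' MixedLM does not offer Stata's robust standard error estimator, and all column names here are placeholders.

```python
# Rough Python analogue of the attitudinal mixed-effects model (actual
# analyses used Stata 17; this sketch omits the robust-SE estimator).
# All column names are placeholders, not the study's variable names.
import pandas as pd
import statsmodels.formula.api as smf

# One row per participant x survey wave (long format).
df = pd.read_csv("attitudes_long.csv")

model = smf.mixedlm(
    # Fixed effects: condition, days since intervention, their
    # interaction, plus demographic and scale covariates.
    "concern ~ C(condition) * days + age_group + ethnicity + gender"
    " + fb_freq + iuipc + dig_lit",
    data=df,
    groups=df["pid"],  # participant-specific random intercept
)
print(model.fit().summary())
```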

Advertising Preferences Data.
In addition to collecting activity data, we used our data collection tool to collect users' advertising preferences before the intervention, after the intervention, and during each follow-up. These preferences include a setting that allows users to determine what profile information advertisers can use to target them, such as employer, job title, education, and relationship status. On a different preference page, Facebook allows users to opt out of being targeted and to see fewer ads related to topics Facebook determined to be potentially sensitive. These include alcohol, parenting, pets, social issues and politics, gambling, and body weight control. Since these settings serve significantly different purposes and are located in different places, we built two separate logistic regression models to determine the likelihood of users changing each of these settings after the intervention. To do this, we created new variables indicating whether participants had made any change to each setting, i.e., whether the setting differed at any point during the 10-week post-intervention span from its state immediately before the intervention. Specifically, looking at whether participants enabled these settings after the intervention, we created a mixed logistic regression to predict the likelihood of changing the settings using the assigned video condition as the main predictor (also including the same covariates as above). As the FA and FA+RL conditions could potentially lead to different behavioral changes, we also created a logistic regression model where the outcome variables indicated whether users made a change to any setting within the sensitive topics or demographic settings.
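Analogously, the settings-change models could be sketched as logistic regressions. Again, the real models were fit in Stata, and all variable names below are hypothetical.

```python
# Placeholder sketch of the settings-change logistic regression (the
# actual models were fit in Stata); variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ad_settings.csv")  # one row per participant
# changed_topics = 1 if any sensitive-topics opt-out differed from its
# pre-intervention state at any point in the 10 weeks that followed.
fit = smf.logit(
    "changed_topics ~ C(condition) + age_group + ethnicity + gender"
    " + fb_freq + iuipc + dig_lit",
    data=df,
).fit()
# Exponentiated coefficients give odds ratios like those reported below.
print(np.exp(fit.params))
```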

Power Analysis
We computed an a priori power analysis using G*Power [44] for the regression models. Using a minimum power of 80% and a small effect size (f² = .15), we obtained a minimum sample size of 68 [20]. To ensure that we met this minimum sample size throughout the 10-week follow-up, we assumed a drop-off rate of 50% and thus aimed to recruit about 140 participants.

Participant Demographics
We recruited a total of 138 participants (for demographics, see Table 1). Each participant was randomly assigned to one of the three video conditions: 45 participants were assigned to the FA+RL condition, 47 to the FA condition, and 46 to the control condition. Near the end of the study, Facebook implemented a slightly new interface that caused our data collection tool to fail to collect the data necessary to create reflective learning content for some users. Although it was random which users received the new interface, we had to remove the affected participants assigned to the FA+RL condition from our analysis because they were unable to complete the intervention. Our final pool of participants included 39 participants in the FA+RL condition, 46 in the FA condition, and 42 in the control condition. Since participants were not required to participate in all follow-up sessions, we also report the breakdown of participation through each follow-up in Table 2. To ensure that participant drop-off was not affected by condition, we ran a series of unpaired t-tests between each pair of conditions for each follow-up survey. We could not detect a significant difference (p > 0.05) between any conditions in the missingness of the data for any of the surveys.
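As an illustration of this missingness check, an unpaired t-test comparing a dropout indicator between two conditions at one survey wave might look like the following. The data shown are placeholders, not our participants' records.

```python
# Illustrative missingness check with placeholder data: an unpaired
# t-test comparing a dropout indicator (1 = missed the follow-up)
# between two conditions at one follow-up wave.
from scipy.stats import ttest_ind

fa_missing = [0, 1, 0, 0, 1, 0, 0, 1, 0, 0]
ctrl_missing = [0, 0, 1, 0, 0, 1, 0, 0, 0, 1]
t, p = ttest_ind(fa_missing, ctrl_missing)
print(f"t = {t:.2f}, p = {p:.3f}")  # p > 0.05: no detectable difference
```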

Ethical considerations
In generating the reflective learning content, we collected participants' profile information (the interests by which Facebook categorized the user) and other potentially sensitive information. These data were collected and processed into the video programmatically, without requiring any intervention from research staff. After each video was created, the data was removed from the server. Each video was placed in a restricted directory and could only be accessed through a personalized link provided to the participant by the research staff. This link included a securely generated access token that was validated by the server before allowing access to the video. This mechanism prevented these potentially sensitive videos from being accessed by other participants. During the user session, researchers allowed participants to view the video on their own laptops, while the researcher sat on the other side to avoid seeing the video content. Furthermore, in subsequent uses of our data collection tool, we only collected participant data when we had their authorization. When participants completed a follow-up survey, they had the option of providing us with an access token to collect the aggregate data. If participants completed the survey without providing the token, our tool did not collect any data from their accounts. We took every precaution to ensure that we did not collect any personal or aggregate data without the explicit and specific consent of the participants.
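The token-gated access mechanism could be implemented along the lines of the following sketch. Flask, the URL layout, and the in-memory token table are illustrative assumptions rather than our production setup.

```python
# Illustrative sketch of token-gated access to personalized videos;
# Flask, the URL layout, and the in-memory token table are assumptions.
import secrets

from flask import Flask, abort, send_file

app = Flask(__name__)
TOKENS = {}  # access token -> path of one participant's personalized video


def issue_link(video_path: str) -> str:
    """Create an unguessable per-participant link for a generated video."""
    token = secrets.token_urlsafe(32)
    TOKENS[token] = video_path
    return f"https://study.example.edu/video/{token}"


@app.route("/video/<token>")
def serve_video(token):
    path = TOKENS.get(token)
    if path is None:
        abort(403)  # unknown or revoked token: no access to others' videos
    return send_file(path)
```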

Limitations
Our study specifically focuses on a student population, with videos crafted around scenarios and negative consequences (missing out on a job opportunity) that this target audience can relate to. Future research should expand to additional populations. Our pilot studies indicate that these videos should be closely crafted to the concerns that are relevant to the population of interest. Our reflective learning condition focused primarily on sensitive interest categories such as politics and religion. As Facebook does not provide an exhaustive list of all advertising interests advertisers can use to filter their target audience, we relied on an automated search of interests, which resulted in roughly 600 total religious and political interests. This likely captures only a subset of these interests. Capturing more of these interests may increase the effectiveness of personalized videos, as they may more accurately reflect all of the viewer's applicable interests. In addition, future work could expand this to other sensitive topics.
Our research focuses on a 10-week period before and after the intervention. Future research should investigate the longer-term effects and the point at which this behavior change might attenuate. Additionally, we measured changes to users' advertising preferences on Facebook, not whether or how often users actually visited these pages. Measuring only the changes may be a conservative estimate of users' interactions with the ad preferences page: some users may have been motivated to visit the preferences page but did not make actual changes (e.g., because the listed preferences aligned with their actual preferences or because they were not satisfied with the level of control provided by Facebook). Future work should investigate user opinions about the advertising settings provided by Facebook and whether these are sufficient for most needs.

RESULTS
All of the data analyses were performed using STATA 17.0 statistical software. To reduce the likelihood of reporting false positives, we applied the Benjamini-Hochberg procedure to reduce the false discovery rate. We chose a false discovery rate of .05, obtaining a new significance threshold of .024. All reported significant values fall below the critical value determined by this procedure. We report the final regression tables for the trimmed models in Appendix A.
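The Benjamini-Hochberg step can be reproduced with standard tooling, as in the sketch below; the p-values listed are placeholders, not our reported results.

```python
# Sketch of the Benjamini-Hochberg correction at a false discovery rate
# of .05; the p-values below are placeholders, not the study's results.
from statsmodels.stats.multitest import multipletests

pvals = [0.001, 0.007, 0.009, 0.013, 0.023, 0.046, 0.42]
reject, p_adjusted, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
for p, keep in zip(pvals, reject):
    print(f"p = {p:.3f} -> {'significant' if keep else 'not significant'}")
```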

Attitudes toward Video.
After the video intervention, participants were asked if they had learned something, changed their mind about something, or intended to change their behavior after watching the video. A summary of these results is shown in Figure 4. Participants who watched the designed videos were 6.4 times (FA, p = .007) and 16.6 times (FA+RL, p = .008) more likely to indicate learning something compared to the control condition. These indicate a medium effect for the FA condition and a large effect for the FA+RL condition. Using post hoc analysis, we did not detect a significant difference between the FA and FA+RL conditions (p = .42). In addition to learning something, we measured how persuasive the videos were by asking participants if the video changed their minds about something. We find a significant difference between the control condition and the FA+RL condition (3.4 times, p = 0.009, small effect), but fail to detect a significant difference between the control and FA conditions (p = 0.046). We also do not detect a significant difference between the FA and FA+RL conditions (p = .44). Finally, to indicate whether our video was effective in increasing the likelihood that participants would act on the advice in the videos, we also asked participants after watching the video if they planned to change their behavior. Again, we detected a significant difference between the control condition and both the FA (3.0 times, p = .013, small effect) and FA+RL (2.9 times, p = 0.023, small effect) conditions, without detecting a significant difference between the FA and FA+RL conditions.

Ability to Navigate to Settings. All participants were asked to navigate to the advertising settings on their own Facebook accounts. This was done for two reasons: first, to measure how effective the video was in teaching users how to find their settings, and second, to make sure that all participants, including those in the control condition, were able to navigate to their ad settings page, albeit with some assistance at first. Each participant was granted a score when navigating to their ad settings: 0 for needing full assistance, 1 for needing partial assistance (e.g., help finding the first settings page), and 2 for navigating to the settings page without any assistance. The coordinators also indicated which type of assistance was used. These results are shown in Figure 5. Using ordinal logistic regression, we find that participants in the FA (14.9 times, p = .0001, large effect) and FA+RL (24.8 times, p < .0001, large effect) conditions were significantly more likely to need less assistance in finding the ad settings page. No significant differences were found between the FA and FA+RL conditions (p = .21). During this task, each participant who sought help used only one method of assistance, either the video or asking the coordinator for help. Each participant who used the video for assistance was able to navigate to their settings page successfully. Therefore, any differences found between the media used for assistance are likely due to personal preference rather than the effectiveness of the assistance.

Attitudes.
In this section, we report the impact of the fear appeal (FA) and fear appeal with reflective learning (FA+RL) conditions on participants' attitudes and self-reported behavior.
Video Memorability. Participants responded on a 7-point scale how much they agreed with the statement, "I think about the video shown to me in the study frequently when using Facebook." Using mixed-effects linear regression with Holm-corrected post hoc tests to test for differences between the three video interventions, we found significant differences between the control condition and both the FA (p < 0.001) and FA+RL (p < 0.001) conditions. Specifically, participants in the FA condition scored .52 standard deviations higher than participants in the C condition, and participants in the FA+RL condition scored .60 standard deviations higher. This effect is shown in Figure 6a. The time it took after the intervention for the participants to respond to this question did not have a significant impact on the outcome, nor did any covariates.
Facebook Information Concern. Our intervention videos specifically discussed the information that Facebook gathers and uses to provide targeted advertisements. We measured participants' level of concern with how much information Facebook has about them. For men, we found that concern was higher in both the FA (0.33 standard deviations higher, p = 0.028) and FA+RL (0.44 standard deviations higher, p = 0.006) conditions than in the control condition. For women, we found that concern was actually lower in both the FA (0.11 standard deviations lower, p = 0.07) and FA+RL (0.14 standard deviations lower, p = 0.023) conditions than in the control condition. These effects and interactions are shown in Figure 6c. Unsurprisingly, we also found that participants with higher general privacy concerns, as reported on the IUIPC scale, had significantly (p < 0.001) higher concerns about the amount of information Facebook has about them. This higher level of concern persisted throughout the study. Each 1 standard deviation increase on the IUIPC scale is associated with a 0.23 standard deviation increase in concern about Facebook. Finally, the amount of time it took after the intervention for participants to respond to this question again did not have a significant impact on the outcome.

Our participants commonly indicated that, after seeing videos about how their data can be used for advertising, they viewed Facebook as a dangerous party. When asked in the session what was the most important takeaway of the fear appeal with reflective learning video, P61 explained, "Things are targeted and things being targeted is not always good for the consumer, and the ways that [Facebook] can pick those categories [. . .] are not necessarily beneficial." When asked if the FA+RL video changed their mind about anything, P60 expressed the idea that Facebook might harm its users with the data collected: "I think the videos changed my mind about it being harmless to people like me. I always felt like it [targeted advertising] was not harming most people, or in general not harmful, but it was just more harmful to me."

Facebook Usage Change. In each follow-up survey, participants also indicated whether they intentionally used Facebook less after participating in the study. For men, we found a higher expressed likelihood of intentionally using Facebook less under both conditions (FA: p = 0.001, FA+RL: p = 0.009) compared to the control condition, with the FA condition 0.63 standard deviations higher and the FA+RL condition 0.52 standard deviations higher. Women were generally not as likely as men to claim that they used Facebook less (a 0.44 standard deviation difference, p = 0.014). Specifically, women in the FA condition (p = 0.005) showed a 0.11 standard deviation decrease in expressed likelihood to use Facebook less compared to women in the control condition, while the FA+RL condition (p = 0.019) led to a 0.02 standard deviation decrease in women's expressed likelihood to use Facebook less. This interaction is shown in Figure 6b. The amount of time it took after the intervention for participants to respond to this question again had no significant impact on the outcome.
When asked about their usage of Facebook changing, participants frequently expressed the belief that the more they use Facebook, the more of their data could be used for nefarious purposes. As a result, both intervention videos seemed to motivate users to restrict activities that could be data mined by Facebook. After watching the FA video, P9 explained, "I don't want them using all that information that they have about me." As P10 (FA+RL) noted, "It kind of makes me want to spend less time on Facebook. [. . .] I don't really feel like people need to know that much information about me. [. . .] I don't know how much I actually trust the company Facebook, so having them know a ton of personal information about me is a little uncomfortable, so I don't really want to be on it." When asked in a follow-up survey if their behavior had changed, P119 (FA) said, "I don't go on [Facebook] nearly as much. I just go on and try to make sure I try not to leave any trace of myself. I did this before, but I am more careful about it now." When asked why, they explained, "I think it's just because my paranoia about Facebook always listening was heightened again." In addition, some of the changes users made were not limited to their amount of Facebook use; users also became more thoughtful in their interactions. For example, when P25 (FA) was asked in a follow-up survey if they would change their behavior on Facebook, they expressed that they would be more mindful: "It hasn't changed too much, but I've started to become more mindful of 'oh, they're probably watching me' as I search or click on something."

Vaguebooking. Although neither the FA video nor the FA+RL video suggested vaguebooking as an effective strategy to reduce data tracking, we detected a significant difference in participants' adoption of vaguebooking strategies on Facebook in both the FA condition (a 0.35 standard deviation increase, p = 0.027) and the FA+RL condition compared to the control condition. This effect is shown in Figure 6d. We found that vaguebooking decreased with the amount of time after the intervention at which participants responded to the follow-up surveys (p = 0.001). A one-day increase in response time led to a 0.004 standard deviation decrease in expressed vaguebooking activity.
Self-Disclosure. We could not detect a significant difference in self-disclosure behaviors after the intervention for either condition compared to the control.
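The gender-by-condition interactions reported above for information concern and usage change come from the same mixed-effects framework, with an added interaction term. Below is a minimal, hypothetical Python sketch of one such specification; the column names (fb_info_concern_z, gender, iuipc_z) are assumptions for illustration, not taken from our study materials.

```python
# Minimal sketch of testing a condition-by-gender interaction on the
# (standardized) Facebook information concern outcome, with participant
# random intercepts and IUIPC as a covariate. Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("followup_surveys.csv")

model = smf.mixedlm(
    "fb_info_concern_z ~ C(condition, Treatment('C')) * C(gender)"
    " + iuipc_z + days_since_intervention",
    data=df,
    groups=df["participant"],
).fit()

# Interaction terms (e.g., condition[T.FA]:gender[T.woman]) estimate how
# much the effect of each video differs between gender groups.
print(model.summary())
```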

Behavioral Results - Advertising Settings. Facebook provides users with several different controls to limit certain types of targeted advertising: demographic targeting, sensitive topics, and interest categories. The created videos focus on helping users find these settings and understand how using them can reduce the likelihood of harm from targeted advertising. In this section, we analyze the changes that users made to these settings. An overview of these results is shown in Figure 7.
Profile Information. One of the settings that Facebook provides to limit certain types of targeted advertising is limiting the profile information that advertisers can use to target users. This includes a user's employer, job title, education, and relationship status. We found a significant difference in participants' odds of engaging with this feature between the control condition and the FA+RL condition (4.3 times, p = 0.023, medium effect). However, we could not detect a significant difference between the FA condition and the control condition.
Sensitive Topics. Another control that Facebook provides is the "sensitive topics" interface. The sensitive topics controls allow participants to choose to see fewer ads related to topics Facebook deemed sensitive, such as Alcohol, Gambling, or Parenting. We found that participants in the FA condition were significantly more likely to opt out of at least one of these topics than participants in the control condition (5.5 times, p = 0.013, medium effect). We were unable to detect a significant effect for the FA+RL condition or any of the covariates. Figure 8 shows the breakdown of the changes that users made for each of the eight sensitive topics and demographic controls.
Interest Categories. The final targeted advertising control that we measured was participants' likelihood of making changes to their interest categories. These interest categories are assigned algorithmically by Facebook based on a user's activity on and off the platform. We found that participants in the FA+RL condition were 4.68 times more likely to remove any interest categories from their account than participants in the control condition (p = 0.014, medium effect). We were unable to detect a significant effect of the FA condition or any of the covariates.
Any Advertising Settings Changes. Since each condition could have led to different changes in advertising settings, and users may not find it necessary to change their settings for each of the controls, we further investigated this by creating a new binary variable, changedAnySetting. This allows us to determine how our interventions compare to the control in motivating any change to users' targeted advertising preferences. When combining these controls, we find that, compared to the control, participants in the FA condition were 3.6 times (p = 0.01, medium effect) more likely to make a change, and participants in the FA+RL condition were 6.46 times (p = 0.0004, medium effect) more likely. As all participants (including those in the control condition) received instructions on how to reach the advertising settings, this result shows that both interventions are effective at motivating users to make changes to their advertising settings. In the qualitative responses, P133 (FA+RL) explained, "I don't think the default setting is ever the most private," indicating a sentiment that Facebook as a platform is not designed to support privacy by default.
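For concreteness, a minimal Python sketch of how changedAnySetting could be derived and the odds ratios estimated follows. The data layout and indicator column names are hypothetical, not our actual analysis code.

```python
# Minimal sketch of deriving changedAnySetting and estimating odds ratios
# with logistic regression. One (hypothetical) row per participant; the
# three indicator columns are assumed names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("settings_changes.csv")

# True if the participant changed any of the three advertising controls.
df["changedAnySetting"] = (
    df[["changed_profile_info", "changed_sensitive_topics",
        "changed_interest_categories"]]
    .any(axis=1)
    .astype(int)
)

logit = smf.logit("changedAnySetting ~ C(condition, Treatment('C'))", data=df).fit()
# Exponentiated coefficients are odds ratios relative to the control condition.
print(np.exp(logit.params))
```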
Longitudinal Changes to Advertising Settings. Since all participants were directed to their advertising settings during the lab session, there is a possibility that participants made changes entirely because they felt they were expected to by the coordinators, rather than because they were motivated by the video to make such changes. To mitigate this effect, we intentionally designed the layout of the lab session so that the study coordinators could not view the actions of the participants on their Facebook settings page. Likewise, since participants in the control condition were also instructed to visit the ad settings, it is likely that any pressure felt by participants was felt similarly across all conditions. Furthermore, since we collected longitudinal data at various intervals after the lab session, we were able to detect whether participants made changes during the lab sessions, after the sessions, or at multiple time points.
We find that many participants made changes both during and after the lab sessions. Specifically, of the participants in the FA+RL condition who made changes to their advertising settings, 50% made changes only after the lab session, 15% made changes during the lab session and subsequently, and 35% made changes only during the lab session itself. Similarly, in the FA condition, of those who made changes to their account, 59% made changes after the session, 6% made changes during and after the lab session, and 35% made changes only during the lab session itself. These findings suggest that the changes participants made to their advertising settings were more likely motivated by a desire to protect against perceived threats than by a desire to please the study coordinators. A detailed breakdown of these longitudinal changes can be seen in Figure 9.
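A small sketch of this timing classification, under an assumed event-log schema (one row per observed settings change, with hypothetical column names), might look like the following.

```python
# Minimal sketch (assumed schema) of the timing classification above:
# bucket each participant who changed settings by whether the changes
# happened during the lab session, after it, or both. Columns
# (participant, condition, change_time, session_end) are hypothetical.
import pandas as pd

changes = pd.read_csv("setting_change_events.csv")  # one row per observed change

def timing(g: pd.DataFrame) -> str:
    during = (g["change_time"] <= g["session_end"]).any()
    after = (g["change_time"] > g["session_end"]).any()
    if during and after:
        return "during and after"
    return "during only" if during else "after only"

buckets = changes.groupby(["condition", "participant"]).apply(timing)
# Per-condition proportions, mirroring the percentages reported above.
print(buckets.groupby(level="condition").value_counts(normalize=True))
```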

Facebook Activity - Reactions, Comments, Posts. In addition to self-reported Facebook use, we also wanted to test whether we could detect any changes in the observed Facebook activity of participants after the intervention. Using 20 weeks of timestamped count data (10 weeks before the intervention and 10 weeks after) for participant reactions, comments, and posts, we tested how these counts changed from before to after the intervention, depending on the intervention condition and controlling for a number of covariates.
In addition to a general decrease in the number of reactions over time (p = 0.042), we found a significant difference in how the number of reactions changed after the intervention between the control condition and both interventions. The FA condition resulted in a 28% reduction (3.45 reactions) in the average weekly number of reactions after the intervention (p = 0.015). The FA+RL condition resulted in a 35% reduction (4.3 reactions) in weekly reactions after the intervention (p = 0.0025). This particular effect was not moderated by any of the covariates. We did find other significant interactions between the covariates and the intervention conditions, but since these effects do not differ before and after the intervention, they simply indicate a baseline difference in user behavior between the intervention conditions. We included these interactions in our statistical evaluations to control for any potential misalignment of participant demographics between conditions.
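One reasonable way to specify such a before/after comparison of weekly counts is sketched below. This is an illustrative specification, not necessarily the exact model we used; the file and column names (weekly_activity.csv, participant, week, condition, reactions) are hypothetical.

```python
# Minimal sketch of the before/after activity comparison: a Poisson model
# of weekly reaction counts with a post-intervention x condition interaction
# and participant-clustered (GEE) standard errors for the repeated weeks.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

weekly = pd.read_csv("weekly_activity.csv")  # weeks -10..9 around the intervention
weekly["post"] = (weekly["week"] >= 0).astype(int)

gee = smf.gee(
    "reactions ~ post * C(condition, Treatment('C'))",
    groups="participant",
    data=weekly,
    family=sm.families.Poisson(),
    cov_struct=sm.cov_struct.Exchangeable(),
).fit()

# exp(coef) of the post:condition terms gives the multiplicative change in
# weekly reactions for each intervention relative to control
# (e.g., exp(coef) ~= 0.65 would correspond to the 35% drop reported above).
print(gee.summary())
```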
In the follow-up surveys, users described how they are aware of how Facebook is tracking their actions. In fact, participants who expressed the desire to protect themselves from Facebook by limiting their use of the platform talked about advertisements and "pages." Participants viewed Facebook interactions with ads or pages as more prone to data mining than Facebook interactions with their friends and people they know. This trend is summarized by P69's (FA) response to the same question: "[I am] more cautious. As I see ads pop up, I am a lot more aware about where they are collecting the information from." This led to more specific behavior changes. When asked how their Facebook use had changed in the follow-up survey, P97 (FA+RL) said, "I am conscious about clicking on ads." P81 (FA+RL) responded, "I have been more careful with how I engage with ads and pages because it makes me think 'what is this telling the people that have my information?'"

Surprisingly, we could not detect significant differences between the conditions in terms of the change in the number of comments or posts the participants made before and after the intervention. This is likely due to the fact that for most participants, commenting or posting is a fairly rare occurrence to begin with. Regardless of condition, the average number of comments per week was 2.14, while the median was 0. For posting, the average number of posts per week was 0.26, with a median of 0. Another potential explanation for this phenomenon is that after watching the videos, participants limited their engagement with organizations or Pages on Facebook but did not find their interactions with their peers to be concerning. In our qualitative data, we did not hear participants express concerns about interactions with their friends or people they know, nor did they say that they would limit any of these interactions. Rather, the conversation focused on limiting reactions to and engagement with content on Pages, which they felt Facebook would track.

DISCUSSION
In this study, we explored the efficacy of a visual intervention in bridging a knowledge gap among users regarding social media controls and helping users preserve their privacy in the face of targeted advertising on social media. However, we observed a distinct difference between using only fear appeal and using both fear appeal and reflective learning techniques in the videos.
Videos using both fear appeal and reflective learning are useful for educating users. Our results show that a fear appeal with a reflective learning approach had significant longitudinal effects on users' behaviors and attitudes on Facebook. Specifically, both of our video approaches led to immediate benefits in improving users' ability to find their advertising settings and in persuading them to change their behavior based on what they learned. Longitudinally, these approaches led users to report frequently thinking about the video's content afterward, increased concern regarding Facebook's data mining of their information, and intentionally using Facebook less (including vaguebooking). Additionally, both videos were effective (FA: 3.6 times more likely than control; FA+RL: 6.46 times more likely than control) at leading users to make any change to their advertising settings, whether in their sensitive topics, demographics, or interest categories. Although the detected effect of the FA+RL condition is higher than that of the FA condition alone, we could not detect a significant difference between the two. At a minimum, this indicates that using a persuasive approach (fear appeal) to educate users about targeted advertising is an effective method for attitudinal and behavioral change. Our results show that fear appeal messaging, when coupled with messages of self-efficacy, is an effective strategy for motivating behavior change in the domain of targeted advertising.
Reflective Learning increases the strength of this relationship. The primary difference identified between using a fear appeal only and a fear appeal with reflective learning lies in the way users interacted with Facebook after the intervention. Specifically, the fear appeal was only associated with an increase in changes to a user's sensitive topics, namely seeing less of that content; it was not associated with a significant change in other advertising settings. On the other hand, the reflective learning portion led to an increase in the likelihood of engaging with targeted advertising interest categories and demographic settings, in addition to an increase in changes to sensitive topics.
We surmise that the bespoke nature of the reflective learning content had a more personal effect on its viewers, leading to more robust engagement with users' own accounts. The video, tailored with real data from our users, spoke to the direct consequences of their Facebook usage: what Facebook had inferred about them. This approach heightened viewers' sensitivity to the fact that they are being profiled. Qualitative analysis showed that reactions to content created by organizations were one of the key ways participants felt Facebook could profile them; thus, the reduction in these types of reactions is a logical extension of this sentiment. This is also reflected in users' increased awareness and changes to their profile preferences, which relate directly to how they are profiled. These results suggest that personalization, an understudied aspect of PMT, can improve persuasive messaging. Most prior work has focused on external influences on threat and coping appraisal; our results suggest that including intrapersonal information in persuasive media strengthens the persuasiveness of the messaging. Future work should investigate how personalization affects the decision-making process, i.e., whether it influences threat appraisal or coping appraisal.
Fear appeal-based videos alone are not enough to give users the knowledge needed for sustained privacy-preserving behavior. The fear appeal video was linked to participants being nearly 8 times more likely to change their Facebook settings to opt out of ads related to sensitive topics. Although taking action to avoid seeing ads related to sensitive topics is an important behavior change, it is actually a fairly superficial action when it comes to protecting against harm from targeted advertising. These controls seem only to limit the types of ads shown to users and do not give users control over the ways Facebook and its advertisers use their individual data. The reflective learning condition led to more robust engagement with Facebook's advertising controls: users were more likely to limit the types of information advertisers can use to target them and more likely to curate their interests (i.e., removing problematic interests). The fear appeal also had a surprising interaction with gender, as men were more likely to indicate intentionally using Facebook less. However, while fear appeal with reflective learning videos also showed significant effects, the magnitude of these changes was overall lower (0.52 standard deviations relative to the control) than that of the fear appeal videos (0.63 standard deviations relative to the control). Furthermore, these results did not appear to lead to a significant reduction in reported usage for women. This aligns with users' reported concerns related to profiling on Facebook, as men in the FA and FA+RL conditions had an increase in concern while women in those conditions did not. This raises a distinct question as to what role gender plays in mediating the effectiveness of these persuasive approaches. Future work should investigate why this effect exists and whether women already have a higher baseline concern for profiling online in general, such that these persuasive techniques do not further influence their concern or usage.
The fear appeal video presented a general, albeit relatable, problem for student users. However, it was not as effective overall as the video that combined fear appeal with reflective learning in encouraging privacy-preserving behaviors. Rather, participants in this condition handled their concerns by using the platform less. The fear appeal video did, however, plausibly have a greater effect with regard to sensitive topics. One possible explanation is that the fear appeal videos were indeed successful in conveying privacy concerns that would generally apply to an archetype or persona. Our work aligns with prior literature in which limiting the use of a platform is a common privacy tactic [60]; however, this behavior often leads to negative consequences and missing out on social capital [60, 61]. In addition, this may suggest that the personalized content added in the FA+RL video increased levels of self-efficacy, which led to more robust interaction with Facebook's advertising settings instead of just limiting the use of Facebook altogether.
The need to use both reflective learning and fear appeal to educate users. Our results seem to indicate that the impact of fear appeal on limiting account usage (withdrawal instead of control) is simply a reaction when users do not have a more concrete understanding of, or concern about, something they can address. The success of the reflective learning condition shows that users can take specific tactics to address the source of the threat if they are alerted to it and motivated to do so. As the fear appeal video content specifically mentioned demographic items such as employer and education, it is not a surprise that these were common items for those in the fear appeal condition to turn off. However, in the reflective learning condition, we see that the demographic setting most likely to be changed was relationship status. This suggests that the fear appeal can motivate users to be concerned about the specific scenario suggested in the video, while the reflective learning condition provides users with both the motivation and the tools to apply lessons learned about one information type to other types of advertising information.
Designing and Deploying Future Persuasive Media. Our work shows that videos are an effective way to influence users to engage in privacy-preserving behaviors. Moreover, due to their accessible and easily disseminated nature, videos have the potential to influence a large number of users. Thus, in the future, we envision opportunities to leverage the strengths of both approaches: the specificity of reflective learning and the generality of fear appeal. Future research, as well as social media platform designers, may examine how to combine both approaches to create videos that speak to issues of concern relating to the group a person belongs to (e.g., college students) while also concretely noting how platforms and other systems are algorithmically characterizing them. The appeal to both one's group identity and one's specific potential to be misclassified may be the most effective approach. When expanding this work to a more general audience, designers should build the fear appeal around a believable scenario. While our videos were curated for specific college students, we designed the scenario around potential discrimination that these students found plausible and applicable to themselves or someone they know. More general scenarios should likewise be believable and applicable to the intended audience. In addition, future work should investigate how these videos can be made more accessible (e.g., for those with visual impairments) in order to reach a wider and more equitable audience.
In addition to considerations for designing persuasive media, given the personal data required for personalizing the reflective learning content, it is likewise important to consider how to deploy such media without causing further privacy concerns about the personalization of the media content. We envision two primary methods for curating and deploying personalized persuasive media that likely limit such privacy concerns. First, a social network site or operating system could deploy personalized persuasive videos to its users. These organizations typically already have access to the data required to create personalized videos. Furthermore, they are typically invested in limiting further privacy concerns their users may have regarding third parties on their platform (e.g., apps, advertisers), so they have an incentive to encourage users to engage with their advertising settings. Finally, platforms would benefit from users removing interest categories that are not applicable to them, as this practice improves the precision of their audience selection. Second, personalization could be performed "client-side," as proposed by Kobsa et al. [39], which does not require any centralized data collection.
Alternatively, one could envision a reflective learning component that is not personalized at all. This could be done with a video or video-based education module that simply encourages users to view their advertising settings while watching, rather than explicitly integrating those settings into the video. While this may not be as effective, it could still allow users to reflect more directly on the content of the persuasive media.

CONCLUSION
In this study, we explored how a video intervention can help users engage in more privacy-protective behaviors. By integrating persuasive elements into these videos, we are able to provide more effective mechanisms to help educate people about protecting their privacy. We believe that these modes of persuasion can be a powerful tool in the quest to help users understand their privacy and empower them to take privacy-preserving actions.

1 Fear Appeal

Did you know that Facebook collects data on you and uses that information to fill your feed with ads that can influence what you buy, where you live, what activities you participate in, your mental health, political stance and much more? One way Facebook accomplishes this is by classifying you and tailoring your ad experience based on how you interact online through the pages you like, groups you join, links you click and more.

With this, Facebook makes assumptions about your personal preferences such as your gender, religion, race and politics. By collecting information from your Facebook profile, advertisers could use this information to discriminate against you.
For example, John is studying finance at BYU and is preparing to graduate.
John's friends are also majoring in the same field but attending different local schools such as Utah Valley University and the University of Utah.
Recently he has noticed that his friends are getting ads for job opportunities on their Facebook feeds, while he isn't seeing any.
John fears that these companies are explicitly avoiding BYU students because of recent criticisms of the school's stance on various social and political issues. John is concerned about the possibility of companies discriminating against him in the recruiting process and worries that this will adversely affect his career opportunities.
This can happen to anyone, it could even be happening to you.
Facebook makes inferences about you with every click and interaction you make and stores it to give you a tailored ad experience. Now that you know how Facebook uses your information and activity to tailor your ad experience, here is how you can prevent being discriminated against.
Go to your Facebook homepage.
In the top right corner, click on the down arrow to see a list of your account settings.
Next, select "Settings." Then scroll down on the left-hand column and click on "Ads." This will bring you to your Ad Preferences page. Here there are three different sections listed on the left-hand column.
The first you will see is the "Advertisers" section.
Here, you can hide ads from specific advertisers by going to "Advertisers you've seen most recently" or "Advertisers whose ads you've clicked on" and clicking on "Hide Ads." To see which advertisers' ads you've already hidden, go to "Advertisers you've hidden." Next, click on "Ad topics" below the "Advertisers" tab in the left column.
Some advertisements are related to sensitive topics such as these.
You can request that Facebook show you fewer of these ads by clicking the "see fewer" button.
Finally, below the "Ad topics" tab, click on "Ad Settings".
Scroll down and click on "Categories used to reach you".
Here, you can dictate what information Facebook uses from your profile.
If you scroll even further within "Categories used to reach you" and click on "Interest Categories", you will see categories Facebook associates with you based on your profile information and online activity.
These categories help advertisers reach people who are most likely to be interested in their products, services, and causes.
On the right side of each category, you have the option to remove those that you don't want to see or be associated with.
At the bottom of your screen you may also click on "Removed Interests" to see which interest categories you have removed.
You can see even more of your categories by going back to "Categories used to reach you" and scrolling down to "Other categories".
Because discrimination can heavily influence the ads you see on your feed, becoming informed and knowing how to take charge of your ad settings is the first step in regaining control of your experience on Facebook.
2 Fear Appeal + Reflective Learning

Did you know that Facebook collects data on you and uses that information to fill your feed with ads that can influence what you buy, where you live, what activities you participate in, your mental health, political stance and much more? One way Facebook accomplishes this is by classifying you and tailoring your ad experience based on how you interact online through the pages you like, groups you join, links you click and more.

With this, Facebook makes assumptions about your personal preferences such as your gender, religion, race and politics. By collecting information from your Facebook profile, advertisers could use this information to discriminate against you.
For example, John is studying finance at BYU and is preparing to graduate.
John's friends are also majoring in the same field but attending different local schools such as Utah Valley University and the University of Utah.
Recently he has noticed that his friends are getting ads for job opportunities on their Facebook feeds, while he isn't seeing any.
John fears that these companies are explicitly avoiding BYU students because of recent criticisms of the school's stance on various social and political issues. John is concerned about the possibility of companies discriminating against him in the recruiting process and worries that this will adversely affect his career opportunities.
This can happen to anyone, it could even be happening to you.
Facebook makes inferences about you with every click and interaction you make and stores it to give you a tailored ad experience.
This is a video showing some of the information Facebook tracks about you that can be used for advertising. This is some of your personal information. These are some of your political interests. These are some of your religious interests. And these are some of your miscellaneous interests. Do you feel these categories accurately represent you?
How might some of these lead to you being discriminated against?
Now that you know how Facebook uses your information and activity to tailor your ad experience, here is how you can prevent being discriminated against. Go to your Facebook homepage.
In the top right corner, click on the down arrow to see a list of your account settings.
Next, select "Settings." Then scroll down on the left-hand column and click on "Ads." This will bring you to your Ad Preferences page. Here there are three different sections listed on the left-hand column.
The first you will see is the "Advertisers" section.
Here, you can hide ads from specific advertisers by going to "Advertisers you've seen most recently" or "Advertisers whose ads you've clicked on" and clicking on "Hide Ads." To see which advertisers' ads you've already hidden, go to "Advertisers you've hidden." Next, click on "Ad topics" below the "Advertisers" tab in the left column.
Some advertisements are related to sensitive topics such as these.
You can request that Facebook show you fewer of these ads by clicking the "see fewer" button.
Finally, below the "Ad topics" tab, click on "Ad Settings".
Scroll down and click on "Categories used to reach you".
Here, you can dictate what information Facebook uses from your profile.
If you scroll even further within "Categories used to reach you" and click on "Interest Categories", you will see categories Facebook associates with you based on your profile information and online activity.
These categories help advertisers reach people who are most likely to be interested in their products, services, and causes.
On the right side of each category, you have the option to remove those that you don't want to see or be associated with.
At the bottom of your screen you may also click on "Removed Interests" to see which interest categories you have removed.
You can see even more of your categories by going back to "Categories used to reach you" and scrolling down to "Other categories".
Because discrimination can heavily influence the ads you see on your feed, becoming informed and knowing how to take charge of your ad settings is the first step in regaining control of your experience on Facebook.

4.1.1 Research Questions. Our interventions were designed to be engaging, persuasive, and educational. We wanted to understand whether our interventions were effective in increasing the concern

4.3.2 Qualitative Data. Two researchers collaboratively coded all of the open-ended interview and survey items and developed an initial codebook. The codebook was finalized through discussion and consensus building between the two researchers and a third researcher. The themes uncovered in the qualitative data are used to understand and contextualize the quantitative results. In summary, we used an iterative consensus-building process in line with the standards for qualitative research discussed by McDonald et al. [50].

Figure 4: Participants reporting learning something, having their minds changed, and intent to change their behavior after watching the assigned video.

Figure 5: Participants' ability to navigate to Facebook's advertising settings and the medium used if assistance was needed.

Figure 6: Profile and interaction plots for how often users thought about the video, their reported decline in Facebook usage, concern for personal information Facebook has collected, and reported vaguebooking strategies.

Figure 7: Proportion of participants that made changes to their sensitive topics, targetable profile information, removal of interest categories, or changes to any advertising settings.

Figure 8: Proportion of participants that opted out of seeing ads in sensitive topics (i.e., Alcohol, Parenting, Pets, Social Issues) and disallowed advertisers from targeting them based on profile information such as Employer, Job Title, Education, and Relationship Status.

Figure 9: Proportion of participants that made changes to Facebook's advertising settings in and out of the lab setting.

Table 2: Number of Participants by Video Condition and Survey

Table 3: Learned Something from Video - Logistic Regression

Table 7: Think About the Video - Linear Mixed-Effects Regression

Table 14: Changed Sensitive Topics - Logistic Regression

Table 17: Facebook Activity - Comments - Mixed-Effects Regression. Observations: 1440. *p < 0.05, **p < 0.01, ***p < 0.001.