Data Refusal from Below: A Framework for Understanding, Evaluating, and Envisioning Refusal as Design

Amidst calls for public accountability over large data-driven systems, feminist and indigenous scholars have developed refusal as a practice that challenges the authority of data collectors. However, because data affect so many aspects of daily life, it can be hard to see seemingly different refusal strategies as part of the same repertoire. Furthermore, conversations about refusal often happen from the standpoint of designers and policymakers rather than the people and communities most affected by data collection. In this article, we introduce a framework for data refusal from below—writing from the standpoint of people who refuse, rather than the institutions that seek their compliance. Because refusers work to reshape socio-technical systems, we argue that refusal is an act of design and that design-based frameworks and methods can contribute to refusal. We characterize refusal strategies across four constituent facets common to all refusal, whatever strategies are used: autonomy, or how refusal accounts for individual and collective interests; time, or whether refusal reacts to past harm or proactively prevents future harm; power, or the extent to which refusal makes change possible; and cost, or whether or not refusal can reduce or redistribute penalties experienced by refusers. We illustrate each facet by drawing on cases of people and collectives that have refused data systems. Together, the four facets of our framework are designed to help scholars and activists describe, evaluate, and imagine new forms of refusal.


INTRODUCTION
Refusal is a practice of saying "no" to how data are collected or used, and rejecting the processes, goals, or authority of data collectors. A First Nations community might force governments and academics to follow community-defined research policies [32]. A family might try to refuse ecommerce data collection [109]. A citizens group might sue a government agency for engaging in domestic surveillance [29]. In our time, corporations and governments continue to collect data to run systems that profoundly affect every element of people's daily lives. Because these systems reach across many different domains, acts of refusal can take on many different forms. As a result, it can be difficult to see the diverse acts of refusal undertaken by individuals and collectives across society as instances of a broader movement.
To begin to describe these actions, feminist and indigenous scholars have developed refusal as a broad concept for understanding the agency of the people whose lives are affected by data regimes [7,8,14,19,39,54,75,84,92,97,102,113]. These concepts matter because activists and community leaders are using the idea of refusal to build shared conversations and to explain their work. Yet, as new ways to use and collect data continue to be invented, so do new ways to refuse. How can people both understand existing practices as acts of refusal and also think systematically about how refusal responds to the design of data systems?
To address this question, we develop the idea of data refusal from below by presenting a design framework for illuminating the spectrum of refusal. The options available to communities are shaped by policymakers, computer scientists, and designers who influence what kinds of data refusal to require, allow, or prevent. Communities navigate these technical and institutional constraints to arrive at creative ways to resist, disrupt, or pursue alternatives. We recognize the process of discovery that enables new refusals as a generative practice of design. To support refusers in this work, we adopt the idea of a "prism of refusal" from Benjamin [14] to characterize refusal strategies across four constituent facets: autonomy, time, power, and cost. These facets are considerations that apply to all refusal, whatever strategies are used. We illustrate each of these facets with cases of people who have refused systems of data power. By considering these four facets, we hope marginalized communities can envision and organize refusals that more directly meet community needs and advance a just world.
Starting from the standpoint of people most affected by data collection, we write about and for those who are typically excluded from design decisions about data-driven systems. Scholarship on refusal has largely focused on actions that designers and policymakers can take to create change within powerful institutions. However, when research on data refusal primarily focuses on the standpoint of small groups of influential people, scholars risk sidelining the goals and perspectives of the vulnerable and marginalized. Furthermore, we risk entrenching theories of change that rely on the decisions and goodwill of elites, rather than supporting people's agency to shape their relationships to data regimes. For activists, this framework offers a way to explain their work within and alongside marginalized communities as part of a broader movement of refusal. It also offers a way to think about how their acts of refusal fit into a broader terrain of possible actions. For designers and scholars, this framework offers a way to understand the actions of refusers as a form of participation in technology design. Just as data systems affect people's everyday lives, the agency people exert within systems, including through their non-participation, exerts pressure on the design of new technology in response to changing behaviors and collective actions.
While there is an important tradition of research from below in a global context, this article has a North American focus. As researchers in the United States (U.S.), our point of view on questions of data refusal may differ from the values or needs of a global audience. Many of the cases of refusal we draw on are responses to U.S., Canadian, and European regulatory regimes that have outsized influence on large North American tech firms with global reach. While our work is informed by these particular conversations, we acknowledge the importance of geographic and cultural context in shaping what approaches to refusal are possible or practical.
In this article, we advance the idea that refusal can be theorized as an act of design and provide a framework for thinking about current and future acts of refusal. In Section 2, we establish an initial scope for the concept of data refusal from below. Drawing on conversations in feminist standpoint theory, we argue that acts of refusal undertaken by the marginalized must be understood differently from refusal by the institutionally privileged. In Section 3, we argue that refusal can be thought of as an act of designing alternate social configurations. Seeing refusal as design (and, by extension, refusers as designers) creates opportunities for design-oriented theory and methods to contribute to the continually evolving practice of refusal. In Section 4, we ground our conceptual framework contribution in the methods and approaches of computer science and design. Frameworks serve descriptive, evaluative, and generative purposes by giving scholars language to describe and compare artifacts in the world and imagine new design possibilities. The article's main contribution appears in Section 5, where we present a framework consisting of four facets of refusal: autonomy, time, power, and cost. When introducing each facet, we use real-world cases to highlight important considerations that apply generally when using the framework to analyze instances of refusal. Finally, in Section 6, we explicate the descriptive, evaluative, and generative uses of the framework, reflect on how designers can learn from acts of refusal, and articulate a vision for a politics of refusal in a world shaped by large institutions.

DATA REFUSAL FROM BELOW
Refusal is a practice of "say[ing] no without being given a right to say no" [92]. In this section, we first introduce the origins of the idea of refusal and talk about what it means in the context of data. Then, we summarize existing work on data refusal and define "data refusal from below" as an important subset of data refusals.

Data Refusal
Scholars across anthropology, feminist theory, and bioethics have developed refusal as a concept for understanding how people reshape social and political relations. Anthropologist Carole McGranahan argues for the need to "recognize and theorize refusal" as a way that people "[stake] claims to the sociality that underlies all relationships, including political ones" [75]. Important conversations about refusal have been situated in discussions of citizenship and indigeneity [74,97,107]. One particular source of inspiration for this article is the work of sociologist Ruha Benjamin, who developed the idea of informed refusal within a family of conversations about the "relationships between biological and political experiments" [13]. For example, knowledge-making endeavors around stem cells and DNA shape how institutions see people and how power is structured in society. Benjamin describes how "biocitizens" such as stem cell advocates contend over biological research by organizing to draw attention to inequalities and injustices in those systems of knowledge. Citing Rapp [87], Benjamin describes "moral pioneers" and "biodefectors" who work individually or collectively to opt out of studies, prevent labs from being built, or repatriate blood samples that geneticists used beyond agreed limits [14].
Drawing from these prior conversations, the idea of data refusal sheds light on ways that people contest the power of data in their lives, while also making the power of data collectors more legible [97]. Scholars have argued that multinational corporations' domination over digital ecosystems grants firms "direct power over political, economic and cultural domains of life," an arrangement scholars have described as digital colonialism [24,66]. Like the historical subjects of colonialism, data subjects continue to design innovative ways to reshape their social relations with data collectors, even when power imbalances are great.
People often take up refusal in opposition to a variety of harms caused by data systems. Critical data scholar Anna Lauren Hoffmann has drawn a distinction between analyses of harm that focus on access to "rights, opportunities, and resources" and those that attend to social, discursive, and representational dimensions of data [53]. Privacy scholars Citron and Solove have proposed a legal typology of privacy harms that covers a broad range of harms to do with rights, opportunities, and resources [20]. Their framework includes types of harm that can easily be observed because they affect an individual's physical and psychological safety, reputation, or economic status. Crucially, their framework also includes collective harms, including discrimination, risks to relationships, and constraints on collective autonomy. Scholars from a variety of disciplines have also written about important cases of symbolic and representational violence that are not well captured by privacy frameworks. For example, Safiya Noble has written about the dignitary harm of Google Search's stereotype-reinforcing results [79]. Sasha Costanza-Chock notes the cisnormativity of airport scanning systems that inevitably subject trans, intersex, and gender-non-conforming people to additional scrutiny [22]. Hoffmann explains that symbolic harms are also upheld by shaping discourse to normalize oppressive conditions and diffuse resistance [55].
When people refuse a data system, they are taking pragmatic actions toward changing those systems. For Ruha Benjamin, refusal is a form of agency that involves rejecting "the terms set by those who exercise authority in a given context" and that "may also extend beyond individual modes of opting out to collective forms of conscientious objection" [14]. While computer science and communication scholars often imagine lack of technology access or use as a form of deprivation [26,89], refusal is "a politically significant action: one that opens up possibilities for power shifts, resistance to dominant political structures, and emancipation" [18].

From Below
Refusal is a situated action that is shaped by the standpoint and imagination of the people who do the refusing [19]. Feminist standpoint theorists have argued that people's ways of understanding the world depend on their position within social structures [47]. The language of "above" and "below" is sometimes used to "emphasize and articulate the tensions between the wants, needs, and knowledge of unequally empowered stakeholders" [111]. Feminist scholar Donna Haraway, for example, argues that scientists should value knowledge "from below," which begins from the experiences of those "subjugated" under hierarchies of power [46]. This language of power clarifies an important distinction: Because refusal happens within these systems of power, people's options for refusal will depend on their relationship to power.
A growing body of research considers actions of refusal by academics, designers, and industry professionals who are positioned to influence the creation of technology. For example, the authors of the Feminist Data Manifest-No commit in their capacity as academics to "entering ethically compromised spaces like the academy and industry not to imbricate ourselves into the hierarchies of power but to subvert, undermine, open, make possible" [19]. Other work has advanced efforts by tech designers to reshape the values and practices of technology production [4], including by refusing to design [7,11,42,112] or participate in industry events [41], or by engaging in labor organizing [23] and sabotage [76].
Another subset of work on data refusal focuses on the practices of those who are less proximal to the creation of new technology, yet are affected by its design and deployment. Recently, feminist scholars who also contributed to the Feminist Data Manifest-No have articulated the concept of critical refusal through case studies illustrating "how data practices can be at the center of issues that impact vulnerable communities" [40]. Researchers in computer science have studied practices including crowd-worker mutual aid [60], collective actions by platform moderators [71], and political organizing around smart city implementation [111], although this work is not typically examined explicitly through the lens of refusal.
Our purpose in advancing the idea of "data refusal from below" is to emphasize the value of refusal by people harmed by data systems who have little direct power over technology creation. In computer science, designers do not always see actions by people who resist or reject technology as legitimate or worthy of study [89]. Yet as communication scholar Seeta Peña Gangadharan argues, "we cannot just rely upon the enlightened goodwill of privileged elites to recognize and rectify [injustices]," because refusal initiated by the marginalized "matters just as much as recognition of the marginalized by privileged people or institutions" [39].
Even though technology designers can refuse in ways that others cannot, their refusals can be consistent with the values of other refusers working "from below." Anthropologist and Kahnawà:ke citizen Audra Simpson has described how academics can respect the limits established by others with less institutional power. She writes about refusals taken up by her Kahnawà:ke community to reject settler-colonial ideas about borders, citizenship, and membership [98]. One such everyday refusal involved feigning ignorance about certain information during ethnographic interviews [97]. Our encouragement that scholars and designers take a view from below is not meant to exclude or dismiss refusals by tech workers and academics. As other human-computer interaction (HCI) scholars of tactics from below have noted, "above" and "below" are oversimplifications: researchers are often simultaneously situated "above" and "below" in different contexts [111]. Similarly, Ganesh and Moss note that "'inside' and 'outside' Big Tech [are not] watertight categories," even as they acknowledge the different discursive implications of different kinds of responses to harm [38]. Yet standpoint is a generative idea for thinking about acts of refusal. For example, organizers of the #NoTechForIce campaign name "educating communities about how to protect themselves against new forms of criminalization, taking direct action to confront corporate actors, [and] organizing with tech workers and students to leverage their influence over Silicon Valley" as some of several ways they are resisting the relationship between tech companies and U.S. immigration enforcement. These different approaches all contribute to a wider movement by explicitly incorporating an analysis of power and standpoint. Similarly, Indigenous studies scholars Tuck and Yang point out the need for analytic practices of refusal that "negotiate how we as social science researchers can learn from experiences of dispossessed peoples-often painful, but also wise, full of desire and dissent-without serving up pain stories on a silver platter for the settler colonial academy, which hungers so ravenously for them" [107]. We hope that this framework helps scholars see refusal practices from above and below as parallel and complementary.

REFUSAL AS DESIGN
To understand refusal as design, one must understand the nature of design as a form of socio-technical power that people resist and also take up in the act of resistance. Herbert Simon calls design an "act of envisioning possibilities and elaborating them" [50]. Just as Simon writes of planners who "took whole societies and their environments as systems to be refashioned" [50], the systems we are concerned with in this article use the products of mass data collection to reshape politics, economics, and culture. Information scholars have long argued that data systems should be understood not as purely technical but as socio-technical: best understood "as systems and devices embedded in larger material and social networks and webs of meaning" [78]. Design of this kind is carried out not just by individual experts or engineers, but by an assemblage of actors across the tech industry, governments, and civil society.
Because refusal involves deliberate acts to influence socio-technical systems of power, we see refusal itself as a form of design. For example, when activists refuse the use of facial recognition systems for mass surveillance, they are imagining a different arrangement of power that includes the typical domains of design: services, policies, and user experiences. For Benjamin, refusal is "seeded with a vision of what can and should be" [14]. Similarly, Simon points out that "everyone designs who devises courses of action aimed at changing existing situations into preferred ones" [50]. When refusers think strategically about the actions they need to take to bring about better technological futures, they are engaging in a design process that is pragmatically oriented and considers alternatives. Rather than designing physical artifacts, they are designing refusals as social artifacts. Instead of physical objects, these artifacts are individual and collective actions that challenge existing social relations.
Like all design, refusal is also concerned with relationships and obligations between designers and users. As Flores et al. argue, "technology is not the design of physical things. It is the design of practices and possibilities to be realized through artifacts" [35]. When people are affected by systems that they did not ask for, design imposes relations and obligations that are unwanted. Anthropologists have theorized this creation of obligation through the lens of gift-giving. For Marcel Mauss, gifts are never freely given; they carry with them a hierarchy of power, expectations, and responsibilities between giver and recipient [69]. Sociologists Fourcade and Kluttz have pointed out that enrollment into digital systems often involves a "give-to-get" relationship that "masks the structural asymmetry between giver and gifted" [36]. Saying no to a gift, then, involves refusing the social order that the gift implies and upholds. Analogously, when people refuse technical systems and data collection, they are reshaping an implied relationship: the commodification and appropriation of their data under the pretext of a fully reciprocal relationship.
Because people refuse systems with an alternative in mind, refusal is an important form of participation in the design process, which expert designers never fully control. McGranahan argues that "to refuse can be generative and strategic, a deliberate move toward one thing, belief, practice, or community and away from another" [75]. Ruha Benjamin notes that the "prism of informed refusal sets out to explore the capacity for resisting and reimagining ... without at once romanticizing or valorizing resistance as an inherent 'good'" [14]. For Benjamin, resistance and refusal are not end goals but pragmatic and transient forms of agency along the way to a longer-term set of societal outcomes. Seeta Peña Gangadharan argues that "when marginalized people refuse technologies, they imagine new ways of being and relating to one another in a technologically mediated society" [39].
By framing refusal as a form of design, our work explores an opportunity to apply methods and theories from design to support refusal in practice. We offer a framework for describing, evaluating, and generating acts of refusal using the four facets of autonomy, time, power, and cost. Our framework encourages designers and activists to consider each of these facets as component design characteristics of refusal.

METHODS: DESIGN DIMENSIONS IN COMPUTER SCIENCE
In this article, we offer a conceptual framework for refusal of data systems that we hope can be used by people planning acts of refusal. Framework papers illuminate a series of cases with relevant scholarship to reveal how that scholarship can inform further research and design. Computer scientist Michel Beaudouin-Lafon offers three central uses for conceptual frameworks: descriptive, evaluative, and generative power [12]. Descriptive power refers to how well a framework provides language to describe existing cases. A usefully descriptive framework would provide a consistent set of concepts for analyzing multiple cases of refusal, even if they seem very different. Evaluative power refers to how a framework helps make comparisons between features of different cases. A framework with clear evaluative power would help designers use core concepts to assess the relative strengths and weaknesses of two cases of refusal. Finally, generative power refers to the usefulness of a theory to identify unmet needs and inform the creation of new cases. A framework with strong generativity would help people develop new approaches to refusal that have not yet been realized in practice.

Computer science and design researchers, particularly in HCI, use conceptual frameworks to describe, evaluate, and generate designs. Their goal is to put theory into practice by using the framework to inform future designs. In particular, Shaowen Bardzell has articulated the importance of generative applications of feminist theory to feminist HCI and interaction design [9]. Frameworks serve a similar purpose for scholars and activists in the social sciences. For instance, Sasha Costanza-Chock has drawn from social movement theories to develop a framework for imagining online activism tactics [94]. For Ruha Benjamin, refusal is a "prism" that situates actions "within a more comprehensive spectrum of human agency vis-à-vis technoscience" [14]. Benjamin's idea of a prism of refusal is an apt analogy for the purpose of the framework in this article. By separating instances of refusal into the components of a spectrum, we can study refusal more clearly while also supporting people to imagine new kinds of resistance in "a much larger terrain of action and negotiation" [14].
In this article, we offer a framework of refusal as design with four facets: autonomy, time, power, and cost. In contrast to work taxonomizing forms of resistance to data collection [15,110], we are not listing discrete categories that differentiate instances of refusal from one another. Instead, our framework separates out four common aspects that constitute any act of data refusal. Each example of refusal includes all four facets. These facets can be used to analyze cases independently and also to draw attention to ideas that emerge at the intersection of multiple facets.
We developed this framework by curating real-world cases of refusal from academic and journalistic sources and identifying characteristics of each approach that could generalize to other cases. We also referenced existing critical scholarship on refusal, especially the Feminist Data Manifest-No [19] and Benjamin's work on informed refusal [14], with particular focus on discussions of what refusal can accomplish and who is able to refuse. As we fleshed out parts of the framework, we iteratively sought out additional cases to illustrate contrasting positions along relevant dimensions. For example, after reviewing a case involving efforts to uphold individual autonomy at the expense of other values, we made sure to consider cases with contrasting approaches to autonomy. We settled on our four facets when we felt that they were able to usefully represent the considerations and critiques associated with our cases in academic and journalistic discussions. The cases we include in the article are not meant to be an exhaustive representation of refusal practices. This is because technology, and people's responses to technology, continue to evolve over time. However, we did make an effort to include examples that exemplify multiple representative ways of thinking about autonomy, time, power, and cost.

FOUR FACETS OF REFUSAL
This article describes four facets of refusal to consider when describing, evaluating, and imagining data refusal. When people develop new forms of data refusal, they will need to develop answers to the questions that each of these four facets raises.

Autonomy: Individual and Collective
The concept of autonomy is important to conversations about refusal, since the desire and capacity to refuse are linked with the idea of self-determination. Autonomy is typically defined as the capacity to freely make informed choices. The authors of the Feminist Data Manifest-No, for example, express their commitment to "research cultures that promote data autonomy and SELF-representation" [19]. In the U.S., individual consent rose to prominence in the 20th century as the dominant method for establishing practical protections for individual autonomy in research and medical contexts [88]. Regulations such as the European Union's General Data Protection Regulation (GDPR), Brazil's Lei Geral de Proteção de Dados, and the California Consumer Privacy Act (CCPA) have drawn from this framework to establish individual consent as a central virtue of data protection [70].
As the current wave of privacy regulations demonstrates, prioritizing individual autonomy leads to forms of data management that involve offering people choices that they can accept or refuse. In the U.S., the close relationship between autonomy and individual consent was established in the mid-20th century, when U.S. medical researchers and bio-ethicists turned to the idea of consent in their search for ways to prevent atrocities like those of the Nazi scientists who exploited victims of the Holocaust [82]. On the frontiers of science, risks and harm are difficult to estimate, and scientists tend to justify their work as a common good. As Rothman summarizes, "human experimentation pitted the interests of society against the interests of the individual" [88, Chapter 5]. Informed consent provided a versatile procedure for protecting individual interests from the consequences of utilitarian arguments from scientists.
Because harms can often be collective, any act of refusal must reckon with both individual and collective autonomy. In the theory of privacy as contextual integrity, the philosopher Helen Nissenbaum argues that privacy depends on upholding context-specific norms beyond the individual. These norms are influenced by individual data subjects as well as data collectors, infrastructures of information flow, and the relationships between all involved entities [77]. In parallel, feminist philosophers have noted that accounts of individual autonomy have often struggled to reconcile the importance of self-determination with the fact that individuals are "socially embedded and ... shaped by a complex of intersecting social determinants" [68]. To reconcile this tension, feminist scholars have advanced "relational autonomy," a concept that acknowledges how individuals are interconnected, their actions are socially embedded, and their choices are influenced by power relations [68]. Because autonomy can be exercised individually or collectively depending on context, any endeavor of refusal involves questions about whose autonomy is at work.

Individual Autonomy Protects Individuals' Interests Against Those of Data Collectors.
Though many people consider large-scale corporate, government, and academic data collection to be common knowledge, others are surprised and upset to learn that their data are collected and used without their personal knowledge or consent [31]. For example, when the public learned that some Facebook users had their feeds altered as part of Facebook's Emotion Contagion study in 2014, many people expressed that they felt exploited or manipulated [45]. Scholars pointed out the lack of consent, either prior or retroactive, as one of the central ethical issues with the study [44]. A procedure for consent or retroactive opt-out, scholars argued, would have provided Facebook users with an opportunity to exercise their autonomy and potentially refuse participation in the study.
Individual consent represents a fruitful area for the design of refusal. Researchers and policymakers have worked to establish individual consent procedures as a norm and legal requirement for online data collection. Feminist HCI researchers have suggested that for data collectors to fully support individual agency and autonomy, their operationalization of affirmative consent must be voluntary, informed, revertible, specific, and unburdensome [59]. System designers have also used design as a form of enquiry in the articulation and evaluation of ethical frameworks, including procedural and substantive theories of autonomy and consent [114,115]. For example, in response to regulatory requirements created by the GDPR, companies have introduced consent management platforms to provide users with information about web trackers and solicit their individual preferences about tracking [80]. In another example, the Consentful Tech Project develops guidelines and resources around digital consent, and has produced a curriculum guiding tech designers to "build consentful cultures and technologies" [108]. Designers have used these guidelines to prototype a user flow for signing a collective letter that also allows signatories to revoke their signature just as easily [30].

Individual Autonomy Cannot Account for the Conflicting, Equally Legitimate Interests of Different People.
When a single decision about data collection affects multiple individuals, individual consent cannot account for everyone's autonomy. For example, photographs often depict more than one person. The person who takes the photo might not even appear in the image, and the person uploading a photo to an online service might be another person altogether. An individual consent process will only ask one of these people for their preferences, and that person might not have the right to make decisions for the others. Barocas and Levy use the term privacy dependencies to refer to the ways "our privacy depends on the decisions and disclosures of other people" [10]. Amy Hasinoff notes that these problems are especially pronounced when thinking about consent and the circulation of intimate imagery online [48]. Refusal strategies that prioritize individual autonomy, such as opting out of consent to data collection, are meant to protect a single person's welfare against the interests of data collectors. But data collection commonly complicates where the "boundaries of a person" can be drawn [27]. When multiple people have legitimate interests in the circulation of data, no individual-choice consent decision can even record everyone's interests, especially if they do not all agree.
Even if a single instance of data collection respects the autonomy of everyone the data are collected from, procedures like individual consent cannot account for the collective outcomes of the use of those data. Many uses of data also affect people across society, not only those directly involved in data collection. While a single photo on its own has limited uses, large photographic datasets can be used by facial recognition algorithms that are incorporated into abusive policing and immigration systems. Because consent is only meant to help people manage risks to themselves, it cannot prevent risks to people who are not yet in a dataset or to society at large.
For many privacy scholars, the need to protect individual autonomy presents a seemingly intractable "consent dilemma" [100]. When problems arise from individual choice, legal scholars tend to think that the only alternative is the power of the state. Solove notes that a dilemma arises because "the most apparent solution—paternalistic measures—even more directly denies people the freedom to make consensual choices about their data" [100]. Relational theories of collective autonomy offer a further option that prioritizes autonomy while accounting for risks and benefits beyond the individual.

Collective Autonomy Balances the Interests of Both Individuals and Society.
When people come together to collectively manage how their data are collected and used, they are conceiving of their autonomy as relational by entwining their own interests with those of a larger group. As Salomé Viljoen argues, "data isn't collected solely because of what it reveals about us as individuals. Rather, data is valuable primarily because of how it can be aggregated and processed to reveal things (and inform actions) about groups of people" [91]. Data cooperatives are an example of organizations that allow their members to collectively manage the terms of data collection and use. Cooperatives, such as those formed by consumers, tenants, or employees, comprise a "group that perceives itself as having collective interests, which it would be better to pursue jointly than individually" [3]. The Salus Coop is an example of a data cooperative formed around health research data. Members donate their own health data to the co-op, and contribute to medical research while retaining control over their own data. Through the use of a common good data license, the co-op specifies terms that researchers must follow to use members' anonymized data [62]. The Native BioData Consortium [104-106] is another example organization, led by indigenous geneticists who work to keep the storage and use of biological samples local to tribal communities. Cooperatives protect the rights and autonomy of individual members by setting clear terms for how their contributed data can be used, while also permitting health research that might provide collective benefits.
To imagine new legal mechanisms for implementing collective notions of consent and autonomy, privacy scholars have argued that data processing entities should have a fiduciary obligation to act in the interests of the people whose data they manage. For example, Balkin suggests that the same legal duties of care and loyalty that apply to doctors and lawyers ought to apply to companies and providers that work with data [6]. Delacroix and Lawrence instead argue that a multitude of third-party "data trusts" could negotiate with data collectors according to terms collectively determined by the trust's beneficiaries, "introducing an independent intermediary between data subjects and data collectors" [25]. Similarly, RadicalXChange's proposal for Data Coalitions would "establish tightly regulated collective bargaining entities ... [to] pursue their Members' varying interests from a vastly better bargaining position" than if they were to bargain individually [61]. These proposals rely on models of autonomy that allow individuals to delegate the capacity to consent to a representative third party, which social scientists have called "representative consent" [57]. Critics of data trusts and co-ops note that these proposals attempt to fit data into existing legal frameworks for governing property or labor relations, which subjects people and their data to pre-existing extractive and coercive market conditions [90]. Nonetheless, this line of research suggests possible ways collective data management can balance individual autonomy with the common good.

Time: Reactive or Proactive
Designers of data refusal activities must also consider the timescale in which refusal operates. As Solove writes, "privacy is an issue of long-term information management" [100]. Many cases of refusal, from informed consent to class action lawsuits, occur after an attempt at data collection. These reactive approaches are possible when the mechanisms of data collection and their possible harms are known. Even after harms occur, future refusal can sometimes redress those harms for the people affected. Yet as new developments in data collection, use, and disclosure continue to be invented over time, refusal can be mobilized to proactively prevent future harm.

Reactive Refusal Responds to Harms That Have Already Occurred.
Because the power of data is often wielded without even informing those involved, the people harmed by data collection often find themselves reacting to harms after they occur. In January 2019, when IBM published roughly a million photos of unsuspecting individuals to create a facial recognition dataset they called Diversity in Faces, many people were surprised and upset to discover their photos were included [81]. From 2004 to 2014, some people who published photographs to Flickr (a photo-sharing site) provided them under a Creative Commons license that allowed the reuse of those photos by third parties. Whether or not the people in those photographs consented, their likeness was passed from company to company over the years. Yahoo purchased Flickr and published a public photography dataset in 2016 [103], and IBM later re-purposed these images to train facial recognition algorithms. These photos were quickly adopted by other large tech companies hoping to improve their products [86]. It is likely that immigration offices, law enforcement, and other institutions are using algorithms trained on the faces of people who never knew their photos had been shared and never had a chance to refuse.
In response to public outcry over the Diversity in Faces dataset, IBM assured the public that they "will work with anyone who requests a URL to be removed from the dataset" [81]. Although IBM will not inform people if their image appears in the dataset, the company requires people to send a precise copy of the image they want the company to remove. If that person is unable to submit a copy of every image of them that IBM holds, then the company will retain their biometric data. Because IBM defines the process and controls information about the process, they can create a reasonable-sounding procedure that is nearly impossible to opt out of. Because IBM's process puts the burden on the public to check whether their images were included, people in the state of Illinois instead filed a class action lawsuit against the company [2]. This lawsuit represents a collective refusal of IBM's actions by people in Illinois. By asking for $5,000 in damages for each person whose rights were ignored, the lawsuit is designed to deter IBM and other firms from collecting, storing, and disseminating data that puts people at risk without their consent.
IBM's data removal process and the Illinois class action lawsuit are both refusal strategies that can only happen after a harm has been discovered by the public. These strategies help those affected by a current instance of data collection. But if a similar scandal happens in the future, then the people affected would have to seek recourse all over again. Though scholars have put forth arguments for and against class actions as a way to deter future wrongdoing [34], lawsuits can still only react to new data collection scandals after they occur.

The Effects of Data-driven Harm Occur on Extended Timescales.
Over a long period of time, the same dataset can change hands and be copied and reused beyond its original purpose. When Flickr launched in 2004, it was a small startup based in Vancouver. In 2005, Yahoo acquired Flickr and moved all of its data from servers in Canada to servers in the United States, which changed the set of laws that applied to the data [1]. In 2007, Yahoo shut down the Yahoo Photos service and provided people with a way to transfer their photos to Flickr. People who transferred their photos to Flickr would have exposed their photos to inclusion in the facial recognition dataset despite originally using a different service. Ten years after Flickr launched, and likely after many people had forgotten about their early uploads, Yahoo aggregated 100 million photos into a public research dataset [103]. Yahoo researchers published an academic report on the dataset that has been cited in hundreds of computer vision research papers. Then in 2017, Yahoo was acquired by Oath (renamed Verizon Media in 2019). In 2018, Oath sold Flickr to SmugMug, another photo-sharing service. In the meantime, IBM researchers downloaded a copy of the dataset from Yahoo and modified it to create the Diversity in Faces dataset released in 2019.
Because datasets can be copied, remixed, and recirculated endlessly, addressing harm reactively requires ongoing vigilance. Throughout this convoluted chain, people's non-commercial copyright agreements with Flickr became less influential as their data proliferated through multiple hands. Even if everyone who had a copy of a photo could be contacted, it would be difficult to force every derivative dataset to comply with the original copyright agreement, let alone with consent that was never sought.
A one-time decision based on information available at the time a dataset is created cannot address all subsequent developments. Originating in biomedical contexts, informed consent is a one-time decision at the point of data collection that forms an agreement about how the data will be used. But as Sedenberg and Hoffmann [96] point out, within the biomedical model it is "unclear how to assess harm or potential risk outside of physical interventions or in-person interactions, as with the possibility of reidentified data or harms that may occur on extended timescales." As the IBM case illustrates, data also pass through different hands after collection, where they can be copied, stored, and combined with other data in ways that cannot be anticipated at the point of consent. In an analysis of machine learning datasets that had been retracted for ethical concerns, Peng et al. found not only that the data continued to have wide availability and use in research, but also that researchers lack the infrastructure to track derivative datasets to understand their impact [83]. Because new developments in data collection, use, and disclosure continue to be invented over time, all possible harms cannot be anticipated by a one-time decision.

Proactive Refusal Tries to Prevent Future Harm.
When it is apparent that technologies will continue to produce further unacceptable harm, activists have turned to refusal strategies that prevent new cases of harm before they occur. In the U.S., the movement against government face surveillance is an important case for understanding the importance of proactive refusals. In January 2020, Robert Williams was wrongfully arrested on the basis of an incorrect match by a police department's facial recognition system [51]. Despite the fact that the company that provides this technology to the police "does not formally measure the systems' accuracy or bias," the police used the flawed match as a primary factor in the decision to arrest Williams. Because of the arrest, Williams was forced to miss work, had to pay a bond to be released, and faced potential shame and embarrassment in his social circle. In response, the American Civil Liberties Union (ACLU) has filed suit against the Detroit Police Department on Williams' behalf. While a reactive lawsuit may help Williams, it cannot reverse the harm that was done to him. If police continue their use of surveillance technologies, then cases like Williams' will only happen more often. How can advocates proactively prevent future cases of police abuse of facial recognition?
Activists and advocacy groups have used strategic litigation and legislation to ban government use of facial recognition and force corporations to stop building and selling facial recognition systems [64]. In 2019, San Francisco became the first U.S. city to pass a bill preventing local government agencies from procuring or using facial recognition, or using information from external facial recognition systems [21]. Since then, a handful of other U.S. cities have passed similar legislation, and some state and federal regulations are under consideration [17]. By reducing police use of facial recognition, activists are reducing the capacity of the state to make future misidentifications that lead to arrests.
Proactive refusal involves concerted efforts to prevent irreversible harms by addressing the conditions that bring them about. In Joy Buolamwini's testimony at a congressional hearing on facial recognition technology, she notes that even when the police correct mistakes resulting from facial recognition misidentifications, the "damage [is] done" irreversibly [16]. In 2020, IBM announced that it would stop developing or researching facial recognition technology [85]. In the future, IBM and other tech companies could be deterred from collecting facial biometric data, and repeating the same data scandal, by regulation and ongoing public scrutiny.

Power: Foreclosing Possibility or Creating Possibility
People engage in refusal because there is something they want to prevent or change. That is why any act of refusal must grapple with questions of power, which feminist scholars have defined as "the capacity to produce a change" [63]. It is "the power of ability, of choice and engagement. It is creative; and hence it is an affecting and transforming power but not a controlling power" [93]. In our work, we focus on power from the standpoint of refusers rather than data collectors. For corporations and governments who seek data to increase their control over the public, power is a form of domination. But for those engaged in refusal, power is expressed as an ability to resist and reconfigure their relationships with data collectors. Refusal is powerful in a given situation when it attempts to change the behavior of data collectors, causing them to engage with refusers in a new way or preventing them from engaging at all.

Some Offers to Refuse Are Designed to Placate Instead of Create Change.
Because refusal involves potentially coming into conflict with powerful institutions, it can appear to be a good thing when data collectors offer well-defined processes for people to opt out or voice dissent. However, data collectors often use these pathways of dissent to limit refusers' agency by directing them away from other unspoken possibilities. Ruha Benjamin conceives of refusal as a privileged form of political action, because "the capacity to refuse rests upon a prior condition of possibility—that one has been offered something in the first place" [14]. When data collectors give people limited power to choose, it is arguably better than no choice at all. But if data collectors frame choices such that all possible options benefit them at the chooser's expense, then people must find ways to refuse on different terms.
The choice of whether to opt out of facial recognition training data is an example of a choice that ultimately limits agency. As facial recognition becomes increasingly integrated into policing practices that disproportionately target communities of color, people of color might reasonably choose to opt out of training the systems that will be used to surveil them. Under the CCPA, companies like IBM, Amazon, and Clearview AI that produce facial recognition systems are required to offer individual consumers the ability to remove their own data from these training sets. Hypothetically, if enough people do this, then training sets could become systematically biased against those who are more likely to opt out. The option to opt out sets up a lose-lose situation. Opt out, and you will potentially exacerbate already-pervasive racial bias in facial recognition. Opt in, and your image will be used to fine-tune a facial recognition system that a company is likely trying to sell to immigration enforcement agencies. Engaging with the choice in either direction might distract refusers from seeking other options. Rejecting the broader injustice of surveillance and overpolicing is not within the terms of refusal offered by these companies. The question of opting out treats as given the idea that companies will continue to build these systems. Even if any particular individual opts out, companies and governments will simply acquire more vulnerable people's faces to build their systems, as Google did by targeting homeless people for facial recognition data [56] or as NIST did by using images of immigrants, children, and dead people [65]. Fundamentally, these opt-out systems do not provide the option of challenging or constraining companies' power to pursue data collection for facial recognition.

Some Kinds of Refusal Create Pathways to Systemic Change.
Some acts of refusal reconfigure systems of power entirely, beyond individual relationships or circumstances. For example, a long history of unethical research conducted on indigenous communities in Canada has included researchers who performed non-consented nutrition research on indigenous children that led to deaths [67], used blood samples for secondary research without consent, and supplied information on dissident indigenous groups to repressive regimes [33]. Research and data collection on indigenous communities has happened through "imperial eyes" [99], excluding these communities from participation in data collection processes, governance, and ethics oversight. As a result, data collection on indigenous people often fails to serve their interests or help communities answer questions about their own well-being. Scholars and practitioners of indigenous data sovereignty have explored ways for indigenous communities to retain access and control over their own data as a way to reduce colonial dependency on settler states by collecting data that better reflect the values of indigenous groups, including by gathering information about disparities that settler states refuse to provide [99]. In this context, asserting sovereignty refuses extractive forms of data collection in favor of alternative methods that prioritize indigenous peoples' right to self-determination.
To provide guidance on how First Nations data should be used, the First Nations Information Governance Centre (FNIGC) developed a set of principles in 1998 called ownership, control, access, and possession (OCAP) [32]. The OCAP principles state that communities collectively own information and cultural knowledge pertaining to them; have the right to control the collection, use, and disclosure of that information; must maintain access to that information no matter where it is circulated; and must be able to assert legal jurisdiction over their data [33]. To demonstrate the use of OCAP in practice, the FNIGC operationalized the principles to guide the design of the First Nations Regional Health Survey, "the only First Nations-governed, national health survey in Canada that collects information about First Nation on-reserve and northern communities" [32].
As the first national survey to be fully designed by First Nations representatives, it now serves as a primary information source informing health policy decisions relevant to First Nations communities. The development of OCAP represents one of many parallel efforts led by indigenous communities to assert self-determination in community-driven data practices [49].
Without action from groups involved with the FNIGC, the Canadian government would not have considered First Nations collective data autonomy to be possible. In fact, some national legislation in Canada continues to conflict with OCAP principles [32]. The FNIGC has changed how research on and with First Nations is conducted in Canada by pragmatically creating opportunities to exercise data sovereignty through partnerships and policy. In this context, exercising power is not merely engaging with what has been offered, but "attempting to negotiate the terms of one's engagement" [14]. Exercising refusal involves more than saying no to existing options on the table. Refusal also involves creating new options and changing the landscape of engagement going forward.

Cost: Accepting Cost or Reducing/Redistributing Cost
Acts of refusal can involve significant costs to the people and groups who engage in them. We use the term cost to refer to things people must sacrifice in the course of refusal, such as effort, time, money, safety, or social capital. Researchers have found that introducing even small costs can make people less likely to opt out of data collection, even if they previously expressed a desire for privacy [5]. We also consider more abstract costs that create barriers to refusal, such as when hidden information or specialized knowledge is required to make an informed refusal.
Scholars of social movements have examined cost in efforts to explain why one person might participate in a movement while another does not. Sociologist Doug McAdam draws a distinction between "low- and high-risk/cost activism," arguing that engagement in high-cost activism is affected by both individual and structural factors [73]. For instance, some people may be individually more risk-tolerant, or have more resources that enable them to take risks. Costs are also structural, since not everyone has the same support from peers and institutions to take risks. People also experience costs differently depending on their personal situation and social position. Due to these intersecting considerations, higher-cost forms of refusal are less available in ways that reproduce individual and societal inequality.
While cost can deter people from refusal, refusal also has the potential to change how cost is distributed amongst groups of people. For example, some forms of collective refusal exhibit network effects, becoming less individually costly as more people join in. Refusal that relies on the development of infrastructure can have a high one-time cost that is amortized over long-term use by many people. Refusal can even uncover hidden information that reduces barriers to future refusal by others. When addressed collectively, the costs of refusal can be altered alongside the relationship between data collectors and refusers.

Refusal Has Costs for Individuals and Communities.
The costs of refusal are also borne by the people connected to those who refuse. In 2014, sociologist Janet Vertesi decided to prevent companies from learning about her pregnancy and her children after birth [109]. Vertesi's family collectively decided that the only way to hide this knowledge from companies was to prevent data about her pregnancy from being collected at all. Since many parts of everyday life rely on commercial data collection, Vertesi and her family decided to build and maintain alternative infrastructures in a practice she calls "digital homesteading." Because companies use purchase data for targeted advertising, Vertesi started using cash for all transactions. To stay connected with others, she built a phone using open source hardware to avoid data collection from commercial apps and operating systems. Her family shares data on their own servers over a home network. Vertesi also had to convince friends and family not to post or mention her children online. Her family chose to be isolated from community networks in some cases and decided to "unfriend" a family member who did not realize that so-called private messages are visible to corporate platforms [109].
Vertesi's attempt to restructure digital power in her household came with individual and social costs. Vertesi needed the technical expertise, time, and money to set up her own devices and networks. Refusal disconnected her from important friendships and led her to lose contact with some family members. Though she decided to bear the costs of refusal, Vertesi's case illustrates Benjamin's observation that "refusing the terms set by those who exercise authority in a given context is only the first (and at times privileged) gesture in a longer chain of agency that not everyone can access" [14].

Costly Refusal Can Entrench Inequality.
The authors of the Feminist Data Manifest-No write that "not everyone can safely refuse or opt out without consequence or further harm" [19]. When refusal is costly, cost is experienced unequally in ways that reflect structural inequality in society. For example, the U.S. Customs and Border Protection (CBP) agency has deployed biometric face scanning technology at airports across the country. The agency scans international travelers' faces on entry and exit and compares the scans to images stored in visa, passport, and immigration databases [37]. This program poses risks to privacy and civil liberties, and CBP has already experienced a data breach that leaked the information of thousands of people [28]. But despite the risks that CBP's face scanning program poses to travelers, refusing the program could be even more individually costly. U.S. citizens can opt out of face scanning, though airlines typically fail to disclose that this option exists [37]. The choice is unavailable to non-citizens. When citizens do opt out, they are rerouted into private search and screening procedures that involve additional time and scrutiny. As Ruha Benjamin points out, "it is coercive to say one has a choice, when one of those choices is automatically penalized" [14]. Yet these are the terms of refusal offered to international travelers. These procedures are known for discriminatory application, particularly against people who appear to be Muslim or from the Middle East. Benjamin further notes that "rebuffing the authority of the state as exercised through technoscience causes individuals to experience the underside (or outside) of biological citizenship ... in which refusal is always, already guilty" [14]. When the choice to refuse means opposing the power of the state, whether or not a refusal is successful is often a secondary consideration to the high cost of refusing at all.

Some Refusal Strategies Reduce or Redistribute the Costs of Future Refusal.
Some refusal strategies allow people to help others refuse more easily in the future. This can occur when collective action benefits from network effects. Consider Tor, a volunteer-run system for using the internet anonymously. A person accessing an online service through Tor will have their internet traffic encrypted, interleaved with other people's traffic, and routed through a series of distributed relays. Each relay is only able to decrypt enough information to know where to pass the data next, making it difficult for web services to trace the origin of a request. People who volunteer to run relays help forward each other's requests to conceal the origins of their data. Tor is an example of refusal by obfuscation [15]. Obfuscators accept that data collection will occur given the difficulty of opting out, and instead refuse to allow data to be identifiable to data collectors. Tor only works when many people run relays, and its effectiveness at obfuscation improves as more people use it. Therefore, it distributes risk across many people and reduces the individual risk of deanonymization for each additional participant. Further, its benefits can be shared by people who do not need to know who each other are or imagine themselves as a collective.
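The "peel one layer per relay" idea can be illustrated with a toy model; this sketch is purely illustrative and is not Tor's actual implementation (real Tor circuits use per-hop symmetric keys negotiated with each relay rather than simple encoding), but it shows how each hop can learn only its immediate successor:

```python
import base64
import json

def wrap(message, route):
    """Build a toy 'onion': working from the last hop inward, each layer
    records only the next hop and an opaque payload. (Illustrative only --
    real onion routing encrypts each layer with a key shared with that relay.)"""
    onion = message
    for hop in reversed(route):
        layer = json.dumps({"next": hop, "payload": onion})
        onion = base64.b64encode(layer.encode()).decode()
    return onion

def peel(onion):
    """A relay decodes exactly one layer: it learns the next hop, but the
    inner payload remains opaque, so no single relay sees the full route."""
    layer = json.loads(base64.b64decode(onion))
    return layer["next"], layer["payload"]

# A request routed through two relays and an exit node.
route = ["relay-A", "relay-B", "exit"]
onion = wrap("hello", route)

hop, onion = peel(onion)      # the first layer reveals only relay-A
assert hop == "relay-A"
hop, onion = peel(onion)      # relay-A's peel reveals only relay-B
assert hop == "relay-B"
hop, payload = peel(onion)    # the final layer reveals the exit and message
assert hop == "exit" and payload == "hello"
```

Because each layer exposes only one hop, a web service observing the exit node learns nothing about the originator, and the property strengthens as more people's traffic is interleaved through the same relays.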
Another example of refusal creating new opportunities for others is the Stop LAPD Spying Coalition's public records request and subsequent lawsuit against the Los Angeles Police Department [29]. In response to the LAPD's expanding use of surveillance technology, the Coalition conducted surveys and focus groups with community members, finding that clear majorities of participants felt that the police should not be using predictive policing [101]. The Coalition requested information on data policing programs that involved creating lists of neighborhoods and individuals that would be the focus of increased policing. The programs had been canceled after an internal audit concluded that the lists were racially biased and lacked oversight. Yet the details of the lists had not been released to the public. By using legal and public pressure to acquire records about data policing, the Stop LAPD Spying Coalition was able to bear the costs of refusal on behalf of the individuals targeted on the lists. With records of how the lists were used, the Coalition can help individuals navigate the negative effects of their inclusion, which could involve further acts of refusal.

Uses of the Framework of Data Refusal from Below
The framework of data refusal from below can be used for descriptive, evaluative, and generative purposes.
Descriptive power. The framework can be used to describe existing forms of refusal with shared language, no matter how different they may appear. For example, in the case of the IBM facial recognition dataset described in Section 5, some users may have opted out by asking IBM to remove their images from the dataset. This refusal is individual, because it relies on people to make decisions about their own autonomy. It is reactive, because it responds to IBM's acquisition and use of data after the fact. The act of opting out alone has little power to change IBM's business model. And finally, people experience few costs by opting out. Going to a website and filling out a form to opt out at first glance has little in common with the actions of digital homesteaders or the movement around Indigenous Data Sovereignty. However, as we have shown, these cases can all be described along the same dimensions. Having better descriptions of refusal cases is important, because the language we use to describe cases focuses our attention on the mechanisms that cause refusals to succeed or fail.
Evaluative power. The framework can also facilitate comparison between different cases of refusal along the four dimensions. Consider the movement to ban facial recognition in the U.S., which can be described as a collective, proactive, high-power, high-cost form of refusal. When compared to the approach of individual opt-out in the IBM case, facial recognition bans are advantageous along the facets of autonomy, time, and power. However, pursuing bans has higher costs in terms of time and collective effort from activists, advocacy groups, and lawmakers. Because the framework enables these comparisons along four non-overlapping dimensions, we are able to have clearer conversations about values: what kinds of refusals are more or less normatively desirable.
Generative power. Feminist scholars have long noted the importance of refusal as a generative practice [8, 40, 75, 97, 107]. In design frameworks, a highly generative framework is also operationalizable, which is to say that it speaks both to the what (the desirable qualities of something that does not yet exist) and to the how (the steps a designer could take to create that thing). An example of something with low generative power is a typology based on observed cases, because it is hard to infer things yet to be imagined from a list of existing types. An example of something with high generative power is a formal model of a system that can describe a combinatorial space of possibilities in terms of discretely adjustable dimensions. Our framework certainly has generative capabilities, but falls short of this ideal: because we are studying a highly situated social phenomenon, it is unlikely that we could produce a simple formal model of refusal the way we would for a programming language.
For design researchers, the main generative use of our framework is to identify gaps in the refusal design space. The cases we examined all involved tradeoffs among the four facets of autonomy, time, power, and cost. Thinking systematically with these facets, we can ask whether there exist refusal strategies that approach the ideal: collective, proactive, powerful, and low-cost. Even if the framework's generative power is limited in the sense that it does not tell us how to design such a refusal strategy, the facets help us imagine what could be possible. If such ideal strategies do not exist, then our facets may inform conversations about the practical factors that place these considerations in tension.
More broadly, the framework can help refusers think about what kinds of approaches are appropriate for their goals given their pragmatic constraints. People may often feel trapped at a fixed position along each of the refusal facets. For example, people included in facial recognition datasets often do not know each other and lack existing infrastructure for collective organizing, constraining them to rely primarily on individual autonomy. In other cases, people are not in financial or legal positions to bear high costs of refusal, constraining them to consider what can be accomplished at low cost. By staking out fixed positions along some facets while considering open-ended possibilities along others, refusers can narrow the search space as they consider the wider range of actions available to them.

Lessons for Designers: Refusal as a Prompt for Design
What can designers (broadly construed) learn from the framework of data refusal from below? In addition to inspiring new social relations, refusal can also motivate creative new software designs. Sasha Costanza-Chock notes that in many endeavors, the question "'what's wrong?' drives our pursuit of 'what if?'" [23]. In addition to driving social change, attempts to refuse corporate information systems have also generated fundamental advances in various computing fields. Scholars and activists have created grassroots computer networking infrastructure [52], new theoretical approaches and software plugins for information security [15], and novel research software for community-led experiments [72], to name a few.
These examples of refusal-inspired design also challenge conventional wisdom in computer science about the relationship between design and technology use. Computer scientists and designers who study technology non-use often assume that non-users will become future users with improvements to technology design [89,95]. That is, they assume that if they implement certain features or streamline certain experiences, then the number of users will grow. This standpoint views refusers as either passive resources to be mined for design ideas or problems to be solved through design. In reality, refusal from below requires an active exertion of agency that exceeds what software is designed to offer. Refusal expresses the dissonance between the limited forms of agency that software imposes on users and the fuller extent of their desires and intentions. In some cases, this dissonance can be resolved with improvements to software design that more faithfully model a user's intention. But in others, it points to differences in substantive values [115] that directly challenge the power relations a system enables or the politics it expresses, where improving the software means acting against the refusers' interests. This is clearest in the facial recognition double bind discussed in Section 5.3.1. A view that exclusion from technology "always and necessarily involves inequality and deprivation" [89] implies that communities of color will be better served by facial recognition systems with improved subgroup accuracy. But as Wyatt reminds us, it is important to "distinguish between 'have nots' (the excluded and the expelled) and 'want nots' (the resisters and the rejecters)" [89]. The anti-facial recognition movement, built on many refusals, has demonstrated the desirability of exclusion compared to forcible inclusion. In this light, designers should see refusal as a prompt for reflecting on the broader socio-technical context of a system and how software acts upon people with different subject positions.

Working Toward Institutionalized Refusal
While many cases of refusal are developed ad hoc in response to specific harms, the long-term sustainability of refusal requires contending with institutions. Given the scale and reach of governments and corporations developing data systems, what might equally powerful institutional forms oriented around refusal look like? Huybrechts et al. use the term institutioning to highlight the idea that institutions are "highly dynamic and contested spaces where change is not only imposed from the outside but also generated from within," and to describe how movements of participatory design and co-design alter institutional frames [58]. In her writing on refusal, Ruha Benjamin argues that there is a need to institutionalize refusal [14] to support people's capacity to collectively organize, consider long-term solutions, and challenge power.
The need to contend with institutions is clear in Ruha Benjamin's case study of a UK Border Agency project using genetic ancestry tests to screen asylum applicants [14]. Even though the Border Agency's ancestry tests were flawed, any individual who refused to comply risked increased scrutiny and potential deportation by the U.K. government. Indeed, caseworkers were "encouraged to regard refusal to submit samples with suspicion" [14]. At the border, Benjamin observed the need for "'second-hand refusals' of those speaking on behalf of asylum seekers" [14]. These refusals came from refugee advocates and academic scientists who publicly criticized the project. For Benjamin, the case at the border "underscores the need to institutionalize informed refusal rather than leaving it to already vulnerable individuals to question scientific and state authority" [14].
Cases of refusers working toward institutionalized refusal can be studied in terms of autonomy, time, power, and cost.
Autonomy. As the work of the Stop LAPD Spying Coalition shows, advocacy groups embedded in local communities can establish structures that help them act as an extension of refusers' autonomy. Benjamin cautions, however, that institutionalized refusal should not redirect agency away from refusers toward small groups of institutional elites: "rather than rely[ing] on 'secondhand refusals' by public advocates, watch-dogs, and whistle-blowers, it is vital to cultivate norms and develop mechanisms that allow those who are targeted by a particular initiative to voice dissent on their own terms" [14]. For example, in the Coalition's campaign against predictive policing, their advocacy was informed by focus groups with community members, and their materials included direct quotes from participants voicing their concerns in their own words.
Time. Advocates from the ACLU, Algorithmic Justice League, and other organizations working to ban facial recognition are pushing government institutions to change the temporality of refusal. Benjamin writes in the context of genetic testing that institutionalized refusal might involve, at minimum, "greater onus [...] on institutions to incorporate the concerns and insights of prospective research subjects and tissue donors upstream, far in advance of recruitment" [14]. In the context of facial recognition, this means enacting policy that limits harmful government use of facial recognition so that the onus is not on individuals to refuse after the fact.
Power. The FNIGC's OCAP principles, and their use in interactions with the Canadian government, are an example of building power to create institutional change. As Benjamin notes, "[institutions'] norms and practices, as well as existing social hierarchies, place pressure on people to defer to authority" [14]. Rather than defer to the authority of Canadian data collectors, the FNIGC's principles outline desired terms of engagement that assert First Nations' sovereignty and establish new practices for data collection, use, and circulation.
Cost. Data cooperatives and other worker-led organizations show how the costs of refusal can be redistributed through collective organizing. In 2018, after prolonged worker organizing, Google canceled Project Maven, a contract to produce surveillance software for the U.S. military. Since then, in many cases of worker organizing at Google, activists have experienced retaliation from the company [43]. Although the costs of refusal in this context are still apparent, many workers who participated in these movements were better able to do so because they were not the only ones speaking out. Collective organizing with an eye toward institutional change can push institutions to "actively and genuinely support the choice to refuse participation," which protects individuals who otherwise "are required to risk the fallout from acting autonomously" [14].
Our framework helps illuminate the challenges people face today while suggesting aspirational goals for the future.Ultimately, the call to institutionalize refusal is a call to imagine and sustain new forms of collectivity and solidarity over the socio-technical systems that shape our lives.
Audra Simpson, in her capacity as an academic, chose to develop her own practice of ethnographic refusal by articulating limits to what she would report. By carefully considering "what you need to know and what I refuse to write in" [97], Simpson's ethnographic refusal has upheld the sovereignty of the Kahnawà:ke by respecting the limits the community places on what can be known by outsiders.