Algorithmic Reason
The New Government of Self and Other
CLAUDIA ARADAU
TOBIAS BLANKE
Great Clarendon Street, Oxford, OX2 6DP,
United Kingdom
Oxford University Press is a department of the University of Oxford.
It furthers the University’s objective of excellence in research, scholarship,
and education by publishing worldwide. Oxford is a registered trade mark of
Oxford University Press in the UK and in certain other countries
© Claudia Aradau and Tobias Blanke 2022
The moral rights of the authors have been asserted
Impression: 1
Some rights reserved. No part of this publication may be reproduced, stored in
a retrieval system, or transmitted, in any form or by any means, for commercial purposes,
without the prior permission in writing of Oxford University Press, or as expressly
permitted by law, by licence or under terms agreed with the appropriate
reprographics rights organization.
This is an open access publication, available online and distributed under the terms of a
Creative Commons Attribution – Non Commercial – No Derivatives 4.0
International licence (CC BY-NC-ND 4.0), a copy of which is available at
http://creativecommons.org/licenses/by-nc-nd/4.0/.
Enquiries concerning reproduction outside the scope of this licence
should be sent to the Rights Department, Oxford University Press, at the address above
Published in the United States of America by Oxford University Press
198 Madison Avenue, New York, NY 10016, United States of America
British Library Cataloguing in Publication Data
Data available
Library of Congress Control Number: 2021949806
ISBN 978–0–19–285962–4
DOI: 10.1093/oso/9780192859624.001.0001
Printed and bound by
CPI Group (UK) Ltd, Croydon, CR0 4YY
Links to third party websites are provided by Oxford in good faith and
for information only. Oxford disclaims any responsibility for the materials
contained in any third party website referenced in this work.
Contents
Acknowledgements vi
List of Figures x
Introduction 1
PART I: RATIONALITIES
1. Knowledge 21
2. Decision 42
References 219
Index of names 251
Index of subjects 254
Acknowledgements
This book has been a journey of several years, which has spanned multiple
disciplines in the social sciences, humanities, and computing. It took us sev-
eral years to make sense of how disciplines and approaches diverge in their
diagnoses of what is at stake with big data, algorithms, machine learning, or
artificial intelligence. Tracing the sinuous contours of different debates across
disciplines has been an arduous, at times disorienting, but also rewarding task.
It has required getting to grips with varied concepts and methods and attending
to how words not just carry different meanings but also work differently across
disciplinary and intradisciplinary practices. This journey was partly made pos-
sible by the fact that both of us had previously traversed disciplines and worked
with the ambiguities and tensions between these: Claudia from English and
French to political science and then international relations; Tobias from polit-
ical philosophy to computer science and then digital humanities. Our rather
eclectic trajectories can perhaps explain the theoretical and methodological
eclecticism of the book. Yet, this is not an eclecticism of ‘anything goes’, but one
that has been fostered by controversies and contestations we have followed and
by the commitment to take seriously actors who enter these dissensual scenes,
whether engineers or activists, scientists or workers.
This journey among and between disciplines would not have been possible with-
out the generosity and critical engagement of many friends and colleagues
across different fields and institutions. Several friends have read and com-
mented on different aspects of the book. Special thanks to Martina Tazzioli for
discussions on knowledge and Michel Foucault, Elisa Oreglia on accountabil-
ity and China, Jef Huysmans on enemies and anomalies, and Anna Leander
on ethics and politics. Several chapters of the book have benefited from be-
ing aired at conferences, workshops, and invited lectures at universities in the
US, UK, Brazil, France, Germany, Nepal, Switzerland, and South Korea. We
would like to thank Jonathan Austin, Didier Bigo, Mercedes Bunz, Werner
Distler, Jonathan Gray, Mireille Hildebrandt, Andy Hom, Alexandra Homolar,
Anna Leander, João P. Nogueira, Sven Opitz, Martina Tazzioli, and Tommaso
Venturini for invitations to present various aspects of this work. The pan-
els on ‘Data Worlds? Public Imagination and Public Experimentation with
algorithm could not tackle abstract concepts well. The final image called for
human estimation, approximation, and simple guesswork. It was more work
than creativity. Producing the image additionally required infrastructure cred-
its from Google. The book cover therefore is an expression of one of the key
ideas in this book: that we need to understand algorithmic operations and AI
from the perspective of work.
List of Figures

Introduction
From the hidden entrails of the National Security Agency to Silicon Valley,
algorithms appear to hold the key to insidious transformations of social, po-
litical, and economic relations. ‘“Ad-tech” has become “Natsec-tech.” Potential
adversaries will recognize what every advertiser and social media company
knows: AI is a powerful targeting tool’, announced the Final Report by the
United States (US) National Security Commission on Artificial Intelligence
(AI).1 Chaired by Eric Schmidt, former CEO of Google, and published at the
end of the Trump administration in the US, the report captures a feeling of in-
evitability of AI for national security. National security will be defined not only
by AI—understood as a constellation of digital technologies—but by a par-
ticular use of these technologies for marketing and targeted advertising. The
comparison with advertising technology is not new for national security appli-
cations. It has become a staple of public understandings of digital technologies
in an age where we are exposed to AI through our everyday online and social
media experiences. We have become used to being targeted as part of our digi-
tal lives, while data insidiously travels between security and advertising, public
and commercial actors.
Security agencies like GCHQ, the UK’s signals intelligence agency, and big
tech companies such as Facebook appear connected through the transfor-
mation of ourselves into data. Yet, these connections are less than seamless,
as companies claim to protect privacy against mass surveillance and intru-
sion by security agencies, while the agencies in turn assert that they are
the only ones to conduct legitimate surveillance. An exhibition at the Sci-
ence Museum in London, which was dedicated to the centenary of GCHQ,
prominently displayed a photo from an anti-Facebook demonstration.2 Mass
surveillance, the image seemed to suggest, is what companies like Facebook
do, not GCHQ. This apparent confrontation between GCHQ and Facebook
obscures the long-standing entwinement of state and commercial surveillance.
3 Big Brother Watch, 10 Human Rights Organisations, and Bureau of Investigative Journalism and
Others, ‘Applicants’ Written Observations’, §19.
⁴ NSCAI, ‘Final Report’, 63.
⁵ Big Data & AI World London, 2019.
⁶ Kaplan and Morgan, ‘Predicting Displacement’, 9.
social and political practices? How did they become an inevitable answer to
problems of governing globally? The promise of algorithms traverses social
and political fields globally, ranging from the politics of security to that of
humanitarian action. This book proposes to understand the conditions of
possibility of their production and circulation, which we call ‘algorithmic
reason’. Crime, displacement, terrorism, border control, democratic gover-
nance, security, and humanitarianism are increasingly reconfigured through
new algorithms borrowed from other fields and massive amounts of data.
While there has been a lot of attention to differences in how digital tech-
nologies and algorithmic governance materialize across disparate sites, there
is still a question about how algorithms and associated digital technologies
have become the common answer to such heterogeneous and globally dis-
persed problems. We use algorithmic reason to render the rationalities that
hold together proliferating and dispersing practices. The concept goes beyond
algorithms as mere instruments for governing and emphasizes how a relatively
new political rationality is ascendant. Through algorithmic reason, we can
understand how national security questions find a link to Facebook’s meth-
ods and how humanitarian action to govern precarious lives is entangled with
big tech companies and start-ups. These are only two of the transformations
that this book investigates. Across different social, political, and economic
transformations, we show how algorithmic reason ‘holds together’ a new gov-
ernment of self and other, reshapes power relations between the governing and
the governed, and unblocks the impasses of knowledge about individuals and
populations.
Algorithmic reason
1⁵ Ibid., 32.
1⁶ The Precarity Lab, Technoprecarious, 2.
1⁷ Stoler, Duress, 6.
1⁸ Erickson et al., How Reason Almost Lost Its Mind, 30.
22 Bucher, If … Then; Seaver, ‘What Should an Anthropology of Algorithms Do?’; Ziewitz, ‘Govern-
ing Algorithms’.
23 Brown, In the Ruins of Neoliberalism, 11.
2⁴ Peck, Brenner, and Theodore, ‘Actually Existing Neoliberalism’, 3.
2⁵ Mezzadra and Neilson, The Politics of Operations, 19.
does not work. Critical data scientist Cathy O’Neil also asks about the impli-
cations of how we define ‘working’ algorithms and unpacks it into three related
questions: ‘[A]re the algorithms that we deploy going to improve the human
processes that they are replacing?’, ‘for whom is the algorithm failing?’, and ‘is
this working for society?’3⁵ However, O’Neil’s analysis ultimately relies on the
distinction between working and failing and thus leaves little room for contin-
gent or emergent effects. We propose to attend to the operations of algorithms
even when they appear not to work and when their promises do not seem to
go together with their performative effects. We use ‘operations’ here in the
etymological sense of workings, activities that are productive rather than ac-
tivity in a general sense.3⁶ We are inspired by Mezzadra and Neilson’s use of
operations to render the interval that separates input from outcome.3⁷ Under-
standing how algorithms operate entails attention to the work that takes place
in the interval or the production details and workflows to move from an input
to an output.
Algorithmic operations cannot be separated from the data work that hap-
pens in-between in terms of big data processing, datafication, metadata work,
machine learning, deep learning, and AI, which have come to infuse public
and governmental vocabularies. Each of these terms is underpinned by spe-
cific modes of knowledge, practice, and politics. For instance, analyses of (big)
data have tended to focus on activities of data collection. They have also at-
tended to relations between citizens and state, as citizens have been made
processable through the practices of data collection and processing. According
to Didier Bigo, Engin Isin, and Evelyn Ruppert, data has become ‘generative
of new forms of power relations and politics at different and interconnected
scales’.3⁸ Media theorist José van Dijck has emphasized the imagined objec-
tivity of data that produces an ideology of ‘dataism’, where computational
expressions of cultural and social relations are taken as the truth of these re-
lationships.3⁹ ‘We are data’, cautioned cultural theorist John Cheney-Lippold
in his analysis of how individual and collective subjectivities are transformed
and unformed.⁴⁰ As data orients attention to the relations between state and
citizens, it can also become an engine of activist politics. Scholars have pro-
posed agendas around ‘data activism’ and ‘data justice’.⁴1 Sociologists Davide
Beraldo and Stefania Milan have argued that we need to supplement ‘data pol-
itics’ with a ‘contentious politics of data’ to understand how data is effective at
every political level and ‘re-mediates activism’.⁴2 According to the critical data
scholar Jonathan Gray, ‘data witnessing’ renders another mode of attending
to ‘the systemic character of injustices across space and time, beyond isolated
incidents’.⁴3
Unlike data, the language of machine learning and AI as another in-between
of algorithmic operations has directed political attention towards the trans-
formations of what legal scholar Frank Pasquale has called the ‘black box so-
ciety’.⁴⁴ Machine-learning algorithms and related AI technologies can quickly
appear as both secret and opaque, even to their designers. As such they can
intensify questions of discrimination, accountability, and control and reacti-
vate anxieties about human–machine relations as ‘an insensate and affectless
system [that] seems to violate some fundamental notion of human dignity and
autonomy’.⁴⁵ As critical AI researcher Kate Crawford has pithily put it, AI is a
‘registry of power’, because ‘AI systems are ultimately designed to serve exist-
ing dominant interests.’⁴⁶ Unlike the contentious politics of data, algorithms
and machine learning seem to more drastically restrict the space of political
contestation.
Therefore, critical analyses of algorithms and AI have been largely oriented
towards questions of power as domination. For Rouvroy and Berns, algorith-
mic governmentality is highly depoliticizing. It eschews reflexive human
subjects by producing modes of supra-individual behaviour, which do not re-
quire subjects to give an account of themselves.⁴⁷ If the statistical government
of populations focused on producing aggregates, categorizing risk groups, and
assessing abnormalities, algorithmic governmentality is no longer concen-
trated on either individuals or populations, but on their relations. Beyond
shared norms and normativities, algorithms challenge political projects of the
common and emancipatory possibilities of action. Even when algorithms are
thought to produce publics, these are often seen as de-democratizing subjects,
a ‘calculated public’ as the network of subjects and objects linked together
through the digital.⁴⁸ Becoming ‘algorithmically recognizable’ creates the
⁴2 Beraldo and Milan, ‘From Data Politics to the Contentious Politics of Data’, 3.
⁴3 Gray, ‘Data Witnessing: Attending to Injustice with Data in Amnesty International’s Decoders
Project’, 985.
⁴⁴ Pasquale, The Black Box Society.
⁴⁵ Burrell and Fourcade, ‘The Society of Algorithms’, 14.
⁴⁶ Crawford, The Atlas of AI, 8.
⁴⁷ Rouvroy and Berns, ‘Gouvernementalité algorithmique’, 8.
⁴⁸ Gillespie, ‘The Relevance of Algorithms’, 168.
⁴⁹ Ibid., 184.
⁵⁰ Rieder, Engines of Order, 19.
⁵1 Rancière and Jdey, La méthode de la scène, 30-1.
⁵2 Ibid., 29-31.
The methodology of the scene means that we do not start with presuppo-
sitions about which subjects and objects count, or which actors with which
equipment should be considered important. People, technologies, devices,
knowledge, and actions are drawn together in a scene. They appear, fade, or
disappear as the scene unfolds. A scene can be a technology expo, the Snowden
leaks, a parliamentary inquiry into the role of Cambridge Analytica, or an edu-
cational scene of ‘hacking’ algorithms. Scenes unfold in different directions, as
they draw in a multitude of people, discourses, and things. For instance, Face-
book became the object of public attention and controversy after it emerged
that a lot of false information promoted by alt-right groups had circulated via
the social network at the time of the US 2016 presidential election and the
Brexit referendum in the UK.⁵3 This scene of controversy over machine learn-
ing and disinformation might have started in the media, but it has unfolded in
a multitude of directions. It developed from the US Congress inquiry, an in-
vestigation led by journalists and the whistle-blower Chris Wylie, to Facebook
acquiring AI start-ups in order to step up its fight against ‘fake news’.
Approaching algorithmic reason through scenes allows us to attend to
both their dispersed and distributed operations and the regime of rational-
ity that holds these operations together. Feminist and information studies
scholar Leopoldina Fortunati invites us to analyse the Internet as ‘a terrain
of confrontation, struggle, negotiation and mediation between social groups
or political movements and even individuals with different interests’.⁵⁴ As a
methodological device, the scene alerts us to internal differences and contes-
tations over how algorithmic reason unfolds. A scene entails a hierarchy of
spaces and a temporality of action. This asymmetry is central to a scene. All
that is required is a movement of elevation: a step, a podium, or a threshold
is sufficient.⁵⁵ These asymmetries also mean that a scene orients analysis to
power asymmetries rather than assumptions of flatness and symmetry that
have been often associated with related ideas such as assemblages.⁵⁶
Our empirical analyses combine a wide range of materials, from analysing
online and offline documents across fields of expertise to observing the justi-
fications that actors offer of their practices at professional exhibitions, talks,
and industry conferences, as well as in patents and online media. We have
also used digital methods to follow algorithmic operations or to ‘hack’ apps
developed by humanitarian actors. Developing the methodology of the scene
required eclectic analytical vocabularies and methods that cut across the qual-
itative/quantitative binaries. This would not have been possible without our
backgrounds in different disciplines, and the aim to speak across the compu-
tational and social sciences, and the humanities. For us, eclecticism has been an
epistemic and political commitment to work beyond and against disciplinary
boundaries.
Our different disciplinary backgrounds meant that we could not take analyt-
ical vocabularies for granted or make assumptions about what algorithms do.
Much of the literature on algorithms, digital technologies, and AI has focused
on processes of de-democratization and depoliticization, as these technologies
are entwined with practices of domination, oppression, colonialism, depriva-
tion of freedom, and debilitation of political agency. In this book, through the
methodology of the scene, we offer a different political diagnosis of algorith-
mic reason. The scenes we attend to are all scenes of dissensus and controversy.
Therefore, they allow us to trace how algorithmic variations inflect and hold to-
gether heterogeneous practices of governing across time and space. Moreover,
in Part III of the book we argue that scenes of dissensus and controversy can
become democratic scenes, in the sense of the opposition between processes
of ‘de-democratization’ and ‘democratization of democracy’.⁵⁷
⁵⁷ Balibar, Citizenship, 6.
anxiety about the ways in which traditional modes of knowledge have been
unsettled by its ability to expand, given the increase in storage capacities and
cloud technologies. We start from the scene of the Cambridge Analytica scan-
dal, as the main actors claimed to have been effective in using large amounts
of data to achieve substantial changes in the political behaviour of individuals
and groups. While much of the controversy concerned the possibility of ma-
nipulating elections, we show that a different political rationality of governing
individuals and their actions is at stake here. We argue that it is the decomposi-
tion and recomposition of the small and the large that constitutes the political
rationality of governing individuals and populations. This logic of recomposi-
tion also recasts the distinction between speech and action so that a new mode
of ‘truth-doing’ is established as constitutive of algorithmic reason.
Chapter 2 turns to algorithmic decisions and difficult political questions
about algorithms making life and death decisions. We place algorithmic judge-
ments within the controversial scene of predictive policing and use the method
of ‘following an algorithm’ to understand the operations of an algorithm devel-
oped by CivicScape, a predictive policing company. Theoretically, we connect
algorithmic decision-making with decisions as enabled by work relations,
drawing on the lesser-known critical theory of Günther Anders. By following
a predictive policing algorithm, we show how it operates through workflows
and small shifts in data representations where each of the elements might in-
fluence the overall outcome. A second element of algorithmic reason emerges
through the partitioning of abstract computational spaces or what are called
‘feature spaces’ in machine learning.
The next three chapters map how these rationalities of algorithmic reason
are materialized through processes of othering, platformization, and valoriza-
tion. Chapter 3 investigates the algorithmic production of suspicious and
potentially dangerous ‘others’ as targets of lethal action. How is the line be-
tween the self and other drawn algorithmically, how do figures of the other
emerge from the masses of data? Starting from the public scene of the NSA
SKYNET programme, which wrongly identified the Al Jazeera journalist Ah-
mad Zaidan as a suspected terrorist, we show how others are now produced
through anomaly detection. We argue that anomaly detection recasts figures of
the enemy and of the risky criminal in ways that transform our understanding
of racial inequalities and is more aptly understood in terms of what politi-
cal philosopher Achille Mbembe has called ‘nanoracism’.⁵⁸ Methodologically,
there are numerous limits to analysing the unfolding of this scene, as the work
with which we started this book, sees AI as seamlessly helping intelligence pro-
fessionals ‘find needles in haystacks, connect the dots, and disrupt dangerous
plots by discerning trends and discovering previously hidden or masked indi-
cations and warnings’.⁴ For intelligence professionals, the ‘needle in a haystack’
captures a vision of globality and global threat and epistemic assumptions of
visibility and invisibility, of uncovering secrets and accessing that which is
hidden and concealed. It buttresses an epistemology of security that proceeds
through ‘small displaced fragments of information, establishing the investiga-
tion of links between subjects of interest, understanding patterns of behaviour
and communication methods, and looking at pieces of information that are
acquired through new and varying sources’.⁵ It is also reassuring in its bucolic
resonance and ‘comforting pastoral imagery of data agriculture’.⁶
More than an associational epistemology of ‘connecting the dots’, the ‘needle
in a haystack’ captures the epistemic shift in relation to the algorithmic pro-
cessing of big data, away from problems of data size and scale towards seeing
opportunities in fragments of digital data everywhere. This shift is not lim-
ited to the worlds of security professionals, as it is also present in the idea
of long-tail economies that have driven algorithmic rationalities and big data
analytics for the past decade.⁷ Popularized by the former Wired editor Chris
Anderson, known for his prediction of the ‘end of theory’ in the age of big
data, long-tail economies render the move from economies based on a few ‘hit’
products to the increased number of ‘niche’ products. The promise of long-tail
economies is that of ‘infinite choice’ and—implicitly—of profit from even the
smallest products. The niche products of long-tail economies are the unknown
needles in the haystack of interest to security professionals. This epistemo-
logical promise of small fragments, unknown ‘needles’, and the long tail has
also travelled to political concerns about democracy and publics. Fears of
so-called ‘microtargeting’ are translated into anxieties about how algorithmic
knowledge can thwart or even undo democratic processes.
What is at stake in these concerns across security, economic, and politi-
cal worlds is the relation between the large and small. Initial concerns about
volume and scale of big data have given rise to inquiries into the small, the
granular, and the micro. It is in this sense that the philosopher of information
Luciano Floridi sees the value of big data in the ‘small patterns’ that it can
reveal.⁸ Louise Amoore and Volha Piotukh have highlighted the ‘little analyt-
ics’ which turn big data into ‘a series of possible chains of associations’, while
computer scientists speak about ‘information granularity’ and ‘data granules’.⁹
Big data connects the epistemological promise of capturing everything—
collecting and storing all data records seems within humanity’s reach—and of
capturing the smallest, even insignificant details. The small, banal, and appar-
ently insignificant detail simultaneously harbours the promise of personalized
medicine, atomized marketing, granular knowledge of individuals, and secu-
rity governance of unknown dangers. In a world of data, nothing is too small,
trivial, or insignificant.
Long before big data, the small and the large, the whole and its parts, the
general and the particular had been part of the great epistemic ‘divides’ in
natural and social sciences, requiring distinct instruments that would render
them accessible as well as separate methods and infrastructures of knowl-
edge production. For instance, statistical reasoning in the social sciences,
which prioritized the generality of groups and aggregates, was criticized for
effacing individual complexity and specific detail. Historian of statistics Alain
Desrosières challenges such a strong line of separation in arguing that statis-
tics, understood in its institutional and not just scientific role, has been a
mediator between state activities focused on the individual (such as courts)
and on the general population (such as economic policy or insurance).1⁰ Statis-
ticians had to connect small and disparate elements to produce aggregates. The
gap between the small and the large and attempts to transcend it are not unique
to statistics and can be understood in relation to the government of individuals
and populations.
Algorithmic reason promises to transcend the methodological and onto-
logical distinctions between small and large, minuscule and massive, part and
whole. As the languages of macro-scale and micro-scale or holistic and indi-
vidualistic methods indicate, the small and the large have historically required
different material apparatuses and analytical vocabularies. Transcending the
gap between them has been a partial and difficult endeavour. Yet, with big data,
‘[t]he largest mass goes along with the greatest differentiation’, as historian of
risk and insurance François Ewald has aptly put it.11 The theorist of networks
Bruno Latour and his colleagues have also argued that, with big data, ‘[i]nstead
of being a structure more complex than its individual components, [the whole]
has become a simpler set of attributes whose inner composition is constantly
changing.’12 In their pithy formulation, the whole has become smaller than the
sum of its parts. While social sciences have grappled with the problems of the
great divides and have attempted to dissolve, blur, or resist these dichotomies,
these tensions in knowledge production have been reconfigured through big
data.13
We argue that, rather than privileging the small over the large, the part over
the whole or vice versa, algorithmic reason entails the continuous decom-
position of the large into the small and the recomposition of the small into
the large.1⁴ This epistemic transformation, which combines previously exclu-
sive methods, instruments, and approaches, is transforming the government
of individuals and populations. What can be known and what becomes un-
knowable? What becomes governable and what is ungovernable? To trace the
elements of algorithmic reason, we start from the Cambridge Analytica scan-
dal and its use of digital data in elections around the world. The initial public
denunciation of Cambridge Analytica’s use of data and machine learning algo-
rithms problematizes the relation between populations and individuals, large
and small, significant and insignificant both epistemologically and politically.
What could be known about individuals and groups through the masses of
data extracted by Cambridge Analytica from Facebook and other digital plat-
forms? Was the large too large to be comprehensible and the small too small
to be consequential? How is such knowledge mobilized in the government of
populations and individuals? These controversies that burst into the public
show how algorithmic reason transcends the binaries between the small and
the large, the individual and collective, telling and doing, which have shaped
both social sciences and the government of individual and collective conduct.
The former British company Cambridge Analytica erupted into public view
following media revelations that it had worked with Donald Trump in the
2016 US election campaign.1⁵ The ensuing controversy, which has unfolded
across the pages of newspapers, parliamentary inquiries, and academic jour-
nals, has seen journalists, activists, politicians, and social scientists split over
whether Cambridge Analytica’s techniques of data collection and algorithmic
processing were simply another form of propaganda or whether they were able
to manipulate elections by changing the views and behaviour of significant
numbers of individuals. This debate was then folded into a wider controversy
that unravelled transnationally—from the UK and Germany to the US and
Canada—and which refocused on the role of social media companies and
particularly Facebook in political campaigning.1⁶ Much of the controversy
concerned the harvesting of user data and breaches of privacy by Facebook,
from whom Cambridge Analytica initially collected the data. The $5 billion
settlement between Facebook and the US Federal Trade Commission high-
lighted the new privacy obligations and the privacy regime that Facebook
would need to set in place.1⁷ Mark Zuckerberg reacted to the scrutiny of Face-
book after Cambridge Analytica in a long blog post where he reiterated a ‘pivot
towards privacy’ within Facebook and promised extensive machine-learning
capacities to remove ‘harmful content’.1⁸
Initially centred on the 2016 presidential elections in the US and the Brexit
referendum in the UK, the Cambridge Analytica revelations showed wide-
ranging interventions in elections around the world, from India to Kenya.
However, it soon became clear that, unlike the traditional polls or surveys that
companies employed in political campaigns, Cambridge Analytica had used large
sets of third-party data combined with survey data of US populations. ‘This
is publicly available data, this is client data, this is an aggregated third-
party data. All sorts of data. In fact, we’re always acquiring more. Every day
we have teams looking for new data sets’, explained Alexander Nix, former
CEO of Cambridge Analytica.1⁹ Such public statements and the subsequent
1⁵ Cadwalladr and Graham-Harrison, ‘Revealed: 50 Million Facebook Profiles Harvested for Cam-
bridge Analytica’; Rosenberg, Confessore, and Cadwalladr, ‘The Facebook Data of Millions’.
1⁶ The final report by the Digital, Media, Culture, and Sport Committee of the UK House of Com-
mons outlines these transnational elements of the controversy. DCMS, ‘Disinformation and “Fake
News”: Final Report’.
1⁷ Federal Trade Commission, ‘Statement of Chairman and Commissioners’.
1⁸ Zuckerberg, ‘Facebook Post’.
1⁹ Butcher, ‘Cambridge Analytica CEO Talks to Techcrunch about Trump, Hillary and the Future’.
2⁰ Wylie, Mindf*ck.
21 Grewal, ‘Suspending Cambridge Analytica’.
22 Information Commission Office, ‘Facebook Ireland Ltd. Monetary Penalty Notice’.
23 Bennett, ‘Voter Databases, Micro-Targeting, and Data Protection Law’, 261.
[t]he aim of statistical work is to make a priori separate things hold together,
thus lending reality and consistency to larger, more complex objects. Purged
of the unlimited abundance of the tangible manifestations of individual cases,
these objects can then find a place in other constructs, be they cognitive or
political.3⁰
The reconfiguration of key dichotomies of social and political life and the
return to the individual have given rise to a discourse of fear about the uses
of what Wylie has called in the earlier quote ‘microtargeting’ for political
campaign purposes and the fate of democracy. Microtargeting has become
the political equivalent of security practitioners’ ‘needle in a haystack’. Re-
searchers have debated its underlying epistemic effects and particularly what
a UK parliamentary inquiry into ‘Disinformation and “fake news”’ labelled as
a ‘risk to democracy’.3⁴ The report goes as far as to assume that the knowledge
produced through big data amounts to a new form of propaganda, as ‘this ac-
tivity has taken on new forms and has been hugely magnified by information
technology and the ubiquity of social media’.3⁵ In addressing concerns about
echo chambers and filter bubbles, Internet studies researcher Axel Bruns, how-
ever, cautions that such alarmist arguments are often based on ‘technological
determinism and algorithmic inevitability’.3⁶ Rather, we need to ask what is
distinctive about the knowledge produced through algorithmic operations
upon big data and what is specific about algorithmic microtargeting.
At first sight, the political techniques advertised by Cambridge Analytica
resonate with older techniques of targeted communication: audience seg-
mentation and targeted advertising. Yet, epistemically, it is the ‘micro’ in the
‘targeting’ that has raised most questions and concerns about lasting risks to
democracy. What is produced as ‘micro’ with data? Surveys, polls, and ques-
tionnaires have long relied on statistical classifications of populations, but they
summarized and represented groups, developing expectations of behaviour
based on different versions of the ‘average’. ‘Average’ behaviour stands for a
‘macro’ expectation we might have of everybody’s behaviour. Traditionally,
statistics produced and relied on macro-ensembles of national, ethnic, or class
categorizations.
In an inquiry into the uses of social media and Facebook in the UK, Nix gave
a lengthy outline of the key elements of microtargeting as deployed by Cam-
bridge Analytica, while acknowledging that specific strategies and techniques
depend on different legislative environments around the world. According to
Nix, the most important element of their work was the collection of diverse
data, taking in everything they could possibly find:
the United States—that comprise of consumer and lifestyle data points. This
could include anything from their hobbies to what cars they drive to what
magazines they read, what media they consume, what transactions they make
in shops and so forth. These data are provided by data aggregators as well as by
the big brands themselves, such as supermarkets and other retailers. We are
able to match these data with first-party research, being large, quantitative re-
search instruments, not dissimilar to a poll. We can go out and ask audiences
about their preferences, their preference for a particular purchase—whether
they prefer an automobile over another one—or indeed we can also start to
probe questions about personality and other drivers that might be relevant to
understanding their behaviour and purchasing decisions.3⁷
Once assembled from diverse sources, Nix’s big data can be used to create
associations that enable the segmentation of smaller and smaller groups, as
well as the creation of dynamic categories beyond averages. In the quote, these
are ‘personality and other drivers’, which categorize behaviour and decisions.
Such work requires ever more data, with Facebook being only one source
among many. Nix cites mainly commercially available data in the US, which
can be employed to build profiles for social media microtargeting. The US data
broker Acxiom, for instance, claims to have files on 10% of the world’s popula-
tion.3⁸ It offers ‘comprehensive models and data’ on consumer behaviours and
interests.3⁹
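A minimal sketch of the kind of data matching Nix describes, assuming two hypothetical tables (third-party consumer records and first-party survey responses) joined on an illustrative shared identifier; the column names are invented for illustration and do not reproduce Cambridge Analytica's actual data.

```python
# Illustrative only: matching third-party consumer data with
# first-party survey research on a hypothetical shared identifier.
import pandas as pd

third_party = pd.DataFrame({
    "voter_id": [101, 102, 103],                       # hypothetical identifier
    "magazine_subscription": ["cars", "cooking", "news"],
    "car_brand": ["Ford", "Toyota", "BMW"],
})

survey = pd.DataFrame({
    "voter_id": [101, 103],
    "personality_openness": [0.62, 0.35],              # stand-in survey scores
    "preferred_issue": ["economy", "security"],
})

# Left join keeps every consumer record and attaches survey answers
# wherever an individual can be matched.
profiles = third_party.merge(survey, on="voter_id", how="left")
print(profiles)
```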
Big data is used to produce continually emergent and changing composites
among populations to be targeted. To make clear how effective this oper-
ation can be, Nix uses the comparison with advertising to account for the
power of Cambridge Analytica’s data and algorithms. In the inquiry, he com-
pares their work with ‘tailoring’ and ‘communicating’ products like ‘cars’
so that ‘you can talk about, in the case of somebody who cares about the
performance of a vehicle, how it handles and its metrics for speeding up
and braking and torque and all those other things’.⁴⁰ Different aspects of a
vehicle are assumed to speak to different groups, so that marketing com-
munication could selectively target these fragments of an imagined whole.
Nix seems to be suggesting that Cambridge Analytica approaches target-
ing as a division of the whole into parts—the whole of the message is split
into parts just as the whole of a car can be split into fragments; the population
is divided into micro-groups, which are then correlated with other data
fragments.
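One way to read this division of a population into micro-groups is as a clustering operation over datafied attributes. The sketch below uses k-means on synthetic data purely as an illustration of segmentation beyond averages; the features, segment count, and data are assumptions, not Cambridge Analytica's models.

```python
# Illustrative only: segmenting a synthetic 'population' into many
# micro-groups with k-means. Features and parameters are assumptions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Rows are individuals; columns are datafied traits
# (e.g. spending, media consumption, estimated interest scores).
population = rng.normal(size=(10_000, 8))

segments = KMeans(n_clusters=50, n_init=10, random_state=0).fit_predict(population)

# Each micro-segment could then be addressed with a different message.
print(np.bincount(segments)[:5])   # sizes of the first few segments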
In Nix’s vision, the accumulation of more and more data is simultaneous
with the greatest fragmentation. It is perhaps even only interesting because of
that. The promise of Cambridge Analytica’s microtargeting is the ability to de-
compose the largest population into the smallest data and to recompose the
smallest part in the largest possible data. How is the epistemic conundrum of
the large and the small solved? The large and the small are quantities upon
which Cambridge Analytica and other data analytics companies apply differ-
ent techniques of composition and decomposition. In the section ‘Composing
and decomposing data’, we show how algorithmic reason can appear so po-
litically attractive, as it transcends the binaries of part and whole, populations
and individuals by composing the smallest details and decomposing the largest
multiplicities.
As Nix’s comments in the section ‘Cambridge Analytica large and small’ out-
line, what is at stake in the public controversy around Cambridge Analytica
and Facebook data is the epistemic tension between the large and the small,
the micro and macro, the individual and the population. The decomposition
of the whole into the smallest possible data and its recomposition is not sim-
ply a mathematical operation of division and addition. By decomposing and
recomposing the large and the small, algorithmic reason has produced not
just an epistemic transformation, but also a political rationality of governing
individuals and populations.
For many, it was the massive data, the collection and algorithmic processing
of vast amounts of data that might have changed the fate of the 2016 elections
in the US or the Brexit referendum in the UK. Massive data has many fans in
many areas of digital analysis. Peter Norvig, Director of Research at Google,
has claimed that Google does not necessarily have better algorithms than ev-
erybody else, but more data.⁴1 Marissa Mayer, Google’s former Vice President
of Search Products and User Experience, had also noted ‘that having access to
large amounts of data is in many instances more important than creating great
algorithms’.⁴2 Many algorithms had not fundamentally changed, and there was
‘no single scientific breakthrough behind big data’, as ‘the methods used have
been well known and established for quite some time’.⁴3
The concept of ‘datafication’ has particular significance in this context, since
it suggests that everything can be data. As Mayer-Schönberger and Cukier have
put it, ‘to datafy a phenomenon is to put it in a quantified format so that it
can be tabulated and analysed’.⁴⁴ They use the example of the Google Ngram
Viewer to show how the larger data of a whole book can be further datafied by
splitting it in smaller parts or N-grams.⁴⁵ N-grams are here simply a number ‘n’
of characters in a word joined together. The word ‘data’, for instance, contains
two 3-grams: ‘dat’ and ‘ata’. N-grams might not help us understand texts better,
but they provide computers with a way to parse vast amounts of heterogeneous
texts.
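A minimal sketch of the character n-grams described above; it simply reproduces the example of the word ‘data’ yielding the two 3-grams ‘dat’ and ‘ata’.

```python
# Character n-grams as described above: 'data' yields 'dat' and 'ata'.
def char_ngrams(text: str, n: int) -> list:
    """Return all overlapping character n-grams of a string."""
    return [text[i:i + n] for i in range(len(text) - n + 1)]

print(char_ngrams("data", 3))   # ['dat', 'ata']
```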
If books can become computer-readable data, then everything can be data.
Mayer-Schönberger and Cukier have argued that big data unravels existing
epistemologies and methodologies of knowledge production by providing us
with N = all, where ‘N’ is the common letter used to describe samples in statis-
tics, while ‘all’ stands for the totality of data.⁴⁶ In N = all, N does not stand
for the number that cannot be expanded upon anymore. It is the moment the
sample becomes everything so that the distinction between part and whole can
be transcended. Statistics appears revolutionized not through scientific break-
throughs and new models, but through the exhaustiveness of data. In 2009,
Microsoft researchers proclaimed the emergence of a fourth paradigm of data-
intensive research in science.⁴⁷ With massive data, science moves from the idea
that ‘the model is king’ to ‘data is king’.⁴⁸
Yet, big data is not only problematic in its empiricist promise of capturing
reality, but in the capacity of turning it into knowledge for the government of
individuals and populations.⁴⁹ Deemed ‘too big to know’, big data challenges
existing analytical and methodological capacities to transform it into some-
thing workable. This is the challenge that computer scientists and engineers
identified in big data when arguing that ‘[t]he pathologies of big data are pri-
marily those of analysis’.⁵⁰ Big data always goes beyond what can currently be
processed. This paradox has been starkly expressed in the wake of the Snowden
disclosures about intelligence practices. Documents released by the Intercept
each, namely the rationalities and techniques directed towards individuals and
those oriented towards populations.⁵⁵
The practices of Cambridge Analytica have also appeared to be much
less exceptional than Nix’s grand announcements might have suggested. The
company deployed rather mundane algorithmic operations on data. Their
banality was highlighted in the investigation conducted by the UK Information
Commissioner’s Office (ICO), led by Elizabeth Denham. It concluded that Cam-
bridge Analytica and its parent company SCL Group were not exceptional in
their practices, as their methods relied on ‘widely used algorithms for data
visualisation, analysis and predictive modelling’. Rather than new algorithms,
[i]t was these third-party libraries which formed the majority of SCL’s data
science activities which were observed by the ICO. Using these libraries, SCL
tested multiple different machine learning model architectures, activation
functions and optimisers … to determine which combinations produced the
most accurate predictions on any given dataset.⁵⁶
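The ICO quote describes a routine search over combinations of model architectures, activation functions, and optimisers drawn from third-party libraries. The sketch below shows that generic kind of search with scikit-learn on synthetic data; it is an assumption about the general technique, not SCL's code or data.

```python
# A generic sketch of the search the ICO describes: trying combinations
# of architectures, activation functions, and optimisers from an
# off-the-shelf library and keeping the most accurate one.
# Synthetic data and parameter choices are assumptions, not SCL's.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))                # datafied attributes
y = (X[:, 0] + X[:, 1] > 0).astype(int)       # a stand-in target label

param_grid = {
    "hidden_layer_sizes": [(16,), (32, 16)],      # architectures
    "activation": ["relu", "tanh", "logistic"],   # activation functions
    "solver": ["adam", "sgd"],                    # optimisers
}

search = GridSearchCV(MLPClassifier(max_iter=500, random_state=0),
                      param_grid, cv=3, scoring="accuracy")
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```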
This large volume and variety of data points allowed Cambridge Analytica to
build computational models that clustered not the whole population, but in-
dividuals who could be targeted by advertising. Wylie had concurred that the
company’s work required a permanent focus on data and finding ‘extra data
sets, such as commercial data about a voter’s mortgage, subscriptions, or car
model, to provide more context to each voter’.⁵⁸
Decompositions and recompositions of the small and the large produce not
just larger clusters and groups, but the individuals themselves. The singular
individual has now been replaced by a big data composite. Individuals have
become the staging points of big data strategies, abundant multiplicities of data
rather than reductive statistical averages.⁵⁹ As one practitioner has put it: ‘Big
Data seems primarily concerned with individual data points. Given that this
specific user liked this specific movie, what other specific movie might he [sic]
like?’⁶⁰ The epistemic specificity of big data resides not in the details of individ-
ual actions or massive data about social groups and populations. Algorithmic
reason is not simply recasting the relation between masses and individuals, be-
tween part and whole, making possible their continual modulation. It affords
infinite recompositions of reality, where small and insignificant differentials
become datafied and inserted in infinitely growing data. Data is without limits
at both the macro-scale and micro-scale, and it allows the conjunction of scales
in ways that are unexpected for both social and natural sciences. As Ewald has
remarked, ‘[e]ach element of data is unique, but unique within a whole, as
compared to the rest’.⁶1
Algorithmic reason conjoins omnes et singulatim, the particular and the gen-
eral, the part and whole, the individual and the population and transcends
limitations of epistemic and governing practices.⁶2 Its distinctive promise is
not that of endless correlation or infinite association, but that of surmounting
the epistemic separation of large-N/small-n through relations that are end-
lessly decomposable and recomposable. Yet, these algorithmic compositions
and decompositions are not necessarily truthful. Which data compositions
gain credibility and which ones should undergird governmental interventions?
If nothing is too small or insignificant to produce knowledge, what will count
as truthful knowledge? Although it reverberated less widely than the Cambridge
Analytica scandal, a related professional controversy about speech and action
simmered in the worlds of big data. It became no less tumultuous, as it moved
from the world of social and computer sciences to that of public debate.
Truth-telling, truth-doing
⁵⁹ The concept of dividual was coined by Gilles Deleuze and is widely used to render the quantified
or datafied self. Deleuze, ‘Postscript on the Societies of Control’. On the quantified self, see Lupton, The
Quantified Self.
⁶⁰ Janert, Data Analysis with Open Source Tools, 7.
⁶1 Ewald, ‘Omnes et singulatim’, 85.
⁶2 Foucault, ‘Omnes et singulatim’.
avowal. Avowal, Foucault argues, was ‘the decisive element of the therapeutic
operation’ in the nineteenth century and became increasingly central to the
fields of psychiatry, medicine, and law.⁶3 For instance, the therapeutic prac-
tice required the patient to speak the truth about oneself. Law also demanded
avowal as confession, truth-telling about oneself. For Foucault, avowal is a
verbal act through which a subject ‘binds himself [sic] to this truth, places
himself in a relation of dependence with regard to another’.⁶⁴ Avowal as a
form of truth-telling about oneself is different from other modes of knowl-
edge production—for instance, the demonstrative knowledge of mathematics
or the inductive production of factual knowledge. Avowal is situated in a ‘re-
lationship of dependence with regard to another’, relying on and reproducing
asymmetric power relations.⁶⁵ Truth-telling about oneself would appear to be
a strange intruder in the world of computer science and datafied relations. And
yet, questions of what is truthful in the masses of data, whether small details
should count or not, are entwined with the problematization of truth-telling
about oneself.
The extension of algorithmic operations to the smallest and least significant
element has problematized what counts as truthful knowledge about individ-
uals and collectives. Which elements of the datafied individual should feed
the algorithms? ‘You are what you click’, announced an article in The Nation in
2013.⁶⁶ ‘You are what you resemble’, states controversial computer scientist Pe-
dro Domingos in his book, The Master Algorithm.⁶⁷ In their complaint against
Cambridge Analytica, the Federal Trade Commission describe this production
of subjectivity through ‘likes’:
For example, liking Facebook pages related to How to Lose a Guy in 10 Days,
George W. Bush, and rap and hip-hop could be linked with a conservative and
conventional personality. The researchers argued that their algorithm, which
was more accurate for individuals who had more public Facebook page ‘likes’,
could potentially predict an individual’s personality better than the person’s
co-workers, friends, family, and even spouse.⁶⁸
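The complaint describes an algorithm that predicts personality traits from page ‘likes’. A common academic-style pipeline for this kind of task reduces a sparse user-by-page matrix and regresses a trait score on the result; the sketch below illustrates that generic approach on synthetic data and is not the model referred to in the FTC complaint.

```python
# A hedged sketch of predicting a trait score from page 'likes':
# a sparse user-by-page matrix is reduced and fed to a regression.
# Generic illustration on synthetic data, not the FTC-described model.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
likes = (rng.random((1_000, 200)) < 0.05).astype(float)   # users x pages
# A synthetic stand-in for a trait such as 'openness'.
trait = likes[:, :10].sum(axis=1) + rng.normal(scale=0.5, size=1_000)

model = make_pipeline(TruncatedSVD(n_components=20, random_state=0),
                      Ridge(alpha=1.0))
model.fit(likes[:800], trait[:800])
print(round(model.score(likes[800:], trait[800:]), 2))    # R^2 on held-out users
```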
Clicking and liking are the elements that enable the algorithmic production of
‘truth’ from what otherwise appear to be unconnected actions. How do these
different actions compose the small and the large and what happens with truth-
telling about oneself in the process?
Sandy Pentland, former MIT Media Lab director and one of the most influ-
ential data scientists, articulates the difference between what he calls ‘honest
signals’ beyond raw data and the implicitly dishonest sense of language.⁶⁹ He
draws a sharp distinction between social media big data and behavioural big
data. The former is about language, while the latter is about traces of actions.
If for Foucault truth-telling about oneself entailed a verbal act in the form of
avowal, Pentland appears to discard the truth of language in favour of the truth
of action. In this sense, big data is not only about quantification or even math-
ematization, but it is about reconfiguring relations between speech and action,
undoing distinctions between saying and doing.
While discarding social media ‘likes’, Pentland acknowledges the signif-
icance of language by recasting it as ‘signals’ of action. Language is not
significant for what people say, but for the signals that reveal the nonconscious
actions accompanying speech: ‘How much variability was in the speech of the
presenter? How active were they physically? How many back-and-forth ges-
tures such as smiles and head nods occurred between the presenter and the
listeners?’⁷⁰ Pentland proposes to build ‘socioscopes’, which are the combined
big data equivalent of the ‘telescope’ and the ‘microscope’, as they can com-
pute the complexity of social life in rich and minute detail.⁷1 His socioscopes
become the basis of a renewed science of ‘social physics’.⁷2 The data of interest
to Pentland is what individuals do rather than what they say: ‘Who we actu-
ally are is more accurately determined by where we spend our time and which
things we buy, not just by what we say and do’.⁷3
Former Google data scientist and author of the New York Times bestseller
and the Economist Book of the Year Everybody Lies, Seth Stephens-Davidowitz
joins Pentland in the indictment of the truthfulness of what we say. To
avoid the problem of deception through language, big data promises access
to the ‘truth’ of behaviour by recording banal, seemingly insignificant ac-
tions: ‘Big Data allows us to finally see what people really want and really
do, not what they say they want and say they do’.⁷⁴ If Pentland focuses on
⁷⁵ Ibid., 219.
⁷⁶ Hayles, Unthought.
prefer structured and semantically defined data for their processing. In times
of messy and varied big data, smaller structured data has become ever more
valuable, as it can be more easily computed. Thus, truthful knowledge is pro-
duced in a digital economy, where ‘nonconscious’ actions are more readily
datafied. For instance, a tweet contains a limited number of characters and
has become well known for its pithy communication. However, the structured
data linked to it can be much larger than the content of any one tweet.⁸⁰ This is
what the Economist called the ‘digital verbosity’ of a tweet that can reveal a lot
both about the author of a tweet and their social network.⁸1 Here, this addi-
tional structured data stands for the algorithmic relevance of small details that
are structured enough to be ‘actionable’ by computers. While media scholars
have rightly argued that ‘raw data’ is an oxymoron,⁸2 what is produced as com-
putable structured data is what becomes truthful. Truthful knowledge is what
can be easily datafied and made algorithmically actionable as structured data.
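A small illustration of the ‘digital verbosity’ point: the structured metadata attached to a short post can easily outweigh the text itself. The field names below loosely resemble public tweet metadata but are assumptions, not the exact Twitter API schema.

```python
# Illustrative only: structured metadata attached to a short post
# dwarfs the text itself. Field names are assumptions, loosely
# resembling (not reproducing) the Twitter API schema.
import json

text = "Reading about algorithmic reason."
metadata = {
    "id": 1234567890123456789,
    "created_at": "2021-03-01T09:15:00Z",
    "lang": "en",
    "source": "Twitter for iPhone",
    "retweet_count": 3,
    "favorite_count": 12,
    "user": {"id": 987654321, "followers_count": 532,
             "location": "London", "verified": False},
    "entities": {"hashtags": [], "urls": [], "user_mentions": []},
    "geo": None,
}

print(len(text), len(json.dumps(metadata)))  # the metadata dwarfs the text
```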
As we saw in the Introduction, big data has been criticized for its ‘gener-
alized digital behaviourism’ that avoids any confrontation with the human
subject, either physically or legally.⁸3 We have argued that neither generalized
empiricism nor behaviourism can account for the production of algorithmic
knowledge. Algorithmic reason transcends dichotomies of speech and action
through a reconfiguration of doing as nonconscious actions rather than willed
acts, either individual or collective. Truth-telling about oneself has entered
computer science through the mediation of digital devices that extract data at
the smallest level as digital traces. Testimony, speech, evidence do not disap-
pear, but are decomposed as both conscious saying and nonconscious doing.
For Stephens-Davidowitz, queries using the Google search engine are not just
significant semantically, but also as signs that are indicative of doings. Algo-
rithmic reason decomposes speech and action and reconfigures their relation
to truth by rendering them as nonconscious acts.⁸⁴
Decomposing the large and recomposing the small, reconfiguring language
and action hold together the intelligence professionals’ desire to find the ‘nee-
dle in a haystack’, the marketing and advertising professionals’ dream to access
consumers’ desires and the politicians’ quest for a granular representation of
Decision

To decide is to cut.
—Michel Serres, The Parasite (1982), 23
Algorithms shape not only what we see, how we are taught, how books are
written, how party manifestos are put together, but also what urban areas are
policed, who is surveilled, and even who might be targeted by drones. As most
explicitly put by the US whistle-blower Chelsea Manning, ‘[w]e were using
algorithms to catch and kill’.1 In Foucault’s terms, algorithms appear to ‘let
live and make die’, thus resuscitating the arbitrariness of sovereign decisions.2
Algorithmic decision-making carries the spectre of sovereign exceptionalism
through the verdict on the figure of the enemy and the constitution of a new
normal.3 Unlike the knowledge of individuals and populations, algorithmic
decision-making is about the ‘cut’, which induces perceptions of the splitting
of self and other, of the delimiting, confinement, and management of zones of
suspicion, risk, and danger.
In 2016, two data scientists, Kristian Lum and William Isaac, published a paper
that simulated the application of PredPol, predictive policing software deployed
by many police forces in the US and beyond, to a large urban area in the US.⁴
They compared the number of drug arrests based on data from the Oakland
Police Department with public health data on drug use. Using the PredPol al-
gorithm, they could show that, ‘rather than correcting for the apparent biases
in the police data, the model reinforces these biases’.⁵ Initially developed by the
anthropologist Jeffrey Brantingham at UCLA and the mathematician George
⁶ Brantingham, ‘The Logic of Data Bias and Its Impact on Place-Based Predictive Policing’.
⁷ Amoore and Raley, ‘Securing with Algorithms’.
⁸ For instance, Noble, Algorithms of Oppression; Benjamin, Race after Technology.
⁹ Benjamin, Race after Technology, 83 (emphasis in text).
1⁰ Institute of Mathematics and its Applications, ‘Written Evidence’, 6.
11 Chan and Bennett Moses, ‘Is Big Data Challenging Criminology?’, 36. See also Pasquale, The
Black Box Society; Introna, ‘Algorithms, Governance, and Governmentality’; Burrell, ‘How the Machine
“Thinks”’.
of Anders’s critical theory for addressing the digital revolution have also been highlighted by Fuchs,
‘Günther Anders, Undiscovered Critical Theory’.
For example, the claim that the pilot of the plane that dropped the bomb on
Hiroshima ‘acted’ when he pressed his button, sounds incorrect. In view of
the fact that his physical effort, which might have attested to his ‘productive
activity’, was entirely insignificant, one might even say that he did not do any-
thing at all… . Nor did he see the effect of his ‘productive activity’, since the
mushroom cloud that he saw is not the same as the charred corpses. Nonethe-
less, with the help of this ‘not doing anything’, in a kind of annihilatio ex nihilo,
he caused two hundred thousand people to pass from life to death.21
21 Anders, The Obsolescence of Man, 44. In the original, Anders uses ‘ungenau’ at the end of the first
sentence, which is better translated as ‘inaccurate’ rather than ‘incorrect’.
22 Anders, Burning Conscience, 1.
23 Anders, ‘Theses for the Atomic Age’, 496.
overwhelms our capacity to imagine their implications, but also the unlimited
mediation of our work processes.2⁴
2⁴ Anders, Nous, fils d’Eichmann. An unofficial translation into English by Jordan Levinson is avail-
able at http://anticoncept.phpnet.us/eichmann.htm?i=1. The quote is from this English translation.
2⁵ Foucault, Discipline and Punish.
2⁶ Anders, Et si je suis désespéré, que voulez-vous que j’y fasse?, 71–4.
2⁷ Anders also associates the subliminal with Leibniz’s theory of ‘tiny perceptions’, the infinitely small
perceptions that pass beneath the threshold of consciousness (Anders, The Obsolescence of Man).
2⁸ Ibid., 46.
2⁹ Anders offers a modification of Ulrich Beck’s understanding of the contemporary ‘world risk
society’ as being characterized by ‘unseen’ risks. If Beck was able to draw a distinction between the
instruments of scientists who make such risks visible and the wider public who can come to recognize themselves as part of ‘world risk society’ only through the scientific explanation of these risks, Anders places less faith in making technologies visible exactly because of their apparent innocuity (Beck, World at Risk).
3⁰ Anders, The Obsolescence of Man, 8.
31 Mitchell, ‘Machine Learning’.
[Figure 2.1: a decision tree whose root node Outlook splits into Sunny, Overcast, and Rain branches ending in Yes/No decisions]
both mundane and widely used in computer science. They are imbrications of
data, formalisms, language, and reassuring natural metaphors.
If decisions are still traceable in this diagram, many algorithmic decisions
have dispersed trajectories, which are both inconspicuous and almost imper-
ceptible. Their workflows of people, systems, and data remain illegible. A good
example is random forests, which are illegible because they are built from many
different decision trees whose answers are combined. Random forests ‘build
thousands of smaller trees from random subsections of the data’.32 Mathemati-
cian Hannah Fry explains the use of random forest algorithms in the justice
system:
Then, when presented with a new defendant, you simply ask every tree to vote
on whether it thinks awarding bail is a good idea or not. The trees may not
all agree, and on their own they might still make weak predictions, but just
by taking the average of all their answers, you can dramatically improve the
precision of your prediction.33
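The voting logic can be sketched in a few lines of Python (our own illustration on synthetic data, not the code of any of the systems discussed here):

```python
# A minimal, illustrative sketch of random forest voting with scikit-learn.
# The dataset is synthetic; it stands in for any tabular 'decision' problem.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((1000, 4))                      # four arbitrary input features
y = (X[:, 0] + X[:, 2] > 1).astype(int)        # an arbitrary rule to be learned

# Each tree is grown on a random subsection of the data (bootstrap sampling).
forest = RandomForestClassifier(n_estimators=1000, random_state=0).fit(X, y)

new_case = rng.random((1, 4))
votes = np.array([tree.predict(new_case)[0] for tree in forest.estimators_])
print("share of trees voting 'yes':", votes.mean())
print("forest decision (aggregated vote):", forest.predict(new_case)[0])
```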
Predictive policing takes its inspiration from other big data operations and
incorporates ‘more variables as some departments already have done, and per-
haps even other data sources beyond police and government records like social
media and news articles’.3⁴ PredPol, one of the most discussed companies pro-
ducing predictive policing software, prides itself on not employing personal
data, and relying only on the time and location of crimes as recorded by the
3⁵ For a discussion of the different software and types of predictive policing, see Ferguson, The Rise
of Big Data Policing.
3⁶ Brantingham, ‘The Logic of Data Bias and Its Impact on Place-Based Predictive Policing’, 484.
3⁷ Brantingham, Valasik, and Mohler, ‘Does Predictive Policing Lead to Biased Arrests?’, 3. Another
public controversy emerged around the use of an earthquake model for crime prediction by PredPol, as
a result of an intervention by the sociologist Bilel Benbouzid. As the model was developed by a French
seismologist, Benbouzid helped set up an exchange between the earth scientist and the PredPol applied
mathematician (Benbouzid, ‘On Crime and Earthquakes’).
3⁸ Lau, ‘Predictive Policing Explained’.
3⁹ Stop LAPD Spying Coalition and Free Radicals, ‘The Algorithmic Ecology’.
⁴⁰ Puente, ‘LAPD Pioneered Predicting Crime’.
⁴1 Stop LAPD Spying Coalition, ‘Groundbreaking Public Records Lawsuit’.
⁴2 PredPol, ‘Geolitica’.
by activists, scholars, and even some police departments and had begun to
share details of their algorithmic decision-making. CivicScape was a company
that garnered particular attention for a while, as it had taken the seemingly rad-
ical approach of making its algorithms openly available online, so that it could
tackle bias and discrimination publicly. The company published their code on
GitHub, a community site for sharing code online.⁴3 Rather than making as-
sumptions about the model and the algorithm, as researchers have had to do
for PredPol, we could trace the workflow of predictive policing by following
the instructions on CivicScape’s GitHub.⁴⁴
CivicScape did not publish sample datasets on its GitHub repository. As ma-
chine learning is based on a combination of data and algorithms, this is a severe
limitation for the company’s self-proclaimed transparency. This exclusion of
data from GitHub, however, is part of their business model, as AI companies
depend on their strategic data acquisition. In the age of GitHub and global
sharing of software, it is increasingly hard to defend the intellectual property
rights of code. Upon request, the company pointed us to publicly available
datasets from police departments in the US. As the company’s founder once
worked for the Chicago Police Department, we chose its data portal and the
crime data published on it from 2001 to present.⁴⁵ The data from the Chicago
police includes reported incidents of crime.
By publishing their code on GitHub, CivicScape have turned transparency
into a business model. ‘Our methodology and code are available here on our
GitHub page’, they explain, to ‘[i]nvite discussions about the data we use’.⁴⁶
CivicScape’s GitHub pages are then also not just technical records but em-
bed code in a set of assurances of social utility. CivicScape promises to work (1) against bias and (2) for transparency by (3) excluding data that is directly linked to vulnerable minorities and (4) making classifiers transparent.⁴⁷ How-
ever, just because we can see code on GitHub does not mean that we know
it. There are serious limitations to these imaginaries of making ‘black boxes’
⁴3 CivicScape, ‘CivicScape Github Repository’. The site is not active anymore, but its static content
without the code can still be accessed through the Internet Archive with the last crawl in Septem-
ber 2018, available at https://web.archive.org/web/20180912165306/https://github.com/CivicScape/
CivicScape, last accessed 30 January 2021. The CivicScape code and notebooks have been removed
from GitHub. Its website (https://www.civicscape.com/) has also been taken down.
⁴⁴ At the time of writing this chapter, in 2020, CivicScape is not producing identifiable products
anymore. Its founder Brett Goldstein, a former police commander in Chicago, is now at the US De-
partment of Defense, where he has a high-profile role leading the Defense Digital Service (Miller, ‘Brett Goldstein Leaves Ekistic’). The Defense Digital Service is a key strategic initiative of the Pentagon to
provide Silicon Valley experience to its digital infrastructure (Bur, ‘Pentagon’s “Rebel Alliance” Gets
New Leadership’).
⁴⁵ City of Chicago, ‘Crimes—2001 to Present’.
⁴⁶ CivicScape, ‘CivicScape Github Repository’.
⁴⁷ Ibid.
visible and the equation of seeing with knowing.⁴⁸ The CivicScape code on
GitHub was not enough to reproduce all their algorithms fully, but it allowed
us to follow the algorithmic workflows.
Even if we could render a predictive system legible, this would not neces-
sarily mean we can make effective use of this knowledge. The CivicScape code
requires setting up a separate infrastructure to process the data and therefore
favours those who have the effective means and expertise to do this. A lack
of infrastructure and social environment considerations has long been iden-
tified as a shortcoming of the open-source transparency agenda. In a critique
of early open-source initiatives in India, Michael Gurstein has shown that the
opening-up of data in the case of the digitization of land records in Banga-
lore led to the intensification of inequalities between the rich and the poor,
as the necessary expertise ‘was available to the wealthy landowners that en-
abled them to exploit the digitization process’.⁴⁹ Without access to expertise
and infrastructure, the ‘effective use’ of open digital material remains elusive.
We were only able to follow the CivicScape algorithm once we had organized
our own infrastructure, downloading the Chicago crime data and setting up an
algorithmic decision-making environment in the R and Python programming
languages.
To trace algorithmic decision-making in the CivicScape algorithm, we need
to first take a step back and understand the components involved. Accord-
ing to Andrew Ng from Stanford University, who co-founded and led the
Google Brain project and was Chief Scientist at Baidu, the most economi-
cally relevant algorithmic decision-making is supervised prediction.⁵⁰ Here,
algorithmic decisions are understood as mappings from an input space A to a
target output space B: A → B, where B is predicted from A. Other machine-
learning approaches like reinforcement learning have increasingly attracted
public attention, as computers managed to surpass humans at complex games
such as Go. Yet, the mundane reality of machine learning generally consists of
more banal chains of A → B decisions. Examples of such decisions include the
recognition of red lights by self-driving cars, the discrimination of benign and malignant cell growth in medical imaging, and the identification of places of interest for urban policing. A predictive policing algorithm might develop chains
of A → B mappings to decide on all places in a city and to declare new policing
hotspots.
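A minimal sketch of such an A → B mapping, learned from example pairs on synthetic data of our own making, renders the notation concrete:

```python
# Illustrative sketch of a supervised A -> B mapping learned from example pairs.
import numpy as np
from sklearn.linear_model import LogisticRegression

A = np.array([[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]])  # input space A
B = np.array([0, 1, 0, 1])                                      # target output space B

model = LogisticRegression().fit(A, B)           # learn the mapping A -> B
print(model.predict(np.array([[0.85, 0.75]])))   # predict B for a new element of A
```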
How many data points would you need to maintain the same minimum dis-
tance to the nearest point as you increase the number of inputs of the data?
As the number of inputs increases, the number of data points needed to fill
the space comparably increases exponentially, because the number of inputs
corresponds to the number of features.⁵⁴
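A rough numerical illustration of our own, under simple assumptions, makes the point: with a fixed number of random points, the distance to the nearest neighbour grows as features are added, so keeping it constant would require exponentially more points.

```python
# Rough illustration: with a fixed number of random points in the unit cube,
# the mean distance to the nearest other point grows with the number of features.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
for n_features in (1, 2, 5, 10, 20):
    points = rng.random((1000, n_features))
    dist, _ = cKDTree(points).query(points, k=2)   # k=2: the nearest *other* point
    print(n_features, "features:", round(dist[:, 1].mean(), 3))
# Holding that distance constant as features are added requires exponentially more points.
```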
Given a set of data points, partition them into a set of groups which
are as similar as possible.
—Charu Aggarwal and Chandan Reddy, Data Clustering:
Algorithms and Applications (2013), 2
Once the Chicago crime data is featurized, predictive policing might target
so-called hotspots or locations of interest. Geographies are popular features
for predictive policing as they are easily quantifiable and enable further cal-
culations. They follow globally unique referencing systems such as latitudes
and longitudes, which are themselves abstract representations of locations on
a globe. Even though they can be ‘proxies’ of a discriminatory past, geogra-
phies are often seen as much less controversial than personal data such as the
infamous ‘heat list’ used by the Chicago police—an index of about 400 people
in the city of Chicago supposedly most likely to be involved in violent crime.
Despite the confidence of former Chicago Police Commander Steven Caluris,
who believed that ‘[i]f you end up on that list, there’s a reason you’re there,’⁵⁵
‘heat list’ policing was quickly discarded in Chicago for its potential privacy
violations.
Using the space abstractions of latitudes and longitudes, we can recreate
a workflow and trace how an algorithm might arrive at ‘hotspots’ of crime
within Chicago. In just a few lines of code, existing crime locations can be al-
gorithmically clustered and then heat-mapped on a Google map (Figure 2.4).
The clustering assumes crimes to be related if they are co-located and sim-
ply counts the number of crimes for any particular geographic location. On
the left-hand side of Figure 2.4, which we created with the Leaflet visualiza-
tion toolkit, we see that there are three hotspots in Chicago. However, on the
right-hand side, we show that the visualization of the clusters can be decep-
tive, as crime locations are very much distributed across the city as a whole.
Heatmaps are known to create such visual distortions. In their analysis of
predictive policing in Germany and Switzerland, Simon Egbert and Matthias
Leese have argued that crime maps were one of the most important elements,
as they ‘would preconfigure to a large extent how crime risk information would
be understood’.⁵⁶
⁵⁷ According to its GitHub pages, CivicScape includes all kinds of data, e.g. weather data, in its predictions. To keep it simple, we only consider the baseline crime statistics in our rendering of its algorithmic reasoning.
⁵⁸ The New Inquiry, ‘White Collar Crime Risk Zones’.
⁵⁹ Ferguson, The Rise of Big Data Policing, 72–75.
interventions that depart from statistically based hotspot policing, which pro-
duces maps based on historical data of crime frequency. For data scientist
Colleen McCue, policing needs to ‘shift from describing the past—counting,
reporting, and “chasing” crime—to anticipation and influence in support
of prevention, thwarting, mitigation, response, and informed consequence
management’.⁶⁰ Data-driven predictive policing technologies promise to be
proactive rather than reactive, as historical data was thought to replicate the
past rather than ‘intervening in emerging or future patterns of crime’.⁶1
To unpack such predictive decisions, we developed a simple machine-
learning algorithm using a wider subset of features in the Chicago crime data.
We trained a decision tree algorithm as already introduced in Figure 2.1.
Then, we used this algorithm to predict whether every possible location
within Chicago might be a crime location—and not just the locations already
recorded in the crime statistics. Figure 2.5 shows the result. It visualizes the
move from clustering existing crime locations in Figure 2.4 into predicting
new ones. The rotated contours of Chicago remain visible, but the algorithmic
operations are entirely different. Even if a location has not been listed in the
historical crime data, it becomes possible to predict whether a recorded po-
lice incident is likely for a specific location (light-grey dots). While the rather
simple algorithm struggles with performance of less than 80% accuracy, it
manages to identify two of the three hotspots from Figure 2.4, though missing
out on the inner city one, where the locations are too close to each other. The
algorithm identifies potential new hotspots in the bottom right corner. Even
with such a simple algorithm, crime locations are not just reproduced based
on past data, but new zones of criminality are generated.⁶2
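A sketch of this move from clustering to prediction, under our own simplifying assumptions (recorded crime coordinates labelled as incidents, randomly sampled city coordinates treated as non-incidents, a shortcut the underlying workflow does not necessarily take), might look as follows:

```python
# Sketch of predicting police incidents for every location on a grid over Chicago.
# The negative sampling of 'non-crime' points is our simplification, not the
# procedure used for Figure 2.5.
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

crimes = pd.read_csv("chicago_crimes.csv").dropna(subset=["Latitude", "Longitude"])
crime_coords = crimes[["Latitude", "Longitude"]].to_numpy()

rng = np.random.default_rng(0)
non_crime = np.column_stack([rng.uniform(41.64, 42.02, len(crime_coords)),
                             rng.uniform(-87.94, -87.52, len(crime_coords))])

X = np.vstack([crime_coords, non_crime])
y = np.concatenate([np.ones(len(crime_coords)), np.zeros(len(non_crime))])
tree = DecisionTreeClassifier(max_depth=8).fit(X, y)

# Score every cell of a grid over the city, not just locations already recorded.
lat, lon = np.meshgrid(np.linspace(41.64, 42.02, 200), np.linspace(-87.94, -87.52, 200))
grid = np.column_stack([lat.ravel(), lon.ravel()])
predicted_hotspots = grid[tree.predict(grid) == 1]
print(len(predicted_hotspots), "grid cells predicted as likely incident locations")
```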
Computers do not need visualizations and crime maps to support algo-
rithmic decisions. The abstraction of the feature space allows algorithms to
move beyond ‘chasing’ existing crimes by clustering what has happened into
hotspots and become anticipatory to predict crimes for all places in Chicago.
Figure 2.5 is still easily identifiable as Chicago. Algorithmic decisions need the
geographies of Chicago only in so far as these help produce feature spaces.
Here, hotspots are translated into partitions and cuts, subspaces demarcated
by dividing lines. So-called ‘decision boundaries’ separate some data items in
the feature space and bring others together.⁶3 This does not mean that the ‘de-
cision’ is effaced, but that it is dispersed so that it is difficult to trace it in both
its banality and multi-dimensional abstraction.⁶⁴ As Michel Serres has put it,
geometry gives us the ‘theoretical conditions of resemblance’, where figures
can move without deformation.⁶⁵
Figures 2.6 and 2.7 move away from the human-readable geographical maps
to visualize the geometry of feature spaces directly, even if artificially limited
to two dimensions in order to make them printable. Both figures represent
how a machine-learning algorithm would ‘know’ and ‘see’ the Chicago crime
data. We used two typical machine-learning algorithms, which can partition
the abstract feature space of crime data into two distinct areas and distinguish
crimes from non-crimes. The axes are simplified representations of two types
of spatial data—latitude and longitude—and the data items are simulated, as
otherwise the decision boundaries and data points would become unread-
able. The real Chicago crime locations are more overlapping, which makes
[Figure 2.6: decision boundary partitioning a simulated two-dimensional feature space]
⁶⁶ For this example, we have generated a random dataset of 150 observations distributed over three distinct regions the algorithms had to cut.
[Figure 2.7: decision boundary partitioning the same simulated feature space]
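The kind of partition shown in Figures 2.6 and 2.7 can be sketched on simulated data; the two algorithms chosen here, a linear support vector machine and a decision tree, are our own illustrative picks rather than those behind the figures:

```python
# Sketch of two decision boundaries cutting a simulated two-dimensional feature
# space into 'crime' and 'non-crime' regions. The two algorithms (a linear SVM
# and a decision tree) are illustrative choices, not those behind Figures 2.6-2.7.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# 150 simulated observations spread over three distinct regions, as in the example.
X, region = make_blobs(n_samples=150, centers=3, cluster_std=0.05,
                       center_box=(0.2, 0.9), random_state=0)
y = (region == 0).astype(int)          # one region stands in for 'crime'

xx, yy = np.meshgrid(np.linspace(0, 1.05, 300), np.linspace(0, 1.05, 300))
fig, axes = plt.subplots(1, 2, figsize=(10, 4))
for ax, model in zip(axes, (SVC(kernel="linear"), DecisionTreeClassifier(max_depth=3))):
    model.fit(X, y)
    zz = model.predict(np.column_stack([xx.ravel(), yy.ravel()])).reshape(xx.shape)
    ax.contourf(xx, yy, zz, alpha=0.3)   # the partition produced by the decision boundary
    ax.scatter(X[:, 0], X[:, 1], c=y, s=15)
fig.savefig("decision_boundaries.png")
```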
On 31 March 2017, two journalists, Ahmad Zaidan and Bilal Abdul Kareem,
filed a lawsuit against the Trump administration for having been wrongly put
on the US ‘Kill List’ and targeted by drone strikes. The plaintiffs argued that
they were targeted as a ‘result of arbitrary and capricious agency action’ by the
US government and asked to be allowed to challenge their inclusion on the Kill
List.1 Ahmad Zaidan was already a known figure to the media, as the Snowden
disclosures had shown that he had been singled out algorithmically as a ‘person
of interest’ for the National Security Agency (NSA).
The NSA’s infamously named SKYNET application became the object of
public controversy for identifying innocent people as anomalies and potential
targets for drone attacks. Documents made public by Snowden and the Inter-
cept showed that NSA analysts were interested in finding ‘similar behaviour’
based on an analysis of the Global System for Mobile Communications
(GSM) metadata collected from the surveillance of mobile phone networks
in Pakistan.2 Deemed to work ‘like a typical modern Big Data business
application’,3 SKYNET collected information on persons of interest. It used
travel and mobile phone usage patterns such as ‘excessive SIM and Handset
swapping’ and relied on cloud behaviour analytics employing ‘complex com-
binations of geospatial, geotemporal, pattern-of-life, and travel analytics … to
identify patterns of suspect activity’.⁴ SKYNET also built on behaviour patterns
generated from previous targets’ metadata to then derive both similar and
unusual behaviour. According to the Snowden disclosures, Zaidan had been
identified as a courier for Al Qaeda and was potentially selected as a US target.
Through the algorithmic use of his GSM metadata, Zaidan becomes a suspect
terrorist, being cast simultaneously as a member of the Muslim Brotherhood
and an Al Qaeda courier.
In the case before the US courts, the UK NGO Reprieve, supporting both
Zaidan and Kareem, argued that they were at risk of drone attacks given their
inclusion on the Kill List and challenged their designation as potential terror-
ists. Reprieve explained that the journalists were effectively ‘serving time on a
death row that stretches from America out across the globe—one without bars
or gates or guards, and none of the trappings of a recognizable justice system,
either’.⁵ Zaidan’s case is based on the Snowden documents showing his inclu-
sion in the SKYNET programme, while Kareem brings evidence of five near
misses by drones to account for his targeting.
In their opposition motion, the US government argued that Zaidan’s des-
ignation as ‘potential terrorist’ does not necessarily mean that he would have
been included on the Kill List:
Indeed, even assuming the truth of Plaintiff ’s allegation that the alleged
SKYNET program identifies ‘potential terrorists’ based on ‘electronic pat-
terns of their communications writings, social media postings, and travel,’ id.
at 33, it remains wholly unsupported that ‘potential terrorists’—as Plaintiff al-
leges, he has been judged—are nominated and approved for lethal action… .
In fact, it is well established that the Government undertakes other non-lethal
measures against known or suspected terrorists, including through economic
sanctions or other watchlisting measures, such as aviation screening and No
Fly List determinations.⁶
⁸ For discussions that address the production of difference in drone warfare, see Wilcox, ‘Embodying
Algorithmic War’; Pugliese, ‘Death by Metadata’; Chamayou, A Theory of the Drone.
⁹ Formulated by Richard Ericson and Kevin Haggerty, the ‘data double’ is probably one of the most
used metaphors that renders practices of governing the self with and through data (Haggerty and
Ericson, ‘The Surveillant Assemblage’). Yet, as we have seen in Chapters 1 and 2, there is no direct
connection between the individual and a ‘data double’. Olga Goriunova has also criticized the repre-
sentational implications of the ‘data double’ or ‘digital traces’ and argues that we need to understand
the ‘operation of distance’ between the digital subject and the living person (Goriunova, ‘The Digital
Subject’).
Who is the enemy? This question has been central to philosophical and politi-
cal thought. The figure of hostis humanis generis (‘enemy of all humanity’) has
not only served to justify the use of violence, but it has informed contempo-
rary engagements with the production of the figure of the enemy in the ‘war
on terror’. Distinctions between friend and foe—or what Schmitt has expressed
as the political separation of hostis and inimicus1⁰—have often been analysed
in terms of historical continuity and discontinuity. For cultural studies schol-
ars, the figure of the enemy resurfaces in similar terms, as ‘[t]he enemy of all
humankind is cast as one archetypical pirate figure; the international terror-
ist thus become recognizable as a quasi-pirate’.11 Yet, the figure of the enemy
emerges not just through cultural practices, but through relations of power,
military, and political rationalities, and technological devices, all of which vary
historically.
Critical scholars have explored these multiform figures, while attending to
continuities of race and gender that underpin relations of power and pro-
fessional worlds of practice. Historian Reinhart Koselleck has shown that
modernity brought a ‘radicalization of concepts of the enemy’ through the
language of the inhuman and subhuman. According to him, this language
of representing the enemy would have been inconceivable before.12 More
recently, political philosopher Achille Mbembe has analysed the exacerba-
tion of the figure of the enemy to the extent that he comes to diagnose the
present as a ‘society of enmity’.13 His diagnosis of the present resonates with
international relations theorist Vivienne Jabri’s analysis of the ‘domestica-
tion of heterogeneity’, which draws on the ‘trope of “humanity”’ to legitimize
discourses and practices of governing formerly colonized others.1⁴
For philosopher Byung-Chul Han, the digital entails a complete transfor-
mation of the relations between self and other to the extent that he comes to
speak about the ‘expulsion of the other’ in digital times. Han argues that ‘the
negativity of the Other now gives way to the positivity of the Same’.1⁵ Without
negativity, there are no others and there are no enemies. If the language of the
enemy appears transitory in digital practices, does it mean that practices of
othering are also transitory, subsumed to the continuous positivity of the self?
However, Han’s provocative argument does not account for practices of al-
gorithmic othering. Even if the language of enemies is increasingly eschewed
by professionals of the digital, othering is mutable and multiform. Distribu-
tions of humanity, subhumanity, and infrahumanity continue to be produced
algorithmically.
Rather than privileging digital transformations, international relations
scholars have attended to the transformation of the enemy in the ‘war on
terror’. They have explored how and why the enemy is produced as more
fluid, elusive, and abstract. In analysing how the category of ‘the universal adversary’ was invented in US Homeland Security, Mark Neocleous draws
attention to the motley adjectives of invisible, faceless, elusive, or abstract
mobilized to describe it, thereby leaving the category of the enemy ‘open to
endless modification’.1⁶ Christian Olsson similarly argues that contemporary
wars evince an avoidance and even absence of officially declared enemies.1⁷ A
plethora of substitutes or euphemisms are deployed to avoid the explicit ref-
erence to an enemy in the wars in Iraq and Afghanistan. Olsson’s insights are
particularly relevant for us, as he focuses on military and political discourses,
which have historically been most authoritative in articulating figures of the
enemy. In an analysis of the legal languages of ‘civilian’ and ‘combatant’, Chris-
tiane Wilke points out that the plethora of categories that have come to replace
the combatant—unlawful combatant, illegal enemy aliens, insurgents, irregu-
lar force, militants, or warlords—‘enable the modification and withdrawal of
legal protections that are attached to the standard categories of the laws of
war’.1⁸
While much of the work in the humanities and social sciences has privileged
representations of the enemy in legal texts, political discourses, and mass
culture, science and technology studies (STS) scholarship has attended to
technological enactments of enemy figures.1⁹ For instance, Peter Galison has
shown that cybernetics developed its own vision of the enemy when it was
summoned to stage an encounter with the enemy during World War II.
Galison presents us with several visions of the enemy: the racialized repre-
sentation of the German and Japanese enemy in public discourse, the quasi-
racialized figure of the anonymous enemy of air power, and the non-racially
marked emerging cybernetic enemy in Norbert Wiener’s work.2⁰ The cyber-
netic enemy was a third emergent figure of the enemy as the ‘machinelike
opponent’, where the boundary between the human and nonhuman became
blurred. In Galison’s analysis, racialization is gradually loosened and largely
disappears, once we move from the opposition human–subhuman in pub-
lic discourse to that of individual human–anonymous mass in air wars and
then human–machine in cybernetics. Given Galison’s attention to cyber-
netics, there is less discussion of how different figures of the enemy relate
to each other, and how race might be implicated in technological enact-
ments, even if not explicitly invoked or immediately visible in a machinelike
human.
Figures of enmity emerge in variegated worlds of technoscience as well as in
the world of the professionals of politics and security. We propose the notion
of ‘enemy multiple’ to account for the coexistence, contestation, and coordina-
tion of enactments of enemies across social and political worlds of practice.21
The enemy multiple does not simply refer to a more fluid or evanescent figure
of the enemy. It renders the decomposition and recomposition of figures of
the enemy and the redrawing of racializing lines between self and other with
algorithmic reason. Angela Davis reminds us that ‘it is extremely important
to acknowledge the mutability of race and the alterability of the structures of
racism’.22 We trace how the enemy multiple emerges through the methodolog-
ical orientation to scenes of controversy and enactment, which helps attend
to how transformations of otherness and mutations of racism intersect, are
juxtaposed, align, or clash.
The schema of friend–enemy, self–other appears as ‘vastly complicated by
close analysis of contemporary sites and events of violent confrontation, both
“at home” and “abroad.”’23 The complex and fragile architecture of security
has always been enacted in fraught ways, dispersed (epistemic) practices,
2⁴ Balzacq et al., ‘Security Practices’; Bueger, ‘Making Things Known’; Davidshofer, Jeandesboz, and
Ragazzi, ‘Technology and Security Practices’; Huysmans, Security Unbound; Bigo, ‘Freedom and Speed
in Enlarged Borderzones’; Amicelle, Aradau, and Jeandesboz, ‘Questioning Security Devices’.
2⁵ Bigo, ‘The (In)Securitization Practices of the Three Universes of EU Border Control’.
2⁶ McEntire, Introduction to Homeland Security, 136.
2⁷ Bousquet, The Eye of War, 11.
2⁸ Qaurooni and Ekbia, ‘The “Enhanced” Warrior’, 66.
2⁹ Chamayou, A Theory of the Drone, 65.
Qaeda courier and member of the Muslim Brotherhood. The other as anomaly
unmakes the binaries of friend–enemy, normal–abnormal, identity–difference
by producing relational uncertainty. In the section ‘Knowing the other like an
algorithm’, we show how anomaly detection renders the other detectable and
knowable algorithmically at the intersection of security practices and machine
learning.
Documents disclosed by Snowden show that for the NSA, anomaly detec-
tion names the promise of big data to capture the ‘unknown unknowns’
and departs from digital techniques that concentrate on analysing known
suspects or profiling risky individuals.3⁰ NSA job descriptions for data scien-
tists list anomaly detection among the essential skills required: ‘data mining
tools and/or machine learning tools to search for data identification, char-
acteristics, trends, or anomalies without having apriori knowledge of the
data or its meaning’.31 Similarly, the UK government argues in the Investi-
gatory Powers Bill that access to bulk data allows the intelligence agencies to
search for ‘traces of activity by individuals who may not yet be known to the
agencies … or to identify potential threats and patterns of activity that might
indicate national security concern’.32 The role of anomaly detection to target
the not yet known was afterwards confirmed in a review of the Investigatory
Powers Bill.33
In the wake of an attack at the Soldier Readiness Centre at Fort Hood in
Texas in 2009, the US Defense Advanced Research Projects Agency (DARPA)
issued a call for funding of projects addressing Anomaly Detection at Multiple
Scales. In its call, DARPA identifies a problem of targeting anomalies in vast
amounts of data by taking the relatively ‘small’ case of the Fort Hood military
base:
3⁰ GCHQ, ‘HIMR Data Mining Research’. HIMR stands for the Heilbronn Institute for Mathematical
Research at the University of Bristol, UK.
31 NSA, ‘Data Scientist. Job Description’.
32 UK Home Department, ‘Draft Investigatory Powers Bill’, 20.
33 Anderson, ‘Report of the Bulk Powers Review’.
For example, there are about 65,000 personnel at Fort Hood… . Under a few
simple assumptions, we can show that the data collected for one year would
result in a graph containing roughly 4,680,000,000 links between 14,950,000
nodes. There are currently no established techniques for detecting anomalies
in data sets of this size at acceptable false positive rates.3⁴
DARPA’s initiative made anomaly detection into a key research project for ma-
chine learning and big data. It envisaged anomaly detection to ‘translate to
significant, and often critical, actionable information in a wide variety of appli-
cation domains’.3⁵ Following on from DARPA and similar investments around
the world, computer scientists have declared it ‘a vital task, with numerous
high-impact applications in areas such as security, finance, health care, and
law enforcement’.3⁶ Anomaly detection has also become a key part of machine-
learning books focusing on security applications as a technology that is crucial
for the education of future professionals.3⁷
One of the documents disclosed by Snowden, which maps the current cloud
capabilities developed by the NSA and GCHQ and continuing gaps in their
services, contains a matrix that includes four variations of known–unknown
target and known–unknown query (Figure 3.1). This matrix starts from the
case of known knowns where both the target and the query about the target
are known—e.g. has X been in regular contact with Y? The remainder of the
matrix gradually adds further unknowns, obscuring either query or target or
both until the most challenging case is reached: ‘unknown target, unknown
query’. This fourth case in the matrix resonates with Donald Rumsfeld’s ‘un-
known unknowns’ of the war on terror, but now it is firmly associated with
anomaly detection. Something is going on somewhere, but it is not known
by whom, where, and how. The document points out that GCHQ’s and NSA’s
techniques aim to find exactly these anomalies, which are the holy grail of their
new digital capacities.3⁸
For security professionals and data scientists, one of the greatest promises of
machine learning is that it appears to ‘offer the possibility of finding suspicious
activity by detecting anomalies or outliers’.3⁹ A report by the Heilbronn Insti-
tute for Mathematical Research, disclosed by Snowden and the Intercept a few
⁴⁰ Ibid., 39.
⁴1 NSA, ‘XKeyScore’. Anomaly detection through machine learning has come to supplement or even
replace the work of ‘sensing’ what is ‘out of place’ that citizens were enjoined to do in the global counter-
terrorism efforts. For a discussion of sensing the unexpected, potentially catastrophic event in the
context of what has come to be known as the ‘war on terror’, see Aradau and van Munster, Politics
of Catastrophe, Chapter 6.
contacts, which the Snowden slides dismiss as ‘call chaining techniques from
known [suspects]’.⁵⁵ The technique was one of linking or connecting, proceed-
ing from the known to unknown and thereby producing ‘known unknowns’.
Social network analysis has been used to render risks amenable to interven-
tion by enacting and expanding connectivity.⁵⁶ For GCHQ in the UK, such
networks have been traditionally a vital component of intelligence work. As
they outline,
[c]ontact chaining is the single most common method used for target dis-
covery. Starting from a seed selector …, by looking at the people whom the
seed communicates with, and the people they in turn communicate with …,
the analyst begins a painstaking process of assembling information about a
terrorist cell or network.⁵⁷
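Computationally, contact chaining amounts to a breadth-first expansion of a communication graph outwards from a seed, as a minimal sketch on an entirely made-up graph suggests:

```python
# Minimal sketch of contact chaining as breadth-first expansion from a seed selector.
# The communication graph and identifiers here are entirely made up.
import networkx as nx

calls = [("seed", "A"), ("seed", "B"), ("A", "C"), ("B", "D"), ("D", "E")]
graph = nx.Graph(calls)

# Hop 1: whom the seed communicates with; hop 2: whom they in turn communicate with.
chain = nx.single_source_shortest_path_length(graph, "seed", cutoff=2)
print({contact: hops for contact, hops in chain.items() if hops > 0})
# {'A': 1, 'B': 1, 'C': 2, 'D': 2} -- 'E' lies beyond the two-hop cutoff
```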
in how racism takes hold in masses of data, as anomalies are distinct from both
norms of humanity and population normalities.
Algorithmic nanoracism
Historian of medicine Georges Canguilhem has shed light on the unusual po-
sition of the concept of anomaly in relation to normality, abnormality, and
pathology.⁶⁵ Canguilhem is one of the few scholars to have noted the epistemic
difference of anomaly as a term that cannot be collapsed into the abnormal
or the pathological. He draws attention to an etymological error that has ef-
faced the distinction between anomaly and abnormality in ordinary language.
Unlike the normal and the abnormal, anomaly is not derived either from the
Greek nomos or from the Latin norma. According to Canguilhem, ‘“anomaly”
is, etymologically, an-omalos, that which is uneven, rough, irregular, in the
sense given these words when speaking of a terrain’.⁶⁶ Rather than a norma-
tively inscribed deviation from the normal, anomaly refers to what is simply an
irregular existence. Like a terrain, anomaly is an asperity, leading Canguilhem
to argue that anomaly, unlike normative abnormality, is simply descriptive.
Even though anomalies are also suffused with normative assumptions, Can-
guilhem’s retrieval of the specificity of anomaly in the history of medicine helps
us situate it as a supplementary term, irreducible to abnormality or pathology.
In medicine, an anomaly is not necessarily a sign of disease or abnormal devel-
opment. Moreover, an anomaly is not marked negatively as it can also mean an
improvement of the normal. In an additional comparison, Canguilhem sees
anomaly as ‘an irregularity like the negligible irregularities found in objects
cast in the same mold’.⁶⁷
Canguilhem’s distinction between anomaly and abnormality resonates with
the two objectives of anomaly detection developed historically by statistics and
machine learning.⁶⁸ The first statistical approach tends to identify anomalies
as errors or noise that must be eliminated for a statistical regularity to hold.
As O’Neil has observed, ‘statisticians count on large numbers to balance out
exceptions and anomalies’.⁶⁹ Irregularities in the object that Canguilhem talks
about would be eliminated in such a statistical approach, unless they reached
a point where they became too large. The second approach makes anomalies
the object of analysis through minor differences, which machine learning has
perfected. Here, anomalies do not need to be ‘very much different from other
instances in the sample’.⁷⁰ A ‘minor deviation from the normal pattern’ is suf-
ficient to designate an anomaly.⁷1 Appearing as a minor deviation, anomaly
detection is inflected by the knowledge of security professionals, who assume
that someone trying to hide suspicious behaviour would make it look as ‘real’
as possible. In research supported by the US Air Force Research Laboratory,
experts in anomaly detection advise that ‘if some set of data is represented as
a graph, any nefarious activities should be identifiable by small modifications,
insertions or deletions to the normative patterns within the graph’.⁷2 We are
back to the small detail or ‘almost nothing’ that could be barely noticeable and
that lives on the threshold of the normal.
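The contrast between the two approaches can be sketched on made-up data: a per-feature statistical rule flags only large deviations, whereas a machine-learning detector (here scikit-learn's IsolationForest, our own choice of algorithm) can single out a point that deviates only slightly from the overall pattern:

```python
# Illustrative contrast on made-up data between a statistical outlier rule and a
# machine-learning anomaly detector (IsolationForest, our choice of algorithm).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
x = rng.normal(size=500)
normal = np.column_stack([x, x + rng.normal(scale=0.1, size=500)])  # a regular pattern
subtle = np.array([[0.5, -0.5]])   # unremarkable on each axis, yet off the pattern
data = np.vstack([normal, subtle])

# Statistical rule: per-feature z-scores of the subtle point stay well below 3.
z = np.abs((data[-1] - data.mean(axis=0)) / data.std(axis=0))
print("largest per-feature z-score of the subtle point:", round(float(z.max()), 2))

# Isolation forest: the point sits in an empty region of the feature space and is
# isolated with few splits, so it tends to receive one of the lowest scores.
scores = IsolationForest(random_state=0).fit(data).score_samples(data)
print("points scored as more anomalous than it:", int((scores < scores[-1]).sum()))
```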
As this chapter has shown, the detection of small discrepancies or anomalies
in the structure of data leads to the production of a different figure of otherness.
Although vocabularies of anomaly detection have not received much analytical
attention in the critical literature on big data or algorithmic governmentality,
anomalies have become increasingly problematized in other social and scien-
tific fields. For instance, sociologist Nikolas Rose has suggested that, in the
field of neuroscience, there has been a mutation from the binary of normality
and abnormality to variation as the norm and anomaly without abnormal-
ity.⁷3 For security professionals, anomaly detection names the promise of big
data and algorithms to partition discrepancies from the general patterns and
tendencies in data and addresses the limitations of statistical knowledge and
risk governmentality.⁷⁴
Rather than statistical abnormalities or deviations from the norm, anoma-
lies are supplementary terms that disturb binaries of normal–abnormal,
friend–enemy, self–other. As we have argued, an anomaly is identifiable nei-
ther with an individual nor with a statistically formed category. Anomalies do
not rely on categorizations of high-risk or low-risk groups and do not work
with stabilized social norms. We are far from the radically evil other or the
‘crude image of pathological individuals and groups, involving the trope of
the “barbarian” with whom political engagement is unthinkable’.⁷⁵ We are also
far from the statistical production of abnormal others who are to be governed
argued that ‘he has failed to allege adequately a link between SKYNET and
the Kill List’.⁸⁰ Zaidan made his case of being wrongly targeted based on the
NSA’s data-driven inferences in documents disclosed by Snowden. Kareem
employed frequentist inferences to support his case—having been targeted five
times, at different locations, and having narrowly escaped death. For Zaidan,
the judge explains that ‘[w]hile it is possible that there is a correlation be-
tween a list like SKYNET and the Kill List, the Court finds no allegations in
the Complaint that raise that possibility above mere speculation’.⁸1 While the
judge accepts Kareem’s frequentist and experiential argument as plausible, she
rejects Zaidan’s argument as conjectural.
Even as both journalists have most likely been targeted by employing similar
data collection and algorithmic operations, their targeting can only be the ob-
ject of litigation when it becomes perceptible and knowable within everyday
experience. Algorithmic operations of partitioning data spaces and tracking
anomalies through networks and machine learning remain infra-sensible and
do not rise to the surface of perception. They are also supra-sensible as ratio-
nalities of partitioning are part of wide-ranging human-machine workflows, to
which security professionals add further ambiguity under the guise of secrecy.
The connection between the SKYNET programme and the Kill List remains
speculative, as it is materialized in dispersed and diffuse practices, even more
so than in Kareem’s argument.
The workflows of anomaly detection are entangled with unknown intelli-
gence decisions, the translation of the SKYNET programme into military op-
erational decisions, and questions of secrecy and accountability. Algorithmic
infra-sensible operations are inflected by the list as ‘a preemptive security de-
vice’, which targets individuals for what they might do in the future.⁸2 Such
opacities are reinforced at the intersection of computational and security
practices. Given that the US government has a panoply of actions target-
ing suspect terrorists, Zaidan might or might not have been ‘approved for
lethal action’. The formation of a counter-list of lethal and non-lethal mea-
sures reinforces the speculative nature of security practices, even as it appears
to ‘arrange disparate items into a coherent semantic field’.⁸3 Furthermore,
Kareem’s case is stopped short by the US government’s invocation of state se-
crecy. Indeed, the US government’s invocation of the state secrets privilege is
⁸⁰ Zaidan et al. v Trump et al., ‘Memorandum Opinion’, 11. On 24 September 2019, Rosemary Col-
lyer also dismissed the second case, Bilal Abdul Kareem v Gina Haspel, given the US government’s
invocation of the state secrets privilege. Kareem v Haspel, ‘Memorandum Opinion’.
⁸1 Zaidan et al. v Trump et al., ‘Memorandum Opinion’, 12.
⁸2 Sullivan, The Law of the List, 23.
⁸3 De Goede and Sullivan, ‘The Politics of Security Lists’, 82.
accepted as reasonable, given that its disclosure might endanger national se-
curity. In Kareem’s case, the judge accepts the government’s argument that
‘disclosure of whether an individual is being targeted for lethal action would
permit the individual to alter his behaviour to evade attack or capture and
could risk intelligence sources and methods if an individual learns he is under
surveillance’.⁸⁴ She repeats almost word by word the arguments made by
the then US Acting Secretary of Defense, Patrick M. Shanahan.⁸⁵ This com-
monsense inference does not account for the continuous recomposition of
data to detect anomalies, and it does not consider the conditions of possi-
bility of human action. A suspect terrorist is deemed to hold vast—if not
unlimited—capacities of knowledge and action.
Later, the US Court of Appeals for the District of Columbia Circuit reversed
Collyer’s judgement of plausibility by entangling Kareem’s experience of near
miss targeting into the ambiguities of a diffuse and opaque war. Because Idlib
City and Aleppo were the sites where the strikes occurred, the judge argues
that it is not clear that the US was responsible. There were ‘numerous ac-
tors involved in the Syrian conflict in the specific areas identified in Kareem’s
complaint’, from state actors including Russia, Iran, Turkey, and the US, to
pro-Assad government forces and many different factions. Even Kareem’s al-
legation that one of the strikes was launched by a Hellfire missile, which is
‘employed by numerous U.S. allies’ is not sufficient given the circulation of
drones and impossibility of seeing with certainty what type of drone it was.⁸⁶
Moreover, even assuming that a Hellfire missile was launched by the US, the
appeal judge holds that there is no plausible inference that can be made about
Kareem having been targeted by US drone strikes.
These cases highlight the distinctiveness of anomaly detection through a
scene of controversy at the intersection of algorithmic operations, security,
and legal practices. As metadata, algorithms, drones, and intelligence meth-
ods are summoned upon the legal scene, their materializations eschew both
accountability and responsibility. The anomaly shapes distinctions between
hierarchies of lives that count and lives that do not, while remaining both
infra-sensible and supra-sensible, both beneath and beyond the threshold of
legal and public perceptibility. In his book Politiques de l’inimitié (Politics of
Enmity), Mbembe reflects on a new form of racism—nanoracism—which has
come to supplement what he calls the ‘hydraulic racism’ of the state apparatus.
The institutional macroracism of the state is supplemented by microracisms.
⁸⁷ Mbembe, Politiques de l’inimitié. Chapter 2 has been translated as Mbembe, ‘The Society of
Enmity’. The full English translation was published under the title Necropolitics in 2020.
⁸⁸ Mbembe, Critique of Black Reason, 24.
⁸⁹ Sociologist Linsey McGoey has defined strategic ignorance as ‘any actions which mobilize, manufacture or exploit unknowns in a wider environment to avoid liability for earlier actions’ (McGoey, The Unknowers, 3).
⁹⁰ We refer here to Ruth Wilson Gilmore’s definition of racism as ‘the state-sanctioned and/or extra-
legal production and exploitation of group-differentiated vulnerabilities to premature death’ (Gilmore,
‘Abolition Geography’, 301).
⁹1 Zaidan et al. v Trump et al., ‘Complaint’, 8.
government relies on ambiguities in their lists and the transitions between dif-
ferent lists, so that a connection between the documents disclosed by Snowden
and their practices of targeting becomes untenable. As the distinction between
lives to be protected and dispensable lives is intensified by exposing some
to the possibility of being killed, these distinctions are made possible by the
algorithmic reason of partitioning out anomalies.
In tethering anomaly detection to the practices of targeting and warfare,
its constitutive ambiguity is mobilized in the erasure of responsibility rather
than the contestation of the designation of ‘anomaly’. As a detected anomaly,
Zaidan hovers on the threshold between life and death. Neither social norms
nor statistical normalization can account for his selection as a potential tar-
get and attribution of danger to his banal acts. Anomaly detection promises
to capture the ‘unknown unknowns’, thereby addressing limitations of statis-
tical and probabilistic knowledge with its emphasis on frequencies and the
assumption that certain behaviours are repeated. In materializing partition-
ing through the composition and recomposition of small differences, anomaly
detection enacts new hierarchies and (de)valuations of life. The two journal-
ists’ lives become ‘ungrievable’ so that potential loss neither registers as a loss,
nor can it be contested so that their lives come to matter equally.⁹2 Can this
materialization of othering practices and the nanoracism of insignificant but
potentially deadly irregularities become the objects of friction or resistance?
Before answering this question in Part III of this book, we need to trace two
further materializations of algorithmic reason in the power of platforms and
the economies of value.
⁹ https://werobotics.org/.
1⁰ Hosein and Nyst, ‘Aiding Surveillance’.
11 Human Rights Watch, ‘UN Shared Rohingya Data’.
12 Honig, Public Things, 21.
How have digital platforms become so powerful? How are these different from
traditional infrastructures of modernity? The fast and global platformization
of all digital spaces has given rise to a wide range of theories explaining why
these new infrastructures have become so powerful. Infrastructures promise
stability and continuity. They have come to order relations between state and citizens, and require long-term planning and vast collective efforts.13 In-
frastructures of modernity are often invisible until they fail. This perceived
invisibility might hide how they are entwined with the asymmetries and in-
equities of social and political life globally. As the editors of a volume on
The Promise of Infrastructure point out, ‘infrastructures are critical locations
through which sociality, governance and politics, accumulation and disposses-
sion, and institutions and aspirations are formed, reformed and performed’.1⁴
As infrastructures stabilize and render asymmetries of power invisible, critical
work has focused not only on bringing visibility to infrastructural exclusions
and dispossessions, but also on tracing controversies and frictions afforded by
socio-technical infrastructures. In these approaches, infrastructures are ‘hy-
brids that join and rely on elements too often separated under the (bogus)
headings of “technical” and “social”.’1⁵
Like other infrastructures, platforms are hybrids of different technical and
social elements that become visible in controversies about their power. Com-
pared to the stability of other infrastructures, platforms are often associated
with a sense of deep disruption. The authors of Platform Revolution welcome
this disruption and its challenge to our understanding of modern infrastruc-
tures which otherwise ensure ‘the sense of stability of life in the developed
world, the feeling that things work, and will go on working, without the
need for thought or action on the part of users beyond paying the monthly
bills’.1⁶ Unlike the supposed stability and durability of infrastructures, plat-
forms appear as permanently changing, as they are not evenly distributed but
characterized by a dynamic opposition of a central core and its peripheries.
parts of the country that are otherwise difficult to reach. Once the integration
of WhatsApp into the Facebook marketing world is complete, the platform
will have drawn book readers in diverse Brazilian communities into its social
graph. Through APIs, platforms create connections between everything on the
Web and act as single access points—in Facebook’s case for social connections,
in Google’s case for information, while Amazon started with books and now
does almost everything. This allows platforms to appear as a central point of
control and an enabler of heterogeneity at the same time.3⁷ Then, WhatsApp
can be used to control the distribution of books in a heterogeneous Brazilian
environment.
APIs are small changes that enable the effective integration of outsides into
a larger platform. They were, however, only the first step of platformization.
In the 2000s, a new business model called ‘platform-as-a-service’ (PaaS) was
created, which was tasked not only with the internalization of external web-
sites, but also with the externalization of platform internals. Through PaaS,
platforms offer internal services, allowing everybody access to their largeness
by promising, for example, (limitless) storage options or advanced compu-
tational processing services such as facial recognition and natural language
processing. Having concentrated on connecting peripheries through APIs,
PaaS provided platforms with the means to strengthen their core and re-
main at the centre. PaaS (together with its siblings software-as-a-service (SaaS)
and infrastructure-as-a-service (IaaS)) became the basis for commercial cloud
platforms such as the Google Cloud Platform and Amazon Web Services.
Where APIs shape the outside of platforms, PaaS and clouds form their
inside. Therefore, we need to understand platforms as blurring inside–outside
boundaries through the dual move of taking the inside out and bringing the
outside in.
If the programmable Web and APIs were enablers of platformization, the
principles of PaaS have transformed the power of platforms. Google’s PaaS,
for example, has made it indispensable for mobile and web applications. In
2008, Google launched its App Engine, which kickstarted a whole new indus-
try of providing advanced computational resources to everyone.3⁸ Microsoft
has used its Azure platform to reinvent itself and end its dependency on
Windows-based desktop applications. As a PaaS provider, Microsoft initially won the $10 billion Joint Enterprise Defense Infrastructure (JEDI) contract,
the largest contract ever to provide cloud services for the US Department of
Defense.3⁹ In 2021, the contract with Microsoft was cancelled with the expecta-
tion of creating a new programme open to several cloud platforms. The most
successful PaaS example, however, is Amazon, whose Web Services division
has been its greatest driver of growth,⁴⁰ making it the largest Internet company
at the time of writing.
Although late arrivals to the digital world, humanitarian organizations have
become active users (and creators) of PaaS and clouds. Google Crisis Response
became famous for demonstrating the use of Google services during the 2010
Haiti earthquake, also inspiring Meier and his seminal book. It has since
customized the Google platform to provide crisis tracking and identification
services such as Google Person Finder, which helps identify missing persons,
or the Google Maps Engine for real-time disaster information to the public.⁴1
Google Maps has become a vital PaaS solution for many humanitarian appli-
cations. GeoCloud, which was set up as an integrated digital humanitarian
solution, used Google as its ‘geospatial data backbone’.⁴2
To kickstart such work, Google and several other tech companies like Ama-
zon have funding programmes to provide cloud ‘credits’ and ‘consultancies’
for humanitarian and crisis response purposes.⁴3 In a typical example for
such a collaboration, the Humanitarian OpenStreetMap Team used ‘portable
Amazon Web Services (AWS) servers’ to identify target areas for surveil-
lance and mapping drones.⁴⁴ Based on the success of the commercial plat-
forms, humanitarian organizations have begun to replicate PaaS businesses.
DroneAI is a project by the European Space Agency for humanitarian and
emergency situations. They developed a PaaS ‘covering all the requirements
in term of application hosting, deployment, security and scaling’,⁴⁵ which
exploits neural networks to analyse drone data for on-time disaster as-
sessment. Meier also cofounded the Digital Humanitarian Network, which
3⁹ BBC, ‘Microsoft Pips Amazon for $10bn AI “Jedi” Contract’. The contract was widely expected to
be awarded to Amazon, after a controversy that saw Oracle challenge an earlier award in an adminis-
trative court. Services, ‘Amazon Web Services, Inc.’s Response’. Google had decided to withdraw from
the bid, following the internal controversy concerning its participation in Project Maven, which we
discuss in Chapter 6. In 2021, the Pentagon cancelled the entire $10 billion contract given continued
legal challenges (Conger and Sanger, ‘Pentagon Cancels a Disputed $10 Billion Technology Contract’).
⁴⁰ Amazon.com, ‘News Release’.
⁴1 Google, ‘Helping People’.
⁴2 PRNewswire, ‘NJVC Platform as a Service’.
⁴3 Fuller and Dean, ‘The Google AI Impact Challenge’; AWS, ‘AWS Disaster Response’.
⁴⁴ Fitzsimmons, ‘Fast, Powerful, and Practical’.
⁴⁵ European Space Agency, ‘DroneAI—DroneAI Solution for Humanitarian and Emergency Situa-
tions’.
Pew Research Center found that 24% of all workers had to use these platforms
to make a living.⁵1 Another earlier report by the International Labour Organi-
zation about platform microwork, which was based on interviews with 3,500
workers living in seventy-five countries around the world and working on
five major globally operating microtask platforms, found that average earn-
ings were $3.31/hour.⁵2 Stories persist of tasks on Amazon Mechanical Turk
that pay $1/hour but last up to two to three hours. The New York Times reported
that the Cambridge Analytica scandal from Chapter 1 also began on the Me-
chanical Turk site, where users were offered $1 or $2 to install a Facebook app and complete a survey that collected their profile information.⁵3
This brief history of platformization shows how digital platforms have
reshaped economic transactions, social interactions, work, and even human-
itarian action. As the analysis of digital platforms has focused on the big
international companies, digital humanitarianism has been largely neglected
in these discussions. How have humanitarian practices been reshaped through
platformization? As we have seen, platforms have become essential infras-
tructures of digital life, even though they ‘were not infrastructural at launch,
[and] rather gained infrastructural properties over time by accumulating
external dependencies through computational and organisational platform
integrations.’⁵⁴
While there are different types of platforms, their emergence can be traced
to the programmable Web and then centralized PaaS/clouds as new engines
of platform growth. In the section ‘Platform power: Decomposing and recom-
posing’, we show how the power of platforms can be understood through the
ways in which they have transcended the inside–outside boundaries by offer-
ing their own components to be decentrally embedded across the Web. This
has always been part of their architectures but it has significantly increased in
recent years, leading to a new microphysics of platform power. This new mi-
crophysics allows Google to collect more and better data from all its Google
Map users and to make its services indispensable for disaster management,
while Amazon knows which kinds of machine-learning algorithms are de-
ployed to process drone images for humanitarian work. These practices of
offering platform components appear mundane and innocuous even if their
effects are debilitating and produce relations of dependency.
Not too long ago, only technical experts working on PaaS or programmable
Web applications knew of platforms, defined then as ‘the extensible code-
base of a software-based system that provides core functionality shared by the
modules that interoperate with it and the interfaces through which they inter-
operate’.⁵⁵ Platforms in this sense served specific developer needs of modular-
ization and reuse. The benefits of such an extensible codebase, which provides
modules for code to be assembled into larger applications, are based on shar-
ing existing solutions across organizations and—if needed—with the outside
world. Since those early days when platforms were discussed mainly within
technical communities, they have become the subject of many controversies,
both public and academic.
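To make this technical definition more tangible, the following minimal sketch in Python (our own illustration; the module names and registry design are invented rather than drawn from any particular platform) shows a small core that exposes shared functionality and an interface through which independently written modules interoperate with it.

```python
from typing import Protocol


class Module(Protocol):
    """Interface through which modules interoperate with the core."""

    name: str

    def handle(self, request: dict) -> dict:
        ...


class PlatformCore:
    """Extensible codebase: shared functionality plus registered modules."""

    def __init__(self) -> None:
        self._modules: dict[str, Module] = {}

    def register(self, module: Module) -> None:
        # Third parties extend the platform without modifying the core.
        self._modules[module.name] = module

    def dispatch(self, name: str, request: dict) -> dict:
        return self._modules[name].handle(request)


class PersonFinderModule:
    """A hypothetical module that plugs into the core's interface."""

    name = "person_finder"

    def handle(self, request: dict) -> dict:
        return {"status": "searching", "query": request.get("person")}


core = PlatformCore()
core.register(PersonFinderModule())
print(core.dispatch("person_finder", {"person": "A. Example"}))
```

The core knows nothing about specific modules in advance; this is the modularization and reuse that early technical definitions of platforms had in mind.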
Most of the critical research on platforms has highlighted their monopoly or
oligopoly character and their concentration of power. Media theorists José van
Dijck, Thomas Poell, and Martijn de Waal describe how platform monop-
olies are driven by processes of datafication and commodification.⁵⁶ Alphabet
(Google’s parent company) can be taken as indicative of how platforms operate
and concentrate value and power.
Tarleton Gillespie similarly puts the movement of control at the centre of plat-
form interests.⁵⁸ For him, platforms permanently work on making themselves
economically valuable by moderating and curating the content they organize,
while at the same time publicly claiming that they do not influence what their
users do and are ‘just’ platforms to surface their activities. Across disciplines,
researchers concur that platforms are not neutral and actively shape social-
ity, as they ‘extend analyses of concrete configurations of power and identify
control points, structural dynamics and crucial resources’.⁵⁹
As the theorist of Platform Capitalism Nick Srnicek has argued, the logic
of concentration of power and monopoly is built into platforms as ‘the more
numerous the users who interact on a platform, the more valuable the en-
tire platform becomes for each one of them’.⁶⁰ Through such network effects,
achieved by means of increased numbers of participants, global platforms
have emerged as a ‘new business model, capable of extracting and control-
ling immense amounts of data, and with this shift we have seen the rise of
large monopolistic firms.’⁶1 According to this view, all platforms tend to be-
come monopolies, as they integrate third parties, replace the computational
capacities of all Internet actors with their own, and accumulate large amounts
of data. However, platforms also do not fit the traditional understanding
of monopolies, as they work through small and distributed forms such
as ‘complementors’ and APIs. This makes it difficult to track platform power,
as one needs to trace microrelations. Their expansion is as much asymmetric
as it is centralized. We can say that platforms have reinvented a microphysics of
power. Following Foucault’s methodological advice, we need to ‘decipher in it
a network of relations, constantly in tension, in activity, rather than a privilege
that one might possess’.⁶2
Attention to this microphysics should not ignore that power is ‘the over-
all effect of strategic positions’.⁶3 Google and Facebook, but also their Chinese
counterparts Alibaba or Tencent have taken up strategic positions in all digi-
tal ecosystems. They are reminiscent of the large-scale railway monopolists of
the nineteenth century or the US Steel Corporation controlling the essential
building material of the Industrial Age. The big Internet companies provide
the services to make all things digital and extract data and information as the
essential building materials of the Digital Age. Thus, they display the extrac-
tive and colonial characteristics of companies such as the East India Company.
However, the microphysics of platform power also entails practices of breaking
up, decomposing, and recomposing existing digital components rather than
simply extracting and expanding, as past monopolies had often done. These
practices were already present in the history of platforms and realized as APIs
and PaaS but have further accelerated.
While most political and economic commentators on digital platforms are
concerned with the power to centralize and even become monopolistic, we ar-
gue that platform power emerges through the dual move of decomposing into
small components and recomposing these across the Web. Amazon became a
platform by breaking up its book-selling application into smaller and smaller
parts that can be recombined to add value in new environments, offered first to
the outside world as a PaaS and through APIs. The Amazon platform embraced
heterogeneity once Amazon found out that it could sell its cloud comput-
ing platform independently from its book-selling activities. This made it the
biggest Internet company in the world. Value is hidden in the many parts of
the platform, and the concealed history of a platform is that of broken-down
and decomposed applications.⁶⁴
The latest step in platformization emerges through a shift from a ‘monolith’
application to the holder of a well-defined set of functionalities that can be
reused, which are called ‘services’ in the computing world. Such services are
modular and can be composed and recomposed indefinitely. There is not a sin-
gle Facebook (or Google) anymore since they are not single Web applications
and stacks to bring together Harvard students (or Stanford searchers). There
are assemblages of services that together make up digital platforms. The biggest
platforms provide their users with almost global reach through their instant as-
semblage of underlying services that appear to the various users as one. As the
services of the platform have become integrated into online applications, the
Internet user is permanently connected to platforms—often without realizing
it. Through their computing services, platforms provide a ‘rhizomatic’ form of
integration.⁶⁵
The mobile ecosystem offers a clear picture of the new platform
microphysics.⁶⁶ Based on an analysis of almost 7,000 Android apps for their
permissions and embedded services, one of us conducted research together
with social AI scholar Jennifer Pybus on the technical integration of services
within apps. We investigated the repeated co-occurrences of services within
the same apps. The largest platforms dominate the mobile ecosystem because
they provide key services for everybody else. They offer monetization services
that others depend on to make money from apps. As we use our mobile phones,
we are permanently connected to some parts of the Google and Facebook
platforms as decomposed into services. For Facebook, dominance through
mobile services has become its central concern because, as of the third-quarter
of 2019, 90% of its advertising revenue came from the mobile ecosystem.⁶⁷
The section ‘Platform humanitarianism’ will analyse in more detail what this
mobile expansion means for humanitarian action and organization.
Microservices combine the business dimension with the technical one at the large scale of
permanent change and modularity. Whereas application domains like digi-
tal humanitarianism are beginning to follow microservices, the Airbnb platform
already deploys 3,500 microservices per week, with a total of 75,000 produc-
tion deploys per year. Airbnb now has what it calls ‘democratic deploys’, which
means that ‘all developers are expected [to] own their own [microservice]
features from implementation to production.’⁷2
Although microservices seem like a new concept, they continue many of
the existing mundane platform practices we have already encountered. APIs
and PaaS come into their own within the concept of microservices. Microser-
vices are the building blocks by which a PaaS is exposed to the outside world
while maintaining control at the core. APIs allow them to connect with each
other and collaborate.⁷3 This flexibility has aided the rapid spread of microservices across platforms. Airbnb, Amazon, eBay, and others have all adopted microservice-based architectures, and Netflix scales to billions of requests
for content every day. As Netflix explains on the company’s technology blog,
their ‘API is the front door to the Netflix ecosystem of microservices…. So,
at its core, the Netflix API is an orchestration service that exposes coarse
grained APIs by composing fine-grained functionality provided by the mi-
croservices.’⁷⁴
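To give a sense of what such orchestration looks like in code, the following minimal sketch (our own, in Python; the service names, data, and URL are invented and do not reproduce Netflix's or any other company's actual API) shows a coarse-grained endpoint composing responses from fine-grained services.

```python
# Minimal sketch of API orchestration over fine-grained services.
# The service functions below are stand-ins, not any platform's real API.

def catalogue_service(user_id: str) -> list[str]:
    """Fine-grained service: titles available to a user."""
    return ["Title A", "Title B", "Title C"]


def recommendation_service(user_id: str) -> list[str]:
    """Fine-grained service: personalised ranking of titles."""
    return ["Title C", "Title A"]


def artwork_service(title: str) -> str:
    """Fine-grained service: an image URL for a title (invented domain)."""
    return f"https://img.example.org/{title.replace(' ', '_')}.jpg"


def homepage_api(user_id: str) -> dict:
    """Coarse-grained 'front door': composes the fine-grained services."""
    available = set(catalogue_service(user_id))
    ranked = [t for t in recommendation_service(user_id) if t in available]
    return {
        "user": user_id,
        "rows": [{"title": t, "artwork": artwork_service(t)} for t in ranked],
    }


print(homepage_api("user-42"))
```

The orchestration layer hides the dispersion of services from the user, who only ever sees one seemingly unified response.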
The scale and speed of platforms built around microservices make full hu-
man control impossible. At any moment in time, Airbnb’s algorithms enable
and make decisions about which microservices are deployed and constitute
the Airbnb platform. The platform becomes a socio-technical self-composing
system run by machines for machines, which also organizes the required in-
put from human developers and microworkers. Platforms are so dispersed
and heterogeneous that only machine learning can bring them together again.
Machines learn to complete the tasks necessary to keep platform insides and
outsides together. They have become part of the workflows of digital produc-
tion as permanent interpretation engines of our likes and dislikes or which
disaster to react to next.
Thus, digital platforms have evolved into infrastructures that allow for the
permanent recomposition of their component services and devices that they
themselves permanently compose and decompose. The big platforms have
already perfected this movement and smaller ones like humanitarian plat-
forms have begun to follow. Operating through microservices, platform power
is both more insidious and more dispersed than the power of monopolies.
Platform monopolies are much harder to identify than Big Oil or US Steel
were. Of course, we can still see the Facebook application shaping social re-
lations or Google’s worldwide network of ‘campuses’ that provide services and
community places to start-ups globally. However, digital platforms have be-
come so much more than these visibilities. Platform power is as deep as it
is wide. Through the compositions of the small and large forms of services,
platformization shapes practices that at first sight appear removed from the so-
cial media and advertising platforms attracting much public attention. In the
section ‘Platform humanitarianism’, we unpack the effects of platformization
through the production of mobile apps for digital humanitarianism.
Platform humanitarianism
In this section, we analyse how mobile apps for digital humanitarianism have become part of digital platforms and how platforms have become the core of digital humanitarianism and everything else digital.
Apps are particularly interesting for understanding how platform power
shapes more and more areas of social and political life. They are generally accessible only through one or more of the major platforms—like the Apple App Store or the Google Play Store—and have quickly become the subject of public
suspicion of platform control. They have been shown to ‘spy’ on Internet users
and track their behaviour for the big Internet companies. Computer scien-
tists have investigated the scale of the penetration of apps by the big Internet
companies and other parties interested in tracking users’ behaviour.⁸⁰ In ex-
ploring almost one million apps from the Google Play Store, they found that
most of these apps contain some kind of tracking through services by out-
side providers, with News and Games apps being the worst offenders. The
biggest Internet companies also provide the largest and most widely used
tracker services. Many of the trackers work transnationally and many are
based outside European jurisdictions. Other investigations into API systems
and microservices for mobiles have found similar results globally. Liu et al.,
for instance, have analysed how analytics services track users’ in-app be-
haviour and leak data to outside actors, mainly in the Chinese mobile ecosys-
tem.⁸1 This creates a strong capacity for the analytics companies to profile
users.
These contributions reveal the extent to which digital platforms track everything and how they achieve this by offering their platforms as services. Critical re-
search on platform monopolies has traced the digital materiality of third-party
actors and the accumulation of data for the purpose of value extraction.⁸2
In attending to the materiality of platforms, this critique brings an impor-
tant perspective to platform power. Investigating mobile ad networks, Meng
et al. call data leakage to the big Internet providers the ‘price of free’.⁸3
This has developed to such an extent that we cannot speak of ‘data leakage’
anymore, as data circulations have become an unexceptional, mundane prac-
tice of how platforms work.⁸⁴ Thus, the problem of ‘humanitarian metadata’
generated by humanitarian actors through the increasing use of digital tech-
nologies, digital interactions, and digital transactions with tech companies
cannot be addressed through the lens of privacy and data protection alone.⁸⁵
⁸⁶ We draw in this section on our work on apps for refugees, as described in Aradau, Blanke, and
Greenway, ‘Acts of Digital Parasitism’.
⁸⁷ UNHCR, ‘Connecting Refugees’, 8.
⁸⁸ At the time of finishing the book, even the website collecting apps for refugees has stopped work-
ing: http://appsforrefugees.com/. We will reflect on this systematic obsolescence of apps and other
digital technologies developed for humanitarian action later.
⁸⁹ In Aradau, Blanke, and Greenway, ‘Acts of Digital Parasitism’, we discussed the methods we
developed through hacking as ‘acts of digital parasitism’ to analyse apps and their APIs.
Links between services mean that they co-occur within the same apps. The thicker the link, the more frequent the co-occurrence. The size and shade of the nodes in the network correspond to the number of links.
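For readers who want to see the kind of computation behind such a network, the following minimal sketch (assuming a toy mapping of apps to embedded services rather than the dataset analysed here) builds a service co-occurrence graph with the networkx library, where edge weights count co-occurrences within the same app and a node's degree corresponds to its number of links.

```python
# Minimal sketch of a service co-occurrence network, using an invented
# app-to-services mapping rather than the dataset analysed in the chapter.
from itertools import combinations

import networkx as nx

apps = {
    "refugee_info_app": ["com.google.firebase", "com.facebook.ads", "okhttp3"],
    "translation_app": ["com.google.firebase", "okhttp3", "fasterxml"],
    "shelter_map_app": ["com.google.firebase", "com.facebook.ads"],
}

graph = nx.Graph()
for services in apps.values():
    # Every pair of services embedded in the same app co-occurs once.
    for a, b in combinations(sorted(set(services)), 2):
        if graph.has_edge(a, b):
            graph[a][b]["weight"] += 1
        else:
            graph.add_edge(a, b, weight=1)

# Edge weight = co-occurrence frequency; node degree = number of links.
for a, b, data in graph.edges(data=True):
    print(f"{a} -- {b}: co-occurs in {data['weight']} app(s)")
for node in graph.nodes:
    print(f"{node}: {graph.degree(node)} links")
```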
Overall, at this microlevel of code, the big platforms clearly dominate.
Google and Facebook are at the centre of the network and define the digital
humanitarian ecosystem as we know them to define the whole mobile ecosys-
tem. They provide the essential service building blocks even for humanitarian
apps that do not aim to monetize their users. They have succeeded in be-
coming the technological foundations of the (mobile) Internet itself. Users are
immediately connected to them once they open an app.
While Google and Facebook clearly dominate the exchanges that underpin
the production of refugee apps, there are many other interesting connections
from and into digital humanitarianism on the microlevel of code. There are, for
instance, service links that are used in the day-to-day development of apps and
indicate that our investigation concentrated on the deeper technical
building blocks of apps. Apache.org relates to open-source tools provided by
Apache Software Foundation projects and often used in software production.
Services such as fasterxml and okhttp3 respond to specific common challenges
⁹⁰ Petzel, AirMapView.
⁹1 Cebula, ‘Airbnb, from Monolith to Microservices’.
built to facilitate the work done by human rights advocates, journalists, elec-
tion monitors and those responding to disaster and crisis’.⁹⁴ The diagnosis of
centralization and monopolization does not account for how platforms break
up and disperse their components across the Web. Digital platforms material-
ize algorithmic reason in that they transcend binaries of small and large forms
by splintering large forms and recomposing small forms. Platform power as
indefinite decomposition and recomposition does not only conquer new ar-
eas of practice, expand to different spaces, and reconfigure tech users. It
also creates new forms of dependency and debilitation for many actors such
as humanitarian organizations and refugees themselves.
In 2021, the digital rights organization Access Now wrote a letter to the Spo-
tify CEO raising questions about a Spotify patent on speech recognition.1 The
Access Now letter was followed by another letter written by a group of almost
200 ‘concerned musicians’ and human rights organizations.2 Both letters chal-
lenged the claims in the speech recognition patent that Spotify could detect
‘emotional state, gender, age or accent’ to improve its music recommendations.
Both letters highlighted privacy concerns, data security, potential discrimina-
tion against trans and non-binary groups, and the possibility of manipulation.
Access Now pointed out that Spotify already had ‘troves of data on the people
that use its service, down to the specific neighbourhoods where they live’.3 Fur-
ther intrusive surveillance could not be justified. The letter by the concerned
musicians also drew attention to the effects of AI and big data on the mu-
sic industry. They argued that ‘[u]sing artificial intelligence and surveillance
to recommend music will only serve to exacerbate disparities in the music
industry’.⁴ The musicians based their argument on the strong normative po-
sition that ‘[m]usic should be made for human connection, not to please a
profit-maximizing algorithm’. In their response letter, Spotify reiterated their
commitment to ethics, privacy, and responsible innovation and argued that
the technology in the respective patent had not been implemented at the com-
pany.⁵ While Spotify claimed that they did not have plans to implement this
technology, the question remained why Spotify was patenting it if there were
no plans to use it. In this chapter, we suggest that one answer to this question
lies in the production of economic value.
Value has indeed emerged as one of the key dimensions of big data and
its algorithmic operations. Expressing a widely held opinion, the Economist
muses that ‘[t]he world’s most valuable resource is no longer oil, but data’.⁶
digital surveillance. Finally, the Amazon model has been widely discussed for
its capacity to move beyond the traditional model of the market and extract
value from the ‘free labour’ of users.
Between precarization, extraction, surveillance, and monopolization, how
are we to understand the production of value in digital economies? In this
chapter, we focus on the production of economic value and its political con-
sequences and therefore speak of processes of valorization. As we have seen,
governing rationalities foster relations between people and things, and hier-
archies of subjectivity. Processes of valorization add another dimension to the
government of self and other, of individuals and populations. We argue that
algorithmic reason is newly materialized within a specific form of economic
value, which relies on combining small datasets to produce new situations of
commodity consumption.
To develop this argument, we expand the public scene around Spotify’s
patent with other patent applications and granted patents to the big tech com-
panies, so that we can trace how valorization is imagined in the daily business
of digital companies. Patents also trace problems and limitations, which they
aim to address through innovation. The Spotify patent from the beginning of
this introduction and the controversies attached to it exemplify how we can
productively use patents. Independent of whether a specific patent has been
implemented, patents offer a site of inquiry into value. Indeed, they should not
be understood to be direct translations of how a company produces, as too lit-
tle is known about the status of the patented products within the company.12
Nevertheless, they help us recognize which actors are involved, their interests
in valorization, and how the problem of value is formulated and addressed.
Patents are particularly useful for shedding light on value when juxtaposed to
other company documents and legal or news items about their practices of
valorization. Our reading of the patents investigates how value is not only ma-
terialized through the extraction of personal data and global exploitation of
labour, or just through surveillance or network effects.
The chapter starts with an analysis of value as developed by scholars who fo-
cus on the continuities between digital and industrial capitalism. In a second
section, we turn to authors who have diagnosed a new stage in the develop-
ment of capitalism and new forms of digital value. Here, we concentrate on
the controversies that surrounded Shoshana Zuboff ’s idea of behavioural sur-
plus value in surveillance capitalism and Nick Srnicek’s network value through
platform domination. We have selected these authors from a vast range of
literature on digital economy as their work has been mobilized in public de-
bates about digital value. In a final section, we show how analyses of value in
digital capitalism need to be supplemented by a new form of value accumula-
tion from small data contexts. Patents help us explain how companies attempt
to overcome the limitations of human subjectivity that hinder valorization by focus-
ing on ever smaller details of human experience. The various forms of value
production underpin different practices and imaginaries of politics, to which
we return in the chapter’s conclusion.
Users who write book reviews on Amazon also do not rent out their assets for capitalization.
Attempts to define social media ‘prosumers’ as ‘productive workers’ seem mainly to serve the interest of organizing them in labour movements to overcome social media capital, in line with Marxist political ideas of a struggle between labour and capital.
The capitalization of unpaid time and a range of other new values would
have been unknown to Marx, as he concentrates on industrial production with
well-defined factory floors. Yet, feminist scholars have already drawn atten-
tion to the invisibilization of reproductive and other forms of feminized and
racialized labour in Marx that happens outside factory floors. In the 1970s,
feminist Marxists argued that reproductive labour was indispensable to cap-
italism. According to one of its key voices, Silvia Federici, feminist theorists
discovered that ‘unpaid labour is not extracted by the capitalist class only from
the waged workday, but that it is also extracted from the workday of mil-
lions of unwaged house-workers as well as many other unpaid and un-free
labourers’.23 Black feminists highlighted the invisibilization of black women’s
labour within transatlantic chattel slavery, ‘in which women labored but also
bore children who were legally defined as property and were circulated as
commodities’.2⁴ Today, feminist scholars attend to the ‘heterogeneity of living
labor’ and differentials of exploitation in order to shed light on what Verónica
Gago has called the ‘very elasticity of the accumulation process’.2⁵ Not only
is valorization not limited to labour officially guided by capital, but it is also
made possible through the constitution of gendered and racialized hierarchies
of labour. These hierarchies cut across geopolitical borders, as we will also see
in Chapter 8 on the International.
This perspective that attends to the heterogeneity of capitalism and valoriza-
tion has inspired political theorists Mezzadra and Neilson to expand the idea of
exploitation in capitalism and focus on ‘extraction’ to understand how capital
extricates value from its ‘outsides’, whether understood in spatial terms or as
non-capitalist ‘outsides’ of social activity. According to them, extraction names
‘the forms and practices of valorization and exploitation that materialize when
the operations of capital encounter patterns of human cooperation and so-
ciality external to them’.2⁶ Mezzadra and Neilson do not argue that extraction
is the exclusive logic of capitalism, but that extractive operations intersect
2⁷ As such, the operations differ from Nick Couldry and Ulises Mejias’s analysis of data colonial-
ism, which poses appropriation and extraction as the homogenizing logic of a new mode of capitalism
(Couldry and Mejias, The Costs of Connection). Similarly, Kate Crawford argues that ‘practices of data
accumulation over many years have contributed to a powerful extractive logic, a logic that is now a
core feature of how the AI field works’ (Crawford, The Atlas of AI, 121).
2⁸ Crawford, The Atlas of AI, Chapter 2.
2⁹ Boullier, Sociologie du numérique, 211 (translation ours).
3⁰ Reese, ‘Data Labeling’.
that can power China’s A.I. ambitions.’31 According to Mark Graham and Mo-
hammad Amir Anwar, AI has led to a ‘planetary labour market’ of exploitation
and extraction.32
How economic value is produced remains a central political question and
even more so if all our time produces value. As we have seen, for many Marx-
ist scholars, the political question remains that of worker organization and
intensifying the struggle between labour and capital. Feminist and postcolo-
nial scholars have expanded these questions to the differentials of labour that
subtend both digital and non-digital extraction. They have renewed questions
about capitalism’s ‘outsides’ and the conflict over capitalist expansion through
extractive logics. In the section ‘New capitalism? Surveillance and networks’,
we discuss two understandings of value in digital capitalism, which build upon
these new forms of exploitation and extraction, and which speak to wider con-
troversies about what is new in digital capitalism and related political struggles.
In her book The Age of Surveillance Capitalism, shortlisted for the Financial
Times 2019 Business Book of the Year, Shoshana Zuboff has coined ‘behavioral
surplus’ to render the new value that emerges through the companies’ attempt
to control and ultimately predict our behaviour based on how we spend our
time online, where life is rendered as data.33 Generating value from all our
time has been made possible by new forms of digital surveillance, leading to
surveillance capitalism, which ‘claims human experience as free raw material
for translation into behavioural data’.3⁴ The extracted ‘behavioural surplus’ is
transformed into prediction products traded on ‘behavioral futures markets’.3⁵
Zuboff is not the first scholar to point out the new centre stage of behavioural
data for valorization through extensive surveillance by digital platforms, but
she offers a comprehensive theorization of ‘surveillance capitalism’, where she
goes beyond existing theories of the ‘quantified self ’ with a deeper and more
detailed understanding of value and digital surveillance.3⁶ Our social and cul-
tural world is transformed through the unprecedented growth in the data
generated about ourselves at all times. In this process, we are made into big
data through the billions of pieces of content shared daily on Facebook, the
millions of daily tweets, etc.3⁷ According to Zuboff, the self is not just quan-
tified in becoming behavioural data but radically transformed. Surveillance
capitalism is ultimately about behavioural change. Zuboff identifies Google’s
Hal Varian as the new Adam Smith. For Varian, queries into Google’s search
engine describe how users feel and act right now: what they are interested in,
which disease they are worried about, or which house they want to buy.3⁸
Throughout her book, Zuboff remains preoccupied with Google as the ‘mas-
ter’ of secondary data exploitation, which is data removed from its primary use,
such as entering a search query, and employed to predict secondary future
behaviour. Google’s patents seem to support Zuboff ’s version of surveillance
capitalism. It has numerous patents to exploit secondary data such as, for in-
stance, a system to predict which Web content users might be interested in
after they have searched for and visited several websites. The patent defines
‘navigation events’, which ‘may be predicted by various indicators, including
but not limited to a user’s navigation history, aggregate navigation history, text
entry within a data entry field, or a mouse cursor position.’3⁹
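The general logic of such a prediction can be illustrated with a deliberately simple sketch (ours, not the patented system): a frequency model over a hypothetical navigation history that guesses the most likely next ‘navigation event’.

```python
# Minimal sketch of predicting a 'navigation event' from navigation history.
# An illustrative frequency model of our own, not the method in the patent.
from collections import Counter, defaultdict

history = [
    "news.example.org", "weather.example.org",
    "news.example.org", "shop.example.org",
    "news.example.org", "weather.example.org",
]

# Count which page tends to follow which page in the user's history.
transitions = defaultdict(Counter)
for current, following in zip(history, history[1:]):
    transitions[current][following] += 1


def predict_next(current_page):
    """Predict the most likely next navigation event, if any history exists."""
    counts = transitions.get(current_page)
    return counts.most_common(1)[0][0] if counts else None


print(predict_next("news.example.org"))  # 'weather.example.org'
```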
Google was not the only company to generate surplus from its users’ online
behaviour in the 2000s. However, compared to Yahoo!’s clickstream analysis,
its data was better ‘raw’ material. Anonymous search queries became a gold
mine for marketers. All they had to do was to link a product to an information
need in the query by means of Google’s many ad services. Google is a master
of what we called in Chapter 1 ‘truth-doing’. Since Google paved the way for
exploiting behavioural futures, others have followed. Facebook owns numer-
ous patents to predict user behaviour and extend its social graph. For example,
the company has registered a US patent that focuses on analysing textual in-
formation to track and enable links between users and predict character traits.
‘Based on the linguistic data and the character’, the patent proposes, ‘the so-
cial networking system predicts one or more personality characteristics of the
user’ so that ‘inferred personality characteristics are stored in a user profile’.⁴⁰
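A toy sketch can illustrate the kind of inference the patent describes, without reproducing it: invented word lists stand in for ‘linguistic data’, and the inferred characteristic is written into a user profile structure.

```python
# Illustrative sketch only: inferring a 'personality characteristic' from
# linguistic data with invented word lists, not the patented system.
EXTRAVERSION_CUES = {"party", "friends", "amazing", "excited", "fun"}
INTROVERSION_CUES = {"quiet", "alone", "book", "home", "calm"}


def infer_trait(posts: list[str]) -> dict:
    tokens = [w.strip(".,!?").lower() for post in posts for w in post.split()]
    extra = sum(t in EXTRAVERSION_CUES for t in tokens)
    intro = sum(t in INTROVERSION_CUES for t in tokens)
    label = "extraverted" if extra >= intro else "introverted"
    # The inferred characteristic would then be stored in a user profile.
    return {"trait": label, "signal": {"extra": extra, "intro": intro}}


print(infer_trait(["Amazing party with friends!", "So excited for the weekend"]))
```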
Yet, there is more to Google’s behavioural data valorization than just the
primary user interactions during online searching. In a critical reading of
surveillance capitalism, Internet critic Evgeny Morozov has shown that Zuboff
leaves aside other practices of digital value production that are crucial to
3⁷ For an overview of up-to-date Internet usage data, compare Statista, ‘Business Data Platform’.
3⁸ Choi and Varian, ‘Predicting the Present with Google Trends’.
3⁹ Hamon, Burkard, and Jain, ‘Predicting User Navigation Events’, 2.
⁴⁰ Nowak and Eckles, ‘Determining User Personality’, 1.
Google’s success.⁴1 Without the underlying content that the Google search en-
gine ranks, it could not exploit behavioural surplus. This view is confirmed by
Google’s patents (and especially earlier ones), whose primary concern is of-
ten to exploit online content created by others. It has, for instance, patented a
‘user-interaction analyzer’ that checks for specific interests in particular parts
of digital media on the Web, comparing ‘normal’ and ‘specific’ interests: ‘Nor-
mal user behavior with respect to the media is determined and stored….
Whether … user behavior of a particular media segment deviates from normal
relative to the determined normal user behavior is determined.’⁴2
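The underlying idea of comparing ‘normal’ and ‘specific’ interests can be sketched as a simple deviation test (our illustration, with invented attention figures and an arbitrary threshold rather than the patented method).

```python
# Illustrative sketch of the general idea: flag media segments where user
# interaction deviates from a 'normal' baseline. Numbers and threshold invented.
from statistics import mean, stdev

# Seconds of attention per 10-second segment of a video, across several users.
segment_attention = {
    "0-10s": [6.1, 5.8, 6.3, 5.9],
    "10-20s": [6.0, 6.2, 5.7, 6.1],
    "20-30s": [9.4, 9.1, 9.6, 9.0],   # users linger here
}

all_values = [v for values in segment_attention.values() for v in values]
baseline, spread = mean(all_values), stdev(all_values)

for segment, values in segment_attention.items():
    z = (mean(values) - baseline) / spread
    if abs(z) > 1.0:  # arbitrary threshold for 'specific' interest
        print(f"{segment}: deviates from normal behaviour (z={z:.2f})")
```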
What Zuboff calls ‘instrumentarian power’ is the ability to shape extracted
behaviour to instrumentalize it for new ends. The concern with instrumen-
tarian power of capital is not new. In industrial capitalism, all of nature was
instrumentalized, which famously led the Frankfurt School of critical theory
to make instrumentalization a focus of their critique of contemporary society.
According to Max Horkheimer, instrumental reason is only concentrated on
the means to an aim without reflecting on the aim itself.⁴3 This led to an abso-
lute drive to dominate nature in capitalism. The targeting of behavioural value
and how we spend our time is a manifestation of this drive by concentrating on
a particular part of nature—human nature—and the creation of new subjects.
Our actions are abstracted to become a set of behavioural data items, which
are readily modifiable for the purpose of creating new capital.⁴⁴
Zuboff remains focused on the instrumentarian power of one form of val-
orization from behavioural data. Not only does she ignore the time needed
to create online content by others for Google and the free (and not so free)
labour that goes into it, but she also misses out on a crucial other compo-
nent of Google’s success. Google’s economic success also came from exploiting
‘network effects’ to support the monetization of advertising.⁴⁵ Google Ad-
Sense, still the main source of Google’s income, places ads on websites based
on their content and thus matches advertisers to larger and smaller sites. As
far as we can see, AdSense is only mentioned once in Zuboff ’s book in the
⁴⁶ Ibid., 83.
⁴⁷ Jordan, The Digital Economy, 169.
⁴⁸ Marlow et al., ‘Network-Aware Product Rollout’.
⁴⁹ Zhou and Moreels, ‘Inferring User Profile’, 1.
⁵⁰ Mayer-Schönberger and Ramge, Reinventing Capitalism, 63.
⁵1 Benkler and Nissenbaum, ‘Commons-Based Peer Production and Virtue’.
⁵2 Ibid., 395.
⁵3 Singh and Gomez-Uribe, ‘Recommending Media Items’, 2.
⁵⁴ Srnicek, Platform Capitalism, 113.
⁵⁵ Arthur, ‘Increasing Returns and the New World of Business’.
⁵⁶ Sundararajan, ‘Network Effects’.
Whereas Apple seems to still care about the quality of its products (or at least
their design) to achieve positive user feedback, this is not required for net-
work valorization. Facebook started off in a tiny market when it was launched
as a website to connect Harvard students. Its product played a secondary role.
Facebook did not care about the quality of the posts of its users and still seems
to not care much today.
Public controversies about the seemingly unstoppable monopolization and
expanding size of very large platforms persist—while often ignoring the
specifics of digital platforms, which we have analysed in Chapter 4 as algorith-
mic composition and recomposition of inside/outside, core/periphery. Given
such enduring concerns about platform monopolies and expansion, it is not
surprising that many of the critics of new forms of capitalism have recourse
to regulation or state control to limit the effects of network valorizations and
behavioural surplus. Srnicek moves from the private to the public by arguing
for platforms to become public utilities supported by state resources—and by
implication, we could add, by state regulation.⁵⁷ Increased political regulation
and protection of privacy is also the answer for those concerned by surveil-
lance capitalism. Zuboff asks for legal regulation given her diagnosis that the
surveillance capitalists ‘vigorously lobby to kill online privacy protection, limit
regulations, weaken or block privacy enhancing legislation, and thwart ev-
ery attempt to circumscribe their practices because such laws are existential
threats to the frictionless flow of behavioral surplus’.⁵⁸
The literature that has analysed a new mode or stage of capitalism—
independent of the attributes attached to it—has tended to focus on the
processes of value production, expansion, and exploitation through networks
and surveillance. Both the enthusiasts of network effects and behavioural sur-
plus and their strongest critics generally agree on how effective and seemingly
limitless the expansion of the network and behavioural surveillance are. Less
attention is paid to how what we called ‘truth-doing’ in Chapter 1 underpins
new mechanisms of valorization and their limits. Valorizing the small entails
the permanent reconfiguration of differences between and within subjects to
generate new consumption situations. While the Spotify patent with which we
started this chapter focused on gendered differences, we argue that new forms
of value generally emerge through the proliferation of smaller and smaller dif-
ferences. Companies like Spotify do this by focusing not so much on ever more
content and connections, but by casting already existing products as ‘new’
through valorizing small actions and predicting ever more different subjecti-
vations. In the section ‘Algorithmic valorization through difference’, we enter
the abode of Spotify’s patents again to explore these practices of valorization.
Since the 1990s, critical theorists have used the phrase ‘attention economy’
to capture the claim that ‘human attention is productive of value’.⁶2 In a 1997
Wired article, Michael Goldhaber defined for the first time a ‘radical theory
of value’, which specified that ‘[t]he currency of the New Economy won’t
be money, but attention.’⁶3 Since then, attention has featured in the titles of
numerous books: from The Attention Economy and The Attention Complex to
The Ecology of Attention.⁶⁴ Two decades later, big data and AI have added the
limitless attention of algorithms to the original ‘attention economy’. Hayles has
highlighted the supposed benefits of machine-learning technologies as they
have ‘the huge advantage of never sleeping, never being distracted by the other
tasks’.⁶⁵ Algorithms always pay attention, while human attention is limited and
can only be harnessed with difficulty. The rendition of attention as human–
machine capacity turns it into a key component of value creation and orients
the analysis towards the algorithmic reproduction of an attentive subjectivity
through granular digital traces.
However, attention is too generic a term here, as it points to a common
human process of selecting subjective objects of interest. Spotify and other
platforms care about the continued consumption of their products and the
creation of marketable user data. From the perspective of consumption, the
attention economy folds onto analyses of surveillance, as attention is deemed
to be productive of consumerist subjectivity, which also justifies the extraction
of ‘behavioural surplus’. In a critique of earlier industrial capitalism, critical
theorist Günther Anders, whom we have already encountered in Chapter 2,
coined the phrase ‘consumerist continuum’.⁶⁶ While specified for industrial
capitalism and its material ‘disjunction between what we produce and what
we can use’,⁶⁷ ‘consumerist continuum’ fits digital valorizations even better, as
this disjunction is exacerbated through the seemingly infinite production and
reproduction of digital products. Valorization implies that there is the danger
of not using enough, of not needing enough products, which can only be over-
come by the consumption of other/new commodities so that individual lives
can become the ‘consumerist continuum’. In the patents of Spotify, this plays
out as the intensive search for new datafication possibilities that will help with
the algorithmic capture of subjectivities that keep consumption going.
We now have more technology than ever before to ensure that if you’re the
smallest, strangest musician in the world, doing something that only 20 peo-
ple in the world will dig, we can now find those 20 people and connect the
dots between the artist and listeners.⁷1
User profiles are created, and music consumption is built around finding
people through data that ‘are just like you—musically at least’.⁷2
In the Spotify world, producing value takes the shape of a search for new data
and different infrastructures to intensify predictions about how to keep users
consuming music. Spotify needs to always generate new situations of digital
music consumption when it does not produce new commodities. A typical
patent activates a number of devices that a user carries with them ‘to iden-
tify patterns of user interaction that account for both context and listening
behavior, where, for example, a same behavior could indicate different mean-
ings in different contexts’.⁷3 Small changes in lived situations can make for
new music experiences so that ‘old’ music is rendered as ‘new’. By identify-
ing user interaction patterns, machine-learning algorithms learn to classify
preferences within a user situation. The algorithm queries whether music is
skipped or whether shuffle mode is activated in order to cluster the sessions
into an endless ‘consumerist continuum’.
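To illustrate how such clustering of sessions might proceed, the following sketch (ours, with toy data and a tiny k-means routine, not Spotify’s implementation) groups listening sessions by two interaction signals, skip rate and shuffle mode.

```python
# Illustrative sketch: grouping listening sessions by simple interaction
# signals (skip rate, shuffle on/off). Toy data and a tiny k-means; this is
# our reconstruction of the general idea, not Spotify's implementation.
import random

random.seed(0)

# Each session is a feature pair: (fraction of tracks skipped, shuffle as 0/1).
sessions = [(random.uniform(0.0, 0.3), 1.0) for _ in range(10)]
sessions += [(random.uniform(0.6, 0.9), 0.0) for _ in range(10)]


def closest(point, centres):
    return min(range(len(centres)),
               key=lambda j: sum((point[d] - centres[j][d]) ** 2 for d in range(2)))


def kmeans(points, k=2, steps=25):
    centres = random.sample(points, k)
    for _ in range(steps):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[closest(p, centres)].append(p)
        centres = [
            tuple(sum(p[d] for p in g) / len(g) for d in range(2)) if g else centres[j]
            for j, g in enumerate(groups)
        ]
    return centres


# Two recovered 'situations': low-skip shuffle sessions vs high-skip curated ones.
print(kmeans(sessions))
```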
Permanent modulations of lived situations from small data have become
very important for Spotify. Another one of its patents shows how small data
makes it possible to match music experience to real-world sports activities.⁷⁴
The patent provides the example of a fictional Playlist 23 about rock music,
which is linked to the ‘afternoon run’, while ‘jogging’ might go better with
Country Playlist 15. In the patent, running is identified as ‘repetitive mo-
tion activity’ and as such datafiable using sensors from smartphones. It can be
transformed algorithmically into ‘cadence-based’ playlists using correlations
with different music tempo ranges. An afternoon run, for example, would re-
quire music with a tempo of 140–145. Figure 5.1 illustrates the idea of cadence-based playlists. Spotify’s patents employ ‘repetitive motion data’, ‘user interaction data’ like skipping, ‘speech recognition data’, and many more.
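A minimal sketch can make the cadence logic concrete (the playlist names beyond those in the patent example, the tempo ranges other than 140–145, and the cadence value are our own inventions).

```python
# Illustrative sketch of cadence-based playlist selection. Playlist names,
# most tempo ranges, and the cadence value are invented for the example.
PLAYLIST_TEMPO_RANGES = {
    "Rock Playlist 23 (afternoon run)": (140, 145),
    "Country Playlist 15 (jogging)": (120, 130),
    "Chill Playlist 7 (walking)": (95, 110),
}


def playlist_for_cadence(steps_per_minute: float) -> str:
    """Pick the playlist whose tempo range best matches the detected cadence."""
    for name, (low, high) in PLAYLIST_TEMPO_RANGES.items():
        if low <= steps_per_minute <= high:
            return name
    # Fall back to the playlist whose range midpoint is closest to the cadence.
    return min(PLAYLIST_TEMPO_RANGES,
               key=lambda n: abs(sum(PLAYLIST_TEMPO_RANGES[n]) / 2 - steps_per_minute))


# Cadence as it might be estimated from smartphone accelerometer data.
print(playlist_for_cadence(142))  # afternoon run -> 140-145 tempo playlist
```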
Algorithms produce new situations of consumption by decomposing and
recomposing small data fragments to predict how more content is accessed af-
ter existing content is already consumed. In another patent, Spotify connects
parking suggestions with media provision, thus producing a new experience
of music commodities and consumption while parking cars.⁷⁵ The use of car
environmental metadata is particularly revealing for Spotify’s search for new
situations to keep consumption going. Environmental metadata is seen as
productive for making consuming subjects, because it reveals ‘a physical en-
vironment (e.g., bus, train, outdoors, school, coffee shop), as well as a social
environment (e.g., alone, small group, large party)’.⁷⁶ Considering this wealth
of data possibilities, it is not surprising that Spotify could distance itself from
the patent that attracted the attention of digital rights activists, as we discussed
at the beginning of the chapter. The Access Now letter emphasized emotion
and gender as problematic categories both scientifically and in terms of dis-
crimination. The patent, however, lists a multitude of other data that could be
used: age, accent, physical environment, or number of people.
With big data and AI, large data assets are shaped from small data signals
and human–machine labour enables the generation of new situations of con-
sumption. For Spotify, a global, very large, and expensive machine-learning
infrastructure employing a range of devices is now largely concentrated on
predicting how to link data to users’ music consumption.⁷⁷ The prediction
looks to modulate subjectivities. A typical Spotify patent sets out to predict
‘taste profiles’: ‘In response to a query [the algorithm] retrieves terms and
weights associated with an artist, song title, or other preferences of a user
and use the terms and weights to predict demographic data or other taste
preferences’.⁷⁸ This decomposition and recomposition of data and small
changes in context can also be used to ‘output value and confidence level … for
a target demographic metric’ such as ‘age’, ‘gender’, or ‘relationship status’.⁷⁹ In
predicting taste profiles and demographic metrics, subjectivities are algorith-
mically recomposed to make new connections between musical artefacts and
contexts of situated consumption.
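The logic of predicting a ‘target demographic metric’ from weighted taste terms can be sketched as follows (our illustration; the terms, weights, and associations are invented and carry no empirical claim).

```python
# Illustrative sketch of the idea of predicting a 'target demographic metric'
# from weighted taste terms. Terms, weights, and associations are invented.
TASTE_PROFILE = {"synthpop": 0.8, "vinyl": 0.5, "festival": 0.3}

# Hypothetical associations between taste terms and an age-bracket metric.
TERM_AGE_ASSOCIATIONS = {
    "synthpop": {"18-24": 0.2, "25-34": 0.6, "35-44": 0.2},
    "vinyl": {"18-24": 0.1, "25-34": 0.4, "35-44": 0.5},
    "festival": {"18-24": 0.6, "25-34": 0.3, "35-44": 0.1},
}


def predict_age_bracket(profile: dict) -> tuple:
    scores = {}
    for term, weight in profile.items():
        for bracket, strength in TERM_AGE_ASSOCIATIONS.get(term, {}).items():
            scores[bracket] = scores.get(bracket, 0.0) + weight * strength
    total = sum(scores.values())
    bracket = max(scores, key=scores.get)
    confidence = scores[bracket] / total  # crude 'confidence level'
    return bracket, round(confidence, 2)


print(predict_age_bracket(TASTE_PROFILE))  # ('25-34', 0.48)
```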
The European Commission’s proposed regulation targets AI and is generally concerned that ‘high-risk’ applications are ‘safe’ and ‘fair’.
It remains unspecific on what qualifies as high risk and how hierarchies of
risky AI are to be established. More generally, the AI regulation concentrates
on AI system providers who need to undertake ‘assessments’ and comply with
‘regulatory requirements’. It has been criticized for largely ignoring multiple
subjects affected by AI systems, a tendency that continues those of earlier de-
bates around ethics and AI, which also mainly focused on ‘experts’, ‘providers’,
and ‘developers’. For instance, the Institute of Electrical and Electronics Engi-
neers (IEEE), which describes itself as the world’s largest technical professional
organization, published its second version of ethically aligned design in 2019,
targeting the behaviour of engineers.⁷ One of the largest and most influen-
tial AI conferences, NeurIPS, has installed an ethical review system for its
submissions.⁸ The tech industry followed along by designing and publishing its
own ethics guidelines, with most Silicon Valley companies professing adherence
to some form of ethical guidance or ethical responsibility.⁹ Elon Musk has do-
nated $10 million to keep AI beneficial,1⁰ while Google’s DeepMind launched
a separate unit on ethics and AI.11 The list of AI ethics targeting engineers,
experts, and providers keeps growing. How are we to understand this rush to
make algorithms and AI ethical through expert knowledge, and this almost vi-
ral spread of ethics from the European Parliament and European Commission
to IEEE and Google’s DeepMind?
Alongside governmental institutions, many media organizations, think
tanks, universities, and civil society groups have been engaged in the search
for ethical principles in response to the challenges that algorithms and AI raise
for our lives individually and collectively. The turn to ethics to manage algo-
rithms is not that surprising, given the long history of mobilizing ethics in
response to difficult social, political, and economic questions. Scholars in the
humanities and social sciences have devoted a lot of attention to the limits of
ethics formulated as ‘as an achieved body of principles, norms and rules al-
ready codified in texts and traditions’.12 The recent formulations of an ethics
for algorithms rely on ‘abstract ethical principles’ based on fundamental rights
commitments, which are envisaged to apply to almost any possible situation
of AI tech. What matters here is less the different ethical commitments that inform these documents and the exact principles selected by various authorities than the appeal to and desire for ethics as a political technology for
governing algorithms.
Ethics has played an important role in expert and public discussions of the
societal impact of algorithms and as an intervention that aims to shape algo-
rithmically mediated relations. In this chapter, we analyse ethics as a political
technique deployed to tame power relations and make corrective interventions
on algorithmic reason. As a technology of government, ethics has a limiting or
constraining role on the failures or excesses of social action. We do not pro-
pose an alternative type of ethics to displace or supplement ethics as code.
We approach ethics as political practice to understand how it emerges rela-
tionally. We discuss its limitations by inquiring into its speedy adoption by AI
researchers, engineers, and big tech companies alike.
We make a two-pronged argument about the ethics of algorithms. For us,
the move to ethics pre-emptively eliminates dissensus and draws lines of sep-
aration between humans and things. Firstly, the ethics of algorithms effaces
dissensus by focusing on certain categories of subjects, who are interpellated
to become ethical subjects. As we have already indicated, these subjects are
the technologists and engineers who will develop and implement ethical prin-
ciples, and who will need to consider the concerns of ‘users’ to be designed
into the technology. The racialized and gendered bodies most affected by algo-
rithmic operations are generally not imagined as participants in the dissensus
over what an ethics of algorithms is. Algorithms are, secondly, assumed to be
mouldable at will, as tools to be subsumed to the ethical decisions of engi-
neers, coders, and computer scientists. We address these differential exclusions
through altered modes of ethico-political interventions opening scenes of
friction that turn algorithms into public things.
To unpack this proposal, we start by showing how recent invocations of
ethics for big data, algorithms, and AI are geared towards consensus. While
there are many different ethics guidelines between the various actors involved,
including academic authors, ethics as a technique of government is deployed
to pre-empt dissensus and render invisible racialized and gendered bodies
that challenge the algorithmic distribution of the sensible or the perceptible.13
In a second section, we reformulate the relation between ethics and politics
through what Bonnie Honig has called ‘public things’, which make political
action in concert possible.1⁴ Thirdly, we explore two ethico-political scenes
Ethics of consensus
1⁷ The IEEE contributors to Ethically Aligned Design for AI proposed to include non-Western ethical
principles in future drafts, such as principles from the Chinese and the Vedic traditions (Mattingly-
Jordan, ‘Becoming a Leader in Global Ethics’).
1⁸ Whittaker et al., ‘AI Now Report 2018’, 29.
1⁹ Eckersley, ‘How Good Are Google’s New AI Ethics Principles?’.
2⁰ Whittaker et al., ‘AI Now Report 2018’, 30.
closure whereas ethics remains in the realm of reflection as it does not have
force of law’.21 In these readings, ethics itself is limited and cannot therefore
provide a fully corrective or limiting intervention. Ethical guidebooks lack the
force of legal codebooks.
Next to these established criticisms, it is important to also ask what the pro-
liferation of ethics does, what form of control it enables, and within which
limits this happens. Ethics is effectively deployed as a technique of govern-
ing algorithms, which relies on codes, coordinates issues of implementation,
and shapes subjectivities as well as possibilities of action. To borrow Foucault’s
terms, ethics is a political technique for the ‘conduct of conduct’ of individuals
and groups. Governing through ethics is deemed to ‘build and maintain pub-
lic trust and to ensure that such systems are developed for the public benefit’.22
To function, ethics should not be asking for the impossible, the philosopher of
technology Luciano Floridi tells us.23 Ethics needs to straddle the gap between
what should be done and what can be done. As the IEEE initiatives on ethics
point out, ethics is about the alignment between implementation and prin-
ciples and the generation of standards to coordinate conduct.2⁴ Ultimately,
ethics is subordinated to rationalities of feasibility and consensus-building
between multiple and distributed actors.
International relations scholar Maja Zehfuss is particularly instructive for
an analysis of the effects of governing through ethics. While not discussing al-
gorithms directly but focusing on ethically justified war, Zehfuss sheds light on
the effects of ethical invocations that present war ‘as making the world a better
place for others’,2⁵ which resonates with by now infamous self-descriptions of
Silicon Valley tech companies such as Google’s ‘Don’t be evil’ or Facebook’s
‘Build Social Value’. Instead of making war more benign and less violent, ‘this
commitment to and invocation of ethics has served to legitimize war and even
enhance its violence’,2⁶ because it makes violence ‘justified through its aims and
made intelligible’.2⁷ Zehfuss’s argument about how ethics enables a particular
form of war, which is neither more benign nor less violent, offers a different
prism to understand the turn to algorithmic and AI ethics. What algorithms
are and do will be shaped by vocabularies and practices of ethics. Paraphrasing
21 Hildebrandt, Law for Computer Scientists and Other Folk, 283 (emphasis in text).
22 Winfield and Jirotka, ‘Ethical Governance’, 1.
23 Floridi, ‘Soft Ethics’, 5.
2⁴ IEEE, ‘Ethically Aligned Design’, 1.
2⁵ Zehfuss, War and the Politics of Ethics, 2.
2⁶ Ibid., 9.
2⁷ Ibid., 186. A similar argument is made by Grégoire Chamayou, who draws attention to the
emergence of necroethics as a ‘doctrine of killing well’ (Chamayou, A Theory of the Drone, 146).
Zehfuss’s claim that war is made through ethics, we can say that algorithms and
AI are now made through the political technique of ethics.
This diagnosis of ethics as an enabler of governance and violence leads Ze-
hfuss to the question of politics and an understanding of ethics as limit or
constraint. For her, it is this ‘cordon[ing] off against the real world, against pol-
itics’ that is the limit of ethical engagements.2⁸ A different conceptualization of
ethics would not solve the problems of ethical war. According to Zehfuss, po-
litical decisions would surpass any existing rule and ethical quest for clarity, as
they are inseparable from uncertainty, ambiguity, and the possibility of nega-
tive consequences. This understanding of politics rather than ethics is tethered
to decisions and decision-makers. Uncertainty and ambiguity endure on the
side of the decision-maker, the one who gauges the reality of the world, rather
than the individuals and collectives who experience the consequences of deci-
sions. Decisions remain hierarchical and exclusionary, as those who become
the target of technologies of killing cannot reconfigure decisions and only ap-
pear as a silent concern to the decision-maker. As we saw in Chapter 2, not
only are decisions much more dispersed and mediated through work and in-
frastructures, but the subjects most affected by algorithmic operations have
generally no say in these decisions.
Ethics as a political technique of governing algorithms similarly turns those
potentially most affected by these decisions into what philosopher Jacques
Rancière has called the ‘part of those who have no part’.2⁹ The part of no part
is formed by those made invisible by the dominant arrangement of people
and things. This invisibilization allows for the reproduction of and policing
of consensus. For Rancière, dissensus politicizes this invisibility through col-
lective action and makes visible ‘whoever has no part—the poor of ancient
times, the third estate, the modern proletariat’.3⁰ Whereas AI ethics guidelines
render the ‘part of no part’ invisible or absent in the consensual rendition of
the world, dissensus can redistribute what is visible and sensible. Take for in-
stance, Google’s ethical promise that ‘[w]e will seek to avoid unjust impacts on
people, particularly those related to sensitive characteristics such as race, eth-
nicity, gender’.31 The category of ‘people’ remains a general one like in many
other AI ethics guidelines, which can be divided into processable sociological
categories without any residuals or absences. Ethical algorithms ‘promise to
render all agonistic political difficulty as tractable and resolvable’, as Amoore
Public things
3⁶ Ibid.
3⁷ Amoore and Raley, ‘Securing with Algorithms’, 7.
3⁸ Pichai, ‘AI at Google’.
3⁹ Mittelstadt et al., ‘The Ethics of Algorithms’.
⁴⁰ IEEE, ‘Ethically Aligned Design’.
⁴1 The GDPR also enacts boundaries between citizens and non-citizens, given that Article 23 re-
stricts the application of rights when national security, public security, defence, or public interest are
considered (General Data Protection Regulation, ‘Regulation (EU) 2016/679’).
them anymore. Booking.com has been particularly well known for providing
an interface for finding hotel accommodation. The email alerted users that the
company would start sharing information between the different companies
that are affiliated with Booking Holdings Inc. to create new services, develop
new brands, and prevent and detect fraud.⁴2 The company emphasizes that
data sharing is about personalization and experience:
In short, it means a better experience for you across all Booking Holdings
brands. We’ll be able to offer you exactly the kind of accommodation that’s
right for you, along with providing a much more inclusive service when it
comes to booking your next trip. This will be done through website per-
sonalisation, more personalised communications and improvements to our
products and services.⁴3
the outside in and take the inside out. Ethics guidebooks render the rela-
tions between people and technologies, humans, and algorithms as relations
of mastery and control. From the mastery of technology, ethics reimagines the human as a sovereign who can mould material objects and algorithms at will, or as a passive subject controlled by technologies and AI. Action
thus remains unencumbered by power, materials, objects, instruments, de-
vices, and bodies. The abstract human of ethics reveals itself as an engineer
or technologist, the professional body replacing the public body.
How can ethico-political interventions hold together materiality as both
embodiment and technology? Political theorist Bonnie Honig has proposed
to recast political action as mediated by public things.⁴⁸ Moving beyond the
‘object turn’ in social studies, she argues that we need to pay attention not
just to relations between humans and things, but more specifically to political
things, the things that mediate democratic political action. For Honig, a pub-
lic thing does not mean that it is opposed to the private. Rather, a public thing
constitutes a public, as it assembles a collectivity around a thing and ‘bind[s]
citizens within the complicated affective circuitries of democratic life’.⁴⁹ If pub-
lic things have often been associated with the infrastructures of democracy,
Honig extends this understanding to objects around which citizens constellate
in political life. ‘Public things’, she argues, ‘depend on being agonistically taken
and retaken by concerted action’.⁵⁰ Public things are constitutive of democratic
life, which otherwise would be reduced to ‘procedures, polling, and policing’.⁵1
Democratic politics entails the redistribution of the sensible, the disruption of
arrangements of people and things.
Public things assemble a collective, they require action in concert and move
us away from anthropocentric ethics. Contra Latour’s contention that political
theory has excluded things, Honig reclaims ‘public things’ from the perspec-
tive of political theory and democratic politics. As she puts it, ‘[w]ithout public
things, action in concert is undone and the signs and symbols of democratic
life are devitalized’.⁵2 The commitment to public things needs to be understood
A letter to Google
to justify its development of AI in a digital arms race with China. The letter-
turned-petition disturbs the distribution of the sensible and what is given to
take a position that at first sight would appear impossible. The petition stands
for acting in concert, and particularly acting in concert in one of the undemo-
cratic sites of democracy—the workplace. The force of the petition was the
force of emerging collective subjects as public actors. Initially signed by 3,100 employees, the petition rapidly gathered over 4,000 signatories, who remained publicly anonymous. The petition also politicized the use of targeted killings
by the US government. The US drone programme not only operates outside
established legal frameworks and definitions of war, but it has also been beset
by ‘credible allegations of unlawful killings’.⁶3
The petition has subsequently assembled further publics beyond the num-
bers internal to Google. More than 1,000 researchers working on digital
technologies signed an open letter in support of the Google employees. Their
letter reiterates some of the points of the employees’ petition, including the
request not to be involved in the development of military technologies. Most
significantly, it draws attention to the politics of US targeted killings:
With Project Maven, Google becomes implicated in the questionable practice
of targeted killings. These include so-called signature strikes and pattern-of-
life strikes that target people based not on known activities but on prob-
abilities drawn from long range surveillance footage. The legality of these
operations has come into question under international and U.S. law.⁶⁴
At the beginning of 2019, the Arms Control Association announced that the
4,000 Google employees were voted arms control persons of the year.⁶⁵
However, if Rancière’s politics of dissensus was focused on the redistribu-
tion of the sensible, the petition limits this reconfiguration to ‘weaponized’
AI, thus leaving unquestioned the work of ‘normal’ AI. In that sense, we speak
of frictions as actions that slow down, try to move in a different direction,
or otherwise produce hindrances in the ‘smooth’ distribution of the sensible.
Friction depends upon the materiality of things constitutive of political action
in concert.⁶⁶ Google employees are not making claims to rights, yet they open a
scene for political action in concert. They produce frictions by creating publics
around Project Maven. It is thus not surprising that, in the wake of Google’s
decision to stop cooperating with the Pentagon on Project Maven and other
military AI, the Defence Innovation Board was tasked with developing ethi-
cal principles for the use of AI by the military. Led by the former chairman of
Alphabet and Google, Eric Schmidt, the board released ethical guidelines in
June 2019, which were in line with ‘existing legal norms around warfare and
human rights’.⁶⁷
Reading the letter as a little tool of friction does not mean that Google’s
withdrawal from the direct ‘business of war’ in Project Maven spells the end
of Google’s (or other tech) involvement in the business of war. Rather than
proposing a form of ‘pure’ ethics or politics, the letter initiates frictions that
open a democratic scene of dissensus. The Silicon Valley companies remain
part of the military–industrial–media–entertainment complex both in the US
and internationally.⁶⁸ As Google withdrew from further collaboration with
the DoD, they also dropped their application for providing integrated cloud
services, the Joint Enterprise Defense Infrastructure (JEDI) project, which we
introduced in Chapter 4. The JEDI contract was initially awarded to Microsoft.
After the contract award, Oracle filed a complaint against the DoD for its
biased specifications that privileged a single vendor—Amazon Web Services
(AWS)—despite Congressional and other concerns and the direct involvement
of individuals with links to AWS.⁶⁹ AWS similarly started a lawsuit against
the DoD alleging undue political influence on the award. As we discussed in
Chapter 4, the Pentagon withdrew the contract in 2021, following this exten-
sive litigation. However, the contract looks likely to be reissued and awarded
to a consortium rather than single companies.
Other frictions emerged as the scene opened by the letter unfolded. Follow-
ing a commitment to AI principles, Google set up an ethics advisory board
only to dissolve it a week later over public criticisms about the choice of board
members.⁷⁰ Later on, Google forced out Timnit Gebru, co-lead of the Ethical
AI team and an internationally renowned researcher in the field of ethics and
AI. She was the co-author of a paper criticizing very large language models,
which we take up in Chapter 7.⁷1 While it was reported that the paper passed
internal research reviews, ‘product leaders and others inside the company had
deemed the work unacceptable’.⁷2 Since then, Google has totally transformed
its Ethical AI research division and appointed Marian Croak, a software en-
gineer, as its lead. In a Google blog, Croak, who had previously been vice
president of engineering, outlined her vision and subtly shifted from ‘ethical
AI’ to ‘responsible AI’, while promising to overcome dissensus and ‘polarizing’
conflict in the company.⁷3
A year after the public debate around the Google employees’ letter on Project
Maven, the Intercept disclosed an internal email at Google that showed that
Google continues to cooperate with the DoD on other AI projects.⁷⁴ Based
on further internal emails at the company, the Intercept also revealed that the
infamous AI contribution to Project Maven relied on low-skilled workers or
so-called ‘data labellers’.⁷⁵ In order for algorithms to recognize objects in the
drone video footage, they need to be trained on datasets that accurately sep-
arate different types of objects and people. This work is often crowdsourced
and done by people around the world who can be paid as little as $1 per hour to
correctly label images.⁷⁶ The ethics of algorithms and AI does not extend into
the hidden abodes of digital capitalism we discussed in Chapter 5. It does not
account for the invisibilized labour of making data processable by algorithms,
and it does not disrupt the international asymmetries that foster exploitation
and extraction, as data labellers are drawn from the poor around the world.
Yet, this does not mean that the letter to Google has simply failed. A scene
does not succeed or fail, it is not felicitous or infelicitous, but it continues to
unfold. The frictions around AI continue to unravel and unsettle distinctions
between ethics and politics, human and nonhuman, consensus and dissensus.
Hacking in concert
⁷⁸ Coleman, Coding Freedom, 210. See also Kelty, ‘Hacking the Social?’.
⁷⁹ Coleman, ‘High-Tech Guilds’.
⁸⁰ Lodato and DiSalvo, ‘Issue-Oriented Hackathons’, 555.
understand what location information apps track about their users.⁸⁵ Action
in concert was mediated not only by big tech platforms and algorithms, but
also by MobileMiner as a little tool of friction.
MobileMiner is designed to require as few user permissions as possible.
For instance, the movements of a mobile phone user can be tracked based
on the cell towers they connect to, without requesting permission for the
phone’s location systems. To this end, MobileMiner queries the Android API
for information on communications with the cell towers by individual apps.⁸⁶
The data is converted into approximate location data using the gazetteer of
cell tower locations provided by opencellid.org. This allowed the hackathon
participants to experiment with visualizations of frequently visited locations
using OpenStreetMaps. Another approach developed for MobileMiner per-
manently surveys the Android filesystem to determine the port, IP address,
and protocol of each network socket for each app. This enables the detection
of activities on the Chrome Web browser, for instance, along with those of
apps such as Facebook, Skype, Foursquare, Spotify, and many game apps on
the mobile phones. In making these invisible and smooth processes of algo-
rithmic datafication visible, the hackathon rendered digital technologies and
their algorithmic operations intelligible in a collective setting.
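To give a concrete sense of this conversion, the following sketch resolves the cell-tower identifiers an app connects to into approximate coordinates via an OpenCellID-style gazetteer. All identifiers and coordinates are invented for illustration, and the code does not reproduce MobileMiner itself.

# Sketch of MobileMiner-style location inference: resolve the cell-tower
# identifiers an app connects to into approximate coordinates via an
# OpenCellID-style gazetteer. All values below are invented.

# A gazetteer maps (mcc, mnc, lac, cellid) to (lat, lon). In practice this
# would be loaded from an OpenCellID export rather than written inline.
GAZETTEER = {
    ("234", "10", "1120", "201"): (51.5115, -0.1160),
    ("234", "10", "1120", "202"): (51.5099, -0.1181),
    ("234", "15", "2210", "317"): (52.2050, 0.1190),
}

def locate(observations, gazetteer=GAZETTEER):
    """Resolve observed tower identifiers to approximate coordinates."""
    located = []
    for obs in observations:
        key = (obs["mcc"], obs["mnc"], obs["lac"], obs["cellid"])
        if key in gazetteer:          # unknown towers are simply skipped
            located.append(gazetteer[key])
    return located

# Example: observations as an app might log them over a day.
observations = [
    {"mcc": "234", "mnc": "10", "lac": "1120", "cellid": "201"},
    {"mcc": "234", "mnc": "15", "lac": "2210", "cellid": "317"},
]
print(locate(observations))

The design point is that no GPS permission is ever requested: location emerges as a by-product of routine network metadata joined with a crowdsourced gazetteer.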
The hackathons focused on creating MobileMiner and also on what could be
done with the mobile data through predictive algorithms. As data-gathering
devices, mobiles are vital sources for algorithmic prediction. They intensify
the uneven distributions of power and capacity, as they are increasingly ori-
ented towards possibilities of action. Using crowdsourced information from
OpenCellID and the data from MobileMiner, participants drew on clustering
techniques to explore the datafication of their everyday actions.⁸⁷ A simple
cluster analysis identified several regular patterns in mobile data. For instance,
one participant was present in three UK cities on two days. The cities are
known as locations of major universities, and a subsequent discussion con-
firmed that they attended the open days of the universities and then potential
interviews. This kind of data extraction can produce value for digital mar-
keters, as it reveals patterns and interests. What matters for them is not the
‘truthfulness’ of conscious doing, but the patterns that emerge without the
conscious involvement of individuals and the meanings they attach to these
actions.
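The kind of cluster analysis the participants experimented with can be approximated in a few lines. The coordinates below are invented and the parameters of the density-based clustering are indicative only, not those used at the hackathon.

# Sketch: cluster approximate location points to surface frequently visited
# places. Coordinates are invented; DBSCAN parameters are indicative only.

import numpy as np
from sklearn.cluster import DBSCAN

points = np.array([
    [51.511, -0.116], [51.512, -0.117], [51.511, -0.115],  # around one location
    [52.205,  0.119], [52.204,  0.120],                     # around another
    [53.480, -2.242],                                       # an isolated visit
])

# eps is in degrees here (roughly 500m); a fuller analysis would use a
# haversine metric on radians instead.
labels = DBSCAN(eps=0.005, min_samples=2).fit_predict(points)

for cluster_id in set(labels) - {-1}:                       # -1 marks noise
    members = points[labels == cluster_id]
    print(f"cluster {cluster_id}: {len(members)} visits around {members.mean(axis=0)}")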
⁸⁵ Pybus, Coté, and Blanke, ‘Hacking the Social Life of Big Data’.
⁸⁶ Blanke et al., ‘Mining Mobile Youth Cultures’.
⁸⁷ Greenway et al., ‘Research on Online Digital Cultures’.
⁸⁸ An anonymized hackathon participant quoted in Pybus, Coté, and Blanke, ‘Hacking the Social
Life of Big Data’.
⁸⁹ Koul and Shaw, ‘We Built Google’.
Facial recognition systems have been most present in public debates about
discriminatory algorithms and biased data. Many of the public controversies
around facial recognition have focused on the problems of training data, par-
ticularly the scraping of facial images on the Internet. To create training
datasets, public and private actors have taken ‘images manually culled and
bound together from sources as varied as university campuses, town squares,
markets, cafes, mugshots and social-media sites such as Flickr, Instagram or
YouTube’.⁶ In 2021, Google made it illegal to collect content from YouTube and
identify a person without their consent.⁷ This move adds pressure on com-
panies like Clearview AI, which allows law enforcement agencies to search
billions of images in its database of 10 billion faces, scraped from the Inter-
net, including from YouTube videos. Since its practices have become known,
privacy activists and data protection organizations have started several legal
challenges against Clearview AI in the US and Europe, given public con-
cern about ‘the end of privacy as we know it’, as the New York Times puts
it.⁸ While facial recognition has rallied most concern globally, its contesta-
tion has also tended to reproduce geopolitical lines and forms of othering,
with countries such as China assumed to develop advanced facial recognition
unimpededly.
In this chapter, we analyse how accountability has been enacted in these con-
troversies over facial recognition algorithms and systems. Initial disclosures
of the extensive experimentation with facial recognition by law enforcement
in Europe and the US have led to calls for algorithmic accountability through
tools such as auditing. Algorithmic and data audits have been proposed as
public instruments of accountability. They target ‘inclusive benchmarks’ and
balanced training data.⁹ In the UK, the Information Commissioner’s Office
is developing a method for auditing algorithms.1⁰ The AI Now Institute in
New York proposed to cultivate algorithmic impact assessments for public
accountability.11
We argue that facial recognition exposes a particular enactment of algorith-
mic accountability through auditing error. To render algorithms accountable
entails ‘putting them to the test’ by making visible their errors, resulting in
bias and discrimination. This enactment of accountability requires profes-
sional practices and devices for internal or external verification. More recently,
a second mode of accountability has emerged as ‘Explainable AI’, where algo-
rithms are asked to give an account of their decision-making. ‘Explainable AI’
has even been lauded as democratizing accountability, given that it does not
require a class of professionals to ‘reverse engineer’ and audit algorithms.
Error analysis and explainability have been key sites for claims of algorith-
mic accountability, but have also given rise to global scenes of controversy
about how to make algorithms governable. Calls to ban facial recognition or
otherwise refuse its deployment enact accountability differently, which we call
‘accountability through refusal’. Refusal disturbs the hierarchies of error op-
timization and trust in algorithmic explanations. In January 2020, the city of
Moscow launched what was claimed to be the largest facial recognition
system worldwide. The system was employed during the Covid-19 pandemic
to target ‘quarantine breakers’.12 At the time of writing, privacy activist Alyona
Popova had taken a case against Moscow's Department of Technology, which
manages the video surveillance, to the European Court of Human Rights.13 A
similar case concerning facial recognition had been filed in the UK against
the South Wales Police force.1⁴ At about the same time, China also had its
first lawsuits against facial recognition technologies, as we will discuss later.1⁵
We understand refusal as a continuum that ranges from mundane ways of
saying ‘no’ to extended practices of litigation and mobilizations to ban the
development or use of facial recognition in certain cases.
The chapter proceeds in four steps. We start with a discussion of accountabil-
ity as a subject of scholarly and public controversy. Is accountability another
technology in the bureaucratic toolbox or can accountability become a de-
vice of contestation? In the first two sections, ‘(Un)accountable algorithms’
and ‘Accouting for error: politics of optimization’, we unpack the emergence
of ‘accountability through error’ and ‘accountability through explanation’ and
their respective politics of optimization and trust. In the final section, we draw
on dispersed practices that open scenes of ‘accountability through refusal’. We
show how refusal disrupts optimization and trust.
(Un)accountable algorithms
to the one accountants and auditors filled when they emerged in the early
twentieth century to handle the new deluge of financial information.’2⁰ If for
them algorithmists follow the formalism and bureaucracy of internal financial
auditors, others have seen auditing algorithms as an external accountability
mechanism. Christian Sandvig and colleagues have acknowledged the diffi-
culties of auditing algorithms and have proposed the development of new
methods ‘to ascertain whether they are conducting harmful discrimination
by class, race, gender’.21 The AI Now Institute focuses on algorithmic impact
assessments, which are largely modelled on risk assessment.22
Although algorithmists have not yet been fully established as a separate pro-
fession, we can already find elements of their work in contemporary efforts by
researchers and companies. The focus is often on auditing data, where bias is
more easily quantifiable. A typical IBM research audit on facial recognition
data traces the kinds of facial features missing from widely used training data
and argues that in order to achieve ‘facial diversity’, more datasets that reflect
global and local differences in faces are required.23 Lack of diversity is a com-
mon issue with facial recognition data. One such famous example is CelebA,
a facial dataset that helped to produce excellent recognition rates of well over
90%, but less than 15% of its records show darker skin colours. CelebA is a
dataset of celebrity faces, considered to be in the public domain. Microsoft
has produced another facial dataset MS Celeb, which contained more than 10
million images of nearly 100,000 persons harvested from the Internet using
a flexible definition of celebrities, which included journalists and academics.
The individuals in MS Celeb were not asked for their consent and privacy au-
dits led to an attempt to remove the data, which turned out to be arduous.
Although Microsoft tried to delete the database, several copies had already
been in circulation, and it was difficult to stop its distribution by community
sites such as GitHub.2⁴ While MS Celeb was originally planned for academic
use, companies such as IBM, but also Chinese companies such as Alibaba and
SenseTime, used the database. SenseTime’s facial recognition technology is
suspected to be part of the Chinese government’s surveillance of the Uyghur
population.
In the wake of the public controversy over racialized and gendered bias in its facial recognition data, Microsoft reported on its AI blog that it had assessed
its data and algorithms and made changes to reduce problems of classifying
gender across skin tones. The company claimed it was able to ‘reduce error
rates for men and women with darker skin by up to 20 times’.2⁵ Microsoft’s
reporting of its challenges has been translated in the media as ‘Microsoft says
its racist facial recognition tech is now less racist’.2⁶ However, Microsoft re-
searchers acknowledge that reducing error rates through algorithmic and data
audits is not simply a technical challenge but a difficult political issue of how
and when to ‘mitigate AI systems that reflect and amplify societal biases not
because of dataset incompleteness or algorithmic inadequacies, but because
human societies are biased’.2⁷
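The disaggregated evaluation behind such reported reductions can be illustrated with a short sketch. The records are invented, and the breakdown by gender and skin tone follows the general logic of such audits rather than any company's internal procedure.

# Sketch: compute classification error rates disaggregated by demographic
# subgroup, the basic operation behind audits of facial analysis systems.
# The records below are invented for illustration.

from collections import defaultdict

records = [
    # (predicted_gender, true_gender, skin_tone_group)
    ("female", "female", "darker"), ("male", "female", "darker"),
    ("male", "male", "lighter"),    ("female", "female", "lighter"),
    ("male", "female", "darker"),   ("male", "male", "darker"),
]

errors = defaultdict(lambda: [0, 0])   # group -> [wrong, total]
for predicted, true, tone in records:
    group = (true, tone)
    errors[group][1] += 1
    if predicted != true:
        errors[group][0] += 1

for (gender, tone), (wrong, total) in sorted(errors.items()):
    print(f"{gender}/{tone}: error rate {wrong / total:.0%} ({wrong}/{total})")

What such an audit renders visible is a gap between subgroup error rates; what it leaves open is the political question of which groupings count and who decides when the gap is small enough.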
Focused on societal biases, audits might reveal the past and present of power
relations. Accounting for the training data of facial recognition algorithms
can unravel histories of data extraction, racialization, and criminalization.
Databases of facial images have historically started with mugshots held in
police files and therefore focused on the figure of the criminal.2⁸ As Ruha
Benjamin has argued, audits can become abolitionist tools, which disrupt
bureaucratization and disciplining.2⁹ Accountability can modify power rela-
tions through contestation and collective agency. It could thus shift from what
the authors of Data Feminism have called concepts that ‘secure power’ to
concepts that ‘challenge power’.3⁰ In avoiding the pitfalls of ‘bad actors’ and
‘bad algorithms’, accountability can question the ‘very hierarchical logic that
produces advantaged and disadvantaged subjects in the first place’.31 More re-
cently, Latin American digital rights activists Joana Varon and Paz Peña have
proposed a feminist toolkit to question AI systems, which challenges domi-
nant discourses of accountability, inclusion and transparency by highlighting
several dimensions of domination:
Facial recognition has come under increased public scrutiny given its failures,
errors, and fallibilities. As one of the most widely deployed AI applications,
the errors of facial recognition appear frequently in public debates. Many
researchers have tried to connect algorithmic errors to social questions of dis-
crimination and oppression. Facial recognition for law enforcement has error rates that are too high, as Big Brother Watch highlights in a case against the Metropoli-
tan Police in London. Errors are systematic rather than accidental, revealing
underlying patterns of bias and discrimination. The US Technology Policy
Committee of the Association for Computing Machinery (ACM), a high-
profile association of computer scientists, acknowledges that, ‘when rigorously
evaluated, the technology too often produces results demonstrating clear bias
based on ethnic, racial, gender, and other human characteristics recognizable
by computer systems’.3⁴ However, in computer science, making algorithmic
errors visible through such ‘rigorous evaluation’ relies on particular assess-
ment indicators, which need to be computable. Therefore, such errors can
only surface certain forms of bias or discrimination. The same association
of computer scientists adds that ‘[facial recognition] technology is not suffi-
ciently mature and reliable to be safely and fairly utilized without appropriate
safeguards against adversely impacting individuals, particularly those in vul-
nerable populations’.3⁵ They recommend its temporary suspension rather than
an indefinite ban, in light of a future where errors can be corrected, and facial
recognition can become ‘unbiased’.
The ACM Technology Policy Committee’s statement comes in the wake of
activist and scholarly work to make errors of facial recognition visible. Yet,
there is also a split between civil society calls to ban facial recognition, partic-
ularly for use by law enforcement and decision-making by public and private
actors, and professional demands to just reduce, if not eliminate, bias. We
see a dual professionalization of the work of algorithms: that of auditing as
well as the work of optimizing the performance of algorithms. Data scientists
themselves have suggested that auditing algorithms will ‘become the purview
of a learned profession with proper credentialing, standards of practice, dis-
ciplinary procedures, ties to academia, continuing education, and training
in ethics, regulation, and professionalism.’3⁶ The author of Weapons of Math
Destruction, Cathy O'Neil, set up her own consultancy to audit algorithms.3⁷ O'Neil wants
companies to open their data and algorithms to outside reviews that deter-
mine their fairness. Auditing involves checking for correspondence with ‘real
life’ and leads to a seal of approval, which she sees as equivalent to the label
‘organic’ for food production. Auditing algorithms becomes a way of ordering
and ranking and not just correcting algorithms.
For auditors and computer scientists, algorithmic errors and failures are
often indicators that AI systems are not yet good enough and do not reveal
fundamental issues. When information studies scholar Safiya Noble investi-
gated Google’s search engine, she argued that ‘search engine results perpetuate
particular narratives that reflect historically uneven distributions of power in
society’.3⁸ While Noble focuses on the representation of Black girls in Google
search engine results, the intersection between opaque algorithms, commer-
cial interests, and the effacement of multiple perspectives through ‘ranking’
has not only harmful, but also anti-democratic effects. Noble’s analysis shows
that racist and sexist results are the effects of an algorithm shaped by adver-
tising requirements as well as existing structures of racism and sexism. In the
conclusion to her book, she acknowledges that Google made modifications to
the algorithm in the wake of her earlier article that highlighted the pornifi-
cation of Black girls and that Google hid certain search results.3⁹ In a typical
algorithmic auditing move, Google treated Noble’s findings as errors to be cor-
rected. Through error correction, Google has computationally optimized the
algorithm but has not addressed the wider political consequences for public
and democratic life that Noble highlights.
As public demands for accountability have focused on the errors of algo-
rithms, there has been less attention to the discussion of error optimization in
the machine-learning community. Google could react directly to accusations
of racist and sexist rankings by treating racism and sexism as errors. Error
analyses such as Google’s optimization of its own rankings are key to making
an algorithm ‘work’ in specific domains. While errors are publicly rendered
in terms of ‘mis-takes’, the implication is that the algorithm can lead to a cor-
rect ‘take’. In 2016, Richard Lee, a New Zealand citizen of Asian descent, was
blocked from renewing his passport by a robot, because his eyes were identi-
fied as closed. He was asked to change the passport picture.⁴⁰ Lee’s answer to
the algorithmic failure to renew his passport shows the trust in an algorithmic
corrective epistemology: ‘It was a robot, no hard feelings. I got my passport
renewed in the end.’⁴1
Since the epistemic transformation of AI from logical to statistical models,
artificial intelligence work has mainly focused on building models that can
solve particular problems by iteratively adjusting and reducing the remaining
error.⁴2 The ‘winter of AI’ was related to an over-reliance on logical models to
simulate human reasoning and their subsequent failure. Computer scientists
wanted to create an artificial intelligence that replicated human intelligence
but was separate from humans. Peter Norvig, former Director of Research at
Google, explains an epistemic transformation from logical models to statistical
models, which ‘have achieved a dominant (although not exclusive) position’
since the 1980s.⁴3 Unlike earlier logical models, statistical models focus on as-
sociations of humans and machines that can learn from data by iteratively
adjusting what has already been learned using calculated error rates. These
can be measured by comparing what has been learned with what had been ex-
pected to be learned. Modelling becomes a workflow of increased algorithmic
performance through error optimization, which compares effects in data with
expected inputs and outputs in a finite number of iterations.
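This workflow can be made concrete with a toy training loop: a model's parameters are adjusted over a finite number of iterations by comparing predictions with expected outputs and reducing the calculated error. The data, learning rate, and number of iterations below are illustrative only.

# Sketch: a toy training loop that iteratively adjusts a model's parameters
# by comparing predictions with expected outputs and reducing the error.
# Data, learning rate, and iteration count are illustrative.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1))
y = 3.0 * X[:, 0] + 1.0 + rng.normal(scale=0.1, size=100)   # expected outputs

w, b = 0.0, 0.0            # initial parameters
learning_rate = 0.1

for step in range(200):                  # a finite number of iterations
    predictions = w * X[:, 0] + b
    error = predictions - y              # compare what is learned with what is expected
    w -= learning_rate * (error * X[:, 0]).mean()
    b -= learning_rate * error.mean()
    if step % 50 == 0:
        print(f"step {step}: mean squared error {np.mean(error ** 2):.4f}")

print(f"learned w={w:.2f}, b={b:.2f}")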
When we attended an exhibition dedicated to AI and Big Data in London in
2019, logical models of AI were hardly mentioned but error played a prominent
How do we know what the optimal error rate is? For tasks that humans are
reasonably good at, such as recognizing pictures or transcribing audio clips,
you can ask a human to provide labels then measure the accuracy of the hu-
man labels relative to your training set. This would give an estimate of the
optimal error rate. If you are working on a problem that even humans have a
hard time solving (e.g., predicting what movie to recommend, or what ad to
show to a user) it can be hard to estimate the optimal error rate.⁴⁶
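Ng's suggestion amounts to using human-level performance as a proxy for the optimal error rate. A minimal illustration of that comparison, with invented labels rather than any actual training set, looks as follows.

# Sketch: estimate the 'optimal' error rate by measuring how often human
# labellers disagree with the reference labels of a training set.
# The labels below are invented for illustration.

reference_labels = ["cat", "dog", "cat", "cat", "dog", "dog", "cat", "dog"]
human_labels     = ["cat", "dog", "dog", "cat", "dog", "dog", "cat", "cat"]

disagreements = sum(h != r for h, r in zip(human_labels, reference_labels))
human_error = disagreements / len(reference_labels)

# The human error rate then serves as the benchmark against which a model's
# remaining error is judged during optimization.
print(f"estimated optimal error rate: {human_error:.0%}")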
⁴⁴ Digital technologies have increasingly blurred the distinction between testing or experimentation
and implementation. Digital technologies, from mundane devices to AI systems, are now tested in real
life directly. See Bunz, ‘The Calculation of Meaning’; Aradau, ‘Experimentality, Surplus Data and the
Politics of Debilitation’.
⁴⁵ Research Notes, 15 April 2019, https://www.ai-expo.net/global/.
⁴⁶ Ng, Machine Learning Yearning, 46.
⁴⁷ Chio and Freeman, Machine Learning and Security, 259.
⁴⁸ Freeman, ‘Data Science vs. the Bad Guys’.
⁴⁹ Ibid.
⁵⁰ Chio and Freeman, Machine Learning and Security, 259.
⁵1 R (Bridges) v CCSWP and SSHD, ‘Judgment’.
Self-accountable algorithms: explainability and trust
Confronted with public pressure about algorithmic errors, bias, and discrim-
ination, computer scientists have promoted another form of accountability
through explainability. The international working group on Fairness, Ac-
countability, and Transparency in Machine Learning (FAT-ML) has defined
explainability as the principle which ‘[e]nsures that algorithmic decisions
as well as any data driving those decisions can be explained to end-users
⁵⁹ Denning criticizes this principle as ‘too austere’, as ‘it only hints at the full richness of the discipline’
(Denning, ‘Computer Science: The Discipline’).
⁶⁰ Dick, ‘Artificial Intelligence’.
⁶1 Knight, ‘The Dark Secret at the Heart of AI’.
⁶2 Rudin, ‘Stop Explaining Black Box Machine Learning’, 2. On the larger question of opacity in
machine learning, see Burrell, ‘How the Machine “Thinks”’.
⁶3 Rudin, ‘Stop Explaining Black Box Machine Learning’.
one person’s photo with N registered photos and predicts the person with the
highest degree of similarity. Facial recognition has been increasingly deployed
at the European Union’s borders with a growing number of databases and
capacities. The EU Agency for the Operational Management of Large-Scale
IT Systems (eu-LISA) describes how deploying one-to-many facial identifi-
cation at borders ‘requires both significant processing power and highspeed
network connectivity to ensure high quality and speed of biometric recogni-
tion and identification’.⁶⁴ One-to-many identification is furthermore used in
criminal investigations, where algorithms detect faces from security cameras
and compare them with a database of known faces to identify suspects.
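Stripped of its infrastructure, one-to-many identification reduces to comparing a probe embedding against N enrolled embeddings and returning the most similar match above a threshold. The sketch below uses random vectors as stand-ins for the output of a face-recognition model; the threshold value is illustrative.

# Sketch: one-to-many identification as nearest-neighbour search over face
# embeddings. The embeddings are random stand-ins for the output of a
# face-recognition model; the threshold is illustrative.

import numpy as np

rng = np.random.default_rng(1)
gallery = rng.normal(size=(1000, 128))            # N enrolled face embeddings
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)
identities = [f"person_{i}" for i in range(len(gallery))]

probe = gallery[42] + rng.normal(scale=0.05, size=128)   # a new photo of person_42
probe /= np.linalg.norm(probe)

similarities = gallery @ probe                    # cosine similarity against all N
best = int(np.argmax(similarities))

THRESHOLD = 0.6
if similarities[best] >= THRESHOLD:
    print(f"identified as {identities[best]} (similarity {similarities[best]:.2f})")
else:
    print("no match above threshold")

The political weight of the system lies in what this sketch brackets: who is enrolled in the gallery of N faces, and who sets the threshold above which a person becomes a 'match'.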
As facial identification is so widely deployed, it has attracted strong interest
in XAI. A DARPA-funded project extends the definition of XAI to Explain-
able Face Recognition (XFR).⁶⁵ XFR is about identifying regions in the digital
picture of a face, which work best to determine similar faces and distinguish
dissimilar ones. It does not aim to establish an explanation of facial recognition
as such, but to convince non-experts of ‘truth-doing’: that the right regions of
the face are activated for identification purposes.⁶⁶ Many popular explanatory
techniques in AI-based facial recognition highlight regions in the facial im-
age that made the machine decide that a face is identified and why that face is
different from other faces. However, such ‘attention maps’⁶⁷ only visualize fa-
cial identities and differences and require further explanations. Just because
a system correctly identifies a part of an image that is important to facial matches, it does not mean that these matches are also correct. On the con-
trary, self-accounting of algorithms through visualizations is known to instil
over-confidence. Human–computer interaction research has shown that visu-
alizations like attention maps led their designers to 'over-trust' the outcomes of the algorithms. Explainability thus results in the opposite of explanation and gives rise to an 'automation bias', the designers' faulty confidence that the system gets it right.
Researchers from the US National Institute of Standards and Technology
(NIST) have also begun to work on the principles of XFR in the context of
the US justice system, where XFR would compete with human ‘forensic fa-
cial recognizers’.⁶⁹ These experts prepare detailed reports within the US justice
should aim to respond to the ‘basic, emotional needs to understand and trust
[algorithms].’⁷⁵ Those with such ‘basic, emotional needs’ are generally not
those who understand the complex relations of error rates and algorithmic
performance. Data scientist Cynthia Rudin has criticized the field of XAI for
serving an AI industry that has an interest in keeping models opaque and away
from the public.⁷⁶ The field's explanations often lack quality and can be less-
than-satisfactory and even misleading. Refusing these developments, Rudin
promotes the focus on models that can be more expensive to produce, as they
require more human labour and are specific to certain domains, but are ‘inher-
ently interpretable’ rather than just ‘explainable’.⁷⁷ Interpretable models allow
designers and users to understand why a model has made a prediction.⁷⁸
Making self-accountable algorithms is addressed to non-expert consumers
of algorithmic optimization and not active algorithmic citizens. As we have
shown, rather than an epistemic account of algorithmic operations, explain-
ability becomes economized as part of an automation move to reduce human–
machine interactions. As error is optimized according to the socio-economic
logics of different domains of implementation, explainability enacts a politics
of trust in algorithms. However, accounting for error and giving an account by
explaining are not the only forms of accountability, although they have been
often mobilized in both expert and public controversies. In the section ‘Ac-
countability through refusal’, we show how accountability has been enacted
through refusal in ways that subvert and disrupt error optimization and the
trust in the truth-doing of Explainable AI.
Controversies over facial recognition have emerged not only in the US and
the UK, but also in China and Russia. China is rarely mentioned in the earlier
discussions of algorithmic accountability. As with other digital technologies,
China’s use of facial recognition is seen as a dystopian present of inescapable
surveillance and often set in opposition to Western developments in AI and
facial recognition. Chinese citizens can use facial recognition to pay for food,
unlock their homes, or check in at airports. The Chinese government plans
recognition in the contract’.⁸⁹ Guo’s lawsuit was also followed by other cases
against facial recognition. Another law professor, Lao Dongyan, said ‘no’ to
the installation of facial recognition technology for access to her residential
buildings.⁹⁰ While these cases received wide coverage in Chinese (social) me-
dia, reports in Western media often minimized their impact and claims. The
Economist Intelligence Unit, for instance, reported that ‘Mr Guo … has said
he is happy to submit to facial scans by the government that are in the public
interest. All he wants is his money back on his season ticket.’⁹1
In 2020, China published the first draft of its new Personal Information Pro-
tection Law (PIPL), which advances the protection of personal data in China.⁹2
The law came into effect in late 2021. PIPL is seen to have similarities with
the EU’s GDPR given their common limitations on data processing and sim-
ilar terminologies of ‘consent’ or ‘lawful processing’.⁹3 ‘Personal information
rights’ are subject to particular protections, although there are notable differ-
ences to the GDPR. PIPL seems to be less specific on these rights as ‘it lacks
more precise GDPR language addressing such rights, including where certain
restrictions or exemptions may apply.’⁹⁴ Like in the West, regulations in China
are the result of a growing public contestation of big data and AI technology.
In spring 2021, China released a draft of security standards for facial recogni-
tion data, which suggests among other things that ‘[i]ndividual authorization
is required for collecting facial recognition data’.⁹⁵ Against unified representa-
tions of techno-dystopia in China, sociologist Chuncheng Liu has urged us to
examine the ‘exclusions, inconsistencies, and contradictions they [algorithms]
foist upon social life without falling into an oversimplified fatalist narrative’.⁹⁶
Bias and failures of algorithmic systems have received wide public attention in
China. Facial recognition technology is here particularly visible because it is
so much part of everyday life.
There is by now a long list of errors and failures of everyday algorithmic
surveillance in China. Chinese facial recognition systems have been widely
used not just for law enforcement, but also to detect ‘public nuisances’ like
jaywalkers. Businesswoman Dong Mingzhu was shamed as a jaywalker af-
ter a facial recognition system had identified her and displayed her image
publicly on a large screen. However, the system had mistaken the reflection
claiming that Free Basics will achieve ‘digital equality for India’.⁵ He could not
understand how anybody could be against this project. In fact, Free Basics was
rolled out in more than 60 countries of the Global South.⁶
Free Basics was meant to be an app downloadable on a mobile phone. Face-
book would strike deals with local mobile providers so that users could have
access to certain parts of the Internet for free. The Internet that users could
access would be closed and encircled by Facebook’s definition of acceptable
websites. The story of Free Basics highlights the constitutive role of encircling,
borders, and (re)bordering for international politics in the digital age. It also
brings to the fore struggles over these borders and technology as an empirical
question. States and markets, private and public have been historically ‘com-
mingling’, with commingling understood to be either an extension of statecraft
or a dismantlement of sovereign power.⁷ In their analysis of data colonialism,
sociologists Nick Couldry and Ulises Mejias underscore that ‘powerful corpo-
rations operating in collaboration with powerful states … are defining the new
colonial geographies and constructing a different social and economic order’.⁸
As we saw in Chapter 4, big tech companies and digital platforms are frequently
analysed as contemporary forms of companies such as the East India Com-
pany, having the power to order international politics. Yet, these hierarchical
and ordering sets of relations characteristic of imperial and colonial power
have been challenged by historical accounts of empire.⁹ At the same time, the
victory of the ‘Save the Internet’ campaign in India was made possible through
a ‘practice of technopolitics that resonated within the broader narrative of
technocultural nationalism championed by the current ruling party’.1⁰ The
borders between the domestic and the international are differently imagined
and politicized by the actors in the controversy over Free Basics.
If Free Basics conjures a joint imaginary of imperial power of companies that
are datafying the world and sovereign tech nationalism, another discourse of
the international has been framed in terms of war. ‘Buying Huawei technol-
ogy [is] “like buying Chinese fighter planes”’, warned a Forbes article, using one
of the tropes most associated with international politics, that of war.11 ‘[T]he
Kremlin has attempted to interfere in numerous electoral processes around the
world in recent years’, cautioned the European Union in launching a new site,
EU vs Disinfo, to combat disinformation, and reactivating imaginaries of Cold
War and great power politics.12 These stories about election interference, ma-
nipulation, propaganda, and information warfare are supposed to alert citizens
but are also warnings directed at the tech companies that they are involved
in this war, because they provide the digital platforms making the produc-
tion and viral circulation of digital content possible and accessible across the
world. For policy makers, digital platforms play a role in recasting interna-
tional politics into a new form of digital power politics almost analogous to
the Cold War. Interference and meddling are indicative of an imaginary of the
international centred around state sovereignty and territorial borders, while
military power through digital technology reproduces the trope of a globally
expanding war.
These imaginaries of international politics underpin calls upon the state to
make digital platforms and their algorithmic operations governable. In this
chapter, we analyse how the international is rendered governable through bor-
ders and boundaries. How are increasingly global platforms and algorithmic
operations governed, through which modes of bordering and rebordering?
What is absent or unknown in these processes? Our empirical scene in this
chapter is the struggle between Facebook and states, Facebook and its users,
and finally Facebook and its workers on bordering and rebordering the digital.
We will see how tensions like those that defined the Free Basics controversy
are repeated in struggles about sovereignty over content and citizen protec-
tion, user relations, and worker rights. The Facebook scenes of this chapter
showcase how social media companies have produced different modes of gov-
erning and have become entangled with states in international practices. We
argue that, in producing the figure of the citizen and the user of digital plat-
forms, states and companies at the same time erase the figure of the worker in
international politics.
To shed light on the erasure of work and workers in these controversies, we
start by analysing how borders enable different modes of governing between
public and private, markets and states, domestic and international, global,
and local. Secondly, we consider controversies between states and social me-
dia companies around the algorithmic circulation of content. Legislation
proposed in Germany for the regulation of hate speech on digital platforms,
known as NetzDG, requires social media companies with more than two mil-
lion users to remove unlawful content from their platforms or face a fine of
legitimately exercised’.22 Walker argues that the ‘double outside’ of the state
and of the system of states leads to a paradox of both conceptualization and
action. The double reverts upon itself, and social movements are caught within
these imaginaries, as their alternatives can be plagued by ‘either imperial pre-
tensions (one world) and/or a new set of distinctions ... between acceptable
and unacceptable forms of human being (two worlds)’.23
Recent work by political theorist Wendy Brown has drawn attention to
how borders are rendered more blurred and dispersed through the neolib-
eral economization of social relations, but also through the mobilization of
‘familialism’ in conjunction with markets as a mode of neoliberal govern-
mentality. For Brown, neoliberalism displaces democratic procedures and
processes through the ‘perfect compatibility’ between markets and tradi-
tional morality. Traditional morality is that of the ‘familial’, personal sphere,
where ‘diktat is the basis of household authority, and force is how it le-
gitimately defends itself against intruders’.2⁴ Brown’s analysis alerts us to a
reconfiguration of markets and morals, where traditional and familial moral-
ity is neither a relic nor an incidental or opportunistic supplement to ne-
oliberalism. The compatibility between neoliberalism and ‘heteropatriarchal
Christian familialism’ constitutes the specificity of the present according to
Brown.2⁵
While political theorists have shown how bordering and rebordering enable
and are underpinned by different rationalities of government, there has often
been less attention to the multiplication of struggles over borders and bound-
aries and not just the multiplication of bordering mechanisms.2⁶ If Walker
attends to how borders and boundaries are enabled and enable particular
modes of politics, Brown highlights the blurring of borders, which separate
economics and politics, private and public action. However, the struggles
and modes of resistance over practices of bordering and rebordering are less
present in their analyses. Walker’s theorization of the international focuses
on distinctions between citizens and humans, states and empires, universal
and particular, which ultimately ‘traps’ the imaginaries of social movements.
Brown’s analysis of neoliberalism highlights important alliances between do-
mesticity and neoliberalism, markets and morals, but does not attend to the
providers are defined under NetzDG as those platforms that allow sharing of
content by users, with some exceptions such as journalistic platforms, where
there is editorial control over published content, or professional networks like
LinkedIn. Alongside the usual social media suspects such as Facebook, Twit-
ter, YouTube, and Instagram, this originally also included surprising members
like Change.org, an online petition site.3⁰
To comply with the law, social network providers have to offer their users
in Germany a means of reporting potentially unlawful content. If the plat-
forms are based outside of Germany, they need an authorized representative
within Germany.31 In one of the first NetzDG reports in summer 2018, Twitter
stated that it had removed 260,000 posts, YouTube 215,000 entries, and Face-
book only 1,704. By 2019, Facebook had caught up and had removed 160,000
pieces of content, of which 70% had been discovered by content
moderators rather than Facebook users.32 Google runs a live dashboard on the
content it deletes under NetzDG.33 In the latter half of 2020, it had erased more
than 70,000 pieces of content from its YouTube site. Over 88% of the content
had been removed within twenty-four hours of the complaint being received,
in most cases globally, as not only NetzDG but also YouTube's community guidelines had been violated.
The NetzDG legislation has been widely hailed as an advance for the reg-
ulation of digital platforms, their algorithms, and social media companies
more broadly, with several other states wanting to follow Germany and de-
velop their own regulation laws.3⁴ The legislation first drew widespread media
attention following a high-profile case against Facebook in 2019, which led
to a EUR 2 million fine, due to ‘incomplete information provided in its pub-
lished report on the number of complaints received about unlawful content’.3⁵
Moreover, the ‘form used to file a complaint under NetzDG was harder to find
on Facebook than other social media sites.’3⁶ Public reporting emphasized the
symbolic value of the fine given the overall amount of money linked to Face-
book, while highlighting a double antagonism between states and companies,
on the one hand, and Europe and America, on the other:
3⁰ For Change.org, ‘[t]he expense of implementing NetzDG was high’ (Pielemeier, ‘NetzDG’).
31 Wessing, ‘Germany’s Network Enforcement Act’.
32 Facebook, ‘NetzDG Transparency Report’.
33 Google, ‘Removals under the Network Enforcement Law’.
3⁴ France, for instance, had plans to adopt a similar law.
3⁵ Bundesamt für Justiz, ‘Fine against Facebook’.
3⁶ Barton, ‘Germany Fines Facebook $2.3 Million’.
that a European country has sanctioned an American social media giant for
failing to be transparent about the way it handles hate speech.3⁷
3⁷ Deckler, ‘Germany Fines Facebook €2m for Violating Hate Speech Law’.
3⁸ Brown, Walled States, Waning Sovereignty, 52.
3⁹ The community standards in German are available at https://de-de.facebook.com/
communitystandards/.
⁴⁰ Hoppensted, ‘Facebooks Löschzentrum’.
Yet, most content work does not take place in the Global North, and it does
not afford workers these kinds of conditions. Digital platforms employ a dis-
persed force of microworkers around the world,⁷1 and even public outcry does
not seem to improve their conditions. The Philippines are probably the largest
hub of the global call centre industry and have thus also become a global centre
of content moderation for social media platforms:
Unlike moderators in other major hubs, such as those in India or the United
States, who mostly screen content that is shared by people in those countries,
workers in offices around Manila evaluate images, videos and posts from all
over the world. The work places enormous burdens on them to understand
foreign cultures and to moderate content in up to 10 languages that they don’t
speak, while making several hundred decisions a day about what can remain
online.⁷2
Information studies scholar Sarah Roberts has argued that the outsourc-
ing of content moderation to the Global South is ‘a practice predicated on
long-standing relationships of Western cultural, military, and economic dom-
ination that social media platforms exploit for inexpensive, abundant, and
culturally competent labor’.⁷3 Unlike the intimacy and proximity of user-to-
user relations, microworkers are rendered invisible through both proliferation
and dispersion across the world. Yet, as we will see in the section ‘Resistance
beyond borders’, microwork and microworkers have enabled forms of resis-
tance by recasting intimacy and dependence, and challenging borders and
boundaries. We propose to understand their resistance as opening up a scene
of internationalism, which is transversal rather than bound by borders and
nationalism.
stress disorder (PTSD) and trauma, given intense exposure to graphic content
and extreme violence. The main claim of unlawfulness against Facebook is that
the company created standards for the well-being and resilience of content
moderators as part of the Technology Coalition but failed to implement them.
The case brought together work, citizenship rights (the content moderators
speak in the name of ‘California citizens’ who perform content moderation),
and action in concert (class action).
Firstly, the three Facebook workers recast the domesticized and intimate
governmentality that Facebook has produced for ‘users’ but not ‘workers’. The
lawsuit is traversed by tensions between rights claims and work conditions, on
the one hand, and domestic logics of trust, dependency, and protection, on
the other. The plaintiffs’ complaint describes working conditions as disman-
tling human subjectivity through the randomness, speed, and stress that the
Facebook content moderation algorithms create:
The moderator: in the queue (production line) receives the tickets (reports)
randomly. Texts, Pictures, Videos keep on flowing. There is no possibility to
know beforehand what will pop up on the screen. The content is very diverse.
No time is left for a mental transition. It is entirely impossible to prepare
oneself psychologically. One never knows what s/he will run into. It takes
sometimes a few seconds to understand what a post is about. The agent is
in a continual situation of stress. The speed reduces the complex analytical
process to a succession of automatisms. The moderator reacts. An endless
repetition. It becomes difficult to disconnect at the end of the eight hour
shift.⁷⁵
By starting a class action, the three former content moderators reclaim public
roles and renounce the duty of loyalty and discretion. As Sarah Roberts has
noted, ‘a key to their [the content moderators’] activity is often to remain as
discreet and undetectable as possible’.⁷⁶ They are to remain discreet, which is
enforced through non-disclosure agreements, and undetectable by being dis-
persed globally, their microwork not surfacing in the publicized personalized
relations between users or between users and platforms. In 2021, Irish MPs
began to move against these non-disclosure agreements, after a group of out-
sourced Facebook employees in Ireland gave evidence against them.⁷⁷ Such
agreements violate workers’ right to assemble and are leveraged by companies
to silence complaints.
⁷⁵ Ibid., §60.
⁷⁶ Roberts, Behind the Screen, 1.
⁷⁷ Bernal, ‘Facebook’s Content Moderators Are Fighting Back’.
and thus renders relations between platforms and states as antagonisms over
law and sovereignty. All these concerns and interventions efface the scenes of
resistance that unfold through class actions in the name of all social media
content moderators. While NetzDG stands for the problems of attempts by
states to regain sovereignty and leave the global work of digital platforms un-
touched, the class action disrupts the domesticized governmentality of social
media companies.
Bordering matters in international politics, as it turns categories of in-
side/outside, domestic/foreign, national/international into resources for gov-
erning by social media companies and states. This chapter has argued that
social media companies aim to ‘conduct the conduct’ of users globally through
relations of dependency, trust, authenticity, and the proliferation of thresh-
olds rather than borders. This does not imply that geopolitical borders are
no longer relevant for social media platforms. Geopolitical borders are entan-
gled with sovereign law and remain very real as companies tackle regulations
and taxes. In 2008, for instance, Facebook moved its European headquar-
ters to Ireland to take advantage of low corporate tax. Once the EU General
Data Protection Regulation (GDPR) came into force, Facebook argued that
it should only apply to its users in the EU, thus taking 1.5 billion Facebook
users in Africa, Asia, Australia, and Latin America out of the purview of
the legislation.⁸⁷ While the power of states over very large tech companies is
undeniable, state rebordering and reterritorialization of tech companies also
remains limited by the emerging art of government through thresholds. These
two arts of government configure citizens and users as their respective political
subjects.
Rather than a struggle over sovereignty or a new form of private–global colo-
nialism, we have shown how social media companies deploy techniques of gov-
erning through thresholds and community guidelines. States have not simply stood by; they, too, have discovered the advantages of Facebook's fluent global community of users.
While their public discourses lament the extension of big tech companies be-
yond sovereign boundaries, they use the formation of a different international
and the ‘foreign’ as a resource for their own state practices. In a case brought
by Big Brother Watch and a coalition of NGOs against the UK security agen-
cies’ practices of mass surveillance, the distinction internal/external makes
possible the intensification of state surveillance.⁸⁸ What counts as ‘external
communication’ enables the extension of surveillance into spheres that would
⁸⁹ Ibid., §75.
Conclusion
Democratic scenes
Big Data World, AI & Big Data Expo, and AI Cloud Expo are some of the
exhibitions of digital products, software, and hardware that take place regu-
larly in cities around the world, from Singapore and Hong Kong to Madrid
and London. At a Big Data and Artificial Intelligence exhibition, participants
promised to unleash the big data revolution, avoid ‘data drudgery’, and ‘shoot
for the moon’.1 The expo brought together data scientists, engineers, develop-
ers, lawyers, and sales representatives to discuss the challenges and promises
of big data, AI, and machine learning. Despite the ‘hype’ around these tech-
nologies, many of the talks in the AI Lab Theatre, which we attended, started
by diagnosing the problems of AI. Drawing on public controversies, from
the ProPublica research on racism in predictive policing to Apple’s gender-
biased credit card limits, engineers and data scientists engaged with the social
and political diagnoses of their own work. This awareness of bias and dis-
crimination was widespread among the audience, who put their hands up
when prompted to express recognition of these controversies. Professional and
public controversies were entangled on this global scene of digital capitalism.
Following these diagnoses, engineers and scientists proceeded to offer so-
lutions, often in the form of yet another technological development. For
instance, one speaker criticized algorithms for giving a flat view of things
and recommended contextualizing them in order to reach a better under-
standing of data. Context became equivalent to engineering challenges of
knowledge graphs and Linked Data. Another talk proposed to address ques-
tions of bias and ethics and alerted participants to the range of devices available
for implementing 'ethical' AI. The IBM AI Fairness 360 toolkit is an 'open-source toolkit of metrics to check for unwanted bias in datasets and machine learning models'.
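To illustrate the kind of metric packaged in such toolkits, the following minimal sketch computes one widely used fairness measure, disparate impact, over a small invented dataset of credit decisions. It is written in plain Python for illustration only; the data, group labels, and function name are hypothetical and do not reproduce the API of the IBM toolkit itself.

# Minimal sketch of a fairness metric of the kind bundled in 'ethical AI' toolkits.
# The data and labels below are invented; toolkits such as AI Fairness 360 ship
# many more metrics as well as bias-mitigation algorithms.

def disparate_impact(outcomes, groups, privileged, unprivileged):
    """Ratio of favourable-outcome rates between an unprivileged and a privileged group."""
    rate = lambda g: sum(o for o, grp in zip(outcomes, groups) if grp == g) / groups.count(g)
    return rate(unprivileged) / rate(privileged)

# Hypothetical credit decisions: 1 = credit granted, 0 = refused.
outcomes = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
groups = ['m', 'm', 'm', 'm', 'm', 'f', 'f', 'f', 'f', 'f']

di = disparate_impact(outcomes, groups, privileged='m', unprivileged='f')
print(f"disparate impact: {di:.2f}")  # prints 0.67; values well below 1.0 are read as unwanted bias

A ratio of one would indicate parity between the two groups; audits of this kind conventionally flag ratios below the 'four-fifths' threshold of 0.8.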
Unfolding controversies
We have proposed that the methodology of the scene developed in the book
makes it possible to attend to how controversies unfold in relation to al-
gorithms, and their latest instalments as big data and AI. Unfolding means
to open from the folds, but also to expand and to disclose. In that sense,
scenes are neither events nor situations, but contain elements of both. As
scenes unfold, we can trace how arguments, tools, and practices are entangled
and contested across social worlds. Scenes are socio-temporal arrangements
where heterogeneous subjects and objects co-appear and give rise to var-
ied contestations. These scenes materialize differently around the world, but
they are also transversal operations and concerns. Security agencies attempt
to develop capacities globally to find the anomalous needle in the data haystack.
Digital humanitarianism operates transnationally by being interlinked with
digital platforms. Demands for accountability of facial recognition technol-
ogy are shared in the US, across many European countries, and in China. Digital
platforms have led to new kinds of worker associations across borders.
Scenes are incisions in the world where conflicts play out over what is
perceivable, what garners political value, and what becomes infra-sensible,
supra-sensible, or imperceptible in some way. By focusing on scenes
and their controversies, we have adopted a polymorphous approach to
contestation. Controversies have an interdisciplinary history, from scientific
¹³ Marres, Material Participation; Barry, Material Politics. As we saw in Chapter 6, Bonnie Honig challenges the reading of political theorists as not attending to the materialities and objects of political contestation and agonistic democracy.
¹⁴ William Walters has argued that work on governmentality has at times 'eclipsed a proper consideration of politics' (Walters, Governmentality, 5).
¹⁵ Latour et al., 'The Whole Is Always Smaller Than Its Parts', 591 (emphasis in text).
need to follow the details of the recompositions of power and its effects in the
present. This allows us to understand how historical bias is not just transmitted
or amplified by data, as critics of the racializing effects of digital technologies
have argued. There are many additional factors such as how targets of algo-
rithms are determined away from white-collar crime suspicions in Manhattan,
how algorithmic input is justified against diverse features that record places in
heterogeneous ways, or how algorithms focus on some data at the expense of
other data. Chapter 2 has illustrated how neural networks and decision trees
are set to concentrate on different features. Algorithms are not simply devices
of ‘self-fulfilling prophecies’ of historically discriminating data.
The second part of the book, ‘Materializations’, has unpacked the practices
of decomposition, recomposition, and partitioning across three scenes of con-
troversy: over targeting in war, digital humanitarian action, and valorization
by tech companies. Through these scenes, we have traced different aspects of
governing through algorithms: the production of dangerous others, the power
of digital platforms, and the valorization of data. In Chapter 3, we have shown
how security agencies are producing dangerous ‘others’ by reconstituting them
algorithmically as anomalies, where small details draw the line between what
is conceived as regular and what is irregular. Anomaly detection relies on the
continuous composition and recomposition of subjects as data points so that
calculations of distance can produce differences between data. Anomalous
others are unlike enemies, criminals, and other risky suspects, which have historically been the targets of governmental interventions.
Drawing on materials from the Snowden archive and computer science
literature on anomaly detection, we have argued that security practices of
contact-chaining to find new suspects are transformed into detecting different
types of network anomalies where a suspect does not need to be in any known
relationship with other suspects. In fact, the greatest promise of anomaly de-
tection with machine learning is that suspicious behaviour can be extended
to digital traces like the length of a telephone call, which are otherwise not
known to induce suspicion. As anomaly detection can entail ‘the premature
exposure to death and debility that working with or being subjected to digital
technologies accelerates’,1⁶ these modes of racialization are indicative of a rel-
atively new form of nanoracism, a racism that remains at the threshold of the
perceptible even as its effects are deadly. A target can thus be simultaneously a well-known journalist and a dangerous other, because anomalies can be both without any tension or contradiction.
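To give a concrete sense of the distance calculations at stake, the following minimal sketch scores data points by their distance to their nearest neighbours, so that points far from everything else emerge as anomalies. It is purely illustrative: the features (average call length and number of distinct contacts) and their values are invented, and the sketch stands in for the family of techniques discussed rather than for any system documented in the Snowden archive.

# Illustrative nearest-neighbour anomaly scoring: subjects become data points,
# and 'anomalousness' is simply distance from one's closest neighbours.
# The feature values below (average call length in minutes, distinct contacts)
# are invented for the sake of the example.
import numpy as np

def knn_anomaly_scores(points: np.ndarray, k: int = 2) -> np.ndarray:
    """Score each point by its mean distance to its k nearest neighbours."""
    diffs = points[:, None, :] - points[None, :, :]   # pairwise differences
    dists = np.sqrt((diffs ** 2).sum(axis=-1))        # Euclidean distance matrix
    np.fill_diagonal(dists, np.inf)                   # ignore self-distance
    nearest = np.sort(dists, axis=1)[:, :k]           # k closest neighbours
    return nearest.mean(axis=1)

# Rows: [average call length, number of distinct contacts]
data = np.array([[3.0, 12], [2.5, 10], [3.2, 11], [2.8, 13], [45.0, 2]])
scores = knn_anomaly_scores(data)
print(scores.argmax())  # index 4: unusually long calls to very few contacts

The point of the example is that nothing about the anomalous row is suspicious in itself; it is only its distance from the other rows that singles it out.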
Recompositions of the small and the large have also remade societal and
technical infrastructures. Algorithmic reason has materialized in platforms
and enabled their global expansion. As discussed in Chapter 4, platforms are
composites of small and large forms; it is this composition and recomposi-
tion that makes their plasticity possible and leads to concerns about their
imperial reach, global surveillance, and economic monopoly. While digital
platforms extract, appropriate, and extend, we have argued that they also break
up and decompose. Their material history shows how algorithmic reason can
effectively internalize externals and externalize internals through processes
of recomposition. Using distributed services like application programming
interfaces, platforms pull in outside parts, while cloud technologies offer plat-
form elements outside. Through the dual move of taking the inside out and
bringing the outside in, a few platforms have become the building blocks of
most things digital. In dispersing the various elements of platforms and re-
embedding them, they constitute a new form of micropower. We have analysed
the effects of platform micropower for humanitarian action and organiza-
tions. Humanitarian actors have been latecomers to the digital world, but they have embraced many digital technologies, particularly biometrics and other technologies of data extraction. They also connect to platforms at all levels, as these promise instantaneous global reach. Digital platforms
have been much less visible in discussions of humanitarianism, but insidiously
produce humanitarianism as control.
The third materialization of algorithmic reason we investigated is that of
economic value. We do not claim to develop a new theory of digital economy
but investigate the political effects of widely debated analyses of economic
value. By taking the patents of Internet companies as the place where controversies over value production play out, we have traced new forms of valorization, which focus on the recomposition of smaller and smaller details. In academic
and non-academic apprehensions of digital economies, labour-centric po-
litical critiques of economies are complemented by controversies about the
universality of surveillance and monopolies of platform capitalism. These con-
cerns are often followed by political desires for states to regulate, protect, and
organize. Yet, other new forms of valorization present in patents might be
ignored.
Like other platforms, Spotify places its users more and more under capi-
talist surveillance and uses network effects to cement its platform power in
order to appropriate and circulate music products from around the world. Its
patents, however, also tell the story of a growing anxiety that this might not be enough, given its limited ability to produce new content and its angst about failing at consumption. Spotify's patents display a new form of valorization
friction, refusal, and resistance. These analyses diverge from diagnoses of al-
gorithmic governmentality, which see it as eviscerating democratic politics,
spelling the end of emancipation and even of political subjectivity. How are
friction, refusal, and resistance folded onto scenes of democratic politics?
From the petition of Google employees protesting Pentagon contracts to the
algorithmic discriminations and separations where new forms of nanoracism
are hidden behind seemingly neutral ideas such as anomaly detection, the
stakes could not be higher. In this part of the book, we advance the analysis of
scenes of controversies by attending to the effects that these have upon govern-
mental and anti-governmental interventions. We reconnect controversies to
an expanding vocabulary of contestation, which includes friction, refusal, and
resistance, where each responds to different modes of making algorithms
governable.
In Chapter 6, an ethico-politics of friction reconfigures scenes of algorithmic
controversies by slowing down, interrupting, or otherwise making algorithmic
operations more costly. When science and technology studies (STS) scholar
Paul Edwards defines friction as ‘the costs in time, energy, and attention
required simply to collect, check, store, move, receive, and access data’, his
focus is on the work that generating data requires.¹⁷ Friction is indicative of
effort, difficulty, cost, and slowing down. For us, frictions are not just socio-
technical occurrences but can be instigated, as we show in the final part of
the book. Such frictions slow down and differentially inflect the unfolding of
scenes of controversy. Frictions are material as much as social; they are collective and dissensual, working towards a redistribution of the sensible. When
Google employees write a petition, its wider circulation leads to unexpected unfoldings and to discussions beyond the question of AI weaponry.
The unfoldings begin to recast understandings of AI technologies as social
phenomena and can initiate the formation of transversal collectives.
While the Google employees’ letter has at times been dismissed by crit-
ical scholars as not going far enough, the prism of friction allows us to
understand the move to slow down and inflect technologies differently. In
all our scenes, algorithmic operations have been presented as more similar
to other human–machine labour processes than the general idea of artificial
intelligence might suggest. As labour processes and workflows, they draw at-
tention both to the mundane, unexceptional practices of algorithms, which are
step-by-step operations, and to the limitations on the emergence of collective
dissensus.
¹⁸ Haraway, Staying with the Trouble, 1. Legacy Russell has coined the term 'glitch feminism' (Russell, Glitch Feminism). On algorithm trouble, see Meunier, Ricci, and Gray, 'Algorithm Trouble'.
¹⁹ Kane, High-Tech Trash, 15.
²⁴ Wajcman, Technofeminism, 6.
²⁵ We use the prefix 'anti' here in the sense that Balibar has given it as 'the most general modality of the act of "facing up"' (Balibar, Violence and Civility, 23).
²⁶ See, for instance, D'Ignazio and Klein, Data Feminism, 60.
²⁷ Balibar, 'Democracy and Liberty in Times of Violence'.
References
Abbott, Dean. Applied Predictive Analytics: Principles and Techniques for the Professional
Data Analyst (London: John Wiley & Sons, 2014).
Access Now. ‘Dear Spotify: Don’t Manipulate Our Emotions for Profit’. 2021. Available at
https://www.accessnow.org/spotify-tech-emotion-manipulation/, [cited 28 May 2021].
Access Now. ‘Spotify, Don’t Spy: Global Coalition of 180+ Musicians and Human Rights
Groups Take a Stand against Speech-Recognition Technology’. 2021. Available at
https://www.accessnow.org/spotify-spy-tech-coalition/, [cited 28 May 2021].
ACM U.S. Technology Policy Committee. ‘Statement on Principles and Prerequisites
for the Development, Evaluation and Use of Unbiased Facial Recognition Tech-
nologies’. 2020. Available at https://www.acm.org/binaries/content/assets/public-policy/
ustpc-facial-recognition-tech-statement.pdf, [cited 4 June 2021].
Acxiom. ‘Consumer Insights Packages’. n.d. Available at https://www.acxiom.com/what-
we-do/data-packages/, [cited 18 February 2019].
Adadi, Amina, and Mohammed Berrada. ‘Peeking inside the Black-Box: A Survey on
Explainable Artificial Intelligence (XAI)’. IEEE Access 6 (2018): 52138–60.
Aggarwal, Charu C. Outlier Analysis (New York: Springer, 2013).
Aggarwal, Charu C., and Chandan K. Reddy. Data Clustering: Algorithms and Applications
(Boca Raton, FL: CRC Press, 2013).
Agyemang, Malik, Ken Barker, and Rada Alhajj. ‘A Comprehensive Survey of Numeric and
Symbolic Outlier Mining Techniques’. Intelligent Data Analysis 10(6) (2006): 521–38.
AI Now. ‘Algorithmic Accountability Policy Toolkit’. AI Now Institute, 2018. Available at
https://ainowinstitute.org/aap-toolkit.pdf, [cited 1 June 2021].
Akhgar, Babak, Gregory B. Saathoff, Hamid R. Arabnia, Richard Hill, Andrew Staniforth,
and Petra Saskia Bayerl, eds. Application of Big Data for National Security: A Practitioner’s
Guide to Emerging Technologies (Amsterdam: Butterworth-Heinemann, 2015).
Akoglu, Leman, Hanghang Tong, and Danai Koutra. ‘Graph-Based Anomaly Detec-
tion and Description: A Survey’. Data Mining and Knowledge Discovery 29(3) (2015):
626–88.
Alpaydin, Ethem. Introduction to Machine Learning (Cambridge, MA: MIT Press, 2014).
Amazon.com. ‘News Release—Amazon.com Announces Fourth Quarter Sales up 20% to
$72.4 Billion’. Amazon.com, 2019. Available at https://press.aboutamazon.com/news-
releases/news-release-details/amazoncom-announces-fourth-quarter-sales-20-724-
billion, [cited 24 October 2019].
Amicelle, Anthony, Claudia Aradau, and Julien Jeandesboz. ‘Questioning Security Devices:
Performativity, Resistance, Politics’. Security Dialogue 46(5) (2015): 293–306.
Amoore, Louise. The Politics of Possibility: Risk and Security Beyond Probability (Durham,
NC: Duke University Press, 2014).
Amoore, Louise. Cloud Ethics: Algorithms and the Attributes of Ourselves and Others
(Durham, NC: Duke University Press, 2020).
Amoore, Louise, and Volha Piotukh. ‘Life Beyond Big Data: Governing with Little Analyt-
ics’. Economy and Society 44(3) (2015): 341–66.
Amoore, Louise, and Rita Raley. ‘Securing with Algorithms: Knowledge, Decision,
Sovereignty’. Security Dialogue 48(1) (2017): 3–10.
Ananny, Mike ‘Toward an Ethics of Algorithms: Convening, Observation, Probability, and
Timeliness’. Science, Technology, & Human Values 41(1) (2016): 93–117.
Ananny, Mike, and Kate Crawford. ‘Seeing without Knowing: Limitations of the Trans-
parency Ideal and Its Application to Algorithmic Accountability’. New Media & Society
20(3) (2018): 973–89.
Anders, Günther. Burning Conscience: The Case of the Hiroshima Pilot Claude Eatherly, Told
in His Letters to Günther Anders’ (London: Weidenfeld and Nicolson, 1961).
Anders, Günther. ‘Theses for the Atomic Age’. The Massachusetts Review 3(3) (1962): 493–
505.
Anders, Günther. Die Antiquiertheit des Menschen 2. Über die Zerstörung des Lebens im
Zeitalter der dritten industriellen Revolution. 5th edition (München: C. H. Beck, 2018
[1980]).
Anders, Günther. The Obsolescence of Man, Vol. II: On the Destruction of Life in the Epoch
of the Third Industrial Revolution. Translated by Josep Monter Pérez (available at https://
libcom.org/files/ObsolescenceofManVol%20IIGunther%20Anders.pdf [1980]).
Anders, Günther. Hiroshima Ist Überall (Munich: Verlag C. H. Beck, 1982).
Anders, Günther. Nous, fils d’Eichmann. Translated by Sabine Cornille and Philippe Ivernel
(Paris: Rivages Poche, 2003 [1964]).
Anders, Günther. Et si je suis désespéré, que voulez-vous que j’y fasse? Entretien avec Mathias
Greffrath. Translated by Christophe David (Paris: Editions Allia, 2010 [1977]).
Anderson, Chris. The Long Tail: How Endless Choice Is Creating Unlimited Demand
(London: Random House, 2007).
Anderson, David. ‘Report of the Bulk Powers Review’. Independent Reviewer of Terrorism
Legislation, 2016. Available at https://terrorismlegislationreviewer.independent.gov.uk/
wp-content/uploads/2016/08/Bulk-Powers-Review-final-report.pdf, [cited 30 August
2016].
Andrejevic, Mark, and Kelly Gates. ‘Big Data Surveillance: Introduction’. Surveillance &
Society 12(2) (2014): 185–96.
Appel, Hannah, Nikhil Anand, and Akhil Gupta. 'Introduction: Temporality, Politics and the
Promise of Infrastructure’. In The Promise of Infrastructure, edited by Nikhil Anand, Akhil
Gupta, and Hannah Appel, 1–40 (Durham, NC: Duke University Press, 2018).
Apprich, Clemens, Wendy Hui Kyong Chun, Florian Cramer, and Hito Steyerl. Pattern
Discrimination (Minneapolis: meson press, 2018).
Aradau, Claudia. ‘Security That Matters: Critical Infrastructure and Objects of Protection’.
Security Dialogue 41(5) (2010): 491–514.
Aradau, Claudia. ‘Risk, (In)Security and International Politics’. In Routledge Handbook
of Risk Studies, edited by Adam Burgess, Alexander Alemanno, and Jens Zinn, 290–8
(London: Routledge, 2016).
Aradau, Claudia. ‘Experimentality, Surplus Data and the Politics of Debilitation in Border-
zones’. Geopolitics 27(1) (2022): 26–46.
Aradau, Claudia, and Tobias Blanke. ‘Politics of Prediction: Security and the Time/Space of
Governmentality in the Age of Big Data’. European Journal of Social Theory 20(3) (2017):
373–91.
Aradau, Claudia, and Tobias Blanke. ‘Governing Others: Anomaly and the Algorithmic
Subject of Security’. European Journal of International Security 2(1) (2018): 1–21.
Aradau, Claudia, and Tobias Blanke. ‘Algorithmic Surveillance and the Political Life of
Error’. Journal for the History of Knowledge 2(1) (2021): 1–13.
Aradau, Claudia, Tobias Blanke, and Giles Greenway. ‘Acts of Digital Parasitism: Hack-
ing, Humanitarian Apps and Platformisation’. New Media & Society 21(11–12) (2019):
2548–65.
Aradau, Claudia, and Rens van Munster. Politics of Catastrophe: Genealogies of the Unknown
(London: Routledge, 2011).
Arms Control Association. ‘Group of 4,000 Anonymous Google Employees Urging Com-
pany Not to Be “in the Business of War” Voted 2018 Arms Control Persons of the Year’.
Arms Control Association, 2019. Available at https://www.armscontrol.org/pressroom/
2018-acpoy-winner, [cited 24 January 2019].
Arora, Payal. ‘Decolonizing Privacy Studies’. Television & New Media 20(4) (2019): 366-78.
Arora, Payal. ‘General Data Protection Regulation—a Global Standard? Privacy Futures,
Digital Activism, and Surveillance Cultures in the Global South’. Surveillance & Society
17(5) (2019).
Arthur, W. Brian. ‘Increasing Returns and the New World of Business’. Harvard Business Re-
view, 1996. Available at https://hbr.org/1996/07/increasing-returns-and-the-new-world-
of-business, [cited 28 October 2019].
Atanasoski, Neda, and Kalindi Vora. Surrogate Humanity: Race, Robots, and the Politics of
Technological Futures (Durham, NC: Duke University Press, 2019).
Auerbach, David. ‘You Are What You Click: On Microtargeting’. 2013. Available
at https://www.thenation.com/article/you-are-what-you-click-microtargeting/, [cited 10
March 2018].
Austin, Jonathan Luke. ‘Security Compositions’. European Journal of International Security
4(3) (2019): 249–73.
AWS. ‘AWS Disaster Response’. 2021. Available at https://aws.amazon.com/government-
education/nonprofits/disaster-response/, [cited 19 July 2021].
Bacchi, Umberto. ‘Face for Sale: Leaks and Lawsuits Blight Russia Facial Recognition’.
2020. Available at https://www.reuters.com/article/us-russia-privacy-lawsuit-feature-
trfn-idUSKBN27P10U, [cited 1 June 2021].
Balibar, Étienne. We, the People of Europe? Reflections on Transnational Citizenship (Prince-
ton: Princeton University Press, 2004).
Balibar, Étienne. Violence and Civility. On the Limits of Political Philosophy. Translated by
G. M. Goshgarian. (New York: Columbia University Press, 2015).
Balibar, Étienne. ‘Reinventing the Stranger: Walls All over the World, and How to Tear Them
Down’. Symploke 25(1) (2017): 25–41.
Balibar, Étienne. ‘Democracy and Liberty in Times of Violence’. In The Hrant Dink Memorial
Lecture 2018. Boğaziçi University, Istanbul, 2018.
Balibar, Étienne. Citizenship (Cambridge, UK: Polity, 2019).
Balzacq, Thierry, Tugba Basaran, Didier Bigo, Emmanuel-Pierre Guittet, and Christian
Olsson. ‘Security Practices’. In International Studies Encyclopedia, edited by Robert A.
Denemark (London: Blackwell, 2010).
Barkawi, Tarak, and Mark Laffey. ‘Retrieving the Imperial: Empire and International
Relations’. Millenium: Journal of International Studies 31(1) (2002): 109–27.
Barnett, Vic, and Toby Lewis. Outliers in Statistical Data (New York: John Wiley & Sons,
1978).
Barry, Andrew. Material Politics: Disputes Along the Pipeline (Chichester: Wiley Blackwell,
2013).
Barton, Georgina. ‘Germany Fines Facebook $2.3 Million for Violation of Hate Speech
Law’. 2019. Available at https://www.washingtonexaminer.com/news/germany-fines-
facebook-2-3-million-for-violation-of-hate-speech-law, [cited 26 July 2021].
Big Brother Watch. ‘Face-Off Campaign’. Big Brother Watch, 2019. Available at https://
bigbrotherwatch.org.uk/all-campaigns/face-off-campaign/, [cited 27 November 2019].
Big Brother Watch, 10 Human Rights Organisations, and Bureau of Investigative
Journalism and Others. ‘Applicants’ Written Observations’. Privacy International,
2019, available at https://www.privacyinternational.org/sites/default/files/2019-07/
Applicants%27%20Observations%20-%20May%202019.pdf, [cited 6 March 2020].
Big Brother Watch and Others v the UK. ‘Applications Nos. 58170/13, 62322/14 and
24960/15). Grand Chamber Judgement’. European Court of Human Rights, 2021. Avail-
able at https://www.bailii.org/eu/cases/ECHR/2021/439.html, [cited 25 May 2021].
Big Data LDN. ‘To Intelligence and Beyond’. 2019. Available at https://bigdataldn.com/,
[cited 22 July 2021].
Bigo, Didier. ‘The Möbius Ribbon of Internal and External Security(ies) ‘. In Identities, Bor-
ders, Orders. Rethinking International Relations Theory, edited by Mathias Albert, David
Jacobson, and Josef Lapid, 91–116 (Minneapolis: University of Minnesota Press, 2001).
Bigo, Didier. ‘Freedom and Speed in Enlarged Borderzones’. In The Contested Politics of
Mobility: Borderzones and Irregularity, edited by Vicki Squire, 31–50 (London: Routledge,
2010).
Bigo, Didier. ‘The (In)Securitization Practices of the Three Universes of EU Border
Control: Military/Navy—Border Guards/Police—Database Analysts’. Security Dialogue
45(3) (2014): 209–25.
Bigo, Didier, Engin Isin, and Evelyn Ruppert. ‘Data Politics’. In Data Politics: Worlds, Sub-
jects, Rights, edited by Didier Bigo, Engin Isin, and Evelyn Ruppert, 1–18 (London:
Routledge, 2019).
Binns, Reuben, Ulrik Lyngs, Max Van Kleek, Jun Zhao, Timothy Libert, and Nigel Shad-
bolt. ‘Third Party Tracking in the Mobile Ecosystem’. In WebSci ’18 Proceedings of the
10th ACM Conference on Web Science, 23–31 (Amsterdam, Netherlands: ACM Library,
2018).
Birchall, Clare. Radical Secrecy: The Ends of Transparency in Datafied America (Minneapolis:
University of Minnesota Press, 2021).
Blanke, Tobias. Digital Asset Ecosystems: Rethinking Crowds and Clouds (Oxford: Elsevier,
2014).
Blanke, Tobias, Giles Greenway, Jennifer Pybus, and Mark Cote. ‘Mining Mobile Youth
Cultures’. In 2014 IEEE International Conference on Big Data, 14–17 (Washington, DC:
2014).
Blanke, Tobias, and Jennifer Pybus. ‘The Material Conditions of Platforms: Monopolization
through Decentralization’. Social Media & Society 6(4) (2020): 1-13.
Boltanski, Luc, and Ève Chiapello. The New Spirit of Capitalism. Translated by Gregory
Elliott (London: Verso, 2005).
Booking.com. ‘Privacy Statement’. Booking.com, 2019. Available at https://www.booking
.com/general.en-gb.html?label=37781_privacy-statement-anchor_v2-&tmpl=docs
%2Fprivacy-policy&auth_success=1#policy-personal, [cited 6 February 2019].
Borak, Masha. ‘Facial Recognition Is Used in China for Everything from Refuse Collection
to Toilet Roll Dispensers and Its Citizens Are Growing Increasingly Alarmed, Survey
Shows’. South China Morning Post, 27 January 2021.
Boullier, Dominique. Sociologie du numérique (Paris: Armand Colin, 2016).
Bousquet, Antoine. The Eye of War: Military Perception from the Telescope to the Drone
(Minneapolis: University of Minnesota Press, 2018).
Brantingham, P. Jeffrey. 'The Logic of Data Bias and Its Impact on Place-Based Predictive Policing'. Ohio State Journal of Criminal Law 15 (2017): 473–86.
Brantingham, P. Jeffrey, Matthew Valasik, and George O. Mohler. ‘Does Predictive Policing
Lead to Biased Arrests? Results from a Randomized Controlled Trial’. Statistics and Public
Policy 5(1) (2018): 1–6.
Brayne, Sarah, Alex Rosenblat, and danah boyd. ‘Predictive Policing’. In Data & Civil Rights:
A New Era of Policing and Justice. 2015. Available at http://www.datacivilrights.org/
pubs/2015-1027/Predictive_Policing.pdf, [cited 28 July 2021].
Brown, Wendy. Walled States, Waning Sovereignty (New York: Zone Books, 2010).
Brown, Wendy. Undoing the Demos: Neoliberalism’s Stealth Revolution (New York: Zone
Books, 2015).
Brown, Wendy. ‘Neoliberalism’s Frankenstein: Authoritarian Freedom in Twenty-First
Century “Democracies”’. Critical Times: Interventions in Global Critical Theory
1(1) (2018): 60–79.
Brown, Wendy. In the Ruins of Neoliberalism: The Rise of Antidemocratic Politics in the West
(New York: Columbia University Press, 2019).
Bruns, Axel. Are Filter Bubbles Real? (Cambridge: Polity, 2019).
Bucher, Taina. If … Then: Algorithmic Power and Politics (Oxford: Oxford University Press,
2018).
Bucher, Taina. ‘The Right-Time Web: Theorizing the Kairologic of Algorithmic Media’. New
Media & Society 22(9) (2020): 1699–1714.
Bueger, Christian. ‘Making Things Known: Epistemic Practices, the United Nations, and the
Translation of Piracy’. International Political Sociology 9(1) (2015): 1–18.
Bundesamt für Justiz. ‘Federal Office of Justice Issues Fine against Facebook’. 2019.
Available at https://www.bundesjustizamt.de/DE/Presse/Archiv/2019/20190702.
html?nn=3449818, [cited 20 September 2019].
Bunz, Mercedes. ‘The Calculation of Meaning: On the Misunderstanding of New Artificial
Intelligence as Culture’. Culture, Theory and Critique 60(3–4) (2019): 264–78.
Buolamwini, Joy. ‘Response: Racial and Gender Bias in Amazon Rekognition—
Commercial AI System for Analyzing Faces’. Medium, 2019. Available at https://medium
.com/@Joy.Buolamwini/response-racial-and-gender-bias-in-amazon-rekognition-
commercial-ai-system-for-analyzing-faces-a289222eeced, [cited 22 October 2019].
Buolamwini, Joy, and Timnit Gebru. ‘Gender Shades: Intersectional Accuracy Disparities
in Commercial Gender Classification’. In Proceedings of the 1st Conference on Fair-
ness, Accountability and Transparency, edited by Sorelle A. Friedler and Wilson Christo.
Proceedings of Machine Learning Research 81: 77–91 (2018).
Bur, Jessie. ‘Pentagon’s “Rebel Alliance” Gets New Leadership’. C4ISRNET, 2019. Avail-
able at https://www.c4isrnet.com/management/leadership/2019/04/23/the-pentagons-
tech-experts-get-a-new-leader/, [cited 30 October 2019].
Burns, Ryan. ‘New Frontiers of Philanthro-Capitalism: Digital Technologies and Humani-
tarianism’. Antipode 51(4) (2019): 1101–22.
Burrell, Jenna. ‘How the Machine “Thinks”: Understanding Opacity in Machine Learning
Algorithms’. Big Data & Society 3(1) (2016).
Burrell, Jenna, and Marion Fourcade. ‘The Society of Algorithms’. Annual Review of
Sociology 47 (2021): 213–37.
Butcher, Mike. ‘Cambridge Analytica CEO Talks to Techcrunch About Trump, Hillary
and the Future’. TechCrunch, 2017. Available at https://techcrunch.com/2017/11/06/
cambridge-analytica-ceo-talks-to-techcrunch-about-trump-hilary-and-the-future/,
[cited 28 February 2018].
Butler, Judith. The Force of Nonviolence: An Ethico-Political Bind (London: Verso, 2021).
Craddock, R., D. Watson, and W. Saunders. ‘Generic Pattern of Life and Behaviour Anal-
ysis’. Paper presented at the 2016 IEEE International Multi-Disciplinary Conference on
Cognitive Methods in Situation Awareness and Decision Support (CogSIMA) (March
2016), 21–5.
Crawford, Kate. The Atlas of AI (New Haven, CT: Yale University Press, 2021).
Crawford, Kate. ‘Can an Algorithm Be Agonistic? Ten Scenes from Life in Calculated
Publics’. Science, Technology, & Human Values 41(1) (2015): 77–92.
Currie, T. C. ‘Airbnb’s 10 Takeaways from Moving to Microservices’. The New Stack, 2017.
Available at https://thenewstack.io/airbnbs-10-takeaways-moving-microservices/, [cited
23 October 2019].
Currier, Cora, Glenn Greenwald, and Andrew Fishman. ‘U.S. Government Designated
Prominent Al Jazeera Journalist as “Member of Al Qaeda”’. The Intercept, 2015. Avail-
able at https://theintercept.com/2015/05/08/u-s-government-designated-prominent-al-
jazeera-journalist-al-qaeda-member-put-watch-list/, [cited 5 July 2021].
Danaher, John. ‘The Threat of Algocracy: Reality, Resistance and Accommodation’. Philos-
ophy & Technology 29(3) (2016): 245–68.
Danaher, John, Michael J Hogan, Chris Noone, Rónán Kennedy, Anthony Behan, Aisling
De Paor, Heike Felzmann, et al. ‘Algorithmic Governance: Developing a Research Agenda
through the Power of Collective Intelligence’. Big Data & Society 4(2) (2017): 1–21.
Daroczi, Gergely. Mastering Data Analysis with R (Birmingham, UK: Packt Publishing,
2015).
DARPA. ‘Anomaly Detection at Multiple Scales’. Defense Advanced Research
Projects Agency (DARPA), 2010. Available at https://www.fbo.gov/download/2f6/
2f6289e99a0c04942bbd89ccf242fb4c/DARPA-BAA-11-04_ADAMS.pdf, [cited 26
February 2016].
Davenport, Thomas H, and John C Beck. The Attention Economy: Understanding the New
Currency of Business (Boston: Harvard Business School Press, 2001).
Davidshofer, Stephan, Julien Jeandesboz, and Francesco Ragazzi. ‘Technology and Secu-
rity Practices: Situating the Technological Imperative’. In International Political Sociology:
Transversal Lines, edited by Basaran Tugba, Didier Bigo, Emmanuel-Pierre Guittet, and
R. B. J. Walker, 205–27 (London: Routledge, 2016).
Davis, Angela, and Eduardo Mendieta. Abolition Democracy: Beyond Prisons, Torture,
Empire: Interviews with Angela Davis (New York: Seven Stories Press, 2005).
DCMS. ‘Disinformation and “Fake News”: Final Report’. Digital, Culture, Media, and
Sport Committee (DCMS), House of Commons, 2019. Available at https://publications
.parliament.uk/pa/cm201719/cmselect/cmcumeds/1791/1791.pdf, [cited 18 February
2019].
Delcker, Janosch. 'Germany Fines Facebook €2m for Violating Hate Speech Law'. Politico,
2019. Available at https://www.politico.eu/article/germany-fines-facebook-e2-million-
for-violating-hate-speech-law/, [cited 20 September 2019].
Deep Mind. ‘Ethics & Society’. 2019. Available at https://deepmind.com/applied/deepmind-
ethics-society/, [cited 29 May 2019].
De Goede, Marieke. ‘Fighting the Network: A Critique of the Network as a Security
Technology’. Distinktion: Journal of Social Theory 13(3) (2012): 215–32.
De Goede, Marieke, and Gavin Sullivan. ‘The Politics of Security Lists’. Environment and
Planning D: Society and Space 34(1) (2016): 67–88.
Deleuze, Gilles. ‘Postscript on the Societies of Control’. October 59(Winter) (1992): 3–7.
Dencik, Lina, Arne Hintz, Joanna Redden, and Emiliano Treré. ‘Exploring Data Jus-
tice: Conceptions, Applications and Directions’. Information, Communication & Society
22(7) (2019): 873–81.
Denning, Peter J. ‘Computer Science: The Discipline’. Encyclopedia of Computer Science
32(1) (2000): 9–23.
Der Derian, James. Virtuous War: Mapping the Military-Industrial-Media-Entertainment
Network (Boulder, CO: Westview Press, 2005).
De Reuver, Mark, Carsten Sørensen, and Rahul C. Basole. ‘The Digital Platform: A Research
Agenda’. Journal of Information Technology 33(2) (2018): 124–35.
Desrosières, Alain. ‘Masses, individus, moyennes: La statistique sociale au XIXe siècle’.
Hermès 2(2) (1988): 41–66.
Desrosières, Alain. ‘Du singulier au général. L’argument statistique entre la science
et l’État’ in Cognition et information en société, edited by B. Conein and Laurent
Thévenot, 267–82 (Paris: Éditions de l’École des Hautes Études en Sciences Sociales,
1997).
Desrosières, Alain. The Politics of Large Numbers: A History of Statistical Reasoning.
Translated by Camille Naish (Cambridge, MA: Harvard University Press, 2002).
Desrosières, Alain. ‘Mapping the Social World: From Aggregates to Individual’. Limn 2
(2012).
Diakopoulos, Nicholas. ‘Accountability in Algorithmic Decision Making’. Communications
of the ACM 59(2) (2016): 56–62.
Dick, Stephanie. ‘Artificial Intelligence’. Harvard Data Science Review 1(1) (2019).
Digital Humanitarian Network. ‘Digital Humanitarian Network—History and Today’. 2021.
Available at https://www.digitalhumanitarians.com/, [cited 9 June 2021].
D’Ignazio, Catherine, and Lauren F Klein. Data Feminism (Cambridge, MA: MIT Press,
2020).
Doffman, Zak. ‘Buying Huawei Technology “Like Buying Chinese Fighter Planes”, Shock
Report Warns’. Forbes, 2019. Available at https://www.forbes.com/sites/zakdoffman/
2019/09/25/buying-huawei-technology-like-buying-chinese-fighter-planes-shock-new-
report-warns/, [cited 25 September 2019].
Domingos, Pedro. The Master Algorithm: How the Quest for the Ultimate Learning Machine
Will Remake Our World (New York: Basic Books, 2015).
Doran, Will. ‘How the Ushahidi Platform Works, and What Comes Next’. 2018. Avail-
able at https://www.ushahidi.com/blog/2018/11/05/how-the-ushahidi-platform-works-
and-what-comes-next, [cited 9 June 2021].
Doshi-Velez, Finale, and Been Kim. ‘Towards a Rigorous Science of Interpretable Machine
Learning’. arXiv preprint arXiv:1702.08608 (2017).
Drott, Eric A. ‘Music as a Technology of Surveillance’. Journal of the Society for American
Music 12(3) (2018): 233–67.
Deguy, Michel. 'Poétique de la scène'. In Philosophie de la scène, edited by Michel Deguy,
Thomas Dommange, Nicolas Doutey, Denis Guénoun, Esa Kirkkopelto, and Schirin
Nowrousian, 145–53 (Besançon: Les Solitaires Intempestifs, 2010).
Dupré, John, and Regenia Gagnier. ‘A Brief History of Work’. Journal of Economic Issues
30(2) (1996): 553–9.
Dwoskin, Elizabeth, Jeanne Whalen, and Regine Cabato. ‘Content Moderators See It All –
and Suffer’. The Washington Post (2018), A01.
Eberle, William, and Lawrence Holder. ‘Anomaly Detection in Data Represented as Graphs’.
Intelligent Data Analysis 11(6) (2007): 663–89.
Eckersley, Peter. ‘How Good Are Google’s New AI Ethics Principles?’. Electronic Fron-
tier Foundation, 2018. Available at https://www.eff.org/deeplinks/2018/06/how-good-
are-googles-new-ai-ethics-principles, [cited 23 January 2019].
Edwards, Jane. ‘Defense Innovation Board Eyes Ethical Guidelines for Use of AI in War-
fare’. ExecutiveGov, 2019. Available at https://www.executivegov.com/2019/01/defense-
innovation-board-eyes-ethical-guidelines-for-use-of-ai-in-warfare/, [cited 28 January
2019].
Edwards, Paul N. ‘Infrastructure and Modernity: Force, Time, and Social Organization in
the History of Sociotechnical Systems’. In Modernity and Technology, edited by Thomas J.
Misa, Philip Brey, and Andrew Feenberg, 185–226 (Cambridge, MA: MIT Press, 2003).
Edwards, Paul N., Geoffrey C Bowker, Steven J Jackson, and Robin Williams. ‘Introduction:
An Agenda for Infrastructure Studies’. Journal of the Association for Information Systems
10(Special Issue) (2009): 364–74.
Edwards, Paul N. A Vast Machine: Computer Models, Climate Data, and the Politics of Global
Warming (Cambridge, MA: MIT Press, 2010).
Edwards, Paul N, Steven J Jackson, Melissa K Chalmers, Geoffrey C Bowker, Christine L
Borgman, David Ribes, Matt Burton, and Scout Calvert. ‘Knowledge Infrastructures: In-
tellectual Frameworks and Research Challenges’. Deep Blue, 2013. Available at http://hdl.
handle.net/2027.42/97552, [cited 28 July 2021].
Egbert, Simon, and Matthias Leese. Criminal Futures: Predictive Policing and Everyday Police
Work (London: Routledge, 2021).
Epstein, Zach. ‘Microsoft Says Its Racist Facial Recognition Tech Is Now Less Racist’. BGR,
2018. Available at https://bgr.com/2018/06/27/microsoft-facial-recognition-dark-skin-
tone-improvements/, [cited 16 February 2019].
Erickson, Paul, Judy L. Klein, Lorraine Daston, Rebecca Lemov, Thomas Sturm, and
Michael D Gordin. How Reason Almost Lost Its Mind: The Strange Career of Cold War
Rationality (Chicago: University of Chicago Press, 2013).
eu-LISA. ‘Artificial Intelligence in the Operational Management of Large-Scale IT Sys-
tems’. eu-LISA, 2020. Available at https://www.eulisa.europa.eu/Publications/Reports/
AI%20in%20the%20OM%20of%20Large-scale%20IT%20Systems.pdf, [cited 8 October
2020].
European Commission. ‘Proposal for a Regulation Laying Down Harmonised Rules on Ar-
tificial Intelligence’. 2021. Available at https://eur-lex.europa.eu/legal-content/EN/TXT/
?qid=1623335154975&uri=CELEX%3A52021PC0206, [cited 26 July 2021].
European Commission’s High-Level Expert Group on Artificial Intelligence. ‘Draft Ethics
Guidelines for Trustworthy AI’. 2018. Available at https://ec.europa.eu/digital-single-
market/en/news/draft-ethics-guidelines-trustworthy-ai, [cited 28 January 2019].
European Commission’s High-Level Expert Group on Artificial Intelligence. ‘Ethics Guide-
lines for Trustworthy AI’. 2019. Available at https://ec.europa.eu/digital-single-market/
en/news/ethics-guidelines-trustworthy-ai, [cited 28 May 2019].
European Economic and Social Committee. ‘The Ethics of Big Data: Balancing Economic
Benefits and Ethical Questions of Big Data in the EU Policy Context’. EESC, 2017.
Available at https://www.eesc.europa.eu/resources/docs/qe-02-17-159-en-n.pdf, [cited
19 January 2019].
European External Action Service. ‘EU vs Disinfo’. East StratCom Task Force, 2019.
Available at https://euvsdisinfo.eu/news/, [cited 29 September 2019].
European Parliament. ‘What If Algorithms Could Abide by Ethical Principles?’. European
Parliament, 2018. Available at http://www.europarl.europa.eu/RegData/etudes/ATAG/
2018/624267/EPRS_ATA(2018)624267_EN.pdf, [cited 3 May 2019].
European Space Agency. ‘DroneAI Solution for Humanitarian and Emergency Situations’.
2021. Available at https://business.esa.int/projects/droneAI, [cited 9 June 2021].
Ewald, François. ‘Omnes et singulatim. After Risk’. The Carceral 7 (2011): 77–107.
Facebook. ‘Community Standards’. 2019. Available at https://en-gb.facebook.com/
communitystandards/, [cited 16 September 2019].
Facebook. ‘Itaú’. 2021. Available at https://www.facebook.com/business/success/2-itau,
[cited 5 July 2021].
Facebook. ‘NetzDG Transparency Report’. Facebook, January 2019. Available at https://
www.facebook.com/help/1057152381103922, [cited 10 June 2019].
Facebook Files. ‘Hate Speech and Anti-Migrant Posts: Facebook’s Rules’. The Guardian,
2017. Available at https://www.theguardian.com/news/gallery/2017/may/24/hate-
speech-and-anti-migrant-posts-facebooks-rules, [cited 15 October 2019].
Fang, Lee. ‘Google Won’t Renew Its Drone AI Contract, but It May Still Sign Future Mili-
tary AI Contract’. The Intercept, 2018. Available at https://theintercept.com/2018/06/01/
google-drone-ai-project-maven-contract-renew/, [cited 1 June 2019].
Fang, Lee. ‘Google Hired Gig Economy Workers to Improve Artificial Intelligence in Con-
troversial Drone-Targeting Project’. The Intercept, 2019. Available at https://theintercept
.com/2019/02/04/google-ai-project-maven-figure-eight/, [cited 1 June 2019].
Fanon, Frantz. 'Conduits of Confession in North Africa (2)'. In Alienation and Freedom, edited by Jean Khalfa and Robert J. C. Young, 413–16 (London: Bloomsbury, 2018).
Fassin, Didier. Humanitarian Reason: A Moral History of the Present (Berkeley: University
of California Press, 2011).
FAT-ML. ‘Fairness, Accountability, and Transparency in Machine Learning’. 2019. Available
at https://www.fatml.org/, [cited 29 November 2019].
Federal Trade Commission. ‘Complaint against Cambridge Analytica, LLC, a Corpora-
tion. Docket No. 9383’. 2019. Available at https://www.ftc.gov/system/files/documents/
cases/182_3107_cambridge_analytica_administrative_complaint_7-24-19.pdf, [cited 5
December 2020].
Federal Trade Commission. ‘Statement of Chairman Joe Simons and Commissioners
Noah Joshua Phillips and Christine S. Wilson in Re Facebook, Inc.’. 2019. Available
at https://www.ftc.gov/public-statements/2019/07/statement-chairman-joe-simons-
commissioners-noah-joshua-phillips-christine, [cited 5 December 2020].
Federici, Silvia. ‘Social Reproduction Theory. History, Issues and Present Challenges’.
Radical Philosophy 2.04(Spring) (2019): 55–7.
Feldstein, Steven. ‘The Global Expansion of AI Surveillance’. Carnegie Endowment for In-
ternational Peace, 2019. Available at https://carnegieendowment.org/files/WP-Feldstein-
AISurveillance_final1.pdf, [cited 23 November 2019].
Ferguson, Andrew Guthrie. The Rise of Big Data Policing: Surveillance, Race, and the Future
of Law Enforcement (New York: New York University Press, 2019).
Fisher, Anna Watkins. ‘User Be Used: Leveraging the Play in the System’. Discourse
36(3) (2014): 383–99.
Fisher, Christine. ‘Facebook Increases Pay for Contractors and Content Modera-
tors’. 2019. Available at https://www.engadget.com/2019/05/13/facebook-increases-
contractor-content-moderator-pay/, [cited 14 October 2019].
FISWG. ‘Facial Identification Scientific Working Group’. 2021. Available at https://
fiswg.org/index.htm, [cited 21 May 2021].
Fitzsimmons, Seth. ‘Fast, Powerful, and Practical: New Technology for Aerial Imagery in
Disaster Response’. 2018. Available at https://www.hotosm.org/updates/new-technology-
for-aerial-imagery-in-disaster-response/, [cited 9 June 2021].
Floridi, Luciano. The Fourth Revolution: How the Infosphere Is Reshaping Human Reality
(Oxford: Oxford University Press, 2014).
Floridi, Luciano. ‘Soft Ethics, the Governance of the Digital and the General Data Protection
Regulation’. Philosophical Transactions of the Royal Society A: Mathematical, Physical and
Engineering Sciences 376(2133) (2018).
Fortunati, Leopoldina. ‘For a Dynamic and Post-Digital History of the Internet: A Research
Agenda’. Internet Histories 1(1–2) (2017): 180–7.
Foucault, Michel. Power/Knowledge. Selected Interviews & Other Writings 1972–1977. Edited
by Colin Gordon (Brighton: Harvester, 1980).
Foucault, Michel. ‘Questions of Method’. In The Foucault Effect. Studies in Governmentality,
edited by Graham Burchell, Colin Gordon, and Peter Miller, 73–86 (Chicago: University
of Chicago Press, 1991).
Foucault, Michel. Discipline and Punish: The Birth of the Prison (London: Penguin, 1991
[1977]).
Foucault, Michel. ‘Omnes et singulatim: Toward a Critique of “Political Reason”’. In Power.
Essential Works of Foucault 1954–1984, edited by James D. Faubion, 298–325 (London:
Penguin, 2000).
Foucault, Michel. Security, Territory, Population. Lectures at the Collège de France, 1977–78
(Basingstoke: Palgrave, 2007).
Foucault, Michel. The Birth of Biopolitics: Lectures at the Collège de France, 1978–79
(Basingstoke: Palgrave Macmillan, 2008).
Foucault, Michel. Wrong-Doing, Truth-Telling: The Function of Avowal in Justice (Chicago:
University of Chicago Press, 2014).
Foucault, Michel. About the Beginning of the Hermeneutics of the Self. Lectures at Dartmouth College, 1980. Translated by Graham Burchell. (Chicago: University of Chicago Press, 2016).
Fourcade, Marion, and Jeffrey Gordon. ‘Learning Like a State: Statecraft in the Digital Age’.
Journal of Law and Political Economy 1(1) (2020): 78–108.
FRA. ‘Facial Recognition Technology: Fundamental Rights Considerations in the Context
of Law Enforcement’. European Union Agency for Fundamental Rights (FRA), 2019.
Available at https://fra.europa.eu/en/publication/2019/facial-recognition-technology-
fundamental-rights-considerations-context-law, [cited 16 April 2020].
Freeman, David. ‘Data Science vs. the Bad Guys: Defending LinkedIn from Fraud and
Abuse’. SlideShare, 2015. Available at https://www.slideshare.net/DavidFreeman14/data-
science-vs-the-bad-guys-defending-linkedin-from-fraud-and-abuse, [cited 22 October
2019].
Fry, Hannah. Hello World: How to Be Human in the Age of the Machine (New York: W. W.
Norton, 2018).
Fuchs, Christian. ‘The Digital Labour Theory of Value and Karl Marx in the Age of Face-
book, YouTube, Twitter, and Weibo’. In Reconsidering Value and Labour in the Digital Age,
edited by Eran Fisher, and Christian Fuchs, 26–41 (Basingstoke: Palgrave Macmillan,
2015).
Fuchs, Christian. ‘Günther Anders’ Undiscovered Critical Theory of Technology in the Age
of Big Data Capitalism’. tripleC: Communication, Capitalism & Critique 15(2) (2017):
582–611.
Fuchs, Christian, and Sebastian Sevignani. ‘What Is Digital Labour? What Is Digital Work?
What’s Their Difference? And Why Do These Questions Matter for Understanding Social
Media? ’. tripleC: Communication, Capitalism & Critique 11(2) (2013): 237–93.
Fuller, Jacquelline, and Jeff Dean. ‘Here Are the Grantees of the Google AI Impact Chal-
lenge’. 2019. Available at https://crisisresponse.google/, [cited 9 March 2021].
Fumagalli, Andrea, Stefano Lucarelli, Elena Musolino, and Giulia Rocchi. ‘Digital Labour
in the Platform Economy: The Case of Facebook’. Sustainability 10(6) (2018).
Gago, Verónica. Feminist International: How to Change Everything (London: Verso, 2020).
Galison, Peter. ‘The Ontology of the Enemy: Norbert Wiener and the Cybernetic Vision’.
Critical Inquiry 21(1) (1994): 228–66.
Garmark, Sten, Dariusz Dziuk, Owen Smith, Lars Christian Olofsson, and Nikolaus
Toumpelis. ‘Cadence-Based Playlists Management System’, Spotify AB Publisher. United
States Patent Office, 2015.
Gawer, Annabelle, ed. Platforms, Markets and Innovation (Cheltenham: Edward Elgar,
2011).
GCHQ. ‘HIMR Data Mining Research Problem Book’. Snowden Archive, 2011. Available
at https://edwardsnowden.com/wp-content/uploads/2016/02/Problem-Book-Redacted.
pdf, [cited 27 April 2016].
GCHQ. ‘GCHQ Analytic Cloud Challenges’. 2012. Available at https://search
.edwardsnowden.com/docs/GCHQAnalyticCloudChallenges2015-09-25nsadocs,
[cited 20 February 2016].
General Data Protection Regulation. ‘Regulation (EU) 2016/679 of the European Parlia-
ment and of the Council of 27 April 2016 on the Protection of Natural Persons with
Regard to the Processing of Personal Data and on the Free Movement of Such Data, and
Repealing Directive 95/46/EC’. Official Journal of the European Union, 2016. Available at
https://eur-lex.europa.eu/, [cited 25 November 2019].
Gibson, Clay, Will Shapiro, Santiago Gil, Ian Anderson, Mgreth Mpossi, Oguz Semerci,
and Scott Wolf. ‘Methods and Systems for Session Clustering Based on User Experience,
Behavior, and Interactions’, Spotify AB Publisher. Unites States Patent Office, 2017.
Gillespie, Tarleton. ‘The Politics of “Platforms”’. New Media & Society 12(3) (2010): 347–64.
Gillespie, Tarleton. ‘Governance of and by Platforms’. In Handbook of Social Media, edited
by Jean Burgess, Thomas Poell, and Alice Marwick, 254–78 (London: Sage 2017).
Gillespie, Tarleton. Custodians of the Internet: Platforms, Content Moderation, and the
Hidden Decisions that Shape Social Media (New Haven: Yale University Press, 2018).
Gillespie, Tarleton. ‘The Relevance of Algorithms’. In Media Technologies: Essays on Com-
munication, Materiality, and Society, edited by Tarleton Gillespie, Pablo J. Boczkowski,
and Kirsten A. Foot, 167–93 (Cambridge, MA: MIT Press, 2014).
Gilmore, Ruth Wilson. ‘Abolition Geography and the Problem of Innocence’. In Futures
of Black Radicalism, edited by Theresa Gaye Johnson and Alex Lubin, 300–23 (London:
Verso, 2017).
Gitelman, Lisa, ed. Raw Data Is an Oxymoron (Cambridge, MA: MIT Press, 2013).
Goldhaber, Michael H. 'Attention Shoppers!'. Wired (1997).
Goldstein, Brett Jonathan, and Maggie Kate King. ‘Rare Event Forecasting System and
Method’. Civicscape, LLC Publisher, United States Patent Office, 2018.
Goldstein, Markus, and Seiichi Uchida. ‘A Comparative Evaluation of Unsupervised
Anomaly Detection Algorithms for Multivariate Data’. PloS One 11(4) (2016): e0152173.
Gonzalez, Ana Lucia. ‘The “Microworkers” Making Your Digital Life Possible’. 2019.
Available at https://www.bbc.co.uk/news/business-48881827, [cited 9 June 2021].
Goodman, Bryce, and Seth Flaxman. ‘European Union Regulations on Algorithmic
Decision-Making and a “Right to Explanation”’. AI Magazine 38(3) (2017): 50–7.
Google. ‘Helping People Access Trusted Information and Resources in Critical Moments’.
2021. Available at https://crisisresponse.google/, [cited 5 July 2021].
Google. ‘Removals under the Network Enforcement Law’. Google, 2018. Available at https://
transparencyreport.google.com/netzdg/youtube, [cited 22 February 2019].
Goriunova, Olga. 'The Digital Subject: People as Data as Persons'. Theory, Culture & Society
36(6) (2019): 125–45.
Gourley, Bob, and Alex Olesker. ‘To Protect and Serve with Big Data’. CTOlabs, 2013.
Available at https://apo.org.au/node/34913, [cited 27 December 2015].
Graham, Mark, and Mohammad Amir Anwar. ‘The Global Gig Economy: Towards a
Planetary Labour Market?’. First Monday 24(4) (2019).
Graham, Stephen, and Nigel Thrift. ‘Out of Order: Understanding Repair and Maintenance’.
Theory, Culture & Society 24(3) (2007): 1–25.
Gray, Jonathan. ‘Data Witnessing: Attending to Injustice with Data in Amnesty Interna-
tional’s Decoders Project’. Information, Communication & Society 22(7) (2019): 971–91.
Greenway, Giles, Jennifer Pybus, Mark Cote, and Tobias Blanke. 'Research on Online Digital Cultures-Community Extraction and Analysis by Markov and k-Means Clustering'.
In Personal Analytics and Privacy: An Individual and Collective Perspective: 1st Inter-
national Workshop, PAP 2017, Held in Conjunction with ECML PKDD 2017, Skopje,
Macedonia, September 18, 2017, Revised Selected Papers, edited by Riccardo Guidotti,
Anna Monreale, Dino Pedreschi, and Serge Abiteboul, 110–21 (London: Springer Verlag,
2017).
Grewal, Paul. ‘Suspending Cambridge Analytica and SCL Group from Facebook’. Facebook
News, 2018. Available at https://about.fb.com/news/2018/03/suspending-cambridge-
analytica/, [cited 7 December 2021].
Grossman, Lev. ‘You—Yes, You—Are Time’s Person of the Year’. 2006. Available at http://
content.time.com/time/magazine/article/0,9171,1570810,00.html, [cited 5 November
2019].
Grothoff, Christian, and J. M. Porup. ‘The NSA’s SKYNET Program May Be Killing
Thousands of Innocent People’. Ars Technica, 2016. Available at http://arstechnica.co.
uk/security/2016/02/the-nsas-skynet-program-may-be-killing-thousands-of-innocent-
people/, [cited 21 June 2016].
GSMA. ‘The Data Value Chain’. 2018. Available at https://www.gsma.com/publicpolicy/wp-
content/uploads/2018/07/GSMA_Data_Value_Chain_June_2018.pdf, [cited 4 February
2019].
Gunning, David. ‘Explainable Artificial Intelligence (XAI). Programme Update’. DARPA,
2017. Available at https://s3.documentcloud.org/documents/5794867/National-
Security-Archive-David-Gunning-DARPA.pdf, [cited 29 October 2019].
Gurstein, Michael B. ‘Open Data: Empowering the Empowered or Effective Data Use for
Everyone?’. First Monday 16(2) (2011).
Guszcza, James, Iyad Rahwan, Will Bible, Manuel Cebrian, and Vic Katyal. ‘Why We Need
to Audit Algorithms’. Harvard Business Review (28 November 2018).
Gutiérrez, Miren. Data Activism and Social Change (Basingstoke: Palgrave, 2018).
Haggerty, Kevin D., and Richard V. Ericson. ‘The Surveillant Assemblage’. British Journal of
Sociology 51(4) (2000): 605–22.
Hall, Patrick, SriSatish Ambati, and Wen Phan. ‘Ideas on Interpreting Machine Learn-
ing’. 2017. Available at https://www.oreilly.com/radar/ideas-on-interpreting-machine-
learning/, [cited 9 June 2021].
Hamon, Dominic, Timo Burkard, and Arvind Jain. ‘Predicting User Navigation Events’.
Google. United States Patent Office, 2013.
Han, Byung-Chul. The Expulsion of the Other: Society, Perception and Communication Today
(Cambridge, UK: Polity, 2018).
Haraway, Donna. Simians, Cyborgs, and Women: The Reinvention of Nature (New York:
Routledge, 1991).
Haraway, Donna J. Staying with the Trouble: Making Kin in the Chthulucene (Durham, NC:
Duke University Press, 2016).
Harcourt, Bernard E. Against Prediction: Profiling, Policing, and Punishing in an Actuarial
Age (Chicago: University of Chicago Press, 2008).
Harcourt, Bernard E. Exposed: Desire and Disobedience in the Digital Age (Cambridge, MA:
Harvard University Press, 2015).
Harvey, Adam, and Jules LaPlace. ‘Microsoft Celeb’. 2020. Available at https://exposing.ai/
msceleb/, [cited 1 June 2021].
Hayles, N. Katherine. How We Think: Digital Media and Contemporary Technogenesis
(Chicago: University of Chicago Press, 2012).
Hayles, N. Katherine. Unthought: The Power of the Cognitive Nonconscious (Chicago:
University of Chicago Press, 2017).
Heldt, Amélie Pia. ‘Reading between the Lines and the Numbers: An Analysis of the First
NetzDG Reports’. Internet Policy Review 8(2) (2019).
Helmond, Anne. ‘The Platformization of the Web: Making Web Data Platform Ready’. Social
Media & Society 1(2) (2015): 1–11.
Helmond, Anne, David B. Nieborg, and Fernando N. van der Vlist. ‘Facebook’s Evolution:
Development of a Platform-as-Infrastructure’. Internet Histories 3(2) (2019): 123–46.
Herrman, John. ‘Cambridge Analytica and the Coming Data Bust’. The New York Times
(10 April 2018).
Hey, Tony, Stewart Tansley, and Kristin M Tolle. The Fourth Paradigm: Data-Intensive
Scientific Discovery (Redmond, WA: Microsoft Research, 2009).
Hibou, Béatrice. The Bureaucratization of the World in the Neoliberal Era. Translated by
Andrew Brown. (New York: Palgrave Macmillan, 2015).
Hildebrandt, Mireille. Law for Computer Scientists and Other Folk (Oxford: Oxford Univer-
sity Press, 2020).
Hill, Kashmir. ‘The Secretive Company That Might End Privacy as We Know It’. The New
York Times (6 June 2021).
Hinchcliffe, Dion. ‘Comparing Amazon’s and Google’s Platform-as-a-Service Offerings’.
2008. Available at https://www.zdnet.com/article/comparing-amazons-and-googles-
platform-as-a-service-paas-offerings/, [cited 23 October 2019].
Hindess, Barry. ‘Citizenship in the International Management of Populations’. American
Behavioral Scientist 43(9) (2000): 1486–97.
Hindess, Barry. ‘Politics as Government: Michel Foucault’s Analysis of Political Reason’.
Alternatives: Global, Local, Political 30(4) (2005): 389–413.
Hoffmann, Anna Lauren. ‘Where Fairness Fails: Data, Algorithms, and the Limits of
Antidiscrimination Discourse’. Information, Communication & Society 22(7) (2019):
900–15.
Holmqvist, Caroline. Policing Wars: On Military Intervention in the Twenty-First Century
(Basingstoke: Palgrave, 2016).
Home Office. ‘Operational Case for Bulk Powers’. UK Government, 2016. Available
at https://www.gov.uk/government/publications/investigatory-powers-bill-overarching-
documents, [cited 1 March 2016].
Hong, Sun-ha. Technologies of Speculation: The Limits of Knowledge in a Data-Driven Society
(New York: New York University Press, 2020).
Honig, Bonnie. Emergency Politics: Paradox, Law, Democracy (Princeton: Princeton Uni-
versity Press, 2009).
Honig, Bonnie. Public Things: Democracy in Disrepair (New York: Fordham University
Press, 2017).
Hoppenstedt, Max. 'Zu Besuch in Facebooks Neuem Löschzentrum, das gerade den
Betrieb aufnimmt’. Vice, 2017. Available at https://www.vice.com/de/article/qv37dv
/zu-besuch-in-facebooks-neuem-loschzentrum-das-gerade-den-betrieb-aufnimmt,
[cited 27 September 2019].
Horkheimer, Max. Critique of Instrumental Reason (London: Verso, 2013).
Hosein, Gus, and Carly Nyst. Aiding Surveillance. An Exploration of How Development and
Humanitarian Aid Initiatives Are Enabling Surveillance in Developing Countries (London:
Privacy International, 2013).
House of Commons. ‘Investigatory Powers Bill’. Volume 611, 2016. Available at https://
hansard.parliament.uk/Commons/2016-06-07/debates/16060732000001/Investigatory
PowersBill, [cited 1 August 2016].
House of Commons Science and Technology Committee. ‘Algorithms in Decision-
Making. Fourth Report of the Session 2017-2019’. House of Commons, 2018.
Available at https://publications.parliament.uk/pa/cm201719/cmselect/cmsctech/351/
351.pdf, [cited 20 June 2018].
Hulaud, Stéphane. ‘Identification of Taste Attributes from an Audio Signal’. Spotify AB
Publisher. United States Patent Office, 2018.
Human Rights Watch. ‘Germany: Flawed Social Media Law’. 2018. Available at https://
www.hrw.org/news/2018/02/14/germany-flawed-social-media-law, [cited 8 September
2019].
Human Rights Watch. ‘UN Shared Rohingya Data without Informed Consent’. 2021.
Available at https://www.hrw.org/news/2021/06/15/un-shared-rohingya-data-without-
informed-consent, [cited 15 June 2021].
Huysmans, Jef. The Politics of Insecurity: Fear, Migration and Asylum in the EU (London:
Routledge, 2006).
Huysmans, Jef. ‘What’s in an Act? On Security Speech Acts and Little Security Nothings’.
Security Dialogue 42(4–5) (2011): 371–83.
Huysmans, Jef. Security Unbound: Enacting Democratic Limits (London: Routledge, 2014).
IBM. ‘Everyday Ethics for Artificial Intelligence. A Practical Guide for Designers and
Developers’. IBM, 2018. Available at https://www.ibm.com/watson/assets/duo/pdf/
everydayethics.pdf, [cited 10 February 2019].
IBM. ‘Hybrid Data Management’. n.d. Available at https://www.ibm.com/analytics/data-
management, [cited 4 February 2019].
IBM. ‘Written Evidence’. House of Commons, 2017. Available at http://data.
parliament.uk/WrittenEvidence/CommitteeEvidence.svc/EvidenceDocument/
Science%20and%20Technology/Algorithms%20in%20decisionmaking/written/71691.
html, [cited 2 June 2019].
ICO. ‘Facebook Ireland Ltd. Monetary Penalty Notice’. 2018. Available at https://ico.org.uk/
media/2259364/facebook-noi-redacted.pdf, [cited 19 February 2019].
ICO. ‘Letter to UK Parliament. ICO Investigation into Use of Personal Information
and Political Influence’. 2020. Available at https://ico.org.uk/media/action-weve-taken/
2618383/20201002_ico-o-ed-l-rtl-0181_to-julian-knight-mp.pdf, [cited 5 December
2020].
ICO. ‘ICO Issues Provisional View to Fine Clearview AI Inc over £17 Million’. 2021. Avail-
able at https://ico.org.uk/about-the-ico/news-and-events/news-and-blogs/2021/11/ico-
issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/, [cited 30 November
2021].
references 235
IEEE. ‘Ethically Aligned Design. A Vision for Prioritizing Human Wellbeing with
Artificial Intelligence and Autonomous Systems’. IEEE Global Initiative on Ethics of Au-
tonomous and Intelligent Sytems, 2019. Available at https://standards.ieee.org/industry-
connections/ec/autonomous-systems.html, [cited 28 May 2019].
ILO. Digital Refugee Livelihoods and Decent Work. Towards Inclusion in a Fairer Digital
Economy. (Geneva: International Labour Organization 2021).
Information Is Beautiful. ‘How Much Do Music Artists Earn Online?’. 2010. Available
at https://informationisbeautiful.net/2010/how-much-do-music-artists-earn-online/,
[cited 19 July 2021].
Instagram. ‘What’s the Difference between NetzDG and Instagram’s Community Guide-
lines?’. 2019. Available at https://help.instagram.com/1787585044668150, [cited 18
October 2019].
Institute of Mathematics and its Applications. ‘Written Evidence Submitted by the
Institute of Mathematics and Its Applications (Alg0028)’. UK House of Commons,
Science and Technology Committee, 2017. Available at http://data.parliament.uk/
writtenevidence/committeeevidence.svc/evidencedocument/science-and-technology-
committee/algorithms-in-decisionmaking/written/68989.pdf, [cited 18 December
2018].
International Committee of the Red Cross (ICRC), and Privacy International. ‘The Human-
itarian Metadata Problem: “Doing No Harm” in the Digital Era’. 2018.
Introna, Lucas D. ‘Algorithms, Governance, and Governmentality: On Governing Academic
Writing’. Science, Technology & Human Values 41(1) (2015): 17–49.
Irani, Lilly. ‘The Cultural Work of Microwork’. New Media & Society 17(5) (2015): 720–39.
Irani, Lilly, and M Six Silberman. ‘Turkopticon: Interrupting Worker Invisibility in Ama-
zon Mechanical Turk’. Paper presented at the SIGCHI Conference on Human Factors in
Computing Systems, Paris, France, 27 April 27–2 May 2013.
Isaac, Mike. ‘After Facebook’s Scandals, Mark Zuckerberg Says He’ll Shift Focus to Private
Sharing’. The New York Times (6 March 2019).
Isin, Engin F. Citizens without Frontiers (London: Bloomsbury, 2012).
Jabri, Vivienne. The Postcolonial Subject: Claiming Politics/Governing Others in Late Moder-
nity (London: Routledge, 2012).
Jackiewicz, Agata. ‘Outils notionnels pour l’analyse des controverses’. Questions de commu-
nication 31 (2017).
Jacobs, Adam. ‘The Pathologies of Big Data’. Communications of the ACM 52(8) (2009):
36–44.
Jacobsen, Katja Lindskov. The Politics of Humanitarian Technology: Good Intentions, Unin-
tended Consequences and Insecurity (London: Routledge, 2015).
Janert, Philipp K. Data Analysis with Open Source Tools (Cambridge, MA: O’Reilly Media,
2010).
Jaume-Palasi, Lorena, and Matthias Spielkamp. ‘Ethics and Algorithmic Processes for
Decision Making and Decision Support’. AlgorithmWatch, 2017. Available at https://
algorithmwatch.org/en/publication/ethics-and-algorithmic-processes-for-decision-
making-and-decision-support/, [cited 3 March 2019].
Jehan, Tristan, Dariusz Dziuk, Gustav Söderström, Mateo Rando, and Nicola Montecchio.
‘Identifying Media Content’. Google Patents. United States Patent Office, 2018.
Johnson, Chris. ‘From Idea to Execution: Spotify’s Discover Weekly’. 2015. Avail-
able at https://www.slideshare.net/MrChrisJohnson/from-idea-to-execution-spotifys-
discover-weekly/, [cited 27 July 2021].
Jordan, Tim. The Digital Economy (Cambridge, UK: Polity, 2019).
236 references
Kozlowska, Hanna. ‘The Pivot to Video Was Based on a Lie, Lawsuit Alleges’. 2018. Avail-
able at https://qz.com/1427406/advertisers-say-facebook-inflated-video-metrics-even-
further/, [cited 25 October 2019].
Krause, Till, and Hannes Grassegger. ‘Inside Facebook’. Süddeutsche Zeitung (15 December
2016).
Kushner, Scott. ‘The Instrumentalised User: Human, Computer, System’. Internet Histories
5(2) (2021): 154–70.
Kwet, Michael. ‘Digital Colonialism: US Empire and the New Imperialism in the Global
South’. Race & Class 60(4) (2019): 3–26.
Lafrance, Adrienne. ‘Facebook and the New Colonialism’. The Atlantic 11 (February 2016).
Lakshanan, Ravie. ‘China’s New 500-Megapixel “Super Camera” Can Instantly Recognize
You in a Crowd’. The next web, 2019. Available at https://thenextweb.com/security/
2019/09/30/chinas-new-500-megapixel-super-camera-can-instantly-recognize-you-in-
a-crowd/, [cited 20 October 2019].
Lalmas, Mounia. ‘Engagement, Metrics and Personalisation: The Good, the Bad and the
Ugly’. In Proceedings of the 27th ACM Conference on User Modeling, Adaptation and
Personalization (Larnaca, Cyprus: ACM, 2019).
Lanier, Jaron. Who Owns the Future? (New York: Simon and Schuster, 2013).
Latour, Bruno. ‘Gabriel Tarde and the End of the Social’. In The Social in Question. New
Bearings in History and the Social Sciences, edited by Patrick Joyce, 117–32 (London:
Routledge, 2002).
Latour, Bruno. ‘From Realpolitik to Dingpolitik’. In Making Things Public: Atmospheres of
Democracy, edited by Bruno Latour and Peter Weibel, 4–31 (Cambridge, MA: MIT Press,
2005).
Latour, Bruno. Reassembling the Social: An Introduction to Actor-Network-Theory (New
York: Oxford University Press, 2005).
Latour, Bruno. ‘An Attempt at a “Compositionist Manifesto”’. New Literary History
41(3) (2010): 471–90.
Latour, Bruno, Pablo Jensen, Tommaso Venturini, Sébastian Grauwin, and Dominique
Boullier. “‘The Whole Is Always Smaller Than Its Parts”—A Digital Test of Gabriel Tarde’s
Monads’. British Journal of Sociology 63(4) (2012): 590–615.
Lau, Tim. ‘Predictive Policing Explained’. Brennan Center for Justice, 2020. Avail-
able at https://www.brennancenter.org/our-work/research-reports/predictive-policing-
explained, [cited 4 May 2021].
Laurence, McFalls, and Mariella Pandolfi. ‘The Enemy Live: A Genealogy’. In War, Police
and Assemblages of Intervention, edited by Jan Bachmann, Colleen Bell, and Caroline
Holmqvist (London: Routlege, 2014).
Leander, Anna. ‘Sticky Security: The Collages of Tracking Device Advertising’. European
Journal of International Security 4(3) (2019): 322–44.
Lee, Seungha. ‘Coming into Focus: China’s Facial Recognition Regulations’. Center
for Strategic and International Studies, 2020. https://www.csis.org/blogs/trustee-china-
hand/coming-focus-chinas-facial-recognition-regulations [cited 28 November 2021].
Lehikoinen, Juha, and Ville Koistinen. ‘In Big Data We Trust?’. Interactions 21(5) (2014):
38–41.
Lemieux, Cyril. ‘À quoi sert l’analyse des controverses ?’. Mil neuf cent. Revue d’histoire
intellectuelle 25(1) (2007): 191–212.
Liao, Shannon. ‘Chinese Facial Recognition System Mistakes a Face on a Bus for a Jay-
walker’. The Verge, 2018. Available at https://www.theverge.com/2018/11/22/18107885/
china-facial-recognition-mistaken-jaywalker, [cited 23 November 2019].
238 references
Lin, Yu-Sheng, Zhe-Yu Liu, Yu-An Chen, Yu-Siang Wang, Hsin-Ying Lee, Yi-Rong Chen,
Ya-Liang Chang, and Winston H. Hsu. 2020, ‘xCos: An Explainable Cosine Metric for
Face Verification Task’. arXiv preprint arXiv:2003.05383.
Liu, Chuncheng. ‘Seeing Like a State, Enacting Like an Algorithm: (Re)Assembling Contact
Tracing and Risk Assessment During the Covid-19 Pandemic’. Science, Technology, &
Human Values (2021). doi.org/10.1177/01622439211021916
Liu, Mingyang. ‘人脸识别黑产:真人认证视频百元一套 [Face Recognition Black Produc-
tion: A Set of One Hundred Yuan for Real-Person Authentication Video]’. 2021. Available
at http://www.bjnews.com.cn/detail/161824564315305.html, [cited 28 July 2021].
Liu, Xing, Jiqiang Liu, Sencun Zhu, Wei Wang, and Xiangliang Zhang. ‘Privacy Risk Anal-
ysis and Mitigation of Analytics Libraries in the Android Ecosystem’. IEEE Transactions
on Mobile Computing 19(5) (2019): 1184–99.
Liu, Yuxiu, and Wu Ren. ‘法学教授的一次维权:人脸识别的风险超出你所想 [a Rights
Defense by a Law Professor: The Risks of Face Recognition Are Beyond Your Imagina-
tion]’. The Paper (21 October 2020).
LLE ONE, LLC, et al. v Facebook, Inc. ‘Plaintiffs’ Motion for Preliminary Approval
and to Direct Notice of Settlement’. United States District Court for the Northern
District of California, Case No. 4:16-cv-06232-JSW, 2019. Available at https://assets.
documentcloud.org/documents/6455498/Facebooksettlement.pdf, [cited 25 December
2019].
Lodato, Thomas James, and Carl DiSalvo. ‘Issue-Oriented Hackathons as Material Partici-
pation’. New Media & Society 18(4) (2016): 539–57.
Lorenzini, Daniele, and Martina Tazzioli. ‘Confessional Subjects and Conducts of Non-
Truth: Foucault, Fanon, and the Making of the Subject’. Theory, Culture & Society
35(1) (2018): 71–90.
Luchs, Inga. ‘Free Basics by Facebook. An Interview with Nishant Shah’. Spheres: Journal for
Digital Cultures 3 (2016): 1–8.
Lum, Kristian, and William Isaac. ‘To Predict and Serve?’. Significance 13(5) (2016): 14–19.
Lupton, Deborah. The Quantified Self (Cambridge, UK: Polity, 2016).
Lyon, David. Surveillance after Snowden (Cambridge, UK: Polity, 2015).
McAfee, Andrew, and Erik Brynjolfsson. Machine, Platform, Crowd: Harnessing Our Digital
Future (New York: WW Norton & Company, 2017).
McCarthy, Kieren. ‘Facebook Puts 1.5B Users on a Boat from Ireland to California’. The
Register, 2018. Available at https://www.theregister.co.uk/2018/04/19/facebook_shifts_
users/, [cited 7 March 2019].
McCue, Colleen. Data Mining and Predictive Analysis: Intelligence Gathering and Crime
Analysis, 2nd edition (Oxford: Butterworth-Heinemann, 2015).
McEntire, David A. Introduction to Homeland Security: Understanding Terrorism with an
Emergency Management Perspective, 2nd edition (Hoboken, NJ: Wiley, 2019).
McGoey, Linsey. The Unknowers. How Strategic Ignorance Rules the Word (London: Zed,
2019).
McGranahan, Carole. ‘Theorizing Refusal: An Introduction’. Cultural Anthropology
31(3) (2016): 319–25.
McIntyre, David P., and Mohan Subramaniam. ‘Strategy in Network Industries: A Review
and Research Agenda’. Journal of Management 35(6) (2009): 1494–1517.
Mackenzie, Adrian. Machine Learners: Archaeology of a Data Practice (Cambridge, MA:
MIT Press, 2017).
McKinnon, John D. ‘Pentagon Weighs Ending Jedi Cloud Project Amid Amazon Court
Fight’. Wall Street Journal (10 May 2021).
references 239
Nix, Alexander. ‘Oral Evidence: Fake News, HC 363’. House of Commons, 2018. Available at
http://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/
digital-culture-media-and-sport-committee/fake-news/oral/79388.pdf, [cited 10 March
2018].
Noble, Safiya Umoja. Algorithms of Oppression: How Search Engines Reinforce Racism (New
York: New York University Press, 2018).
Norvig, Peter. ‘On Chomsky and the Two Cultures of Statistical Learning’. 2012. Available
at http://norvig.com/chomsky.html, [cited 29 November 2019].
Nowak, Michael, and Dean Eckles. ‘Determining User Personality Characteristics from So-
cial Networking System Communications and Characteristics’. Google Patents. United
States Patent Office, 2014.
NSA. ‘SKYNET: Applying Advanced Cloud-Based Behavior Analytics’. The Intercept, 2007.
Available at https://theintercept.com/document/2015/05/08/skynet-applying-advanced-
cloud-based-behavior-analytics/, [cited 3 November 2018].
NSA. ‘XKeyScore’. The Guardian, 2008. Available at https://www.theguardian.com/
world/interactive/2013/jul/31/nsa-xkeyscore-program-full-presentation, [cited 30 June
2020].
NSA. ‘New Contact-Chaining Procedures to Allow Better, Faster Analysis’. Snowden
Archive, 2011. Available at https://search.edwardsnowden.com/docs/NewContact-
ChainingProcedurestoAllowBetterFasterAnalysis2013-09-28nsadocs, [cited 8 Novem-
ber 2016].
NSA. ‘SKYNET: Courier Detection Via Machine Learning’. Snowden Archive, 2012.
Available at https://search.edwardsnowden.com/docs/SKYNETCourierDetectionvia
MachineLearning2015-05-08nsadocs, [cited 29 July 2021].
NSA. ‘Data Scientist. Job Description’. 2016. Available at https://www.nsa.gov/psp/
applyonline/EMPLOYEE/HRMS/c/HRS_HRAM.HRS_CE.GBL?Page=HRS_CE_JOB_
DTL&Action=A&JobOpeningId=1076263&SiteId=1&PostingSeq=1, [cited 16 October
2016].
NSCAI. ‘Final Report. National Security Commission on Artificial Intelligence’. National
Security Commission on Artificial Intelligence (NSCAI), 2021. Available at https://www
.nscai.gov/, [cited 8 February 2021].
OCHA. ‘From Digital Promise to Frontline Practice: New and Emerging Technologies in
Humanitarian Action’. Geneva: United Nations Office for the Coordination of Humani-
tarina Affairs (UN OCHA), 2021.
OECD. ‘Recommendation of the Council on Artificial Intelligence’. Organisation for
Economic Co-operation and Development (OECD), 2019. Available at https://
legalinstruments.oecd.org/en/instruments/oecd-legal-0449, [cited 29 July 2021].
Office of Oversight and Investigations. ‘A Review of the Data Broker Industry: Collec-
tion, Use, and Sale of Consumer Data for Marketing Purposes’. United States Senate
Committee on Commerce, Science, and Transportation, 2013.
Ogborn, Miles. Indian Ink: Script and Print in the Making of the English East India Company.
(Chicago: University of Chicago Press, 2008).
O’Hear, Steve. ‘Facebook Is Buying UK’s Bloomsbury AI to Ramp up Natural Language
Tech in London’. TechCrunch, 2018. Available at https://techcrunch.com/2018/07/02/
thebloomsbury/, [cited 27 November 2019].
Olsson, Christian. ‘Can’t Live with Em, Can’t Live without Em: “The Enemy” as Practi-
cal Object of Political-Military Controversy in Contemporary Western Wars’. Critical
Military Studies 5(4) (2019): 359–77.
242 references
O’Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and
Threatens Democracy (New York: Crown Publishing Group, 2016).
O’Neil Risk Consulting & Algorithmic Auditing. ‘It’s the Age of the Algorithm and We Have
Arrived Unprepared’. 2021. Available at https://orcaarisk.com/, [cited 22 July 2021].
OpenEEW. ‘A Low Cost, Open Source, IoT-Based Earthquake Early Warning System’. 2021.
Available at https://openeew.com/, [cited 12 June 2021].
Oracle America Inc vs The United States and AWS Inc. ‘Pre-Award Bid Protest’. United States
Court of Federal Claims, 2018. Available at https://ecf.cofc.uscourts.gov/cgi-bin/show_
public_doc?2018cv1880-102-0, [cited 30 July 2021].
Oracle America v Department of Defense and Amazon Web Services. ‘Amazon Web Ser-
vices, Inc.’s Response to Oracle America, Inc.’s Motion to Complete and Supplement
the Administrative Record and for Leave to Conduct Limited Discovery’. 2019. Avail-
able at https://regmedia.co.uk/2019/01/23/190118_aws_submission_jedi_.pdf, [cited
28 January 2019].
Owen, Naomi. ‘At-a-Glance: China’s Draft Personal Information Protection Law’. 2020.
Available at https://gdpr.report/news/2020/10/30/at-a-glance-chinas-draft-personal-
information-protection-law/, [cited 8 December 2020].
Oxford English Dictionary. “‘Operation”’. In OED Online (Oxford: Oxford University Press,
2021).
Palonen, Kari. ‘Parliamentary and Electoral Decisions as Political Acts’. In The Decisionist
Imagination: Sovereignty, Social Science and Democracy in the 20th Century, edited by
Daniel Bessner and Nicolas Guilhot, 85–108 (Oxford: Bergahn Book, 2018).
Parker, Geoffrey G, Marshall W Van Alstyne, and Sangeet Paul Choudary. Platform Revo-
lution: How Networked Markets Are Transforming the Economy and How to Make Them
Work for You (New York: WW Norton, 2016).
Pasick, Adam. ‘The Magic That Makes Spotify’s Discover Weekly Playlists So Damn Good’.
Quartz, 2015. Available at https://qz.com/571007/the-magic-that-makes-spotifys-
discover-weekly-playlists-so-damn-good/, [cited 8 December 2020].
Pasquale, Frank. The Black Box Society (Cambridge, MA: Harvard University Press, 2015).
Peakin, Will. ‘ICO Appoints Researcher to Develop Method for Auditing Algorithms’. Fu-
tureScot, 2018. Available at https://futurescot.com/ico-appoints-researcher-to-develop-
method-for-auditing-algorithms/, [cited 21 October 2019].
Peck, Jamie, Neil Brenner, and Nik Theodore. ‘Actually Existing Neoliberalism’. In The SAGE
Handbook of Neoliberalism, edited by Damien Cahill, Melinda Cooper, Martijn Konings,
and David Primrose, 3-15 (London: SAGE, 2018).
Pedrycz, Witold, and Shyi-Ming Chen. Information Granularity, Big Data, and Computa-
tional Intelligence (New York: Springer, 2014).
Pentland, Alex (Sandy). Honest Signals: How They Shape Our World (Cambridge, MA: MIT
Press, 2008).
Pentland, Alex (Sandy). Social Physics: How Good Ideas Spread—The Lessons from a New
Science (New York: Penguin, 2014).
Perez, Juan. ‘Google Wants Your Phonemes’. 2007. Available at https://www.infoworld.com/
article/2642023/google-wants-your-phonemes.html, [cited 1 October 2012].
Perry, Walt L, Brian McInnis, Carter C Price, Susan C Smith, and John S Hollywood. Predic-
tive Policing: The Role of Crime Forecasting in Law Enforcement Operations (Santa Monica,
CA: RAND 2013).
Petzel, Eric. ‘Airmapview’. Airbnb, 2015. Available at https://medium.com/airbnb-
engineering/airmapview-a-view-abstraction-for-maps-on-android-4b7175a760ac,
[cited 2 May 2019].
references 243
R (Bridges) v CCSWP and SSHD. ‘Judgment in the Court of Appeal (Civil Division)’. 2020.
Available at https://www.bailii.org/ew/cases/EWCA/Civ/2020/1058.html, [cited 6 March
2021].
Rahim, Rasha Abdul. ‘Why Project Maven Is the Litmus Test for Google’s New Princi-
ples’. Amnesty International, 2018. Available at https://www.amnesty.org/en/latest/news/
2018/06/why-project-maven-is-the-litmus-test-for-googles-new-principles/, [cited 27
January 2019].
Rancière, Jacques. Disagreement. Politics and Philosophy. Translated by Julie Rose. (Min-
neapolis: University of Minnesota Press, 1999).
Rancière, Jacques, and Adnen Jdey. La méthode de la scène (Paris: Éditions Lignes, 2018).
Reese, Hope. ‘Is “Data Labeling” the New Blue-Collar Job of the AI Era?’. 2016. Avail-
able at https://www.techrepublic.com/article/is-data-labeling-the-new-blue-collar-job-
of-the-ai-era/, [cited 27 October 2019].
Reisman, Dillon, Jason Schultz, Kate Crawford, and Meredith Whittaker. ‘Algorithmic
Impact Assessments: A Practical Framework for Public Agency Accountability’. AI
Now Institute, 2018. Available at https://ainowinstitute.org/aiareport2018.pdf, [cited 21
October 2019].
Reprieve. ‘Two Journalists Ask the US Government to Remove Them from the Kill List’.
Reprive, 2018. Available at https://reprieve.org/us/2018/05/03/two-journalists-ask-u-s-
government-remove-kill-list/, [cited 5 July 2021].
Resnick, Brian. ‘Cambridge Analytica’s “Psychographic Microtargeting”: What’s Bullshit
and What’s Legit’. Vox, 2018. Available at https://www.vox.com/science-and-health/
2018/3/23/17152564/cambridge-analytica-psychographic-microtargeting-what, [cited
6 November 2019].
Reuters. ‘New Zealand Passport Robot Tells Applicant of Asian Descent to Open
Eyes’. 2016. Available at https://www.reuters.com/article/us-newzealand-passport-error-
idUSKBN13W0RL, [cited 22 October 2019].
Rider, Karina, and David Murakami Wood. ‘Condemned to Connection? Network Com-
munitarianism in Mark Zuckerberg’s “Facebook Manifesto”’. New Media & Society
21(3) (2019): 639–54.
Rieder, Bernhard. Engines of Order: A Mechanology of Algorithmic Techniques (Amsterdam:
Amsterdam University Press, 2020).
Rieder, Bernhard, and Guillaume Sire. ‘Conflicts of Interest and Incentives to Bias: A Mi-
croeconomic Critique of Google’s Tangled Position on the Web’. New Media & Society
16(2) (2014): 195–211.
Roach, John. ‘Microsoft Improves Facial Recognition Technology to Perform Well across
All Skin Tones, Genders’. Microsoft, 2018. Available at https://blogs.microsoft.com/ai/
gender-skin-tone-facial-recognition-improvement/, [cited 22 March 2020].
Roberts, Dorothy E. ‘Book Review: Digitizing the Carceral State’. Harvard Law Review
132(6) (2019): 1695–1728.
Roberts, Sarah T. Behind the Screen: Content Moderation in the Shadows of Social Media
(New Haven: Yale University Press, 2019).
Rogers, Kenneth. The Attention Complex: Media, Archeology, Method (Basingstoke: Palgrave
Macmilan, 2014).
Rose, Nikolas. ‘Government and Control’. British Journal of Criminology 40(2) (2000):
321–39.
Rose, Nikolas. ‘The Neurochemical Self and Its Anomalies’. In Risk and Morality, edited
by Richard V. Ericson and Aaron Doyle, 407–37 (Toronto: University of Toronto Press,
2003).
references 245
Rosenberg, Matthew, Nicholas Confessore, and Carole Cadwalladr. ‘How Trump Con-
sultants Exploited the Facebook Data of Millions’. The New York Times (17 March
2018).
Roussi, Antoaneta. ‘Resisting the Rise of Facial Recognition’. Nature 587 (2020): 350–3.
Rouvroy, Antoinette. ‘The End(s) of Critique: Data-Behaviourism vs. Due-Process’. In
Privacy, Due Process and the Computational Turn: The Philosophy of Law Meets the
Philosophy of Technology, edited by Mireille Hildebrandt and Katja de Vries, 143–67
(London: Routledge, 2012).
Rouvroy, Antoinette, and Thomas Berns. ‘Gouvernementalité algorithmique et perspectives
d’émancipation: Le disparate comme condition d’individuation par la relation?’. Réseaux
1(177) (2013): 163–96.
Rudin, Cynthia. ‘Stop Explaining Black Box Machine Learning Models for High Stakes De-
cisions and Use Interpretable Models Instead’. Nature Machine Intelligence 1(5) (2019):
206–15.
Russell, Legacy. Glitch Feminism: A Manifesto (London: Verso, 2020).
Sanders, Lewis, IV. ‘Facebook Funds AI Ethics Center in Munich’. Deutsche Welle,
2019. Available at https://www.dw.com/en/facebook-funds-ai-ethics-center-in-munich/
a-47156591, [cited 10 February 2019].
Sandvig, Christian, Kevin Hamilton, Karrie Karahalios, and Cedric Langbort. ‘Auditing Al-
gorithms: Research Methods for Detecting Discrimination on Internet Platforms’. Paper
presented at Data and Discrimination: Converting Critical Concerns into Productive In-
quiry, a Preconference of the 64th Annual Meeting of the International Communication
Association (Seattle, WA: 2014).
Sandvik, Kristin Bergtora, Katja Lindskov Jacobsen, and Sean Martin McDonald. ‘Do No
Harm: A Taxonomy of the Challenges of Humanitarian Experimentation’. International
Review of the Red Cross 99(904) (2017): 319–44.
Saunders, Jessica, Priscillia Hunt, and John S Hollywood. ‘Predictions Put into Prac-
tice: A Quasi-Experimental Evaluation of Chicago’s Predictive Policing Pilot’. Journal of
Experimental Criminology 12(3) (2016): 347–71.
Schiller, Daniel. Digital Capitalism: Networking the Global Market System (Cambridge, MA:
MIT Press, 2000).
Schillings, Sonja. Enemies of All Humankind: Fictions of Legitimate Violence (Hanover, NH:
Dartmouth College Press, 2017).
Schmitt, Carl. The Concept of the Political. Translated by George Schwab. (Chicago:
University of Chicago Press, 1996).
Schneier, Bruce. ‘Why Data Mining Won’t Stop Terror’. Wired, 2005. Available at https://
www.schneier.com/essays/archives/2005/03/why_data_mining_wont.html, [cited 18
February 2019].
Scholtz, Trebor, Digital Labor: The Internet as Playground and Factory (London: Routledge,
2013).
Schouten, Peer. ‘Security as Controversy: Reassembling Security at Amsterdam Airport’.
Security Dialogue 45(1) (2014): 23–42.
Schutt, Rachel, and Cathy O’Neil. Doing Data Science: Straight Talk from the Frontline
(Sebastopol, CA: O’Reilly, 2013).
Science Museum. ‘Top Secret. From Ciphers to Cybersecurity’. 2019. Available at https://
www.sciencemuseum.org.uk/what-was-on/top-secret, [cited 29 July 2021].
Scola et al. v Facebook Inc. ‘Amended Complaint and Demand for Jury Trial’. Civil Action
No. 18-civ-05135: Superior Court of California, County of San Mateo, 2019. Available at
https://contentmoderatorsettlement.com/Home/Documents, [cited 29 July 2021].
246 references
Scola et al. v Facebook Inc. ‘Plaintiffs’ Renewed Notice of Motion and Motion for Final
Approval of Settlement’. Superior Court of California, County of San Matteo, 2021.
Available at https://contentmoderatorsettlement.com/Home/Documents, [cited 28 June
2021].
Scola et al. v Facebook Inc. ‘Proposed Settlement’. 2020. Available at https://content
moderatorsettlement.com/, [cited 26 July 2021].
Scola et al. v Facebook Inc. ‘Settlement Agreement and Release’. 2020. Available at https://
contentmoderatorsettlement.com/Content/Documents/Settlement%20Agreement.pdf,
[cited 30 November 2020].
Seaver, Nick. ‘What Should an Anthropology of Algorithms Do?’. Cultural Anthropology
33(3) (2018): 375–85.
Serres, Michel. Hermès II. L’Interférence (Paris: Les Editions de Minuit, 1972).
Serres, Michel. The Parasite (Baltimore: Johns Hopkins University Press, 1982).
Shanahan, Patrick M. ‘Public Declaration and Assertion of Military and State Secrets Privi-
lege by Patrick M. Shanhan, Acting Secretary of Defense’. In Civil Action No. 1:17-cv-0581
(RMC). US District Court for the District of Columbia, 2019.
Shane, Scott, and Daisuke Wakabayashi. “‘The Business of War”: Google Employees Protest
Work for the Pentagon’. The New York Times (4 April 2018).
Sherrets, Doug, Sean Liu, and Brett Rolston Lider. ‘User Behavior Indicator’. Google Patents.
United States Patent Office, 2014.
Shore, Cris, and Susan Wright. ‘Coercive Accountability: The Rise of Audit Culture in
Higher Education’. In Audit Cultures: Anthropological Studies in Accountability, Ethics
and the Academy, edited by Marilyn Strathern, 57–89 (London: Routledge, 2000).
Silver, Ellen. 2018, ‘Hard Questions: Who Reviews Objectionable Content on Facebook—
and Is the Company Doing Enough to Support Them?’. 2018. Available at https://
newsroom.fb.com/news/2018/07/hard-questions-content-reviewers/, [cited 17 May
2020].
Simonite, Tom. ‘What Really Happened When Google Ousted Timnit Gebru’. Wired,
2021. Available at https://www.wired.com/story/google-timnit-gebru-ai-what-really-
happened/, [cited 17 June 2021].
Singh, Angadh, and Carlos Gomez-Uribe. ‘Recommending Media Items Based on Take Rate
Signals’. Google Patents. United States Patent Office, 2019.
Singleton, Vicky. ‘When Contexts Meet: Feminism and Accountability in UK Cattle Farm-
ing’. Science, Technology, & Human Values 37(4) (2012): 404–33.
Smith, Aaron. ‘Gig Work, Online Selling and Home Sharing’. Pew Research Center,
2016. Available at https://www.pewinternet.org/2016/11/17/gig-work-online-selling-
and-home-sharing/, [cited 27 November 2019].
Snap Inc. ‘Community Guidelines’. 2019. Available at https://www.snap.com/en-US/
community-guidelines, [cited 3 October 2019].
Snow, Jacob. ‘Amazon’s Face Recognition Falsely Matched 28 Members of Congress
with Mugshots’. 2018. Available at https://www.aclu.org/blog/privacy-technology/
surveillance-technologies/amazons-face-recognition-falsely-matched-28, [cited 17 June
2021].
Solon, Olivia. “‘It’s Digital Colonialism”: How Facebook’s Free Internet Service Has Failed
Its Users’. The Guardian (27 July 2017).
Spotify. ‘Letter to Access Now’. 2021. Available at https://www.accessnow.org/cms/assets/
uploads/2021/04/Spotify-Letter-to-Access-Now-04-15-2021-.pdf, [cited 11 July 2021].
Squire, Vicki, editor. The Contested Politics of Mobility: Borderzones and Irregularity
(London: Routledge, 2011).
references 247
Srinivasan, Janaki, and Elisa Oreglia. ‘The Myths and Moral Economies of Digital ID and
Mobile Money in India and Myanmar’. Engaging Science, Technology, and Society 6 (2020):
215–36.
Srnicek, Nick. Platform Capitalism (Cambrige, UK: Polity, 2017).
Statista. ‘Business Data Platform’. 2019. Available at https://www.statista.com/topics/1145/
internet-usage-worldwide/, [cited 23 October 2019].
Statt, Nick. ‘Google Dissolves AI Ethics Board Just One Week after Forming It. Not a
Great Sign’. The Verge, 2019. Available at https://www.theverge.com/2019/4/4/18296113/
google-ai-ethics-board-ends-controversy-kay-coles-james-heritage-foundation, [cited
3 June 2019].
Statt, Nick. ‘Zuckerberg: “Move Fast and Break Things” Isn’t How Facebook Operates Any-
more’. 2014. Available at https://www.cnet.com/news/zuckerberg-move-fast-and-break-
things-isnt-how-we-operate-anymore/, [cited 14 June 2021].
Stephens-Davidowitz, Seth. Everybody Lies: Big Data, New Data, and What the Internet Can
Tell Us About Who We Really Are (New York: HarperCollins, 2017).
Stoker-Walker, Chris. ‘Twitter’s Vast Metadata Haul Is a Privacy Nightmare for Users’. 2018.
Available at https://www.wired.co.uk/article/twitter-metadata-user-privacy.
Stoler, Ann Laura. Duress: Imperial Durabilities in Our Times (Durham, NC: Duke Univer-
sity Press, 2016).
Stop LAPD Spying Coalition. ‘Predictive Policing: Profit-Driven Racist Policing’.
2016. Available at https://stoplapdspying.org/predictive-policing-profit-driven-racist-
policing/, [cited 20 May 2021].
Stop LAPD Spying Coalition. ‘Stop LAPD Spying Coalition Wins Groundbreaking Pub-
lic Records Lawsuit’. 2019. Available at https://stoplapdspying.medium.com/stop-lapd-
spying-coalition-wins-groundbreaking-public-records-lawsuit-32c3101d4575, [cited 3
May 2021].
Stop LAPD Spying Coalition and Free Radicals. ‘The Algorithmic Ecology: An Abolitionist
Tool for Organizing against Algorithms’. 2020. Available at https://stoplapdspying.
medium.com/the-algorithmic-ecology-an-abolitionist-tool-for-organizing-against-
algorithms-14fcbd0e64d0, [cited 20 June 2021].
Strathern, Marilyn. ‘Introduction: New Accountabilities’. In Audit Cultures: Anthropologi-
cal Studies in Accountability, Ethics and the Academy, edited by Marilyn Strathern, 1–18
(London: Routledge, 2000).
Stroud, Matt. ‘The Minority Report: Chicago’s New Police Computer Predicts Crimes,
but Is It Racist?’. The Verge, 2014. Available at https://www.theverge.com/2014/
2/19/5419854/the-minority-report-this-computer-predicts-crime-but-is-it-racist, [cited
23 October 2019].
Suchman, Lucy, Karolina Follis, and Jutta Weber. ‘Tracking and Targeting: So-
ciotechnologies of (In)security’. Science, Technology, & Human Values 42(6) (2017):
983–1002.
Suchman, Lucy, Lilly Irani, Peter Asaro, and et al. ‘Open Letter in Support of Google
Employees and Tech Workers’. International Commitee for Robot Arms Control,
2018. Available at https://www.icrac.net/open-letter-in-support-of-google-employees-
and-tech-workers/, [cited 28 January 2019].
Sullivan, Gavin. The Law of the List: UN Counterterrorism Sanctions and the Politics of Global
Security Law (Cambridge, UK: Cambridge University Press, 2020).
Sundararajan, Arun. ‘Network Effects’. NYU Stern, 2006. Available at http://oz.stern.nyu.
edu/io/network.html, [cited 28 October 2019].
Swanson, Christopher, and Johan Oskarsson. ‘Parking Suggestions’. Spotify AB Publisher.
United States Patent Office, 2018.
248 references
Xu, Ke, Vicky Liu, Yan Luo, and Zhijing Yu. ‘Analyzing China’s PIPL and how it
compares to the EU’s GDPR’. International Association of Privacy Professionals,
2021. Available at https://iapp.org/news/a/analyzing-chinas-pipl-and-how-it-compares-
to-the-eus-gdpr/, [cited 15 November 2021].
Ye, Yuan. ‘A Professor, a Zoo, and the Future of Facial Recognition in China’. Sixth
Tone, 2021. Available at https://www.sixthtone.com/news/1007300/a-professor%2C-a-
zoo%2C-and-the-future-of-facial-recognition-in-china, [cited 17 January 2022].
Yin, Cao. ‘Focus Tightens on Facial Recognition’. China Daily (18 May 2021).
Yuan, Li. ‘How Cheap Labor Drives China’s A.I. Ambitions’. The New York Times (25
November 2018).
Yujie, Xue. ‘Facial-Recognition Smart Lockers Hacked by Fourth-Graders’. Sixth Tone,
2019. Available at https://www.sixthtone.com/news/1004698/facial-recognition-smart-
lockers-hacked-by-fourth-graders, [cited 23 November 2019].
Yuval-Davis, Nira, Georgie Wemyss, and Kathryn Cassidy. Bordering (Cambridge, UK:
Polity, 2019).
Zaidan et al. v Trump et al. ‘Complaint. Case No. 1:17-Cv-00581’. United States District
Court for the District of Columbia, 2017. Available at https://www.plainsite.org/dockets/
34pkvx7dt/district-of-columbia-district-court/zaidan-et-al-v-trump-et-al/, [cited 15
April 2017].
Zaidan et al. v Trump et al. ‘Memorandum Opinion’. 2018. Available at https://ecf.dcd
.uscourts.gov/cgi-bin/show_public_doc?2017cv0581-13, [cited 20 September 2018].
Zaidan et al. v Trump et al. ‘Motion to Dismiss by Central Intelligence Agency, Dan Coats,
Department of Defense, Department of Homeland Security, Department of Justice, John
F. Kelly, James Mattis, Herbert Raymond Mcmaster, Michael Richard Pompeo, Jefferson
Beauregard Sessons, Iii, Donald J. Trump, USA’. 2017. Available at https://www.plainsite.
org/dockets/34pkvx7dt/district-of-columbia-district-court/zaidan-v-trump/?, [cited 2
April 2019].
Zaidan et al. v Trump et al. ‘Memorandum of Points and Authorities in Support of De-
fendants’ Motion to Dismiss’. United States District Court for the District of Columbia,
2017.
Zehfuss, Maja. War and the Politics of Ethics (Oxford: Oxford University Press, 2018).
Zhong, Yiyin. ‘Chinese Professor Files Country’s First Lawsuit against Use of Facial Recog-
nition Technology in Zoo Row’. The Telegraph (5 November 2019).
Zhou, Ding, and Pierre Moreels. ‘Inferring User Profile Attributes from Social Information’.
Google Patents. United States Patent Office, 2013.
Ziewitz, Malte. ‘Governing Algorithms: Myth, Mess, and Methods’. Science, Technology, &
Human Values 41(1) (2016): 3–16.
Zuboff, Shoshana. The Age of Surveillance Capitalism: The Fight for a Human Future at the
New Frontier of Power (New York: PublicAffairs, 2018).
Zuckerberg, Mark. ‘Free Basics Protects Net Neutrality’. Times of India (28 December 2015).
Zuckerberg, Mark. ‘Facebook Post’. Facebook, 2018. Available at https://www.facebook.
com/zuck/posts/10105865715850211, [cited 6 November 2019].
Index of subjects
Big Data World 204
Big Oil 107
big tech companies 1–3, 16–17, 92, 115–16, 134, 141, 172, 183, 185, 202, 205, 209, 217
big tech platforms 93, 157
binaries 14, 24, 31, 45, 66, 76, 80, 83, 85, 113, 148, 150, 193, 206, 211
biometric data 92–3, 143
biometric recognition 174
biometrics 92, 107, 213
black boxes/boxing 52, 172
black box society 11
Black Lives Matter 160
Booking Holdings 148
bordering/rebordering 181, 183–7, 190, 192, 195, 202, 217
borders 3, 14, 17–18, 119, 134, 143, 160, 174, 181, 183–8, 190, 192–5, 197–8, 202–3, 208, 217
Boston Consulting Group 2
boundaries 3, 8, 14, 18, 61, 65, 74, 98, 101, 112, 120, 147 n.41, 181, 184–8, 190, 192–7, 201–3, 207
Brazil 97–8
Brennan Center of Justice 51
Bureau of Investigative Journalism 2 n.3
bureaucracy 4, 162, 164, 175
bureaucratization 44–6, 163, 165, 175, 211
business 99, 116, 120
  actors 96
  aim 105
  API 97
  applications 59
  dimension 106
  leaders 92
  logic 96
  models 52, 92, 98, 103, 115, 127, 133, 172, 175
  processes 63
  values 115
California 43, 198, 201
  citizens 199–200, 217
  servers 2
Cambridge Analytica 13, 15, 24–7, 29–31, 33–6, 101, 205–6, 210
Cambridge University 26, 34
Canada 25
capitalism 5, 7, 45, 112, 119–20, 188, 211, see also cognitive capitalism, digital capitalism, industrial capitalism, new capitalism, philanthro-capitalism, platform capitalism, surveillance capitalism
Carnegie Endowment for International Peace 177
CelebA 164
centralization/decentralization 95, 97, 101, 103, 105, 113
Chicago crime data 53, 54f, 55f, 56–7, 58f, 59–62
Chicago police 9, 52, 54, 57, 58f, 65
China 17, 120–1, 139, 152, 158, 161–2, 208, 216
  Fuyang District People's Court 178
  Hangzhou Intermediate People's Court 178
  Personal Information Protection Law (PIPL) 179
  Zhejiang 178, 180
  Zhejiang High People's Court 178
Chinese
  citizens 176, 178, 180
  companies 164, 177, 180
  ecosystems 103, 108
  facial recognition 177, 179
  government 164, 176
  market 18
  social media 179
  traditions 143 n.16
  use of AI 177
citizens 10, 28–9, 75, 78 n.41, 86, 94, 143, 147 n.41, 149, 159, 168, 175–6, 178, 180, 184–5, 187–8, 193–6, 199–203, 210–11, 216–17
citizenship 181, 192–3, 196–7, 199–201, 203, 217
CivicScape 15, 45, 52–3, 56–7, 62–5
civilized–barbarian distinction 85, 186, 192
civil liberties 17, 160, 167
civil society 17, 44 n.12, 140
class action 133, 185, 198–203
Clearview AI 161
clickwork/clickworkers 117–18, 120
cloud 105
  behaviour analytics 69
  capabilities 77
  platforms 98–101, 104, 205
  services 99, 153
  technologies 15, 213
clustering 34, 57–9, 124, 130, 157
code 6, 17, 43, 52–4, 57, 102, 105, 109–12, 141, 143–4, 146–8, 155–6, 158, 191
cognition 38–9
cognitive
  capitalism 118
  constructs 28
  ecology 38
  faculties 48
cognizers 38–9
Cold War 17, 184
collaboration 2, 43, 45, 78f, 91, 99, 106, 109, 142, 153, 156, 158, 181, 183, 210
collective
  action 4, 40, 145
  agency 165, 195
  coding 156
  conduct 24
  constraints 38
  creation of value 124
  dissensus 215
  distribution 146
  efforts 94, 125
  hacking 151
  interventions 158
  objects 28
  patterns 71
  protest 150
  sense-making 45
  subjectivity 10, 158, 214
  subjects 152
collectives 17, 33, 36, 41, 145, 149–50, 155, 159, 210, 215
colonial
  characteristics 103
  context 39
  continuities 17–18
  extractivism 165
  power 183, 186
  rationalities 93
  relationships 92
  vocabularies 96
colonialism 5, 14, 112, 120 n.37, 182–3, 202, 206, 211, see also techno-colonialism
commercial
  actors 1–2
  apps 109
  content 129
  data 29–30, 34
  gender classification 160
  intelligence 2
  interests 167
  platforms 98–100
  scene 206
  secrecy 45, 51
  surveillance 1–2
commodification 102, 129, 134, 148
communication 6, 22, 29–30, 38, 40, 69–70, 81–2, 94, 97, 105, 109, 148, 154–7, 163, 190–1, 202–3, 205
communities 2, 12, 51–2, 65, 91, 98, 102, 105, 107, 164–5, 168, 189–97, 200–3, 217
companies 26, 31, 33–4, 45, 50–2, 64–5, 101–2, 106, 114, 117, 122, 126, 129, 135, 140, 142, 146, 148, 151, 153–5, 161, 165, 167, 170–1, 173, 177, 181, 193, 195, 197, 199–200, 211, see also big tech companies, East India Company, Internet companies, social media companies, tech companies
complementors 96, 103
composing 106
  data 31
  platforms 16
  see also decomposing, recomposing
composition 24, 31, 35, 41, 90, 93, 107, 109, 126–7, 209, 212–14, see also decomposition, recomposition
computable
  assessment 166
  categories 58
  data 32, 39–40
  difference 135
  forms 54
computational
  capabilities 33
  capacities 49, 103
  expressions 10
  geometries 45
  integrations 101
  models 34
  practices 87
critiques 21, 27, 43–4, 49, 53, 108–9, 117–18, 123, 128, 171–2, 182, 185, 213
crowds 8, 118, 170, 177
crowdsourcing 100, 107, 112, 154, 157
crowd-working 120
cultural
  constraints 38
  domination 198
  practices 72
  relations 10
  studies 72
  theorists 10, 182
  world 122
cultures 8, 73, 193, 198
cybernetics 74
data
  activism 10
  analysis 2, 33, 120, 183
  collection 10, 25, 65, 70, 75, 87, 107
  double 71, 214
  justice 10
  mining 21, 76
  politics 11
  protection 26, 108–9, 139, 142, 150, 161, 179, 202, 205, 214
  science 34, 55, 77 n.37
  scientists 4, 10, 37, 42–3, 59, 76–7, 169–70, 176, 204, 210
  transformations 44, 63
  witnessing 11
databases 161, 164–5, 174
data-driven
  hotspot 63
  microtargeting 33
  politics 51
  predictive policing 43, 59
datafication 4, 8, 10, 16, 32, 35–6, 40, 54, 91, 102, 121 n.36, 128–30, 133, 135, 151, 156–7, 196, 208, 211
dataism 10
datasets 29, 34, 51 n.55, 52, 79, 115–16, 154, 161, 164–5, 170, 204
decision boundaries 56, 60–2, 64
decisionism 42 n.3, 211
decision-makers 145
decision-making 4, 7, 15, 42–6, 51, 63, 146, 162, 167, 211, see also algorithmic decision-making
decisions 14, 23, 30, 46, 48, 53–4, 57–8, 60 n.64, 87, 105, 141, 145, 147, 163, 175, 197–8, 201, see also algorithmic decisions, unexceptional decisions
decision tree algorithm 49, 59, 61
decision trees 49, 60, 62, 69, 212
decomposing 40, 93, 103, 210
  data 7, 31, 71, 132
  objects 54
  platforms 16, 101–2, 105, 111–12
decomposition 15, 24, 31, 33–5, 38, 40 n.84, 74, 80, 89, 105, 112–13, 129, 132–3, 135, 211–12
democracy 4, 14, 18, 22, 29, 46, 149, 152, 163, 177, 185, 188, 206, 210 n.13, 218
democratic
  accountability 43
  action 149–50, 216
  actors 17
  AI 177
  control 163
  deploys 106
  dissensus 153, 159
  governance 3
  imaginaries 4
  institutions 7, 207
  issues 205
  life 149, 168
  politics 149, 203, 215
  potentials 217
  procedures 187
  processes 22
  regimes 177, 180
  scenes 14, 153, 178, 204
democratization 11, 18, 135, 162–3
  of democracy 14, 218
dependency 16, 36, 39, 92–3, 98, 101, 111, 113, 125, 129, 148, 194–5, 198–202
developers 17, 44, 96, 102, 105–6, 109, 111, 117, 140, 142, 146, 158, 204, 210
devices 8, 12–13, 38–41, 43, 71–2, 75, 87, 94, 106, 111, 130–3, 149–50, 155–7, 162, 169 n.44, 204–5, 208–9, 212
digital
  age 103, 158, 183
  capitalism 115–18, 121, 124–5, 154, 204, 206
  economy 16, 40, 115–17, 120, 124–5, 130, 213
Russia 88, 176, 180, 276
  Department of Information Technologies 162, 180
  Kremlin 183
  Moscow 162, 180
San Francisco 160
Santa Clara University 43
satellite imagery/technologies 91–2, 112
Saudi Arabia 120
Save the Children 2, 8, 93
scenes 12–17, 45, 71, 86, 88, 93, 116, 142, 151–5, 159, 184–5, 191–2, 198, 203–4, 206, 208–10, 214, 216
science and technology studies (STS) 4, 7, 73, 152 n.66, 209–10, 215, 218
Science Museum 1
search engines 38, 40, 122–3, 158, 167, 203
search queries 122, 133
Second Nagorno-Karabakh War 151
security 3, 7, 23, 74, 79, 86, 88, 99, 114, 147 n.41, 177, 206
  agencies 1–2, 21, 68, 80, 143, 160, 202, 208, 212
  algorithms 15
  applications 1, 77, 80, 170
  cameras 174
  compositions 24 n.14
  practices 76, 87, 212
  practitioners 29
  professionals 22, 75, 77–8, 80, 82, 85, 87, 89
  standards 179
  techniques 75
  threats 93
self-accountable algorithms 171, 173, 176
self and other 3, 5, 7–9, 12, 14–15, 42, 45, 72, 74, 83, 85, 116, 146, 150, 206–8, 210, 214
SenseTime 164
Serbian safe city 176
sexism 167–8, 171
SIGINT 78
signals 37–9, 40 n.84, 125, 132, 134
Silicon Valley 1, 52 n.44, 95, 117, 140, 144, 153, 155, 182, 197, 200
Singapore 204
SKYNET 15, 69–70, 80, 81f, 83, 87
Skype 157
small data 7, 33–4, 116–17, 129–30, 132, 135, 210–11
Snowden 16, 77
  archive 212
  disclosures 2, 21, 32, 69
  documents 70, 76, 82, 87, 90
  leaks 13
  memo 81 n.57
  revelations 86
  slides 81
social media 26, 29–30, 37, 50, 70, 89, 91–2, 107, 118–19, 135, 179–80, 206
  companies 1, 18, 25, 115, 124–5, 169, 184–5, 188–91, 194, 196, 202
  platforms 39, 125, 194–6, 198, 202
social sciences 4, 8, 14, 23–4, 28, 41, 73, 140, 181, 210
sociologists 10, 51 n.37, 64–5, 85, 89 n.89, 115, 124, 175, 179, 183, 209
socio-technical
  arrangements 156
  constitution 109
  controversies 209
  design 96
  devices 75
  distrust 180
  infrastructures 94
  occurrences 215
  processes 5
  production 203
  relations 112
  scenes 155
  systems 7, 106
software 6, 9, 42, 45, 50–2, 63, 97–8, 100, 102, 109, 111–12, 124–5, 142, 154, 160, 204
sovereign
  borders 18, 192
  boundaries 18, 193, 202
  decisions 7, 44, 46, 60 n.64, 195
  law 194–5, 202
  politics 217
  power 42, 183, 185, 190–1, 211, 196
sovereignty 42 n.3, 147, 184, 186, 190, 195, 197, 201–2, 210
speech 15, 35, 37–40, 184, 191, 206, 210–11, see also freedom of speech, hate speech
speech recognition 114, 132
United States (US) (Continued)
  Total Information Awareness Act 21
  see also America, American
Ushahidi 112
Uyghurs 164, 177
valorization 15–16, 116–17, 119–23, 125–9, 131, 133–5, 195, 212–13
Venezuela 117
Viber 109
videos 133–4, 154, 161–2, 177, 197–9
violence 4–5, 57, 62, 72, 74, 112, 144–5, 180, 186, 188, 199
Web 38, 96–9, 101, 103–4, 112–13, 122–3, 135, 153, 188
websites 52 n.43, 97–8, 109 n.88, 122–4, 126–7, 129, 147–8, 183
WeRobotics 93
WhatsApp 2, 97–8, 109, 194
workers 18, 36, 47–9, 100–1, 106, 117–21, 154, 158–9, 182, 184, 188, 190–1, 197–201, 203, 208, 217
workflow 6–7, 10, 15–16, 44–5, 52–3, 56–7, 61–2, 64, 106, 148, 168, 172, 211, 215, see also human–machine workflow
XKeyScore 78
Young Rewired State 156
YouTube 97, 125, 133, 135, 161, 189, 191, 195