Perspectives on Digital Humanism

Hannes Werthner • Erich Prem • Edward A. Lee • Carlo Ghezzi
Editors

Editors
Hannes Werthner
Vienna University of Technology
Vienna, Austria

Erich Prem
eutema GmbH and Vienna University of Technology
Vienna, Austria

Edward A. Lee
University of California, Berkeley
Berkeley, CA, USA

Carlo Ghezzi
Politecnico di Milano
Milan, Italy
© The Editor(s) (if applicable) and The Author(s) 2022. This book is an open access publication.
Open Access This book is licensed under the terms of the Creative Commons Attribution 4.0 International
License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation,
distribution and reproduction in any medium or format, as long as you give appropriate credit to the
original author(s) and the source, provide a link to the Creative Commons license and indicate if changes
were made.
The images or other third party material in this book are included in the book's Creative Commons license,
unless indicated otherwise in a credit line to the material. If material is not included in the book's Creative
Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted
use, you will need to obtain permission directly from the copyright holder.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors, and the editors are safe to assume that the advice and information in this book
are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the
editors give a warranty, expressed or implied, with respect to the material contained herein or for any
errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional
claims in published maps and institutional affiliations.
This Springer imprint is published by the registered company Springer Nature Switzerland AG.
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
“This is absolute nonsense.” This was the reaction of the audience, both academics
and non-academics, participating in the First International Conference on IT and
Tourism in Innsbruck in 1994. Beat Schmid (University of St. Gallen, Switzerland)
spoke about electronic markets and Larry Press (UCLA, USA) about digital agents.
Now, only 28 years later, this “nonsense” runs the world: information technology
and its artifacts act as the operating system of our lives, and it is hard to
distinguish between the real and the virtual. We cannot imagine a world without it,
and—besides running the world—it contributes and will continue to contribute to
solving important problems. However, this also comes with interconnected shortcomings,
and in some cases, it even puts into question the sovereignty of states.
Other critical problems are echo chambers and fake news, the questioned role of
humans in AI and decision-making, the increasingly pressing privacy concerns, and
the future of work.
This “double face” is why we started the Digital Humanism initiative, with a first
workshop in April 2019 in Vienna. Over 100 attendees from academia, government,
industry, and civil society participated in this lively 2-day workshop. We talked
about technical, political, economic, societal, and legal issues, benefiting from
contributions from different disciplines such as political science, law, sociology,
history, anthropology, philosophy, economics, and informatics. At the center of the
discussion was the relationship between computer science/informatics and society,
or, as expressed during the workshop, the co-evolution of information technology
and humankind. The major outcome was the Vienna Manifesto on Digital Human-
ism, now available in seven languages, which lays down the core principles of our
initiative.
Since then, we have organized a set of workshops and panel discussions. These
events, forced by the pandemic to be online, have drawn a growing worldwide
community. In addition, we succeeded in establishing a core group of internationally
renowned intellectuals from different disciplines, forming a program committee that
jointly “governs” and directs the Digital Humanism initiative.
We share the vision that we need to analyze and to reflect on the relationship
between human and machine, and, equally important, to influence its development
for better living and society. Technology is for people and not the other way around.
We chose the term Digital Humanism, which was introduced in the German-speaking
world by Julian Nida-Rümelin and Nathalie Weidenfeld with their book
Digitaler Humanismus (Piper Verlag, 2018). We want to stress that humans should
be at the center of the digital world. Technological progress should improve human
freedom, peace, and progress in harmony with nature.
Today, the spirit of humanism should inspire the developments of our society,
which is largely reliant on digital technologies. As such, we distinguish Digital
Humanism from the Digital Humanities, the study of human society and culture
pursued by digital means. In contrast, Digital Humanism aims at rethinking our
current digital practices, including research, development, and innovation in the
digital domain. As such, it maintains a positive goal for technology to create
societal progress rather than just innovation for the sake of economic growth.
The term humanism, taking a historical perspective, refers to two rather different
movements. The first denotes the period from the mid-fifteenth to the end of
the sixteenth century (Renaissance Humanism), with a rediscovery of antiquity in
the arts and in philosophy, and in which scholars, philosophers, and artists were
called and called themselves “humanists.” Aesthetics and ethics became centered on
humans rather than on the supernatural or divine. The best-known iconic represen-
tation of Humanism is Leonardo da Vinci’s Vitruvian Man, where a human enclosed
in a circle is shown as the archetype of the principles of harmony and proportion
advocated in Vitruvius’ book De Architectura. A second period of humanism
flourished in the Enlightenment (at the end of the eighteenth century), and the French
revolution was largely inspired by the principles of human freedom and democracy
rooted in the humanistic spirit of that time. Humanism was associated with educa-
tional and pedagogical ideals that focused on values such as human dignity and
humanity. Naturally, the two movements share a range of common concepts and
interests, some of which remain relevant for Digital Humanism today, for example, a
strong focus on human rights and how to maintain them in the digital realm.
There are, however, critics of these classical notions of humanism. In particular, the
educational ideal of humanists has been criticized as supporting beliefs in European
cultural supremacy. Furthermore, a focus on the human subject always requires
critical examination regarding who that subject precisely is and which of its many
traits should be considered essential. However, Digital Humanism today certainly
has no supremacy or colonial mission; quite the contrary, it is critical of already
existing colonial tendencies in today’s digital technologies. This is evidenced by our
stance on digital sovereignty and geopolitics, for example. Similarly, the question of
which traits of human nature should be focused on is a subject of discussion in
Digital Humanism, especially since the relation between the individual and society is a
major concern of digital humanists.
In the context of the Enlightenment, proponents of Digital Humanism should also
be aware of the critical theory of the Frankfurt School of philosophy. Its prominent
members Adorno and Horkheimer provide a critical analysis of the process of
[...]
well as our cultural heritage, while also highlighting the importance of culture and
art for digital innovation. Science fiction, for example, is a driver of digital
innovation.
• Data, Algorithm, and Fairness looks at the potential that digital technologies
have to both reinforce and ameliorate unfair treatment of groups of humans. It
deals with complex questions that may arise from an overly strong focus on the
individual rather than a societal perspective. It considers the attention economy
and effects that arise from the characteristics of internet search.
• Platform Power examines the economic and societal role of today’s mega-
platforms, such as Google, Facebook, and Twitter, looking at their dynamics,
the important role they played in the pandemic, and their impact on specific
industries and their business models.
• Education and Skills of the Future considers how the future of work will affect
education, the impact of technology on the skills needed in the future, and what and
how we should teach our young.
• Digital Geopolitics and Sovereignty looks at the contradiction of the inherent
global dimension of the digital world and the limits of national governance
structures. What is the future of sovereignty in digital times?
• Systems and Society addresses societal issues such as the future of work, how to deal
with changes imposed by the digital world, how to frame technological design,
and how to formulate corresponding political answers.
• Learning From Crisis addresses the role of technology in the human reaction to
the global pandemic of 2020–2021, and it draws important lessons for a probable
next (global) crisis.
• Realizing Digital Humanism reflects on possible next steps, both at the level of
research and at a more general societal and political level. As one contribution
states, it seems easy to describe the problem, but it is hard to solve it.
Digital Humanism is a fundamental concept; it is about our future as humans and
as society, not only in the digital world. As such, it is not only an academic
undertaking; it is also a political issue. We need to engage with society and with a
mixed audience, from academics to political decision-makers, from industry and
institutions to civil society and non-governmental organizations, and it is not only
about science, research, and innovation. Equally important are education, commu-
nication, and influencing the public for democratic participation. We hope that this
collection of essays provides an essential contribution to this important endeavor.
We want to thank our colleagues for their contributions, and also for responding on
time (at least most of the time) to our usually “urgent” requests. It was a pleasure to
work with you—thank you. We also thank our donors who made this volume
possible. We follow an open access strategy, with the content accessible both via
our website and in the volume published by Springer. Donors are the City
of Vienna (Kulturabteilung), WWTF (Vienna Science and Technology Fund),
Austrian Ministry of European and International Affairs, iCyPhy (the Industrial
Cyber-Physical Research Center at UC Berkeley), and the Database and Artificial
Intelligence Group at TU Wien. Finally, we want to thank Mete Sertkan and [...]

Vienna Manifesto on Digital Humanism

[...] ideals with critical thoughts about technological progress. We therefore link this
manifesto to the intellectual tradition of humanism and similar movements striving
for an enlightened humanity.
Like all technologies, digital technologies do not emerge from nowhere. They are
shaped by implicit and explicit choices and thus incorporate a set of values, norms,
economic interests, and assumptions about how the world around us is or should
be. Many of these choices remain hidden in software programs implementing
algorithms that remain invisible. In line with the renowned Vienna Circle and its
contributions to modern thinking, we want to espouse critical rational reasoning and
the interdisciplinarity needed to shape the future.
We must shape technologies in accordance with human values and needs, instead
of allowing technologies to shape humans. Our task is not only to rein in the
downsides of information and communication technologies, but to encourage
human-centered innovation. We call for a Digital Humanism that describes, ana-
lyzes, and, most importantly, influences the complex interplay of technology and
humankind, for a better society and life, fully respecting universal human rights.
In conclusion, we proclaim the following core principles:
• Digital technologies should be designed to promote democracy and inclusion.
This will require special efforts to overcome current inequalities and to use the
emancipatory potential of digital technologies to make our societies more
inclusive.
• Privacy and freedom of speech are essential values for democracy and should be
at the center of our activities. Therefore, artifacts such as social media or online
platforms need to be altered to better safeguard the free expression of opinion, the
dissemination of information, and the protection of privacy.
• Effective regulations, rules and laws, based on a broad public discourse, must be
established. They should ensure prediction accuracy, fairness and equality,
accountability, and transparency of software programs and algorithms.
• Regulators need to intervene with tech monopolies. It is necessary to restore
market competitiveness as tech monopolies concentrate market power and stifle
innovation. Governments should not leave all decisions to markets.
• Decisions with consequences that have the potential to affect individual or
collective human rights must continue to be made by humans. Decision makers
must be responsible and accountable for their decisions. Automated decision
making systems should only support human decision making, not replace it.
• Scientific approaches crossing different disciplines are a prerequisite for tackling
the challenges ahead. Technological disciplines such as computer science/infor-
matics must collaborate with social sciences, humanities, and other sciences,
breaking disciplinary silos.
• Universities are the place where new knowledge is produced and critical thought
is cultivated. Hence, they have a special responsibility and have to be aware
of that.
• Academic and industrial researchers must engage openly with wider society and
reflect upon their approaches. This needs to be embedded in the practice of
producing new knowledge and technologies, while at the same time defending the
freedom of thought and science.
• Practitioners everywhere ought to acknowledge their shared responsibility for
the impact of information technologies. They need to understand that no technol-
ogy is neutral and be sensitized to see both potential benefits and possible
downsides.
• A vision is needed for new educational curricula, combining knowledge from the
humanities, the social sciences, and engineering studies. In the age of automated
decision making and AI, creativity and attention to human aspects are crucial to
the education of future engineers and technologists.
• Education on computer science/informatics and its societal impact must start as
early as possible. Students should learn to combine information-technology skills
with awareness of the ethical and societal issues at stake.
We are at a crossroads to the future; we must go into action and take the right
direction!
SIGN AND SUPPORT THE MANIFESTO:
Authors
Hannes Werthner (TU Wien, Austria), Edward A. Lee (UC Berkeley, USA),
Hans Akkermans (Free University Amsterdam, Netherlands), Moshe Vardi (Rice
University, USA), Carlo Ghezzi (Politecnico di Milano, Italy), Nadia Magnenat-
Thalmann (University of Geneva, Switzerland), Helga Nowotny (Chair of the ERA
Council Forum Austria, Former President of the ERC, Austria), Lynda Hardman
(CWI, Centrum Wiskunde & Informatica, Netherlands), Oliviero Stock (Fondazione
Bruno Kessler, Italy), James Larus (EPFL, Switzerland), Marco Aiello (University
of Stuttgart, Germany), Enrico Nardelli (Università degli Studi di Roma “Tor
Vergata”, Italy), Michael Stampfer (WWTF, Vienna Science and Technology
Fund, Austria), Christopher Frauenberger (TU Wien, Austria), Magdalena Ortiz
(TU Wien, Austria), Peter Reichl (University of Vienna, Austria), Viola Schiaffonati
(Politecnico di Milano, Italy), Christos Tsigkanos (TU Wien, Austria), William
Aspray (University of Colorado Boulder, USA), Mirjam E. de Bruijn (Leiden
University, Netherlands), Michael Strassnig (WWTF, Vienna Science and Technol-
ogy Fund, Austria), Julia Neidhardt (TU Wien, Austria), Nikolaus Forgo (University
of Vienna, Austria), Manfred Hauswirth (TU Berlin, Germany), Geoffrey G. Parker
(Dartmouth College, USA), Mete Sertkan (TU Wien, Austria), Allison Stanger
(Middlebury College & Santa Fe Institute, USA), Peter Knees (TU Wien, Austria),
Guglielmo Tamburrini (University of Naples, Italy), Hilda Tellioglu (TU Wien,
Austria), Francesco Ricci (Free University of Bozen-Bolzano, Italy), Irina Nalis-
Neuner (University of Vienna, Austria)
Are We Losing Control?

Edward A. Lee

E. A. Lee (*)
University of California, Berkeley, CA, USA
e-mail: eal@berkeley.edu
Abstract This chapter challenges the predominant assumption that humans shape
technology using top-down, intelligent design, suggesting that technology should
instead be viewed as the result of a Darwinian evolutionary process where humans
are the agents of mutation. Consequently, we humans have much less control than
we think over the outcomes of technology development.
Rapid change breeds fear. With its spectacular rise from the ashes in the last decade,
we fear that AI may replace most white collar jobs (Ford 2015); that it will learn to
iteratively improve itself into a superintelligence that leaves humans in the dust
(Barrat 2013; Bostrom 2014; Tegmark 2017); that it will fragment information so
that humans divide into islands of disjoint sets of truths (Lee 2020); that it will
supplant human decision-making in health care, finance, and politics (Kelly 2016);
that it will cement authoritarian powers, tracking every move of their citizens and
shaping their thoughts (Lee 2018); and that the surveillance capitalists’ monopolies,
which depend on AI, will destroy small business and swamp entrepreneurship
(Zuboff 2019).
Surely, today, we still retain a modicum of control. At the very least, we can still
pull the plug. Or can we? The technology underlying these risks is made by humans,
so why can’t we control the outcomes? We have the power to design and to regulate,
don’t we? So why are we trying so desperately to catch up with yesterday’s disasters
while today’s just fester? The very technology that threatens us is also the reason we
are successfully feeding most of the 7.8 billion humans on this meager planet and
have lifted billions out of poverty in the last decades. Giving us pause, however,
Albert Einstein famously said, “We cannot solve our problems with the same
thinking we used when we created them.”
Knowledge is at the root of technology, information is at the root of knowledge,
and today’s technology makes information vastly more accessible than it has ever
been. Shouldn’t this help us solve our problems? The explosion of AI feeds the
tsunami, turning every image, every text, and every sound into yet more information,
flooding our feeble human brains. We can’t absorb the flood without curation, and
curation of information is increasingly being done by AIs. Every subset of the truth
is only a partial truth, and curated information includes, necessarily, a subset. Since
our brains can only absorb a tiny subset of the flood, everything we take in is at best a
partial truth. The AIs, in contrast, seem to have little difficulty with the flood. To
them, it is the food that strengthens, perhaps leading to that feared runaway feedback
loop of superintelligence that sidelines humans into irrelevance.
The question I address here is, “Are we going to lose control?” You may find my
answer disturbing.
First, in posing this question, what do we mean by “we”? Do we mean “human-
ity,” all 7.8 billion of us? The idea of 7.8 billion people collectively controlling
anything is patently absurd, so that must not be what we mean. Do we mean the
engineers of Silicon Valley? The investors on Wall Street? The politicians who feed
off the partial truths and overt lies?
Second, what do we mean by “control”? Is it like steering a car on a network
of roads, or is it more like steering a car while the map emerges and morphs
into unexpected dead ends, underpasses, and loops? If we are steering technology,
then every turn we take changes the terrain that we have to steer over in
unexpected ways.
I am an engineer. In my own small way, I contribute to the problem by writing
software, some of which has small influences on our ecosystem. For most of my
40 years doing this, I harbored a “creationist” illusion that the things I designed were
my own personal progeny, the pure result of my deliberate decisions, my own
creative output. I have since realized that this is a bit like thinking that the bag of
groceries that I just brought back from the supermarket is my own personal accom-
plishment. It ignores centuries of development in the technology of the car that got
me there and back, agriculture that delivered the incredible variety of fresh food to
the store, the economic system that makes all of this affordable, and many other parts
of the socio-cultural backdrop against which my meager accomplishment pales.
In my recent book (Lee 2020), I coin the term “digital creationism” for the idea
that technology is the result of top-down intelligent design. This principle assumes
that every technology is the outcome of a deliberate process, where every aspect of a
design is the result of an intentional, human decision. I now know, 40 years later, that
this is not how it happens. Software engineers are more the agents of mutation in a
Darwinian evolutionary process. The outcome of their efforts is shaped more by the
computers, networks, software tools, libraries, programming languages, and other
programs they use than by their deliberate decisions. And the success and further
development of their product is determined as much or more by the cultural milieu
into which they launch their “creation” than by their design decisions.
The French philosopher known as Alain (whose real name was Émile-Auguste
Chartier) wrote about fishing boats in Brittany:
Every boat is copied from another boat. ... Let’s reason as follows in the manner of Darwin.
It is clear that a very badly made boat will end up at the bottom after one or two voyages and
thus never be copied. ... One could then say, with complete rigor, that it is the sea herself who
fashions the boats, choosing those which function and destroying the others. (Rogers and
Ehrlich 2008)
Boat designers are agents of mutation, and sometimes their mutations result in a
badly made boat. From this perspective, perhaps Facebook has been fashioned more
by teenagers than by software engineers.
More deeply, digital technology coevolves with humans. Facebook changes its
users, who then change Facebook. For software engineers, the tools we use, them-
selves earlier outcomes of software engineering, shape our thinking. Think about
how IDEs1 (such as Eclipse or Visual Studio Code), message boards (such as Stack
Overflow), libraries (such as the Standard Template Library), programming lan-
guages (e.g., Scala, Rust, and JavaScript), and Internet search (such as Google or
Bing) affect the outcome of our software. These tools have more effect on the
outcome than all of our deliberate decisions.
Today, the fear and hype around AI taking over the world and social media taking
down democracy have fueled a clamor for more regulation. But if I am right about
coevolution, we may be going about the project of regulating technology all wrong.
Why have privacy laws, with all their good intentions, done little to protect our
privacy? They have only overwhelmed us with small-print legalese and annoying
pop-ups giving us a choice between “accept our inscrutable terms” and “go away.”
Do we expect new regulations trying to mitigate fake news or to prevent insurrec-
tions from being instigated by social media to be any more effective?
Under the principle of digital creationism, bad outcomes are the result of
unethical actions by individuals, for example by blindly following the profit motive
with no concern for societal effects. Under the principle of coevolution, bad out-
comes are the result of the “procreative prowess” (Dennett 2017) of the technology
itself. Technologies that succeed are those that more effectively propagate. The
individuals we credit with (or blame for) creating those technologies certainly play
a role, but so do the users of the technologies and their whole cultural context. Under
this perspective, Facebook users bear some of the blame, along with Mark
Zuckerberg, for distorted elections. They even bear some of the blame for the design
of Facebook software that enables distorted elections. If they were happy to pay for
social networking, for example, an entirely different software design might have
emerged.
Under digital creationism, the purpose of regulation is to constrain the individuals
who develop and market technology. In contrast, under coevolution, constraints can
be about the use of technology, not just its design and the business of selling it. The
purpose of regulation becomes to nudge the process of both technology and cultural
evolution through incentives and penalties. Nudging is probably the best we can
hope for. Evolutionary processes do not yield easily to control because the territory
over which we have to navigate keeps changing.

1
Integrated development environments (IDEs) are computer programs that assist programmers by
parsing their text as they type, coloring text by function, identifying errors and potential flaws in
code style, suggesting insertions, and transforming code through refactoring.
Perhaps privacy laws have been ineffective because they are based on digital
creationism as a principle. These laws assume that changing the behavior of corpo-
rations and engineers will be sufficient to achieve privacy goals (whatever those are
for you). A coevolutionary perspective understands that users of technology will
choose to give up privacy even if they are explicitly told that their information will
be abused. We are repeatedly told exactly that in the fine print of all those privacy
policies we don’t read, and, nevertheless, our kids get sucked into a media milieu
where their identity gets defined by a distinctly non-private online persona.
If technology is defining culture while culture is defining technology, we have a
feedback loop, and intervention at any point in the feedback loop can change the
outcomes. Hence, it may be just as effective to pass laws that focus on educating the
public, for example, as it is to pass laws that regulate the technology producers.
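To make the feedback-loop argument concrete, here is a toy simulation (entirely illustrative; the variables, coefficients, and the very idea of quantifying the loop this way are my own assumptions, not Lee's). Two coupled quantities pull on each other, and a small nudge applied at either point shifts where the whole system ends up.

def run(steps=200, nudge_tech=0.0, nudge_culture=0.0):
    # t: some measurable property of technology; c: the corresponding
    # cultural norm. Each step, each variable moves partway toward the
    # other; an optional nudge intervenes at one point in the loop.
    t, c = 1.0, 0.0
    for _ in range(steps):
        t += 0.1 * (c - t) + nudge_tech      # technology adapts to culture
        c += 0.1 * (t - c) + nudge_culture   # culture adapts to technology
    return round(t, 2), round(c, 2)

print(run())                      # no intervention
print(run(nudge_tech=0.005))      # regulate the producers
print(run(nudge_culture=0.005))   # educate the public
# Either nudge moves *both* variables: intervening anywhere in the loop
# changes the outcome everywhere in the loop.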
Perhaps if more people understood that Pokémon GO is a behavior-modification
engine, they would better understand Niantic’s privacy policy and its claim that their
product, Pokémon GO, has no advertising. Establishments pay Niantic for place-
ment of a Pokémon nearby to entice people to visit them (Zuboff 2019). Perhaps a
strengthening of libel laws, laws against hate speech, and other refinements to first-
amendment rights should also be part of the remedy.
I believe that, as a society, we can do better than we are currently doing. The risk
of an Orwellian state (or perhaps worse, a corporate Big Brother) is very real. It has
happened already in China. We will not do better, however, until we abandon digital
creationism as a principle. Outlawing specific technology developments will not be
effective, and breaking up monopolies could actually make the problem worse by
accelerating mutations. For example, we may try to outlaw autonomous decision-
making in weapons systems and banking, but as we see from election distortions and
Pokémon GO, the AIs are very effective at influencing human decision-making, so
putting a human in the loop does not necessarily help. How can a human who is,
effectively, controlled by a machine somehow mitigate the evilness of autonomous
weapons?
When I talk about educating the public, many people immediately gravitate to a
perceived silver bullet, that we should teach ethics to engineers. But I have to ask, if
we assume that all technologists behave ethically (whatever your meaning of that
word), can we conclude that bad outcomes will not occur? This strikes me as naïve.
Coevolutionary processes are much too complex.
This chapter is my small contribution to the digital humanism initiative, a
movement that seeks a more human-centric approach to technology. This initiative
makes it imperative for intellectuals of all disciplines to step up and take seriously
humanity’s dance with technology. That our limited efforts to rein in the detrimental
effects of digital technology have been mostly ineffective underscores our weak
understanding of the problem. We need humanists with a deeper understanding of
technology, technologists with a deeper understanding of the humanities, and policy
makers drawn from both camps. We are quite far from that goal today.
Returning to the original question, are we losing control? The answer is “no.” We
never had control, and we can’t lose what we don’t have. This does not mean we
should give up, however. We can nudge the process, and even a supertanker can be
redirected by gentle nudging.
References
Barrat, J. (2013). Our Final Invention: Artificial Intelligence and the End of the Human Era,
St. Martin’s Press.
Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford, UK, Oxford University
Press.
Dennett, D. C. (2017). From Bacteria to Bach and Back: The Evolution of Minds, W. W. Norton
and Company.
Ford, M. (2015). Rise of the Robots: Technology and the Threat of a Jobless Future. New York,
Basic Books.
Kelly, K. (2016). The Inevitable: Understanding the 12 Technological Forces That Will Shape Our
Future. New York, Penguin Books.
Lee, E. A. (2020). The Coevolution: The Entwined Futures of Humans and Machines. Cambridge,
MA, MIT Press.
Lee, K.-F. (2018). AI Superpowers: China, Silicon Valley, and the New World Order. New York,
Houghton Mifflin Harcourt Publishing Company.
Rogers, D. S. and P. R. Ehrlich (2008). “Natural Selection and Cultural Rates of Change.”
Proceedings of the National Academy of Sciences of the United States of America 105(9):
3416-3420.
Tegmark, M. (2017). Life 3.0: Being Human in the Age of Artificial Intelligence. New York, Alfred
A. Knopf.
Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New
Frontier of Power. PublicAffairs, Hachette Book Group.
Social Robots: Their History and What
They Can Do for Us
Nadia Magnenat-Thalmann

Abstract From antiquity to today, scientists, writers, and artists have been passionate
about representing humans not only as beautiful statues but also as automatons that can
perform actions. Already in ancient Greece, we find examples of automatons
that replaced servants. In this chapter, we trace the development of
automatons up to the social robots of today. We describe two examples of social
robots, EVA and Nadine, that we have been working with. We present two case
studies, one in an insurance company and the other in an elderly home. We also
discuss the limits of the use of social robots, their dangers, and the importance of
controlling their actions through ethics committees.
N. M. Thalmann (*)
MIRALab, University of Geneva, Geneva, Switzerland
e-mail: Nadia.thalmann@unige.ch
Fascination with automata is neither new nor modern. Living side by side with
automated, mechanical beings that look like us and can do fantastic things or
protect us is one of mankind’s oldest dreams. In 400 BC, Greek mythology already
featured the tale of Talos, a 10-m giant automaton made of bronze. He was brought
to life by the gods to ward off pirates and invaders and to keep watch to protect the
island. Sometime later, in 250 BC, a human-like automatic device was made in the
form of a maid holding a jug of wine in her right hand. When one placed a cup in
the palm of the automaton’s left hand, it automatically poured wine first and then
water into the cup, mixing them when desired (Fig. 1).
This automatic maid can be seen in the Museum of Ancient Greek Technology in
Katakolo, Greece.
In the late fifteenth century, Leonardo da Vinci made technical drawings that were
found only recently. These drawings have allowed the reconstruction of a mechanical
humanoid automaton, a robotic knight capable of independent motion. It could stand,
sit, raise its visor, and maneuver its arms independently (Fig. 2). Built entirely
according to Leonardo da Vinci’s drawings, the reconstruction is fully working.
Later, in 1775, a marvel of automata was shown in London: spectators could see
three animated human figures made by Pierre and Henri-Louis Jaquet-Droz of
Switzerland. These automatons were autonomously able to write, to [...]
In 1950, Alan Turing proposed the Turing Test to assess a machine’s ability to
exhibit intelligent behavior equivalent to, or indistinguishable from, that of a
human.1 However, it was soon felt that the Turing Test was not sufficient. As the
notions of social and emotional intelligence theory grew in the 1980s and 1990s,
human intelligence was understood not only as an ability to answer logical questions
based on logical reasoning, but rather as an ability to take account of one’s environ-
ment, the real world, social interactions, and emotions, and this is something the
Turing Test could not measure. This paved the way for new technical developments
towards intelligent social robots.
Alongside the evolution in the social sciences, computer science, too, developed over
time. Sixty years ago, computers were very large and very limited. Today, they
are much smaller, much faster, and much more powerful and offer incredible
possibilities for interfacing with people through sensors and actuators. We now
have many software and hardware tools that can capture, understand, and analyze
a wide range of signals and meanings. We can capture speech, sounds, gestures, shapes, etc.
Through the emergence of big data and machine learning, we can analyze new data,
find correspondences, and predict future patterns.
1
https://en.wikipedia.org/wiki/Turing_test
A social robot is defined as “an autonomous entity that interacts and communicates
with humans or other autonomous physical agents by following social behaviours
and rules attached to its role.”2 Social robots interact with people in social contexts.
Therefore, they must understand the social context, i.e., understand users’ behaviors
and emotions, and respond with the appropriate gestures, facial expressions, and
gaze. The challenge is to provide algorithms to sense, analyze situations and
intentions, and make appropriate decisions.
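This sense-analyze-decide loop can be pictured schematically (a minimal sketch with toy stand-ins of my own; none of the function names or logic describe the actual EVA or Nadine software):

def classify_emotion(frame):
    # Toy stand-in for facial-expression recognition.
    return "happy" if frame.get("smiling") else "neutral"

def appraise(name, emotion, memory):
    # Combine the current percept with episodic memory of past encounters.
    return {"emotion": emotion, "familiar": name in memory}

def decide(situation):
    # Choose a socially appropriate response: utterance plus expression.
    greeting = "Nice to see you again" if situation["familiar"] else "Hello"
    face = "smile" if situation["emotion"] == "happy" else "attentive"
    return greeting, face

memory = set()
for frame in ({"name": "Ada", "smiling": False},
              {"name": "Ada", "smiling": True}):
    situation = appraise(frame["name"], classify_emotion(frame), memory)
    print(decide(situation))
    memory.add(frame["name"])   # remember the user for the next encounter
# ('Hello', 'attentive') then ('Nice to see you again', 'smile')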
Our lab’s first social robot project in Switzerland was EVA (2008–2012), a
robotic tutor (Fig. 4). The overall goal of this project was to develop a long-term
social interaction framework with a human-like robot or virtual human, modelling
emotions, episodic memory, and expressive behavior.
EVA can interact with users by recognizing and remembering their names and
can understand users’ emotional states through facial expression recognition and
speech. Based on user input and its personality model, EVA produces appropriate
emotional responses and keeps information on the long-term interpersonal relation-
ships (Kasap and Magnenat Thalmann 2012). EVA also performed as an actor
alongside human actors at the Rote Fabrik theater in Zurich.
Taking this one step further, in 2013, we started working at NTU in Singapore on
Nadine, a socially intelligent robot with a strong human likeness, natural-looking
skin and hair, and realistic hands and body. One of our main motivations over time
has been not only to produce technical innovation but also to contribute to art. Nadine
has been recognized as a beautiful living sculpture, a piece of expressive art with a
very natural human-computer interface.3 We consider this kind of realistic robot as the
next step in the making of living sculptures. Nadine is equipped with a 3D camera
and a microphone. It can recognize people, gestures, and cues of emotion and
behavior in social situations, as well as speech, which enables it to understand social
situations. Nadine has an emotional module, a memory, and a social attention
2
https://en.wikipedia.org/wiki/Social_robot
3
https://en.wikipedia.org/wiki/Nadine_Social_Robot
model as well as a chatbot, which means that depending on the input, it will generate
the appropriate emotional response and remember interactions. Nadine’s controllers
handle lip synchronization and gaze and enable Nadine to adapt its facial
expressions to basic emotions (Fig. 5). It has a personality, mood, and emotion
model, and depending on its relationship with a user, its response will vary and adapt
to each situation.
We have conducted a study with the Nadine humanoid social robot as a customer service
agent in an insurance company in Singapore, working alongside human employees. Nadine
was accessible to the public, and customers could ask any question related to the
insurance. As a customer service agent, Nadine was able to answer customer queries
and maintain a courteous, professional behavior during interactions. Our objective
was to study whether a robot could handle real social situations, whether customers
were willing to interact with human-like robots, and the usefulness of a humanoid robot [...]
[...] therapist who has little time. Nadine could repeat the numbers, show results on
several screens, and take the time to congratulate people. Patients also highly
appreciated the natural human-like beauty of Nadine.4
Most researchers are developing robots for a better humanity, and the examples cited
above fall into this category. Others fabricate killer robots, dominant robots, and bio
robots to use against humans for profit or for hateful purposes. Fortunately,
researchers from diverse areas are beginning to formulate together ethical questions
about the use of robotic technology and its implementation in societies. A very
important report on the ethics of robotics was published by the World Commission
on the Ethics of Scientific Knowledge and Technology in 2017.5 Ethics
committees have also been established at IEEE, ACM,6 and other professional
organizations. More importantly, in research, any project on social robots interacting
with humans needs approval from institutional review boards or ethics
committees.
I would like to conclude with the following quotation from Aristotle, who speculated in
his Politics (ca. 322 BC, Book 1, Part 4) that automata could someday bring human
equality:
If, in like manner, the shuttle would weave and the plectrum touch the lyre without a hand to
guide them, chief workmen would not want servants, nor masters’ slaves.
Reference
Kasap, Z. and Magnenat Thalmann, N. (2012). Building Long-term Relationships with Virtual and
Robotic Characters: The Role of Remembering, The Visual Computer, vol. 28, no. 1, pp. 87-97.
4
Do Elderly Enjoy Playing Bingo with a Humanoid Robot? Submitted to the Ro-Man 2021 Conference,
https://www.dropbox.com/s/fj7qdqm4a01ezhz/NT_Humanoid%20man%20vs%20robots%20Final
%20reviewed_May%203.docx?dl=0
5
https://unesdoc.unesco.org/ark:/48223/pf0000253952 (report on the ethics of robotics)
6
https://dl.acm.org/doi/10.1145/1167867.1164071 (Ethical acts in robotics report)
Artificial Intelligence and the Problem
of Control
Stuart Russell

S. Russell (*)
University of California, Berkeley, CA, USA
e-mail: russell@berkeley.edu
The central technical concept in AI is that of an agent—an entity that perceives and
acts (Russell and Norvig 2020).1 Cognitive faculties such as reasoning, planning,
and learning are in the service of acting. The concept can be applied to humans,
robots, software entities, corporations, nations, or thermostats. AI is concerned
principally with designing the internals of the agent: mapping from a stream of
raw perceptual data to a stream of actions. Designs for AI systems vary enormously
depending on the nature of the environment in which the system will operate, the
nature of the perceptual and motor connections between agent and environment, and
the requirements of the task.

1
The word “agent” in AI carries no connotation of acting on behalf of another.

AI seeks agent designs that exhibit “intelligence,” but what does this mean?
Aristotle (Ethics) gave one answer: “We deliberate not about ends, but about
means. . . . [We] assume the end and consider how and by what means it is attained,
and if it seems easily and best produced thereby.” That is, an intelligent or rational
action is one that can be expected to achieve one’s objectives. This line of thinking
has persisted to the present day. Arnauld (1662) broadened Aristotle’s theory to
include uncertainty in a quantitative way, proposing that we should act to maximize
the expected value of the outcome. Daniel Bernoulli (1738) refined the notion of
value, moving it from an external quantity (typically money) to an internal quantity
that he called utility. De Montmort (1713) noted that in games (decision situations
involving two or more agents) a rational agent might have to act randomly to avoid
being second-guessed. Von Neumann and Morgenstern (1944) tied all these ideas
together into an axiomatic framework that underlies much of modern economic
theory.
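In modern notation (my compact restatement; the chapter states these ideas only in prose), the principle these thinkers converged on is

a^* = \arg\max_a \mathbb{E}[U \mid a] = \arg\max_a \sum_s P(s \mid a)\, U(s),

where P(s \mid a) captures Arnauld's quantitative uncertainty over outcomes s, U is Bernoulli's internal utility, and the von Neumann–Morgenstern axioms justify exactly this expected-utility form.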
As AI emerged in the 1940s and 1950s, it needed some notion of intelligence on
which to build the foundations of the field. Although some early research was aimed
more at emulating human cognition, the notion that won out was rationality: a
machine is intelligent to the extent that its actions can be expected to achieve its
objectives. In the standard model, we aim to build machines of this kind; we define
the objectives; and the machine does the rest. There are several different ways in
which the standard model can be instantiated. For example, a problem-solving
system for a deterministic environment is given a cost function and a goal criterion
and finds the least-cost action sequence that leads to a goal state; a reinforcement
learning system for a stochastic environment is given a reward function and a
discount factor and learns a policy that maximizes the expected discounted sum of
rewards.
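In standard textbook notation (my rendering of the two prose definitions above, not formulas from the chapter), the problem-solving system computes

\min_{a_1, \ldots, a_n} \sum_{i=1}^{n} c(s_{i-1}, a_i) \quad \text{subject to } s_n \in G,

the least-cost action sequence reaching a goal state in G, while the reinforcement learner computes

\pi^* = \arg\max_{\pi} \mathbb{E}_{\pi}\left[\sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, a_t)\right],

the policy maximizing the expected discounted sum of rewards, with discount factor \gamma \in (0, 1). In both cases the objective (c and G, or r and \gamma) is supplied from outside; the machine does the rest.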
This general approach is not unique to AI. Control theorists minimize cost
functions; operations researchers maximize rewards; statisticians minimize an
expected loss function; and economists, of course, maximize the utility of individ-
uals, the welfare of groups, or the profit of corporations.
In short, the standard model of AI (and related disciplines) is a pillar of twentieth-
century technology.
[...] computational resource and defending against any possible attempt to interfere with
goal achievement.
The Vienna Manifesto on Digital Humanism includes the following principle:
“We must shape technologies in accordance with human values and needs, instead of
allowing technologies to shape humans.” Perhaps the clearest example demonstrat-
ing the need for this principle is given by machine learning algorithms performing
content selection on social media platforms. Such algorithms typically pursue the
objective of maximizing clickthrough or a related metric. Rather than simply
adjusting their recommendations to suit human preferences, these algorithms will,
in pursuit of their long-term objective, learn to manipulate humans to make them
more predictable in their clicking behavior (Groth et al. 2019).2 This effect may be
contributing to growing polarization and extremism in many countries.

2
Providing additional evidence for the significance of the problem of misspecified objectives, Hillis
(2019) has drawn the analogy between uncontrollable AI systems and uncontrollable economic
actors—such as fossil-fuel corporations maximizing profit at the expense of humanity’s future.
The mistake in the standard model comes from transferring a perfectly reasonable
definition of intelligence from humans to machines. The definition is reasonable for
humans because we are entitled to pursue our own objectives. (Indeed, whose would
we pursue if not our own?) Machines, on the other hand, are not entitled to pursue
their own objectives. A more sensible definition of AI would have machines
pursuing our objectives. In the unlikely event that we can specify the objectives
completely and correctly and insert them into the machine, we can recover the
standard model as a special case. If not, then the machine will necessarily be
uncertain as to our objectives, while being obliged to pursue them on our behalf.
This uncertainty—with the coupling between machines and humans that it entails—
turns out to be crucial to building AI systems of arbitrary intelligence that are
provably beneficial to humans. In other words, I propose to do more than “shape
technologies in accordance with human values and needs.” Because we cannot
necessarily articulate those values and needs, we must design technologies that
will, by their very constitution, respond to human values and needs, whatever
they are.
3 A New Model
[...]
As noted in the preceding section, the uncertainty about objectives that the second
principle espouses is a relatively unstudied concept in AI—yet it is central to
ensuring that we not lose control over increasingly capable AI systems.
In the 1980s, the AI community abandoned the idea that AI systems could have
definite knowledge of the state of the world or of the effects of actions, and they
embraced uncertainty in these aspects of the problem statement. It is not at all clear
why, for the most part, they failed to notice that there must also be uncertainty in the
objective. Although some AI problems such as puzzle solving are designed to have
well-defined goals, many other problems that were considered at the time, such as
recommending medical treatments, have no precise objectives and ought to reflect
the fact that the relevant preferences (of patients, relatives, doctors, insurers, hospital
systems, taxpayers, etc.) are not known initially in each case. While it is true that
unresolvable uncertainty over objectives can be integrated out of any decision
problem, leaving an equivalent decision problem with a definite (average) objective,
this transformation is invalid when there is the possibility of additional evidence
regarding the true objectives. Thus, one may characterize the primary difference
between the standard and new models of AI through the flow of preference infor-
mation from humans to machines at “runtime.” This flow comes from evidence
provided by human behavior, as the third principle asserts.
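The argument can be sketched in symbols (my notation; the chapter makes this point in prose). If the machine must commit to a policy \pi with no further evidence about the true objective \theta, then linearity of expectation gives

\max_{\pi} \mathbb{E}_{\theta}\left[V_{\theta}(\pi)\right] = \max_{\pi} V_{\bar{U}}(\pi), \quad \bar{U} = \mathbb{E}_{\theta}\left[U_{\theta}\right],

an equivalent problem with a definite, averaged objective. If, however, evidence e about \theta can arrive before acting, then

\mathbb{E}_{e}\left[\max_{\pi} \mathbb{E}_{\theta}\left[V_{\theta}(\pi) \mid e\right]\right] \;\geq\; \max_{\pi} \mathbb{E}_{\theta}\left[V_{\theta}(\pi)\right],

with strict inequality whenever e is informative about \theta: the averaging transformation throws away exactly the value of this runtime preference information.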
This basic idea is made more precise in the framework of assistance games—
originally known as cooperative inverse reinforcement learning (CIRL) games in the
terminology of Hadfield-Menell et al. (2017a). The simplest case of an assistance
game involves two agents, one human and the other a robot. It is a game of partial
information, because, while the human (in the basic version) knows the payoff
function, the robot does not—even though the robot’s job is to maximize it. In a
Bayesian formulation, the robot begins with a prior probability distribution over the
human payoff function and updates it as the robot and human interact during the
game. The basic assistance game model can be elaborated to allow for imperfectly
rational humans (Hadfield-Menell et al. 2017b), humans who don’t know their own
preferences (Chan et al. 2019), multiple human participants (Fickinger et al. 2020),
multiple robots, and so on.
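To make the Bayesian core of this setup concrete, here is a minimal sketch (hypothetical code of my own; the payoff functions, the Boltzmann-rational human model, and all names are illustrative assumptions, not the formal CIRL definitions). The robot holds a posterior over candidate human payoff functions, updates it from one observed human action, and acts to maximize expected human payoff.

import math

# Candidate human payoff functions theta (illustrative toy values).
PAYOFFS = {
    "likes_coffee": {"make_coffee": 1.0, "make_tea": 0.0},
    "likes_tea":    {"make_coffee": 0.0, "make_tea": 1.0},
}
ACTIONS = ("make_coffee", "make_tea")

def likelihood(theta, action, beta=2.0):
    # Assume a noisily rational human: P(action | theta) is proportional
    # to exp(beta * U_theta(action)).
    utils = PAYOFFS[theta]
    z = sum(math.exp(beta * u) for u in utils.values())
    return math.exp(beta * utils[action]) / z

def update(prior, observed_action):
    # Bayes rule over theta after observing one human action.
    unnorm = {t: p * likelihood(t, observed_action) for t, p in prior.items()}
    z = sum(unnorm.values())
    return {t: p / z for t, p in unnorm.items()}

def robot_action(posterior):
    # The robot maximizes *expected human* payoff under its posterior;
    # it has no payoff of its own.
    return max(ACTIONS, key=lambda a: sum(
        p * PAYOFFS[t][a] for t, p in posterior.items()))

belief = {"likes_coffee": 0.5, "likes_tea": 0.5}   # uniform prior
belief = update(belief, "make_tea")                # the human reaches for tea
print(belief)          # posterior shifts toward "likes_tea"
print(robot_action(belief))                        # -> "make_tea"

Observing more human behavior would sharpen the posterior further; this runtime flow of preference information is the feature that distinguishes the new model from the standard one.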
Assistance games are connected to inverse reinforcement learning, or IRL (Rus-
sell 1998; Ng and Russell 2000), because the robot can learn more about human
preferences from the observation of human behavior—a process that is the dual of
reinforcement learning, wherein behavior is learned from rewards and punishments.
The primary difference is that in the assistance game, unlike the IRL framework, the
human’s actions are affected by the robot’s presence—for example, the human may
try to teach the robot about his or her preferences. This two-way process lends the
framework an inevitable game-theoretic character that produces, among other phe-
nomena, emergent conventions for communicating preference information.
The overall approach also resembles principal–agent problems in economics,
wherein the principal (e.g., an employer) needs to incentivize another agent (e.g.,
an employee) to behave in ways beneficial to the principal. The key difference here is
that unlike a human employee, the robot has no interests of its own. Furthermore, we
are building one of the agents in order to benefit the other, so the appropriate solution
concepts may differ.
Within the framework of assistance games, a number of basic results can be
established that are relevant to Turing’s problem of control.
• Under certain assumptions about the support and bias of the robot’s prior proba-
bility distribution over human rewards, one can show that a robot solving an
assistance game has non-negative value to humans (Hadfield-Menell et al. 2017a).
• A robot that is uncertain about the human’s preferences has a non-negative
incentive to allow itself to be switched off (Hadfield-Menell et al. 2017b); a
numerical sketch follows this list. In general, it will defer to human control actions.
• To avoid changing attributes of the world whose value is unknown, the robot will
generally engage in “minimally invasive” behavior to benefit the human (Shah
et al. 2019). Even when it knows nothing at all about human preferences, it will
still take “empowering” actions that expand the set of actions available to the
human.
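The switch-off result above can be illustrated with a back-of-the-envelope computation (a simplification of my own, not the formal game of Hadfield-Menell et al.): if the robot is uncertain about the utility U of its proposed action, and believes the human will permit the action exactly when U >= 0, then deferring is worth E[max(U, 0)] >= max(E[U], 0), so the incentive to allow the off-switch is never negative.

# Simplified off-switch computation (illustrative assumptions only).
# The robot's proposed action has uncertain utility U for the human.

def incentives(utility_dist):
    # utility_dist: list of (probability, utility) pairs.
    act_now = sum(p * u for p, u in utility_dist)           # bypass the human
    defer = sum(p * max(u, 0.0) for p, u in utility_dist)   # human vetoes U < 0
    return act_now, defer

# The robot believes the action is probably good but possibly very bad:
act_now, defer = incentives([(0.6, 10.0), (0.4, -20.0)])
print(f"act immediately: {act_now:+.1f}, defer to the human: {defer:+.1f}")
# act immediately: -2.0, defer to the human: +6.0
# Deferring adds max(U, 0) - U >= 0 in every case, so a robot maximizing
# *human* utility never loses by leaving the off-switch enabled.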
There are too many open research problems in the new model of AI to list them all
here. The most directly relevant to moral philosophy and the social sciences is the
question of social aggregation: how should a machine decide when its actions affect
the interests of more than one human being? Issues include the preferences of evil
individuals (Harsanyi 1977); relative preferences and positional goods (Veblen
1899; Hirsch 1977); and interpersonal comparison of preferences (Nozick 1974;
Sen 1999). Also of great importance is the plasticity of human preferences, which
brings up both the philosophical problem of how to decide on behalf of a human
whose preferences change over time (Pettigrew 2020) and the practical problem of
how to ensure that AI systems are not incentivized to change human preferences in
order to make them easier to satisfy.
Assuming that the theoretical and algorithmic foundations of the new model for
AI can be completed and then instantiated in the form of useful systems such as
personal digital assistants or household robots, it will be necessary to create a
technical consensus around a set of design templates for provably beneficial AI, so
that policy makers have some concrete guidance on what sorts of regulations might
make sense. The economic incentives would tend to support the installation of
rigorous standards at the early stages of AI development, because failures would
be damaging to entire industries, not just to the perpetrator and victim.
The question of enforcing policies for beneficial AI is more problematic, given
our lack of success in containing malware. In Samuel Butler’s Erewhon and in Frank
Herbert’s Dune, the solution is to ban all intelligent machines, as a matter of both law
and cultural imperative. Perhaps if we find institutional solutions to the malware
problem, we will be able to devise some less drastic approach for AI. As the
Manifesto underscores, the technology of AI has no value in itself beyond its ability
to benefit humanity.
References

[...]
The Challenge of Human Dignity in the Era
of Autonomous Systems
Paola Inverardi
P. Inverardi (*)
Dipartimento di Scienze ed Ingegneria dell’Informazione e Matematica, Università dell’Aquila,
L’Aquila, Italy
e-mail: paola.inverardi@univaq.it
[...] and the increasing presence of AI-fuelled autonomous systems have shown that
privacy concerns are insufficient: ethics and human dignity are at stake. “Accept/
not accept” options do not satisfy our freedom of choice, and what about our
individual preferences and moral views?
Autonomous machines tend to occupy the free space in a democratic society in
which a human being can exercise her freedom of choice. That is the space of
decisions that are left to any individual when such decisions do not break fundamental
rights and laws but are the expression of personal ethics. From the case of
privacy preferences in the app domain to the more complex case of autonomous
cars, the potential user is left unprotected and ill-equipped in her interaction
with the digital world.
A simple system that manages a queue of users to access a service by following an
ordering that is fair by design, e.g., first in, first out, may prevent users from exchanging
their positions in the queue by personal choice, thus depriving them of the opportunity
to exercise a free choice driven by their moral disposition, e.g., leaving their position to
an older lady. What is considered fair by the system’s developer may not match users’ ethics.
The above example may seem artificial and of little importance, but it is not. Over the years of digital transformation, we have witnessed a side effect: digital systems make the processes they implement more rigid than the law requires. How many times have we heard answers like "yes, this could be possible, but the system does not allow it"? The queue managing system above may already have associated a personal identifier with each position and made all the personal information available to the service provider. Although exchanging positions may appear more complex, managing the exchange would be no problem for a digital system. It only requires that the system be designed to take the user's right of choice into consideration. Overlooking this in the era of autonomous technology may put our personal ethical values in serious danger.
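To make the point concrete, the following minimal sketch (in Python, with purely illustrative names) shows a queue that keeps first-in-first-out as its designed-in default yet also exposes a consensual swap operation, so that a moral choice like the one above is not designed out of the system:

class ConsentQueue:
    """A FIFO queue that also honors mutually agreed position swaps."""

    def __init__(self):
        self._queue = []  # user identifiers, front of the queue at index 0

    def join(self, user):
        self._queue.append(user)

    def next(self):
        # Default behavior: fair by design, first in first out.
        return self._queue.pop(0) if self._queue else None

    def swap(self, user_a, user_b, a_consents, b_consents):
        # The swap happens only if both users agree; otherwise the
        # designed-in FIFO fairness is preserved unchanged.
        if not (a_consents and b_consents):
            return False
        i, j = self._queue.index(user_a), self._queue.index(user_b)
        self._queue[i], self._queue[j] = self._queue[j], self._queue[i]
        return True

q = ConsentQueue()
q.join("younger user")
q.join("older lady")
q.swap("younger user", "older lady", a_consents=True, b_consents=True)
print(q.next())  # prints "older lady": the personal moral choice was honored

The point of the sketch is not the few extra lines of code but the design stance: the user's right of choice is an explicit requirement, not an afterthought.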
More complex interactions between systems and users must be made possible in order to allow users' ethics to manifest freely.
However, even when such interaction is made possible (think, e.g., of the possibility, mandatory in Europe under the GDPR, of expressing consent to profiling cookies), the way systems present this interaction to the user is extremely complex and time-consuming even for an expert user, and it often collapses into an accept/not accept choice.
In a digital society where the relationship between citizens and machines is
uneven, moral values like individuality and responsibility are at risk.
From a societal point of view, it is therefore crucial to understand what space of autonomy a system can exercise without compromising laws and human rights.
Indeed, autonomous systems interact within a society, characterized by collective
ethical values, with multiple and diverse users, each of them characterized by her
individual moral preferences.
The European Group on Ethics in Science and New Technologies (EGE) recom-
mends an overall rethinking of the values around which the digital society is to be
structured (EGE 2018), the most important being the value of human dignity in the
context of the digital society, understood as the recognition that a person is worthy of
respect in her interaction with autonomous technologies. A person must be able to
exercise control on information about herself and on the decisions that autonomous
systems make on her behalf.
There is general consensus about this, but legislation follows problems rather than preventing them, and it is debatable whether regulatory approaches like the GDPR are effectively protecting the human dignity of users. Besides regulation,
active approaches have been proposed in the research on AI, where systems/software
developers and companies should apply ethical codes and follow guidelines for the
development of trustworthy systems in order to achieve transparency and account-
ability of decisions (AI HLEG 2019; EU 2020). However, despite the ideal of a
human-centric AI and the recommendations to empower the users, the power and the
burden to preserve the users’ rights still remain in the hands of the (autonomous)
systems producers.
The active approaches described above do not guarantee the freedom of choice that is manifested in our individual preferences and moral views. Design principles for meaningful human control over AI-enabled autonomous systems are needed. Users need (digital)
empowerment in order to move from passive to active actors in governing their
interactions with autonomous systems, and it is necessary to define the border in the
space of decisions between what the system can decide on its own and what may be
controlled and possibly overridden by the user. This also means that the system shall
be designed to be open to more complex interactions with its users as far as users’
moral decisions are concerned.
But how should the border between the system's decisions and the user's be drawn?
Reflections on digital ethics can help in this respect. Digital ethics, as introduced
in Floridi (2018), is the branch of ethics that aims at formulating and supporting
morally good solutions through the study of moral problems relating to personal
data, (AI) algorithms, and corresponding practices and infrastructures. It identifies
two separate components, hard and soft ethics. Hard ethics is the basis for defining and enforcing values through legislation and institutional bodies; i.e., hard ethics is what makes or shapes the law and represents collectively accepted values, e.g., the GDPR in Europe.
Hard ethics alone is insufficient, since it cannot and should not cover the entire space of ethical decisions. Soft ethics complements it by considering what ought and ought not to be done over and above existing regulation: not against it, despite its scope, to change it, or to bypass it (e.g., in terms of self-regulation).
Personal preferences fall within the scope defined by soft ethics, e.g., the variety of privacy profiles that characterize different users. A system will implement decisions and choices that correspond to both hard and soft ethics. The producer will guarantee compliance with the hard ethics rules, but who takes care, and how, of the values and preferences of each individual person?
We claim that soft ethics can express users’ moral preferences and should mold
their interaction with the digital world. Empowering a person with a software
technology that supports her soft ethics is the means to make her an independent
and active user in and of the digital society.
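As a minimal, hypothetical sketch of such an empowering technology (the profile format and request names are illustrative assumptions, not the EXOSOUL design): a machine-readable profile encodes a user's soft-ethics preferences once, and a mediator answers each fine-grained data request on her behalf instead of forcing a take-it-or-leave-it choice.

# A personal "soft ethics" profile, declared once by the user.
SOFT_ETHICS_PROFILE = {
    "share_age": True,            # willing to disclose age to services
    "share_health_data": False,   # never share health records
    "allow_profiling_cookies": False,
    "allow_medical_research_use": True,
}

def decide(request: str) -> bool:
    """Answer a data request from the declared moral preferences,
    refusing by default anything the profile does not explicitly allow."""
    return SOFT_ETHICS_PROFILE.get(request, False)

for request in ("share_age", "allow_profiling_cookies", "share_location"):
    print(request, "->", "grant" if decide(request) else "deny")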
Acknowledgements The work described in this chapter is part of the EXOSOUL project (https://exosoul.disim.univaq.it/). The author thanks the entire research team for enlightening discussions and joint work.
References
AI HLEG (2019) The High-Level Expert Group on Artificial Intelligence. Ethics guidelines for trustworthy AI. https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai
Autili, M., et al. (2019) A software exoskeleton to protect and support citizen's ethics and privacy in the digital world. IEEE Access, 7.
EGE (2018) European Group on Ethics in Science and New Technologies. Statement on artificial intelligence, robotics and autonomous systems. https://ec.europa.eu/research/ege/pdf/ege_ai_statement_2018.pdf
EU (2020) European Commission. White paper on artificial intelligence. https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf
Floridi, L. (2018) Soft ethics and the governance of the digital. Philosophy & Technology, 31(1), pp. 1-8.
Liao, B., Slavkovik, M., and van der Torre, L. (2019) Building Jiminy Cricket: an architecture for moral agreements among stakeholders. In: Proceedings of AIES 2019.
Part II
Participation and Democracy
The Real Cost of Surveillance Capitalism:
Digital Humanism in the United States
and Europe
Allison Stanger
A. Stanger (*)
Middlebury College, Middlebury, VT, USA
e-mail: stanger@middlebury.edu
the cause of surveillance, since the same insatiable drive for data exists in planned
economies. China currently leads the world in AI applications, because it also leads
the world in commercial and security espionage. The real threat to liberal democra-
cies is not capitalism, as Zuboff’s book title seems to imply, but the growing
inequalities that corporate surveillance in its unregulated present form both reveals
and exacerbates.
1 Zuboff’s Argument
1
These epistemic rights are apparently self-evident, since they are never defined.
not mentioned in the New York Times essay, suggesting that capitalism itself isn’t the
problem. But if the capitalist profit motive is not the problem, what is? What does
any of this have to do with epistemology? Big data in and of itself is not knowledge.
The interpretation of data produces knowledge rather than mere noise. Who are these
knowledge creators? Is Zuboff one of them?
recent example of the former. The myriad ways that the capitalist economies of
continental Europe and the European Union have challenged the excesses of sur-
veillance capitalism through national and EU legislation are evidence against a
business-government conspiracy. If there has been an epistemic coup, as Zuboff
argues, democratic governments are clearly not entirely on board.
Finally, surveillance capitalism suggests that there is something intrinsic to
capitalism that is animating data collection, when this is not the case. While its
economy certainly has some capitalist features, the Chinese Communist Party’s
interventions in economic life are incompatible with capitalism. Beijing’s
restructuring of Alibaba co-founder Jack Ma’s corporate empire is a case in point
(Zhong 2021). The China Brain Project, which involves the Chinese military,
harvests data from Baidu that fuels China’s controversial Social Credit System,
designed to reward “pro-social” and punish “anti-social” behavior. China is also
apparently interested in building expertise in American behavior modification, and
Americans have willingly volunteered their personal data for Chinese use in
exchange for using the popular app TikTok, which has been banned in India for
national security reasons. It may not be the case, however, that securing American
data to train algorithms to better manipulate or sell to Americans is necessary. A
2007 multinational study found that the OCEAN Big Five personality inventory,
which was exploited so brilliantly by Cambridge Analytica to interfere in the 2016
US presidential election, is “robust across major areas of the world” (Schmitt et al.
2007).
In mischaracterizing the nature of the problem, therefore, Zuboff misses the real
story. Coups are intentional, and if anything, technology companies don’t want the
political power that has inadvertently accrued to them through their monopoly of
information. Mark Zuckerberg and other tech titans have repeatedly stated that they
want appropriate government regulation but have been forced to do the best they can
with self-regulation until government again assumes its proper role as overseer and
promoter of the greater good. The inherent problems with self-regulation are obvi-
ous. But it is not difficult to see that government has been slow to come to terms with
the dramatic transformation of democracy’s public sphere through technological
innovation.
To summarize, the Big Tech companies have not orchestrated a coup; they have
myopically optimized for shareholder value at the expense of civic life. They have
created products for other people's children that they do not want their own children to use
(The Social Dilemma 2020). Further, these companies haven’t cornered the market
on knowledge, as the word epistemic suggests, but on data, and data can mislead just
as easily as it can inform.
By failing to specify the causal mechanisms of the very real negative costs she
identifies, Zuboff creates the impression that capitalism itself is the cause for the
problem, when the real source of the problem is the absence of good governance.
Blaming capitalism itself is misplaced, because we are in the midst of a transforma-
tion that challenges our existing cognitive capacity, with or without AI. The move to
the cloud, a market Amazon is betting heavily on, only exacerbates anti-democratic
trends to which democratic governments have been slow to react—but are capable of
doing so.
Zuboff’s bottom line, however, does highlight a looming challenge to open
societies and democracy: the accelerating competition between the United States
and China for supremacy in AI applications and the potential implications that
contest has for inalienable rights in a liberal democracy. The Chinese regime is an
example of what the philosopher Elizabeth Anderson calls private government.
Private government’s distinguishing feature is that it does not recognize a protected
public sphere free of sanction or elite oversight (Anderson 2017, p. 37). Private
government is always authoritarian, since it does not value liberal notions of
democratic accountability. “Private government,” Anderson writes, “is government
that has arbitrary, unaccountable power over those it governs” (Anderson 2017,
p. 45). The ends of communist government, Anderson continues, are neither liberty
nor equality but “utilitarian progress and the perfectibility of human beings under the
force of private government” (Anderson 2017, p. 62).
For Anderson, the only way to preserve and protect both equality and freedom is
to make government a public affair, accountable to the governed. The transition from
monarchy to liberal democracy, in this view, involved gradually replacing private
government with public government. Public government utilizes the rule of law and
substantive constitutional rights to advance and protect the liberties and interests of
the governed rather than the governors (Anderson 2017, pp. 65–66).
Government is private in China, in contrast, because the Chinese leadership
rejects the very idea that the Party’s encroachment on individual rights can be
inappropriate or undesirable. Speaking at the Kennedy School in February 2020,
former FBI director James Comey identified this difference as the place where
negotiations with China over technology transfer typically break down. The Chinese
don’t understand the American distinction between technology for private uses and
for public uses (the latter being the potential regulable space, from an American
perspective) (Comey 2020). The same refusal to distinguish between the private and
public realms underlies China’s one child policy and the government’s current
efforts to encourage Chinese single women to marry and have children. Since the
very idea of a right to privacy presupposes a public-private distinction, privacy in
China is easily sacrificed at the altar of national security and societal goals. Thus,
there is a values alignment problem for AI applications in open societies that does
not exist in China (Lanier and Weyl 2020; Stanger 2021).
It is certainly true that the people cannot govern themselves if unable to distinguish
fact from fiction. Because of the possibility of illiberal democracy (Trump is exhibit
A), we should not just be interested in democracy but in the quality of democracy.
There is a real link between liberal democracy and education, the ability to distin-
guish truth from lies, to respect science and free inquiry. The problem is the
exploitation of personal data to change behavior, not big data itself, which can be
deployed for both positive and negative ends (Guszcza et al. 2014).
The real cost of the cluster of trends in motion that Zuboff calls surveillance
capitalism is increasing knowledge inequality that destabilizes liberal democracy.
These growing power gaps exist at both the national and global levels. They exist
between the people and elites, between the most powerful tech companies and the
governments who seek to regulate them, and between the companies and their
product, which is you. With the GDPR and the DSA, Europe provides a laboratory
for promoting greater equality in a transformed global economy. In thinking about
the future of Section 230 of the 1996 Communications Decency Act, America would
do well to review the European data already in hand.
Silicon Valley’s disproportionate power is not imperial, because it is not wielded
via Washington; rather, Silicon Valley has recently silenced a US president. Both
Europe and the United States have a shared interest in educating citizens to vote with
their feet so as to level the playing field in those countries where Big Tech’s impact is
oversized and stifles indigenous innovation. Third-party markets in personal data can
be regulated. With the new Biden administration at the helm, there is a serious
opportunity for European-American collaboration on AI innovation to check rising
digital authoritarianism. There has to be room for greater collaboration on products
that can be customized to meet different local needs. The current American lawsuit
against Facebook that charges them with illegally buying up their rivals (Instagram
and WhatsApp) is also something to watch. Forty US states have filed the lawsuit,
and the successful antitrust case against Microsoft in the 1990s was also a product of
extensive involvement of states’ attorneys general in the litigation process (Kang and
Isaac 2020).
Reducing social inequality premised on knowledge inequality in the face of
accelerating technological change is a shared challenge. Our most pressing problems
have global dimensions, which provide fertile ground for cooperation rather than
confrontation.2 For the United States, personal data ownership, the right to be
forgotten, liberal education, and insisting on greater transparency in algorithmic
judgments are promising places to start (Lanier 2014; Post 2018). Both Europe and
the United States need to reimagine rights-based democratic government, in which
every human being is worthy of education, work, and health, for the global infor-
mation age. As the March 2021 Final Report of the US National Security Commission on Artificial Intelligence writes, "We want the United States and its allies to
exist in a world with a diverse set of choices in digital infrastructure, e-commerce,
and social media that will not be vulnerable to authoritarian coercion and that
support free speech, individual rights, privacy, and tolerance for differing views”
(Schmidt et al. 2021, p. 28). This is a formidable educational and political under-
taking, one best tackled collaboratively with other open societies, but it is essential if
we are to build a shared future that promotes the human flourishing of all, not just
knowledge elites.
2
Zuboff wrote another New York Times opinion piece that ran on January 24, 2020. It identified epistemic inequality as the social and political harm of greatest concern. A year later, she published her second piece, analyzed above, targeting an epistemic coup. Taken together, the two pieces suggest very different policy remedies. See https://www.nytimes.com/2020/01/24/opinion/sunday/surveillance-capitalism.html.
References
Post, R. (2018) “Data privacy and dignitary privacy: Google Spain, the right to be forgotten, and the
construction of the public sphere,” Duke Law Journal (67), pp. 981-1072.
Schmidt, E. et al. (2021) Final report, National Security Commission on Artificial Intelligence.
Available at: https://www.nscai.gov/wp-content/uploads/2021/03/Full-Report-Digital-1.pdf
(Accessed: 13 April 2021)
Schmitt, D. et al. (2007) “The geographic distribution of big five personality traits,” Journal of
Cross-Cultural Psychology, vol. 38 (2), pp. 173-212. Available at: https://www.
toddkshackelford.com/downloads/Schmitt-JCCP-2007.pdf (Accessed: 13 April 2021)
The Social Dilemma (2020) Directed by Jeff Orlowski (documentary). A Netflix Original.
Stanger, A. (2021) “Ethical challenges of machine learning in the US-China AI rivalry,”
unpublished manuscript.
Zhong, R. (2021) “Ant Group announces overhaul as China tightens its grip,” New York Times,
April 12. Available at: https://www.nytimes.com/2021/04/12/technology/ant-group-alibaba-
china.html (Accessed: 13 April 2021)
Zuboff, S. (2019) The age of surveillance capitalism: The fight for a human future at the frontier of
power. New York: Public Affairs.
Zuboff, S. (2021) “The coup we are not talking about,” New York Times, January 29. Available at:
https://www.nytimes.com/2021/01/29/opinion/sunday/facebook-surveillance-society-technol
ogy.html (Accessed: 13 April 2021)
Allison Stanger is the 2020–21 SAGE Sara Miller McCune Fellow at the Center for Advanced Study in the Behavioral Sciences, Stanford University; Leng Professor of International Politics and Economics, Middlebury College; and External Professor, Santa Fe Institute.
Democratic Discourse in the Digital Public
Sphere: Re-imagining Copyright
Enforcement on Online Social Media
Platforms
Sunimal Mendis
Abstract Within the current European Union (EU) online copyright enforcement
regime—of which Article 17 of the Copyright in the Digital Single Market Directive
[2019] constitutes the seminal legal provision—the role of online content-sharing
service providers (OCSSPs) is limited to ensuring that copyright owners obtain fair
remuneration for content shared over their platforms (role of “content distributors”)
and preventing unauthorized uses of copyright-protected content (“Internet police”).
Neither role allows for a recognition of OCSSPs' role as facilitators of democratic discourse or of the duty incumbent on them to ensure that users' freedom to engage in democratic discourse is preserved. This chapter proposes a re-imagining of the EU
legal framework on online copyright enforcement—using the social planning theory
of copyright law as a normative framework—to increase its fitness for preserving
and promoting copyright law’s democracy-enhancing function.
Online social media platforms that are open to members of the public (e.g.,
Facebook, Twitter, YouTube, TikTok) constitute digital spaces that provide tools
and infrastructure for members of the public to dialogically interact across geo-
graphic boundaries. Given the high numbers of users they attract, the substantial
amount of discourse taking place over these platforms, and its capacity to influence
contemporary public opinion, it is possible to define them as a core component of the
contemporary digital public sphere.
As envisioned by Habermas (1989), the digital public sphere has a key function in
fostering democratic discourse which, within the deliberative democratic ideal, is
crucial for furthering the democratic decision-making process. Thus, although online social media platforms are typically subject to private ownership, their character as a component of the digital public sphere makes it necessary that their governance reflect this private-public partnership and aim to advance the public interest in fostering democratic discourse.
S. Mendis (*)
Tilburg University, Tilburg, The Netherlands
e-mail: L.G.S.Mendis@tilburguniversity.edu
1
It is noted that in the decisions delivered in C-469/17 Funke Medien NRW [2019] ECLI:EU:C:2019:623 and C-516/17 Spiegel Online [2019] ECLI:EU:C:2019:625, the Court of Justice of the European Union (CJEU) interpreted the exceptions and limitations to copyright provided under Article 5 of the EU Copyright Directive (2001) as user rights as opposed to mere user privileges.
2
For instance, Case C-201/13 Deckmyn v Vandersteen [2014] ECDR 21, C-469/17 Funke Medien NRW [2019], and C-516/17 Spiegel Online [2019]. See also C-476/17 Pelham and Others [2019] ECLI:EU:C:2019:624.
obligations to safeguard users’ freedom. On the other hand, Article 17(7) does
underscore the importance of ensuring that users are able to rely on existing
exceptions and limitations for quotation, criticism, review and parody, caricature
and pastiche, but this responsibility is assigned to Member States as opposed to
OCSSPs.
Thus, the online copyright enforcement regime introduced by Article 17 is
skewed in favor of protecting the economic interests of copyright owners with less
emphasis being placed on the protection of users’ freedom to engage in democratic
discourse. Pursuant to a simple “cost-benefit” analysis, it is intuitive that it would be
less costly for OCSSPs to block or remove questionable content rather than to take
the risk of incurring liability under Article 17. This means that OCSSPs would be
incentivized to calibrate their content moderation systems to suppress potentially
copyright-infringing content without properly analyzing the legality of that content
under applicable copyright exceptions, thereby increasing the risks of “collateral
censorship.”
As such, within the present EU legal framework on online copyright enforcement,
the role of OCSSPs is limited to ensuring that copyright owners obtain fair remu-
neration for content shared over their platforms (role of “content distributors”) and
preventing unauthorized uses of copyright-protected content (“Internet police”).
Neither role allows for a recognition of OCSSPs' role as facilitators of democratic discourse or of the duty incumbent on them pursuant to that role to ensure that users' freedom to engage in democratic discourse is preserved.
While acknowledging the primacy of the utilitarian-based incentive theory as the
dominant narrative of the contemporary EU copyright law framework, this chapter
proposes a re-imagining of the EU legal and policy framework on online copyright
enforcement using the social planning theory of copyright law as a parallel theoret-
ical framework. The social planning theory as advanced in the writings of Elkin-
Koren (1995), Netanel (1996), and Fisher (2001) is rooted in the ideological
argument that copyright can and should be shaped to foster a just and attractive
democratic culture (Fisher 2001, p. 179). While affirming the role of copyright in
preserving the incentives of authors to produce and distribute creative content, the
social planning theory envisions a broader purpose for copyright law in promoting
the discursive foundations for democratic culture and civic association (Netanel
1996). Thus, it prescribes that protecting the interests of copyright owners must be
tempered by the overarching aspiration of sustaining a participatory culture (Fisher
2001), which in turn necessitates the adequate preservation of users’ freedom to
engage with copyright-protected content for purposes of democratic discourse.
Accordingly, the social planning theory emphasizes the need to calibrate copyright
law to minimize impediments to the public’s ability to engage with content in
socially valuable ways while, at the same time, protecting the legitimate interests
of copyright owners (Netanel 1996).
I argue that the democratic function of copyright law, as exemplified by the social
planning theory, offers a normative basis for re-imagining the role of OCSSPs within
EU copyright law. This would firstly entail a re-affirmation that the protection of
users’ freedom to benefit from copyright exceptions (particularly those exceptions
that are vital for enabling democratic discourse such as quotation and parody) is
central to copyright law’s purpose and as such should be granted equal weight and
importance as the protection of the economic rights of copyright owners. This would
enable the protection of users’ freedom to be re-located from the periphery of
copyright law policymaking (to which it is currently relegated) to the center of the
discussion. Furthermore, it would provide a normative basis for courts to engage in a
more expansive teleological interpretation of copyright exceptions with a view to
advancing the democracy-enhancing function of copyright law. Secondly, it would
pave the way for acknowledging the potency of content moderation systems to direct
and influence public discourse on social media platforms. This would provide a basis for imposing positive obligations on OCSSPs to ensure that content moder-
ation systems are designed and implemented in a manner that provides adequate
protection to users’ freedom, thereby transforming their role from being mere
“content distributors” or the “Internet police” to active partners in fostering demo-
cratic discourse in the digital sphere.
Within the present EU copyright law discourse that is grounded in the utilitarian
approach, arguments for preserving and promoting democratic discourse on social
media platforms are typically rooted in fundamental rights justifications, particularly
the freedom of expression. Thus, fostering a participatory culture and robust dialogic
interaction in the online sphere tends to be viewed as something exogenous to
copyright law’s objectives and, often, as something that comes into conflict with
it. Espousing the social planning theory as a parallel theoretical framework would
bring about a paradigm shift that enables the protection of democratic discourse to be
seen as something that is endogenous—and in fact fundamental—to copyright’s
purpose and provide a solid normative basis for re-imagining the EU legal frame-
work on online copyright enforcement to increase its fitness for preserving and
promoting copyright law’s democracy-enhancing function.
References
Dahlberg, L. (2001). The Internet and democratic discourse: Exploring the prospects of online
deliberative forums extending the public sphere. Information, Communication & Society, 4(4),
pp. 615–633.
Elkin-Koren, N. (1995). Copyright and Social Dialogue on the Information Super Highway: The
Case Against Copyright Liability of Bulletin Board Operators. Cardozo Arts & Entertainment
Law Journal, 13, pp. 346–411.
Fisher, W. (2001). Theories of Intellectual Property. In: S.R. Munzer, ed., New Essays in the Legal
and Political Theory of Property. Cambridge: CUP, pp. 168–199.
Frosio, G. and Mendis, S. (2020). Monitoring and Filtering: European Reform or Global Trend? In: G. Frosio, ed., The Oxford Handbook of Intermediary Liability. Oxford: OUP.
Habermas, J. (1989). The Structural Transformation of the Public Sphere: An Inquiry into a
Category of Bourgeois Society. Translated by Thomas Burger. Cambridge: MIT.
Netanel, N.W. (1996). Copyright in a Democratic Civil Society. Yale Law Journal, 106, pp.
283–387.
Pasquale, F. (2010). Beyond Innovation and Competition: The Need for Qualified Transparency in
Internet Intermediaries. Northwestern University Law Review, 104 (1), pp. 105–173.
Peverini, P. (2015). Remix Practices and Activism. In: E. Navas, O. Gallagher, x. burrough, eds.,
The Routledge Companion to Remix Studies. New York: Routledge, pp. 333–345.
The Internet Is Dead: Long Live
the Internet
George Zarkadakis
The internet is almost 50 years old.1 It has transformed our world and has provided
new and exciting opportunities to business, society, science, and individuals; but it
has also ushered in an era of greater inequality, surveillance, exclusion, and injustice.
The first iteration of the internet (“web 1.0”) was a network of organizational servers
where individual PCs were connected intermittently via dial-up modems. This
evolved into the current mobile internet (“web 2.0”) that consists of many central-
ized social-network clouds that suck user data like black holes. As such, web 2.0 has
enabled the land-grabbing business models of the Big Tech oligopolies.
Why do we need a new internet to democratize the digital economy and enhance participatory governance?
1
https://www.usg.edu/galileo/skills/unit07/internet07_02.phtml#:~:text=January%201%2C%201983%20is%20considered,to%20communicate%20with%20each%20other.&text=ARPANET%20and%20the%20Defense%20Data,the%20birth%20of%20the%20Internet
G. Zarkadakis (*)
Atlantic Council, Zug, Switzerland
e-mail: gzarkadakis@atlanticcouncil.org
An unfair
distribution of power has thus emerged whereby our data are gathered, analyzed, and
monetized by private companies while we accept cookies in a hurry. Today’s digital
economy is a rentier’s wet dream come true. We have become serfs in the digital
fiefdoms of the techno-oligarchs, tenants, instead of co-owners, of the enormous
economic value that we generate through our digital avatars. Add the march of AI
systems that automate our jobs, and what you get is the social contract of liberal
democracy being torn to pieces. If we continue with business as usual, the future will
be one of massive unemployment, dislocation, and the end of dreams. The Global
Financial Crisis offered us a glimpse of what that means: populism, mistrust in
democracy, conspiracy theories, polarization, hate, racism, and a dark replay of the
1930s. The COVID-19 pandemic has further exacerbated the political and economic
asymmetries of a digital economy that works only for the few. “Working from
home” sounds good until you realize that your job can now be outsourced anywhere
in the world, at a much lower cost. Virtualization of work equals labor arbitrage
enabled by web 2.0 and Zoom calls.
Faced with the danger of their historical obliteration, democracies are gearing up for
a fight. Proposals range from breaking up the tech oligopolies, taxing them more,
expanding welfare to every citizen via a universal basic income, and enacting stricter
data privacy laws. The defense strategy has a noble purpose: to reduce the power and
influence of the techno-oligarchs.
But the usual means of defending democracy by legislating and regulating are
insufficient and inefficient to deal with the magnitude and nature of this particular
problem. Instead of a viable and sustainable solution, they will create new bottle-
necks, more paperwork, more centralized control, and more loopholes to be
exploited by influential and well-funded lobbies. At the end, regulation shifts
power to governments, not citizens. We, the citizens, will be replacing one master
with another. The strategy of increasing the role of the State in order to deal with
inequality in the digital economy is wrong and will fail. Our problem is technolog-
ical, not regulatory. Like the early aviators, we are trying to leave the ground using
wings that cannot fly. And just as you cannot regulate an ornithopter to reach the
stratosphere, so it is with the current internet: to make the digital economy fairer,
trustworthy, and more inclusive, we need a different technology, another kind of
internet.
But what would that “alternate” internet look like? And what should its funda-
mental building blocks be? Perhaps the best way to think about these questions is to
begin by asking what is wrong with the current technology. I would like to argue that
there are three main problem areas that we need to focus on in order to reinvent the
internet: data ownership (and its necessary corollary, digital identity), security, and
disintermediation. Let’s take those areas in turn and examine them further.
2 Data Ownership
Data ownership is perhaps the biggest problem area of all. Ownership goes beyond
self-sovereignty. It suggests property rights for the data, not just the right to allow
permission for their use. We need to own our personal data as well as the data that we
generate through our social interactions, actions, and choices. Our data are the most valuable resource in the digital economy. They power the AI algorithms that
animate the wheels of the digital industries. When those algorithms finally replace us
in the workplace, our data will be the only valuable resource for which we can
legitimately claim a share in the bounty of the Fourth Industrial Revolution. Many
initiatives, such as Sir Tim Berners-Lee’s Inrupt project (Lohr 2021), are trying to
work around the current web and provide ways for some degree of data ownership
while building on existing web standards. However, by addressing only one of the
three problem areas, they are only partial solutions. A more radical approach is necessary: establishing immutable, verifiable digital identity systems as secure and trusted ways to identify every actor connected to the internet, including humans,
appliances, sensors, robots, AIs, etc. Data generated by an actor would thus be
associated with their digital identity and thus establish ownership rights. So, if I am a
denizen of the alternate internet, I can decide what pieces of my data I will make
available, to whom, and under what conditions. For example, if I need to interact
with an application that requires my age, I will only allow that piece of data to be
read by the application and nothing else. I can also decide on a price for giving
access to my data to a third party, say to an advertising agency or a pharmaceutical
company wishing to use my health records for medical research. Or I can decide to
join a data cooperative, or a data trust (Zarkadakis 2020a), and pool my data with
other people’s data, include perhaps data from smart home appliances and smart city
sensors, and thus exponentially increase the collective value of the “shared” data
value chain. Digital identities can enable auditable data ownership from which we
could engineer an equitable income for citizens in an AI-powered economy of
material abundance. As we look for sustainable and meaningful funding for a
universal basic income, data ownership based on digital identity may be the key
solution. A “universal basic income” that is funded by economic activity would
relieve governments of having to tax and borrow excessively. More importantly
perhaps, it would be income earned, not “handed out,” and as such would uphold –
rather than demean – human dignity and self-respect.
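As a rough illustration of the selective-disclosure idea (in Python, using a shared-secret HMAC purely for brevity; a real decentralized-identity system would use public-key signatures and standards such as verifiable credentials, and every name below is hypothetical):

import hashlib
import hmac

ISSUER_KEY = b"issuer-secret-key"  # held by a trusted identity issuer

def issue_credential(attributes: dict) -> dict:
    # Sign each attribute separately, so the holder can later reveal
    # one attribute (e.g., age) without exposing any of the others.
    return {
        name: {
            "value": value,
            "sig": hmac.new(ISSUER_KEY, f"{name}={value}".encode(),
                            hashlib.sha256).hexdigest(),
        }
        for name, value in attributes.items()
    }

credential = issue_credential({"age": 42, "name": "Alice", "health_id": "H-77"})

# The holder presents only the age attribute to an age-gated service.
disclosed = {"age": credential["age"]}

# The verifier checks the signature without ever seeing name or health_id.
# (With HMAC the verifier would need the issuer key; public-key signatures
# remove that limitation in practice.)
expected = hmac.new(ISSUER_KEY, b"age=42", hashlib.sha256).hexdigest()
assert disclosed["age"]["sig"] == expected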
3 Security
hackers. Centralization of data in the current internet is also the cause of data
breaches, regulatory fines, reputational risk, and consumer mistrust. Given these
inherent shortcomings, the cyber security arms race is forever unwinnable. In the
alternate internet, applications should be peer-to-peer, decentralized, and communi-
cate directly with each other. They should run on virtual machines that use an
operating system and communication protocol built on top of the fundamental
TCP/IP protocol and run on the individual device, say a smartphone. If they fail,
or are attacked, the damage will be minimal and restricted rather than spreading
across the whole of the network. Moreover, such an operating system could render
that alternate internet as a single, global computer, made up of billions of nodes. It
will be the realization of the original internet dream: a completely decentralized and
secure global network of trust, where there can be no surveillance.
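To illustrate the contrast with cloud-mediated services, the following minimal Python sketch has two peers exchange a message directly over TCP/IP, with no third-party server in the path. The port and message are arbitrary; a real decentralized stack would layer identity, encryption, and discovery on top of this primitive, and a failure here harms one node only.

import socket
import threading
import time

def peer_listen(port: int):
    # One peer listens directly on its own device; no central cloud.
    srv = socket.create_server(("127.0.0.1", port))
    conn, _ = srv.accept()
    print("received:", conn.recv(1024).decode())
    conn.close()
    srv.close()

listener = threading.Thread(target=peer_listen, args=(9009,))
listener.start()
time.sleep(0.2)  # give the listening peer a moment to bind

# The other peer connects directly to it, peer to peer.
with socket.create_connection(("127.0.0.1", 9009)) as s:
    s.sendall(b"hello, peer")
listener.join()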
4 Disintermediation
Security is not the only negative outcome of the centralized nature of web 2.0. All
services are currently intermediated by default, and that includes essential services
such as data storage and computing resources. It is the reason why only four
companies control 67% of the world's cloud infrastructure (Cohen 2021). Regard-
less of what one may think of Donald Trump, the fact that Twitter, a private
company, could silence a sitting US President ought to give pause to anyone who
cares about free speech. Like Twitter, many other private social media platforms
such as Facebook and YouTube have assumed the high office of unelected arbiters of what is true and permissible, replacing the role of legislatures, courts, and
governments. Intermediation is also responsible for the fact that internet content can
be monetized almost exclusively using an advertising business model, which is what
social media platforms are exploiting in order to make their billions. If you are a
content creator today, you need to satisfy advertisers with high numbers of followers
and hits, which impacts both what content you can create and how you deliver it. The
tiny minority of individual content creators who get this combination right and
manage to earn some meaningful income from their work are then subject to the
whims of the social media platforms that host their content. We must disintermediate
and decentralize the internet if we want human creativity and innovation to flourish.
In the alternate internet, content creators do not need the advertising industry to earn
income; they can monetize their content in disintermediated, peer-to-peer markets,
through micropayments paid to them directly by the consumers of their content.
Moreover, infrastructure can also be disintermediated, and every node in the network
can provide data and computing services and resources. A decentralized internet at
global scale will provide new sources of income for billions of people. Just imagine
anyone connecting to that internet via their smartphone or laptop and making
available data storage and computing as part of a “cloud commonwealth.”
We are already witnessing the dawn of the new internet, often referred to as “web
3.0.” Distributed ledger technologies are providing new ways to establish
disintermediated trust in peer-to-peer marketplaces, as well as new ways to reimag-
ine money and financial assets. Smart contracts automate transactions and provenance auditing in supply chains, banking, and insurance, while non-fungible tokens (NFTs) are transforming the internet of information into the internet of assets, enabling content creators and digital artists to sell their creations just as they would if they were made of atoms instead of bits.
In the web 3.0, individual users are connected directly, peer-to-peer, without
centralized clouds. They may exchange content and other digital assets as NFTs that
are executable applications. To achieve this, we must empower users with personal
cloud computers (PC2), i.e., software-defined personal computers, to store and
process their data. When users are ready to swap or trade their data, they can compile
data into NFT capsules (encrypted, self-extracting, self-executing programs that
encapsulate the data), much like people generate PDF files from Word documents
today. And they will then share those capsules on a content delivery network
running on a blockchain and verified by miners, instead of centralized cloud servers.
The web 3.0 will, in effect, be a peer-to-peer web of personal cloud computers.
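As a rough sketch of the capsule idea (using the third-party Python package cryptography for symmetric encryption; the capsule format and the notion of trading the key are illustrative assumptions, since real capsules as described above would add self-executing logic and on-chain verification):

import json
from cryptography.fernet import Fernet

def make_capsule(data: dict) -> tuple[bytes, bytes]:
    # Encrypt the owner's data into a self-contained blob; whoever
    # obtains the key (by purchase, swap, or grant) can open it.
    key = Fernet.generate_key()
    capsule = Fernet(key).encrypt(json.dumps(data).encode())
    return capsule, key

capsule, key = make_capsule({"steps_walked": 9312, "city": "Vienna"})

# The capsule itself can sit on any public content delivery network;
# access is governed entirely by possession of the key.
print(json.loads(Fernet(key).decrypt(capsule)))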
Many engineers and developers are already working on realizing the new inter-
net. For example, the Elastos Foundation is developing a full range of tools and
capabilities for web 3.0 application developers, so they may begin to code
decentralized applications on the new web. All this is happening on open-source
platforms in the true spirit of sharing knowledge and collaborating on projects.
This is exactly what is different in the mindset of the new digital world: success,
innovation, and economic value can be derived from collaboration, not just compe-
tition, and from co-creating equitable ecosystems, not enforcing inequitable
oligopolies.
There is an additional prize to be won from transitioning from web 2.0 to web 3.0. A
decentralized web based on digital identity, data ownership, and a peer-to-peer cloud
commonwealth will bring new possibilities for digital business models that are
inclusive and democratically governed, much like cooperatives or mutual-ownership
companies. Ethereum was first to experiment with “decentralized autonomous
organizations” (DAOs) that can blend direct and representational democracy in
private governance. However, similar models of democratic digital governance
can also be adopted in the public domain by a city, a county, a region, or a nation
state or in circumstances where citizens need to manage commons (Zarkadakis
2020b, p. 140).
In a society where every citizen has a verifiable and decentralized digital identity,
there can be no surveillance – state or private – only liberty and personal responsi-
bility. Citizens can freely associate in cyberspace, transact, create, innovate,
debate, learn, and self-develop. Because their identity is verifiable and trusted, they
are incentivized toward socially responsible behavior and consensus. The
decentralized web 3.0 can be the foundation of a truly digital “polis” in cyberspace.
In such a democratic polis, free citizens own their data and contribute through their
interactions, knowledge, skills, networks, and choices to the creation of economic
value in inclusive digital platforms, value that is then shared fairly among all those
who have contributed on the basis of their contribution. Moreover, participatory
forms of public governance can be enacted by institutionalizing citizen assemblies in
liberal democracies and include citizen views and recommendations in the policy-
making processes of legislatures. The future does not have to be a dystopia of
oppression, surveillance, endemic penury, and dependency on central government
control and welfare. A new internet of equitable opportunities can help us overcome
the dire straits of societal polarization and injustice and let democracy, freedom, and
liberty reclaim their civilizing influence on human nature.
References
Cohen J. (2021) ‘Four Companies control 67% of the world’s cloud infrastructure’, PC Magazine,
[online]. Available at: https://uk.pcmag.com/old-cloud-infrastructure/131713/four-companies-
control-67-of-the-worlds-cloud-infrastructure (Accessed: 18 May 2021)
Lohr, S. (2021) ‘He created the internet. Now he’s out to remake the digital world’, New York
Times, [online]. Available at: https://www.nytimes.com/2021/01/10/technology/tim-berners-
lee-privacy-internet.html (Accessed: 18 May 2021)
Zarkadakis, G. (2020a) ‘Data Trusts could be the key to better AI’, Harvard Business Review,
[online]. Available at: https://hbr.org/2020/11/data-trusts-could-be-the-key-to-better-ai
(Accessed: 18 May 2021)
Zarkadakis, G. (2020b) Cyber Republic: reinventing democracy in the age of intelligent machines,
Cambridge: MIT Press.
Return to Freedom: Governance of Fair
Innovation Ecosystems
H. Akkermans (*)
w4ra.org, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
University for Development Studies UDS, Tamale, Ghana
e-mail: Hans.Akkermans@akmc.nl
J. Gordijn
Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
The Value Engineers BV, Soest, The Netherlands
e-mail: jaap@thevalueengineers.nl
A. Bon
w4ra.org, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
e-mail: a.bon@vu.nl
2 Innovation Ecosystems
The traditional policy view on innovation that has been dominant for decades casts
innovation foremost in terms of “invention” and subsequent “adoption” and spread
of an innovative technology. The early editions of Everett M. Rogers’ (2003) highly
influential text Diffusion of Innovations reflect this view. The process of innovation
is captured in terms of a metaphor borrowed from physics. Diffusion is interpreted
(certainly where it concerns the received view in high-level policy making) as a
relatively deterministic, mechanistic, and unidirectional phenomenon. The same
physics metaphor also serves to establish order (not to say: hierarchy) in the process
of research, starting from fundamental research, then applied research, to strategic
research and, ultimately, technology development.
In recent years, it has become mainstream to frame the innovation process in the
different terms of ecosystems, both in academic literature (Oh et al. 2016) and in
policy (European Union 2020). This move embodies a significant change from the
older policy framing of innovation. It has a clear metaphorical nature as well,
however, borrowed not from physics but from biology. This change of metaphor
has important consequences in several ways.
First, the process image, or high-level empirical model, of innovation changes.
Rather than a mechanistic process of diffusion (with the famous “S-curve” of
adoption2), it posits an interactive dynamic of multiple “species,” i.e., the various
1
https://dighum.ec.tuwien.ac.at/dighum-manifesto/ (May 2019)
2
The S-curve (Rogers 2003, Ch. 7) refers to the S-shaped cumulative distribution function of
innovation adoption. It may be mathematically derived from a very simple imitation model for the
spread of an innovation within a population or market.
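A hedged sketch of that derivation: if each of the $A(t)$ current adopters persuades non-adopters at a constant imitation rate $b$ within a population of size $N$, then

\frac{dA}{dt} = b\,A(t)\,\bigl(N - A(t)\bigr) \quad\Longrightarrow\quad A(t) = \frac{N}{1 + C\,e^{-bNt}},

where the constant $C$ is fixed by the initial number of adopters $A(0)$; the solution is exactly the S-shaped logistic curve of cumulative adoption.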
As a corrective to the “failing of the system,” Berners-Lee (2018) calls for a “re-
decentralization” of the Web. In a coevolutionary view, this may involve both
technology (e.g., SOLID3) and non-technology societal actions. Jairam et al.
(2021) make an explicit distinction between a technology and how it is controlled,
pointing out that the technology level and its governance level can have very
different characteristics. For example, the Big Tech platforms rely on decentralized network technologies, but their governance level is, in contrast, strongly centralized, even monopolistic. These authors investigate blockchain technologies
(such as Bitcoin, Corda, Ethereum, Tezos) and their industrial applications (e.g.,
smart energy scenarios such as peer-to-peer sustainable energy trading). They show
that here, too, many forms of governance exist, ranging from highly centralized to decentralized (and often opaque).
The focus of this work is on the question of how the governance of technologies can
be decentralized.4 To this end, these authors introduce the notion of fair innovation
ecosystems and propose a set of design principles for fair and equitable ecosystems.
Decentralized ecosystems, as a realistic alternative for the Big Tech platforms, have
a fair distribution of governance power, whereby fairness is defined along the
following lines (Jairam et al. 2021):
(a) Participation. Fair governance ensures active involvement in the decision-
making process of all who are affected and other parties with an interest at
stake. It includes all participants interacting through direct or representative
democracy. Participants should be able to do so in an unconstrained and truthful
manner, and they should be well informed and organized so as to participate
fruitfully and constructively.
(b) Rule of law and equity. All participants have legitimate opportunities to improve or
maintain their well-being. Agreed-upon legal rules and frameworks, with under-
lying democratic principles, are enforced impartially while guaranteeing the
rights of people; no participant is above the rule of law.
(c) Effectiveness and efficiency. Fair governance fulfils societal needs by incorpo-
rating effectiveness while utilizing the available resources efficiently. Effective
governance ensures that the different governance actors meet societal needs.
Fully utilizing resources, without being wasted or underutilized, ensures efficient
governance.
(d) Transparency. Information on matters that affect participants must be freely
available and accessible. The decision-making process is performed in a manner
that is clear for all by following rules and regulations. Transparency also
3
SOLID is a web-decentralization project led by Berners-Lee, aiming at developing a technology
platform for Social Linked Data applications that are completely decentralized and fully under
users’ control (https://inrupt.com/solid/).
4
The importance of good governance is explicitly recognized in the United Nations’ Sustainable
Development Goals (SDGs) and is the core topic of SDG 16.
Vardi (2018) attributes the failing of the Internet system to a naive “hippie” notion of
information freedom.5 In his view, information has as a result become a “commons,
an unregulated shared public resource” which is subject to “The Tragedy of the
Commons” (Hardin 1968). Hardin’s view was that commons governance of shared
resources is inevitably doomed to fail, leaving as alternatives only market and state
forms of governance. He derived this from the neoclassical economics theoretical
assumption that humans act as rational, self-interested individual agents. His argument against collective arrangements was welcomed by neoliberal economists, who employed it to promote their ideas about free markets as the key governance mechanism.6
5
This led to a lot of debate in the Communications of the ACM. In light of the discussion above and
in the remainder of this article, one may perhaps say that the hippie naiveté lies in assuming that a
decentralized technology effortlessly leads to a governance regime that is similarly decentralized.
Quod non. This technology-driven mistake is perhaps more understandable upon realizing that an
earlier generation of scientists concerned about societal impacts of science were dealing with highly
centralized technologies such as the atom bomb. See, e.g., Bernal (1939, 1958), physics professor at
Birkbeck College in London and a founding father of the field now known as Science, Technology,
and Society (STS).
6
An interesting irony here is that Hardin's article has generally been received as supporting free market ideas, but Hardin was in fact writing about overpopulation and argued for the need of state coercion, even to the point that he supported China's one-child policy. In contemporary digital society terms, he was arguing not for surveillance capitalism, but for the surveillance state.
Hardin’s argument was a general theoretical one. Ostrom (1990, 2010), however,
deconstructed and dismantled it in an evidence-based way, through a large interna-
tional set of detailed empirical case studies and extensive field research.7 Her work
makes clear that successful commons are widespread but are not at all “unregulated”
(or free) as a shared resource. Generally, they are characterized by governance
arrangements that consist of a complex array of participatory and “grassroots”
democratic agreements, possibly mixed with market mechanisms as well as forms
of state regulation. Ostrom’s work gave rise to a theory of what she calls “polycentric
governance,” formulating a set of general conditions and design principles for
commons-type arrangements to be successful. There are many successful and
long-standing commons also in the digital world. Although due attention should
be paid to the fact that digital resources have important differences from natural
resources, there are interesting parallels with proposals such as those above regard-
ing the governance of digital technology networks.
It is intriguing to observe that in virtually all discussions of governance issues, a
concept of freedom is involved, although different and even conflicting ones, and
often hidden in the background.8 Following De Dijn (2020), a prevalent conception
of freedom today, adhered to by neoliberals, free marketeers, and libertarians, is that
of limited state power. She describes this as a major and deliberate break with much
older conceptions of freedom as developed in the Humanism and Enlightenment
periods, where freedom is a collective concept and lies in the ability of the people to
exercise control over the way in which they are governed – at root a democratic and
participatory conception of freedom. In contrast, she traces back the leave-me-alone,
I-want-to-do-what-I-like individualized conceptions of freedom to the antidemo-
cratic and counterrevolutionary forces of the seventeenth and eighteenth centuries.9
A neoliberal conception of freedom reduces humans to individual, self-interested,
utility-maximizing agents “freely” buying on a market. It is very much a consump-
tive and consumerist notion: market agents acquiring and consuming services on
digital platforms. This neoliberal “the-world-is-flat” notion of freedom is indeed
universal (“global”) but in a fully undifferentiated and uniform (“flat”) way. In
contrast, the societal conception of freedom pointed at here is a productive notion:
it is one of citizenship that co-creates the society we (hope to) live in. It is
cosmopolitan but acknowledges that freedom is contextualized (Harvey 2009;
Stuurman 2017), with due recognition of the many different and overlapping
7 Elinor Ostrom received the Nobel Prize for Economics for this work in 2009. Not only was she the
first woman to receive this prize, she was a political scientist rather than an economist, leading to
surprise in some economist quarters.
8 As an interesting global example, Sen (1999) describes Development as Freedom. Chapter 5 of his
book in particular shows that the underlying conception of freedom boils down to a neoliberal
market one.
9 It is tempting to add that the Big Tech power monopolies of today demonstrate that the neoliberal
conception of freedom itself turns out to be Hayek’s “road to serfdom.”
spheres and networks of human activities and relationships – including from the
standpoint of the individual and their identity.
In the digital society, proper value-based digital governance (European Union
2020) is a return to freedom: the democratic and participatory freedom of Humanism
and Enlightenment. Science and innovation policy must again move forward, from
the ecosystem helix frame to a much more inclusive policy of fair digital ecosys-
tems. It is today’s urgent task to redesign freedom in a value-based way and put it
into action for a human future of our digital society.
References
Bernal, J.D. (1939) The Social Function of Science. London, UK: Routledge.
Bernal, J.D. (1958) World Without War. London, UK: Routledge & Kegan Paul. 2nd edn 1961.
ISBN 978-0-429-28245-4
Berners-Lee, T. (2018) ACM Turing Award Lecture, given at the 10th ACM Web Science
Conference on 29 May 2018 in Amsterdam. The video of the Turing Award Lecture is available
at the acm.org website: https://amturing.acm.org/vp/berners-lee_8087960.cfm. The Turing
Award is considered to be the Nobel Prize for Informatics.
Bon, A. (2020) Intervention or Collaboration? Redesigning Information and Communication
Technologies for Development. Amsterdam, The Netherlands: Pangea. ISBN
9789078289258. Open Access pdf version https://w4ra.org/publications/
Carayannis E.G. and Campbell, D.F.J. (2012) Mode 3 Knowledge Production in Quadruple Helix
Innovation Systems. New York, NY, USA: Springer. SpringerBriefs in Business 7. ISBN
9781461420613
De Dijn, A. (2020) Freedom: An Unruly History. Cambridge, MA, USA: Harvard University Press.
ISBN 9780674988330
Etzkowitz, H. and Leydesdorff, L. (2000) The Dynamics of Innovation: from National Systems and
“Mode 2” to a Triple Helix of University-Industry-Government Relations. Research Policy Vol.
29, pp. 109-123
European Union (2020) Berlin Declaration on Digital Society and Value-Based Digital Govern-
ment. Signed at the ministerial meeting of the Council of the European Union on 8 December
2020. https://ec.europa.eu/isa2/sites/default/files/cdr_20201207_eu2020_berlin_declaration_
on_digital_society_and_value-based_digital_government_.pdf
Gibbons, M., Limoges, C., Nowotny, H., Schwartzman, S., Scott, P. and Trow, M. (1994) The New
Production of Knowledge: The Dynamics of Science and Research in Contemporary Societies.
London, UK: Sage. ISBN 0-8039-7794-8
Hardin, G. (1968) The Tragedy of the Commons. Science Vol. 162 No. 3859 (13 December 1968),
pp. 1243-1248
Harvey, D. (2009) Cosmopolitanism and the Geographies of Freedom. New York, NY, USA:
Columbia University Press. ISBN 9780231148467
Jairam, S., Gordijn, J., Torres, I., Kaya, F. and Makkes, M. (2021) A Decentralized Fair Governance
Model for Permissionless Blockchain Systems. In Proceedings of the International Workshop
on Value Modelling and Business Ontologies (VMBO 2021), Bolzano, Italy, March 4-5, 2021.
http://ceur-ws.org/Vol-2835/paper3.pdf
Lee, E.A. (2020) The Coevolution: The Entwined Futures of Humans and Machines. Cambridge,
MA, USA: MIT Press. ISBN 9780262043939
Manzini, E. (2015) Design, When Everybody Designs: An Introduction to Design for Social
Innovation. Cambridge, MA, USA: MIT Press. ISBN 9780262028608
Nowotny, H., Scott, P. and Gibbons, M. (2001) Re-Thinking Science. Cambridge, UK: Polity Press.
ISBN 0-7456-2608-4
Oh, D.-S., Phillips, F., Park, S., Lee, E. (2016) Innovation Ecosystems: A Critical Examination.
Technovation Vol. 54, pp. 1-6
Ostrom, E. (1990) Governing the Commons: The Evolution of Institutions for Collective Action.
Cambridge, UK: Cambridge University Press. ISBN 9780521405997
Ostrom, E. (2010) Beyond Markets and States: Polycentric Governance of Complex Economic
Systems. American Economic Review Vol. 100, No. 3 (June 2010) pp. 641–672. Revised
version of the Nobel Prize Lecture, 2009.
Rogers, E. M. (2003) Diffusion of Innovations. 5th edn. New York, NY, USA: Free Press
Rogers, E. M., Medina, U. E., Rivera, M. A, Wiley, C. J. (2005) Complex Adaptive Systems and the
Diffusion of Innovations. The Innovation Journal Vol. 10 (No. 3), article 3, pp. 1-25
Sen, A. (1999) Development as Freedom. Oxford, UK: Oxford University Press. ISBN
9780192893307
Stuurman, S. (2017) The Invention of Humanity - Equality and Cultural Difference in World
History. Cambridge, MA, USA: Harvard University Press. ISBN 9780674971967
Vardi, M.Y. (2018) How the Hippies Destroyed the Internet. Communications of the ACM Vol.
61 (No. 7, July 2018), p. 9
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0
International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing,
adaptation, distribution and reproduction in any medium or format, as long as you give appropriate
credit to the original author(s) and the source, provide a link to the Creative Commons license and
indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative
Commons license, unless indicated otherwise in a credit line to the material. If material is not
included in the chapter's Creative Commons license and your intended use is not permitted by
statutory regulation or exceeds the permitted use, you will need to obtain permission directly from
the copyright holder.
Decolonizing Technology and Society:
A Perspective from the Global South
Anna Bon, Francis Dittoh, Gossa Lô, Mónica Pini, Robert Bwana,
Cheah WaiShiang, Narayanan Kulathuramaiyer, and André Baart
Abstract Despite the large impact of digital technology on the lives and future of
all people on the planet, many people, especially from the Global South, are not
included in the debates about the future of the digital society. This inequality is a
systemic problem which has roots in the real world. We refer to this problem as
“digital coloniality.” We argue that to achieve a more equitable and inclusive global
digital society, active involvement of stakeholders from poor regions of the world as
co-researchers, co-creators, and co-designers of technology is required. We briefly
discuss a few collaborative, community-oriented technology development projects
as examples of transdisciplinary knowledge production and action research for a
more inclusive digital society.
A. Bon (*)
w4ra.org, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
e-mail: a.bon@vu.nl
F. Dittoh
University for Development Studies UDS, Tamale, Ghana
e-mail: fdittoh@uds.edu.gh
G. Lô · A. Baart
Bolesian BV, Utrecht, The Netherlands
e-mail: gossalo@bolesian.ai; andre@andrebaart.nl
M. Pini
Universidad San Martín, Buenos Aires, Argentina
e-mail: mpini@unsam.edu.ar
R. Bwana
University of Amsterdam, Amsterdam, The Netherlands
e-mail: r.m.bwana@uva.nl
C. WaiShiang · N. Kulathuramaiyer
Universiti Malaysia Sarawak, Malaysia
e-mail: wscheah@unimas.my; nara@unimas.my
People from poor environments, e.g., in the Global South, are rarely included in
debates about the digital society. This is surprising, as digital technologies have
far-reaching consequences for their lives and futures. Nowadays, the
rapid co-evolution of society and technology is calling for reflection, deliberation,
and responsible action. Scientists are posing the question: “Are we humans defining
technology or is technology defining us?” (Lee 2020), but who are “we” in this
question? Who is defining technology, and who has the knowledge, the assets, and
the decision-making power?
A way to understand the impacts of digital transformation for people in the Global
South is to observe the digital society through a decolonial lens. This helps to
understand the often tacit patterns of power in the social and technological fabric.
If we consider the digital society to be an image of the physical world, it will have
inherited, along with other aspects, historical patterns of inequality. These patterns
are referred to as “coloniality” (Mendoza 2021, pp. 46–54; Mignolo and Walsh
2018, pp. 1–12; Quijano 2016, pp. 15–18).
At the moment of writing, about three billion people in the world remain unconnected
to the digital society – a phenomenon often called the digital divide – but this
number is rapidly decreasing. Being connected, particularly through the Internet and
Web, is generally seen as the key to a better life. With the breathtaking pace in which
the Internet is rolled out even in remote corners of the world, universal connectivity,
with full endorsement of the United Nations,1 may well soon be achieved. The
follow-on question is: will omnipresent connectivity bring social justice, equality,
and a more sustainable and prosperous world closer to all?
The World Wide Web, the backbone of the digital society, was designed,
according to its inventor Tim Berners-Lee, as “an open platform that would allow
everyone, everywhere to share information, access opportunities and collaborate
across geographic and cultural boundaries” (Berners-Lee 2017). However, although
the Web is a global common, its wide penetration also makes it a dominant
standard. Through its ubiquity, the Web exerts pressure toward uptake, even if this
uptake may harm the individual user. The alternative – refusing to be part of it –
results in isolation. This phenomenon, which is described by David Grewal as
network power, is common for networked standards (Grewal 2008, pp. 20–28). It
makes the digital society into a hegemonic system from which – especially from the
perspective of the Global South – there is no escape, despite the price users,
communities, and even countries have to pay with their money or data, to become
part of it.
When we observe the current structure of the digital society, we see that it is
physically, economically, and socially extremely centralized and concentrated in the
Global North, where to date the originators of many digital innovations reside.
1 See, e.g., https://www.un.org/development/desa/en/news/administration/internet-governance-2.html (Accessed 1 May 2021)
2 https://www.oneworld.nl/lezen/discriminatie/racisme/zwart-dan-rijdt-de-zelfrijdende-auto-jou-eerder-aan/ (Accessed: 1 May 2021)
3 https://aopp-mali.com/ (Accessed 1 May 2021)
4 https://www.itu.int/osg/spuold/wsis-themes/ict_stories/themes/case_studies/e-bario.html (Accessed 1 May 2021)
3 Conclusion
From the above discussions, it becomes clear that coloniality is a reality also in the
digital society. Universal Internet connectivity does not necessarily equate to truly
inclusive connectedness. According to African philosopher Achille Mbembe, we
must realize that coloniality is more than academic discourses and representations
(Mbembe 2001). It is a systemic problem, materialized in the real world and felt in
5 https://www.kasadaka.com/ (Accessed 1 May 2021)
References
Baart, A., Bon, A., De Boer, V., Dittoh, F., Tuijp, W. and Akkermans, H. (2019) “Affordable Voice
Services to Bridge the Digital Divide – Presenting the Kasadaka Platform” in Escalona, M.J.,
Mayo, F.D., Majchrzak, T.A., Monfort, V. (eds) Web Information Systems and Technologies,
LNBIP Book Series, Vol. 327, pp. 195-220. Berlin, Germany: Springer.
Berners-Lee, T. (2017) Three Challenges for the Web, According to its Inventor. [Online].
Available at: https://webfoundation.org/2017/03/web-turns-28-letter/ (Accessed: 1 May 2021).
Berners-Lee, T. (2019) The Web is under Threat. Join us and Fight for it. [Online] Available at:
https://webfoundation.org/2018/03/web-birthday-29/ (Accessed: 1 May 2021).
Dittoh, F., Akkermans, H., De Boer, V., Bon, A., Tuyp, W. and Baart, A. (2021) “Tibaŋsim:
Information Access for Low-Resource Environments” in Yang, X.S., Sherratt, S., Dey, N.,
Joshi, A. (eds) Proceedings of the Sixth International Congress on Information and Communi-
cation Technology: ICICT 2021, London, UK, Vol. 1, Singapore: Springer. Available at: https://
w4ra.org/wp-content/uploads/2014/02/ICICT_2021_paper_289.pdf (Accessed: 1 May 2021).
Lee, E.A. (2020) The Coevolution: The Entwined Futures of Humans and Machines. Cambridge
MA, USA: MIT Press.
Grewal, D. S. (2008) Network Power: The Social Dynamics of Globalization. New Haven, USA &
London, UK: Yale University Press.
Harris, R., Ramaiyer, N.A.N.K. and Tarawe, J. (2018) “The eBario Story: ICTs for Rural Devel-
opment” In International Conference on ICT for Rural Development (ICICTRuDev) pp. 63-68,
IEEE.
Mignolo, W.D. and Walsh, C.E. (2018) On Decoloniality: Concepts, Analytics, Praxis. Durham,
NC, USA: Duke University Press.
Mendoza, B. (2021) “Decolonial Theories in Comparison” in Shih S., Tsai, L. (eds) Indigenous
Knowledge in Taiwan and Beyond. Sinophone and Taiwan Studies, Vol. 1, pp. 249-271,
Singapore: Springer.
Lô, G., de Boer, V., Schlobach, S. and Diallo, G. (2017) “Linking African Traditional Medicine
Knowledge”. Semantic Web Applications and Tools for Healthcare and Life Sciences
(SWAT4LS), Rome Italy. [Online] Available at: https://hal-archives-ouvertes.fr/hal-
01804941/document (Accessed 1 May 2021).
Mohamed, S., Png, M.T. and Isaac, W. (2020) “Decolonial AI: Decolonial Theory as
Sociotechnical Foresight in Artificial Intelligence”. Philosophy & Technology, Vol. 33 No.
4, pp. 659-684.
Mbembe, A. (2001) On the Postcolony. Studies on the History of Society and Culture, Vol. 41, Los
Angeles, USA: University of California Press.
Pini, M.E. (2020) “Digital Inequality in Education in Argentina”. In Proceedings of the 12th ACM
Conference on Web Science (WebSci ’20 Companion), July 6–10, 2020, Southampton, UK,
pp. 37-40, New York, NY, USA: ACM. Available at: https://doi.org/10.1145/3394332.
3402827.
Quijano, A. (2016) “Bien Vivir – Between Development and the De/Coloniality of Power”.
Alternautas (Re) Searching Development: The Abya Yala Chapter 3(1) pp. 10-23. [Online]
Available at: http://www.alternautas.net/blog/2016/1/20/bien-vivir-between-development-and-
the-decoloniality-of-power1 (Accessed 1 May 2021).
Vos, S., Schaefers, H., Lago, P. and Bon, A. (2020) “Sustainability and Ethics by Design in the
Development of Digital Platforms for Low-Resource Environments”. [Online]. Amsterdam
Sustainability Institute Integrative Project Technical Report. pp. 1-43, Vrije Universiteit
Amsterdam. Available at: https://w4ra.org/wp-content/uploads/2021/01/ICT4FoodSec.pdf
(Accessed 1 May 2021).
Zuboff, S. (2019) The Age of Surveillance Capitalism: The Fight for a Human Future at the New
Frontier of Power. London, UK: Profile Books.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0
International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing,
adaptation, distribution and reproduction in any medium or format, as long as you give appropriate
credit to the original author(s) and the source, provide a link to the Creative Commons license and
indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative
Commons license, unless indicated otherwise in a credit line to the material. If material is not
included in the chapter's Creative Commons license and your intended use is not permitted by
statutory regulation or exceeds the permitted use, you will need to obtain permission directly from
the copyright holder.
Part III
Ethics and Philosophy of Technology
Digital Humanism and the Limits
of Artificial Intelligence
Julian Nida-Rümelin
The expression “Artificial Intelligence” (AI) is multifaceted and is used with differ-
ent meanings. In the broadest and least problematic sense, AI denotes everything
from computer-controlled processes, the calculation of functions, the solution of
differential equations, logistical optimization, and robot control to “self-learning”
systems, translation software, etc. The most problematic and radical conception of
AI says that there is no categorical difference between computer-controlled pro-
cesses and human thought processes. This position is often referred to as “strong
AI.” “Weak AI” is then merely the thesis that all thought and decision processes
could in principle be simulated by computers. In other words, the difference between
strong and weak AI is the difference between identification and simulation. From
this perspective, strong AI is a program of disillusionment: What appears to us to be
a characteristically human property is nothing but that which can be realized as a
computer program. Digital humanism takes the opposite side.
J. Nida-Rümelin (*)
Ludwig Maximilians Universität Munich, Munich, Germany
e-mail: julian.nida-ruemelin@lrz.uni-muenchen.de
II
The analytic philosopher John Searle (1980) has devised a famous thought experi-
ment. Searle asks you to imagine yourself being a monolingual English speaker
“locked in a room and given a large batch of Chinese writing” plus “a second
batch of Chinese script” and “a set of rules” in English “for correlating the second
batch with the first batch.” The rules “correlate one set of formal symbols with
another set of formal symbols”: “formal” (or “syntactic”) meaning you “can identify
the symbols entirely by their shapes.” A third batch of Chinese symbols and more
instructions in English enable you “to correlate elements of this third batch with
elements of the first two batches” and instruct you, thereby, “to give back certain
sorts of Chinese symbols with certain sorts of shapes in response.” Those giving you
the symbols “call the first batch ‘a script’” [a data structure with natural language
processing applications], “they call the second batch ‘a story,’ and they call the third
batch ‘questions’”; the symbols you give back “they call ... ‘answers to the ques-
tions’”; “the set of rules in English ... they call ‘the program’”: you yourself know
none of this. Nevertheless, you “get so good at following the instructions” that “from
the point of view of someone outside the room,” your responses are “absolutely
indistinguishable from those of Chinese speakers.” Just by looking at your answers,
nobody can tell you don’t speak a word of Chinese. Outside in front of the slot, there
is a native speaker of Chinese, who, having formulated the story and the questions
and having received the answers, concludes that somebody must be present in the
room who also speaks Chinese.
The crucial element missing here is apparent: It is the understanding of the
Chinese language. Even if a system—in this case the Chinese Room—is function-
ally equivalent to somebody who understands Chinese, the system does not yet itself
understand Chinese. Understanding and speaking Chinese requires various kinds of
knowledge. A person who speaks Chinese refers with specific terms to the
corresponding objects. With specific utterances, she pursues certain—
corresponding—aims. On the basis of what she has heard (in Chinese), she forms
certain expectations, etc. The Chinese Room has none of these characteristics. It
does not have any intentions; it has no expectations that prove that it speaks and
understands Chinese. In other words, the Chinese Room simulates an understanding
of Chinese without itself possessing a command of the Chinese language.
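To make the purely syntactic character of the room vivid, here is a deliberately trivial sketch of my own (not Searle’s): a program that hands back Chinese answers by matching shapes alone. A realistic rule book would be astronomically larger, but the epistemic point is unchanged: nothing in the lookup represents meaning.

```python
# A toy, purely syntactic "rule book" (hypothetical): inputs are matched to
# outputs by their shapes alone; meaning is nowhere represented in the program.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "Fine, thanks."
    "故事里有几个人？": "三个人。",    # "How many people are in the story?" -> "Three."
}

def chinese_room(symbols: str) -> str:
    """Hand back whatever shapes the rules prescribe; understand nothing."""
    return RULE_BOOK.get(symbols, "对不起，我不明白。")  # rote default reply

print(chinese_room("你好吗？"))  # fluent-looking output, zero comprehension
```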
Years later, Searle (1990) radicalized this argument by connecting it with philo-
sophical realism (Nida-Rümelin 2018), that is, the thesis that there is a world that
exists regardless of whether it is observed or not. Signs only have a meaning for us,
the sign users and sign interpreters. We ascribe meaning to certain letters or symbols
by communicating, by agreeing that these letters or symbols stand for something.
They have no meaning without these conventions. It is misleading to conceive of the
computer as a character-processing, or syntactic, machine that follows certain logical
or grammatical rules. The computer is composed of various elements that can be
described by physics, and the computational processes are a sequence of electrody-
namic and electrostatic states. To these states, signs are then ascribed, to which we
attribute certain interpretations and rules. The physical processes in the computer
have no syntax, they do not “know” any logical or grammatical rules, and they are
not even strings of characters. The syntactical interpretation is observer-relative. And
since computation is defined syntactically, nothing is a computer intrinsically: the
world, in particular, is not a computer. This argument
is radical, simple, and accurate. It rests on a realist philosophy and a mechanistic
interpretation of computers. Computers are that which they are materially: objects
that can be completely described and explained using the methods of physics. Syntax
is not a part of physics; physics describes no signs, no grammatical rules, no logical
conclusions, and no algorithms. The computer simulates thought processes without
thinking itself. Mental properties cannot be defined by behavioral characteristics.
The model of the algorithmic machine, of mechanism, is unsuitable as a paradigm
both for the physical world and for human thinking.
A realist conception is far more plausible than a behaviorist conception regarding
mental states (Block 1981). Pains characterize a specific type of feeling that is
unpleasant and that we usually seek to avoid. At the dentist, we make an effort to
suppress any movement so that we do not interfere with the treatment, but by no
means does this mean that we have no pain. Even the imaginary super-Spartan, who
does not flinch even under severe pain, can have pain. It is simply absurd to equate
“having pain” with certain behavioral patterns.
III
It can be shown that logical and mathematical proofs to a large extent cannot be
generated algorithmically, as students of formal logic learn early in their studies.
Already for the calculi of first-order predicate logic, there is no algorithm that decides,
for every formula, whether it is provable. The fundamental reason for this phenomenon, that logical
systems more complex than propositional logic are not algorithmic in this sense, is Kurt Gödel’s
incompleteness theorem (Gödel 1931), probably the most important theorem of
formal logic and meta-mathematics. This theorem shows that insight and intelligence
in general cannot be grasped adequately within a machine paradigm (Lucas 1961).
One can interpret Gödel’s theorem as proof that the human mind does not work
like an algorithm. Possibly even consciousness in general is based on incomplete-
ness, as Roger Penrose (1989) argues, but I remain agnostic on this
question, while being convinced that neither the world nor human beings function
like a machine.
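For reference, a standard modern statement of the theorem (in the Rosser form, which drops Gödel’s original assumption of ω-consistency) runs roughly as follows; the turnstile symbol denotes formal provability:

```latex
\textbf{First incompleteness theorem (G\"odel 1931; Rosser form).}
Let $T$ be a formal theory that is (i) recursively axiomatizable,
(ii) consistent, and (iii) strong enough to represent elementary
arithmetic. Then there is a sentence $G_T$ in the language of $T$ with
\[
  T \nvdash G_T \qquad\text{and}\qquad T \nvdash \neg G_T ,
\]
so $T$ is incomplete: no such theory proves or refutes every sentence
of its own language.
```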
If humans were to act just as deterministically as Turing machines (Turing 1950),
then genuine innovation itself would not be imaginable. If it were in principle
possible to foresee what we do and believe in the future, genuine innovations
would not exist. Disruptive innovations in knowledge and technology require that
future knowledge and technology is not part of old knowledge and technology. The
assumption of an all-comprising determinism is incompatible with true innovation
(Popper 1951, 1972). It is more plausible to assume that the thesis of weak AI, the
thesis that all human deliberation can be simulated by software systems, is wrong
than to assume that there is no genuine innovation.
IV
References
Block, Ned (1981), Psychologism and Behaviorism, The Philosophical Review 90 (1): 5–43.
Gödel, Kurt (1931), Über formal unentscheidbare Sätze der Principia Mathematica und verwandter
Systeme I, Monatshefte für Mathematik und Physik 38: 173–198.
Lucas, John R. (1961), On Minds, Machines and Gödel, Philosophy 36: 112–127.
Nida-Rümelin, Julian (2018), Unaufgeregter Realismus. Eine philosophische Streitschrift,
Paderborn: mentis.
Nida-Rümelin, Julian (2020), Eine Theorie praktischer Vernunft, Berlin/Boston: De Gruyter.
Nida-Rümelin, Julian, Weidenfeld, Nathalie (2018), Digitaler Humanismus. Eine Ethik für das
Zeitalter der Künstlichen Intelligenz, München: Piper (Italian translation: Milano: Franco
Angeli 2019; Korean translation: Pusan National University Press 2020).
Penrose, Roger (1989), The Emperor’s New Mind: Concerning Computers, Minds, and the Laws of
Physics, Oxford University Press.
Popper, Karl (1951), Indeterminism in Quantum Physics and Classical Physics, British Journal of
Philosophy of Science 1: 179–188.
Popper, Karl (1972), Objective Knowledge, Oxford University Press.
Searle, John (1980), “Minds, Brains and Programs”, Behavioral and Brain Sciences 3 (3): 417–457.
Searle, John (1990), “Is the Brain a Digital Computer?”, Proceedings and Addresses of the
American Philosophical Association, 64 (3): 21–37.
Turing, Alan (1950): Computing Machinery and Intelligence, Mind 59: 433–460.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0
International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing,
adaptation, distribution and reproduction in any medium or format, as long as you give appropriate
credit to the original author(s) and the source, provide a link to the Creative Commons license and
indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative
Commons license, unless indicated otherwise in a credit line to the material. If material is not
included in the chapter's Creative Commons license and your intended use is not permitted by
statutory regulation or exceeds the permitted use, you will need to obtain permission directly from
the copyright holder.
Explorative Experiments and Digital
Humanism: Adding an Epistemic
Dimension to the Ethical Debate
Viola Schiaffonati
Abstract The rise of Digital Humanism calls for shaping digital technologies in
accordance with human values and needs. I argue that to achieve this goal, an
epistemic and methodological dimension should be added to the ethical reflections
developed in recent years. In particular, I propose the framework of explorative
experimentation in computer science and engineering to set an agenda for the
reflection on the ethical issues of digital technologies that seriously considers their
peculiarities from an epistemic point of view. As the traditional epistemic categories
of the natural sciences cannot be directly adopted by computer science and engi-
neering, the traditional moral principles guiding experimentation in the natural
sciences should be reconsidered in the case of digital technologies where uncertainty
about their impacts and risks is very high.
1 Introduction
The rise of Digital Humanism calls for shaping digital technologies in accordance
with human values and needs to possibly solve the critical issues of current techno-
logical development. Within this framework, ethics plays an increasing role at both a
descriptive level and a normative one, and, accordingly, several important results have
been achieved in recent years. On the one hand, approaches such as Value
Sensitive Design have shifted the attention to the idea of active responsibility, that is,
the design of technology to incorporate positive values (van den Hoven 2007). On
the other hand, several regulatory frameworks have been proposed to address the
ethical issues related to digital technologies, such as AI, and their adoption within
our society.
Notwithstanding the importance of these initiatives, I argue that a further dimen-
sion should be added to this debate. This dimension concerns the analysis of the
disciplinary and methodological status of computer science and engineering to better
V. Schiaffonati (*)
Politecnico di Milano, Milan, Italy
e-mail: viola.schiaffonati@polimi.it
understand the radical paradigm shift promoted by digital technologies. Rather than
considering this dimension as alternative to the other ones, I claim that it should be
integrated with them to address the current challenges of digital technologies in a
more comprehensive way. In this chapter, I focus in particular on the nature and role
of experiments in AI and autonomous robotics. The main result of adding this further
dimension to the current analysis is to set an agenda for the reflection on the ethical
issues of digital technologies that seriously considers their peculiarities from a
disciplinary and a methodological point of view. Building on some of my
previous work, I argue that the traditional epistemic categories of the natural
sciences cannot be directly adopted by computer science and engineering as an
artificial discipline. Accordingly, the traditional moral principles guiding experi-
mentation in the natural sciences should be reconsidered in the case of digital
technologies, where uncertainty about their impacts and risks is very high.
This chapter is organized as follows. Section 2 discusses the nature and role of
experiments in computer science and engineering and how experiments are per-
ceived as ways to increase the scientific maturity of the field. Section 3 presents the
novel notion of explorative experimentation emerging from the analysis of the
practice of AI and autonomous robotics. Section 4 connects epistemic uncertainty,
typical of explorative experiments, to the design of ethical frameworks based on an
incremental approach. Finally, Sect. 5 concludes the chapter by stressing how
explorative experiments can impact on the current shaping of Digital Humanism.
In recent years, the debate on the nature and role of experiments in computer
science and engineering has emerged as one of the ways to stress its scientific status:
adopting the same experimental standards as the natural sciences can make computer
science and engineering more mature and credible.
AI and autonomous robotics are no exception. AI, for example, is facing a
reproducibility crisis in which the importance of reproducibility is taken for granted:
the specificity of reproducibility in AI is not investigated, and, in the end, only
practical benefits are emphasized (Gundersen et al. 2018). Autonomous robotics
presents two different tendencies (Amigoni et al. 2014). On the one hand, the
traditional principles of experimental method (reproducibility, repeatability, gener-
alization, etc.) are seen as gold standards to which research practice should
conform. For example, public distribution of code is promoted to achieve reproduc-
ibility. On the other hand, rigorous approaches to experimentation are not yet part of
current practices. For example, the use of settings that can be applied to different
environments is limited, jeopardizing the possibility of generalizing experimental
results.
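As a minimal illustration of one practice mentioned above, fixing and publishing random seeds so that others can re-run an experiment exactly, consider the sketch below; the “experiment” is a hypothetical stand-in, not taken from the works cited:

```python
import random

def run_trial(seed, n=5):
    """A stand-in 'experiment': its outcome depends only on the reported seed."""
    rng = random.Random(seed)  # local RNG, so no hidden global state leaks in
    return [rng.random() for _ in range(n)]

# Re-running with the published seed reproduces the trial exactly:
assert run_trial(seed=42) == run_trial(seed=42)
print(run_trial(seed=42))
```

Seed control addresses repeatability on one machine; reproducibility across laboratories additionally requires publishing code, data, and environment details, which is precisely where current practice falls short.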
Only a few exceptions have stressed the peculiarity of experimentation in computer
science and engineering and emphasized that the term experiment can be used in
different ways (Tedre 2015). Moreover, the question of whether it makes sense to
apply the same standards of the natural sciences to the artificial ones has seldom been
asked. The idea that computer science and engineering is an experimental science of
a very special type has been advanced by Allen Newell and Herbert Simon already in
the 1970s (Newell and Simon 1976). Even if the invitation to see each new machine
as an experiment has remained largely unheeded, some exceptions exist: they point
out that experimentation is more multifaceted than usually depicted in computer
science and engineering.
Two elements are particularly important. First, many experiments have the goal
of testing technical artifacts rather than theories. Technical artifacts are physical
objects with a technical function and a use plan, designed by humans in order to fulfill
some practical function (Vermaas et al. 2011). Second, in several cases, experi-
menters are designers, thus losing the independence of the experimenter prescribed
in the classical experimental protocol. This is why I have proposed the notion of
explorative experimentation to account for a part of the experimental practice in
computing that cannot be subsumed under the traditional categories of the epistemic
and controlled experimentation typical of the natural sciences (Schiaffonati 2020).
5 Conclusion
In this chapter, I have suggested that to address some of the issues connected to
digital technologies, the development of appropriate techniques is not enough.
Rather, I have shown that some problems have to be addressed with methods having
a philosophical nature.
To conclude, I emphasize how the framework of explorative experimentation is
connected to the larger issue of the societal impact of digital technologies. The
problem of how this approach can be better adopted in practice remains open. Yet I
argue that a shift in the conceptualization has, at least, two important roles. The first
one concerns the influence of novel epistemic categories, such as explorative
experiments, on ethical ones, as I have discussed in Sect. 4. The second regards
the development of the disciplines of the artificial, to which computer science and
engineering belong, by starting from methodological reflections. This is not only a
disciplinary issue, but has an impact on how humans, digital technologies, and their
interactions are conceptualized in the current discussion on Digital Humanism. If
one of the goals of Digital Humanism is to “shape technologies in accordance with
human values and needs, instead of allowing technologies to shape humans,” it is
essential to recognize the centrality of technical artifacts and sociotechnical systems
in the disciplines of the artificial. Sociotechnical systems are composed of physical
objects, people, organizations, institutions, conditions, and rules. They, thus, have a
hybrid character as they consist of components which belong in many different
“worlds”: not only those requiring a physical description but also those requiring a
social one (Vermaas et al. 2011). So far, the components requiring a physical
description have been addressed by scientific and engineering disciplines. Now it
is time to consider all the components requiring a social description, like the ones
promoted by the humanities and the social sciences, and to develop, accordingly, the
new field of the artificial disciplines which should include and integrate both in a
creative way.
References
Amigoni, F., Schiaffonati, V., Verdicchio, M. (2014) ‘Good Experimental Methodologies for
Autonomous Robotics: From Theory to Practice’, in F. Amigoni, V. Schiaffonati (eds.),
Methods and Experimental Techniques in Computer Engineering, SpringerBriefs in Applied
Sciences and Technology, Springer, pp. 37-53.
Amigoni, F. and Schiaffonati, V. (2018) ‘Ethics for Robots as Experimental Technologies: Pairing
Anticipation with Exploration to Evaluate the Social Impact of Robotics’, in IEEE Robotics and
Automation Magazine, 25, n. 1, pp. 30-36.
Gil, T. G. and Hevner, A. N. (2013) ‘A Fitness-Utility Model for Design Science Research’, in
ACM Transactions on Management Information Systems, 4 (2): 5-24.
Gundersen, O. E., Aha, D., Gil, Y. (2018) ‘On Reproducible AI: Towards Reproducible Research,
Open Science, and Digital Scholarship in AI Publications’, in AI Magazine, 39, n. 3, pp. 56-68.
Hansson, S.O. (2015) ‘Experiments before Science? – What Science Learned from Technological
Experiments’, in Sven Ove Hansson (ed.) The Role of Technology in Science, Springer.
Newell, A. and Simon, H. (1976) ‘Computer Science as Empirical Inquiry: Symbols and Search’,
Communications of the ACM 19 (3): 113-126.
Schiaffonati, V. (2020). Computer, robot ed esperimenti, Milano: Meltemi.
Tedre, M. (2015) The Science of Computing. Boca Raton: CRC Press, Taylor & Francis Group.
van de Poel, I. (2016) ‘An Ethical Framework for Evaluating Experimental Technology’, in Science
and Engineering Ethics, 22, pp. 667-686.
van den Hoven J. (2007) ‘ICT and Value Sensitive Design’, In Goujon P., Lavelle S., Duquenoy P.,
Kimppa K., Laurent V. (eds.) The Information Society: Innovation, Legitimacy, Ethics and
Democracy In honor of Professor Jacques Berleur s.j, IFIP International Federation for
Information Processing, vol 233, Boston: Springer.
Vermaas, P., Kroes, P., van de Poel, I., Franssen, M., Houkes, W. (2011) A Philosophy of
Technology: From Technical Artefacts to Sociotechnical Systems, Morgan & Claypool
Publishers.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0
International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing,
adaptation, distribution and reproduction in any medium or format, as long as you give appropriate
credit to the original author(s) and the source, provide a link to the Creative Commons license and
indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative
Commons license, unless indicated otherwise in a credit line to the material. If material is not
included in the chapter's Creative Commons license and your intended use is not permitted by
statutory regulation or exceeds the permitted use, you will need to obtain permission directly from
the copyright holder.
Digital Humanism and Global Issues
in Artificial Intelligence Ethics
Guglielmo Tamburrini
Abstract In the fight against pandemics and the climate crisis, the zero hunger chal-
lenge, the preservation of international peace and stability, and the protection of
democratic participation in political decision-making, AI has increasing – and often
double-edged – roles to play in connection with ethical issues having a genuinely
global dimension. The governance of AI ambivalence in these contexts looms large
on both the AI ethics and digital humanism agendas.
1 Introduction
Global ethical issues concern humankind as a whole and each member of the human
species irrespective of her or his position, functions, and origin. Prominent issues of
this sort include the fight against pandemics and the climate crisis, the zero hunger
challenge, the preservation of international peace and stability, and the protection of
democracy and citizen participation in political decision-making. What role is AI
playing – with its increasingly pervasive technologies and systems – and will be
likely to play in connection with these global ethical challenges?
The COVID-19 pandemic has raised distinctive challenges of human health and
well-being protection across the planet, which are inextricably related to worldwide
issues of economic resilience and protection of education, work, and social life
participation rights. Artificial Intelligence (AI) has the potential to become a major
technological tool to meet pandemic outbursts and the attending ethical issues.
Indeed, infection spreading data and machine learning (ML) technologies pave the
way to computational models for predicting diffusion patterns, identifying and
assessing the effectiveness of pharmacological, social, and environmental measures,
up to and including the monitoring of wildlife ecological niches, whose preservation
is so important to restrain frequent contacts with wild animal species and related
virus spillovers. Similarly, AI affords technological tools to optimize food
G. Tamburrini (*)
Università di Napoli Federico II, Naples, Italy
e-mail: guglielmo.tamburrini@unina.it
production and distribution so as to fight famines and move toward the zero hunger
goal in the UN sustainable development agenda.
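To give a concrete, toy-scale sense of the diffusion models just mentioned, the sketch below integrates the classical SIR compartment model with a simple Euler step; this is textbook epidemiology rather than any specific AI system, and all parameter values are illustrative placeholders:

```python
def sir(beta=0.3, gamma=0.1, s=0.99, i=0.01, r=0.0, days=160):
    """Yield daily (S, I, R) population fractions via a simple Euler step."""
    for _ in range(days):
        new_inf = beta * s * i   # susceptibles becoming infected today
        new_rec = gamma * i      # infected recovering today
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        yield s, i, r

peak = max(i for _, i, _ in sir())
print(f"peak infected fraction: {peak:.2f}")  # roughly 0.3 with these values
```

ML-based approaches of the kind discussed above typically go beyond such fixed-parameter models by learning contact and mobility patterns from data, but the basic compartmental logic remains a common backbone.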
Failures to use effective AI technologies to fight pandemics and world hunger
may qualify as morally significant omissions. Along with these omissions, another
source of moral fault may emerge from the ethically ambivalent roles that AI is
actively assuming in the context of other global challenges. On the one hand, AI
models may contribute to identifying energy consumption patterns and corresponding
climate warming mitigation measures. On the other hand, AI model training and
related big data management produce a considerable carbon footprint. Similarly, AI
military applications may improve Communications, Command, and Control
(C3) networks and enhance both precision and effectiveness of weapons systems,
leading to a reduction of military and civilian victims in warfare situations. And yet,
the ongoing AI arms race may increase the tempo of conflicts beyond meaningful
human control and lower the threshold to start conflicts, thereby threatening inter-
national peace and stability. Just as importantly, AI systems may help one
retrieve the diversified political information that is needed to exercise responsi-
ble democratic citizenship. However, in both authoritarian and democratic countries,
AI systems have been already used to curtail freedom and participation in political
decision-making.
As exemplary cases of AI playing ambivalent roles in global ethical issues, I will
focus here on the climate crisis and the preservation of global peace and international
stability. Universal human values and needs that are prized by digital humanism play
a crucial role in the governance of such AI ambivalence.
Within the widely differentiated ICT sector, extensive discussion is under way
about the energy consumption of some non-AI software, such as blockchain and other
cryptocurrency software, which is estimated to consume amounts of energy
exceeding the energy needs of countries like Ukraine or Sweden (https://cbeci.org/
cbeci/comparisons/). In contrast with this, it is still unclear which fraction of the ICT
sector energy consumption can be specifically attributed to AI in general or to
machine learning or other prominent research and commercial subfields in particular.
Available data are mostly anecdotal. It was estimated that GPT-2 and GPT-3 –
successful natural language processing (NLP) models for written text production
developed by ML techniques – were trained by means of huge amounts of textual
data and gave rise to a carbon footprint comparable to that of five average cars
throughout their lifecycle (Strubell et al. 2019). More systematic assessment efforts
are clearly needed.
Considering the increasingly pervasive impact of AI technologies, the White
Paper on AI released in 2020 by the European Commission recommends addressing
the carbon footprint of AI systems across their lifecycle and supply chain: “Given the
increasing importance of AI, the environmental impact of AI systems needs to be
duly considered throughout their lifecycle and across the entire supply chain, e.g., as
regards resource usage for the training of algorithms and the storage of data”
(EU 2020, p. 2). However, one should carefully note that developing suitable metrics
and models for estimating the AI carbon footprint at large is a challenging and
elusive problem. To begin with, it is difficult to precisely circumscribe AI within the
broader ICT sector. Moreover, a sufficiently realistic assessment requires one to
consider wider interaction layers between AI technologies and society, including
AI-induced changes in work, leisure, and consumption patterns. These wider inter-
action layers have proven difficult to encompass and measure in the case of various
other technologies and systems.
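As a rough indication of what even a first-order metric involves, the sketch below estimates the operational carbon footprint of a single training run in the accounting style of Strubell et al. (2019): energy drawn by the hardware, multiplied by a data-centre overhead factor and the carbon intensity of the grid. All numeric values are illustrative assumptions, not measurements:

```python
def training_co2_kg(gpu_count, hours, gpu_power_kw=0.3, pue=1.6,
                    grid_kgco2_per_kwh=0.4):
    """kg CO2 = hardware energy * data-centre overhead (PUE) * grid carbon mix."""
    energy_kwh = gpu_count * hours * gpu_power_kw * pue
    return energy_kwh * grid_kgco2_per_kwh

# e.g., 64 GPUs training for two weeks (all figures are placeholders):
print(f"{training_co2_kg(64, 24 * 14):,.0f} kg CO2")
```

Note that such a formula captures only the operational energy of one run; hardware manufacturing, hyperparameter searches over many runs, and deployment-time inference, which the lifecycle view quoted above asks for, fall outside it.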
Without belittling the importance and the difficulty of achieving a sufficiently
realistic evaluation, what is already known about the lifecycle of both exemplary AI
systems like GPT-2 and GPT-3 and the supply chain of big data for ML suffices
to spur a set of interrelated policy questions. Should one set quantitative limits to
energy consumption for AI model training? How are AI carbon quotas, if any, to be
identified at national and international levels? How to distribute equitable shares of
limited AI resources to business, research, and public administration? Who should
be in charge of deciding which data for AI training to collect, preserve, and
eventually get rid of for the sake of environmental protection? (Lucivero 2019).
Only by addressing these issues of environmental justice and sustainability can AI be
made fully compatible with the permanence on our planet of human life and the
unique moral agency that comes with it, grounding human dignity and the attending
responsibilities that our species has toward all living entities (Jonas 1979).
The protection of both human life and dignity has been playing a crucial role in the
ethical and legal debate about autonomous weapons systems (AWS), that is,
weapons systems that are capable of selecting and attacking military objectives
without requiring any human intervention after their activation. The wide spectrum
of positions emerging in this debate has invariably acknowledged as a serious
possibility the occurrence of AWS suppressing human lives in violation of Interna-
tional Humanitarian Law (IHL) (Amoroso and Tamburrini 2020). Indeed, AI per-
ceptual systems, developed by machine learning and paving the way to more
advanced AWS, were found by adversarial testing to incur unexpected and
counter-intuitive errors that human operators would easily detect and avoid. Notable
in the AWS debate context is the case of a school bus taken for an ostrich (Szegedy
et al. 2014). Since properly used school buses and their passengers are protected by
IHL distinction and proportionality principles, the example naturally suggests the
following question: Who will be held responsible for unexpected and difficult to
predict AWS acts that one would regard as war crimes, had they been committed by
a human being?
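As a deliberately simplified illustration of such adversarial errors, the sketch below perturbs the input of a toy linear classifier, for which the input gradient is analytic; the single sign-of-the-gradient step is the later “fast gradient sign” simplification of the optimization attack used by Szegedy et al. (2014), and everything here is a hypothetical toy, not a model of any weapons system:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=100)   # weights of a toy linear "image classifier"
x = rng.normal(size=100)   # a toy "image"; score > 0 means class A, < 0 class B

score = float(w @ x)
eps = 0.25                 # small per-feature perturbation budget
# Nudge every feature against the gradient of the score (which is simply w):
x_adv = x - eps * np.sign(w) * np.sign(score)

print(score, float(w @ x_adv))  # the sign typically flips under this tiny step
```

The unsettling point the text draws on is that the perturbation can stay far below what a human observer would notice while still changing the classifier’s verdict.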
The use of AWS has been additionally claimed to entail a violation of human
dignity (Amoroso and Tamburrini 2020, p. 5). Robert Sparrow aptly summarized
this view, pointing out that the decision to take another person’s life must be
compatible with the acknowledgement of the personhood of those with whom we
interact in warfare. Therefore, “when AWS decide to launch an attack, the relevant
interpersonal relationship is missing,” and the human dignity of the potential victims
is not recognized. “Indeed, in some fundamental sense there is no one who decide
whether the target of the attack should live or die” (Sparrow 2016, pp. 106–7).
These various concerns about IHL and human dignity respect have been upheld
since 2013 by the international Campaign to Stop Killer Robots in advocacy for a
ban on lethal AWS. The Campaign has also extensively warned that AWS may raise
special threats to international peace. The latter is a fundamental precondition for the
flourishing of human life that any sensible construal of humanism as a doctrine and
movement – including digital humanism – is bound to recognize as a highly prized
value. AWS threaten peace by making wars easier to wage on account of reduced
numbers of involved soldiers, by laying conditions for unpredictable runaway
interactions between AWS on the battlefield, and by accelerating the pace of war
beyond human cognitive and sensory-motor abilities.
AI may bring about threats to international peace and stability in the new
cyberwarfare domain too. Indeed, AI learning systems are expected to become
increasingly central there, not only for their potential to expand cyberdefence
toolsets but also to launch more efficient cyberattacks (Christen et al. 2020, p. 4).
Cyberattacks aimed at nuclear weapons command and control networks, at the
hacking of nuclear weapons activation systems, or at generating false nuclear attack
warnings raise special concerns. Accordingly, the confluence of AI cyberweapons
with nuclear weapons intensifies that distinctive threat to the permanence on our
planet of human life and moral agency that physicists and other scientists have been
publicly denouncing at least since the Russell-Einstein Manifesto in 1955.
From the development of AWS to AI systems for discovering software vulner-
abilities and waging cyberconflicts, an AI arms race is well under way. The
weaponization of AI should be internationally regulated and the AI arms race
properly bridled. Digital humanism, with its analyses and policies inspired by
universal ethical values and the protection of human dignity, has a central role to
play in this formidable endeavor.
4 Concluding Remarks
The AI ethics agenda has been mostly concerned with ethical issues arising in
specific AI application domains. Familiar cases are issues arising in connection
with automatic decisions about loans, careers, and job hiring, with insurance premium
evaluation, or with parole-granting tribunal judgments. Selectively affecting designated groups
of stakeholders, these may aptly be called local ethical issues. Here, the focus has
been placed instead on AI ethics issues that are global, insofar as they impact on
humankind and all members of the human species as such. The climate crisis and the
AI arms race have been used as exemplary cases to illustrate both the difference
between local and global ethical issues and the need for a proper governance of AI
ethically ambivalent roles. Last but not least, it has been argued that the ethical
governance of this ambivalence makes crucial appeal to universal human values that
any doctrine or movement deserving the name of digital humanism must endorse and
support in the context of the digital revolution.
References
Amoroso D., Tamburrini G. (2020) ‘Autonomous Weapons Systems and Meaningful Human
Control: Ethical and Legal Issues’, Current Robotics Reports 1(7), pp. 187–194. https://doi.
org/10.1007/s43154-020-00024-3
Blair G. S. (2020) ‘A tale of two cities: reflections on digital technology and the natural environ-
ment’, Patterns 1(5). https://www.cell.com/patterns/fulltext/S2666-3899(20)30088-X
Christen M., Gordijn B., Loi M. (eds) (2020). The Ethics of Cybersecurity. Cham: Springer. https://
link.springer.com/book/10.1007%2F978-3-030-29053-5
European Commission (2020). White paper on AI. A European approach to excellence and trust,
Bruxelles, 19 February 2020. https://ec.europa.eu/info/sites/info/files/commission-white-paper-
artificial-intelligence-feb2020_en
Jonas H. (1979). Das Prinzip Verantwortung. Versuch einer Ethik für die technologische
Zivilisation. Frankfurt am Main: Insel-Verlag.
Lucivero, F. (2019) ‘Big data, big waste? A reflection on the environmental sustainability of big
data initiatives’, Science and Engineering Ethics, 26, pp. 1009–30. https://doi.org/10.1007/
s11948-019-00171-7
Rolnick D. et al. (2019) ‘Tackling Climate Change with Machine Learning’, arXiv:1906.05433.
https://arxiv.org/abs/1906.05433
Sparrow R. (2016) ‘Robots and Respect: Assessing the Case Against Autonomous Weapon
Systems’, Ethics & International Affairs 30(1), pp. 93-116.
Strubell E., Ganesh A., McCallum A. (2019) ‘Energy and Policy Considerations for Deep Learning
in NLP’, arXiv:1906.02243. https://arxiv.org/abs/1906.02243
Szegedy Ch. et al. (2014) ‘Intriguing properties of neural networks,’ arXiv:1312.6199v4. https://
arxiv.org/abs/1312.6199v4
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0
International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing,
adaptation, distribution and reproduction in any medium or format, as long as you give appropriate
credit to the original author(s) and the source, provide a link to the Creative Commons license and
indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative
Commons license, unless indicated otherwise in a credit line to the material. If material is not
included in the chapter's Creative Commons license and your intended use is not permitted by
statutory regulation or exceeds the permitted use, you will need to obtain permission directly from
the copyright holder.
Our Digital Mirror
Erich Prem
Abstract The digital world has a strong tendency to let everything in its realm
appear as resources. This includes digital public discourse and its main creators,
humans. In the digital realm, humans constitute the economic end and at the same
time provide the means to fulfill that end. A good example is the case of online
public discourse. It exemplifies a range of challenges from user abuse to amassment
of power, difficulties in regulation, and algorithmic decision-making. At its root lies
the untamed perception of humans as economic and information resources. In this
way, digital technology provides us with a mirror that shows a side of what we are as
humans. It also provides a starting point to discuss such questions as: who would we
like to be, including digitally? Which purposes should we pursue? And how can we
live the digital good life?
For Antoine de Saint-Exupéry (1939), airplanes can become a tool for knowledge
and for human self-knowledge. The same is true for digital technologies. We can use
the digital as a mirror that reflects an image of what we are as humans. And when we
look closely, it can become an opportunity to shape who we are in ways that make us
more attractive.
It is perhaps surprising to talk about attraction in the context of digital humanism.
Much of its discourse to date is about the downsides of digital technologies: human
estrangement, monopolistic power, unprecedented surveillance, security attacks, and
other forms of abuse. However, the discourse of digital humanism is not entirely
negative. Its proponents believe in progress, including by technological means. It is a
constructive endeavor and acknowledges that many good things have come from
digital technologies. Still, digital humanism calls for better technology and an
improved way of shaping our future with the help of digital tools. It demands
technologies shaped in accordance with human values and needs instead of allowing
technologies to shape humans in often unpredictable and undesirable ways.
E. Prem (*)
eutema GmbH and Vienna University of Technology, Vienna, Austria
e-mail: prem@eutema.com
The advent of the internet significantly impacted the way we speak in public. From early newsgroups to today's online social networks, this development deserves its own, more thorough investigation. Today, public discourse without the digital has become almost unthinkable. At the same time, the phenomenon of online discourse is now a major concern of policy makers as well as of critical thinkers, researchers, and many citizens. Its shortcomings, from fake news to echo chambers, from foreign political influence to the spread of illegal content, are blamed on its digital form and, hence, on technology. Some key challenges include the following:
– Platforms exploit discourse to drive user behavior. They can prioritize emotional content over facts, nudge users into staying online, and have become effective channels for influencing user behavior, including political decisions.
– Algorithms supervise and police user-contributed online content with the aim of detecting illegal material, spotting infringements of intellectual property, removing what may be considered harmful, etc.
– There is a massive shift of power over discourse from its traditional rulers, such as media, politicians, and thinkers, to digital platforms.
– Discourse on online platforms has proven enormously difficult for any single country to regulate. The only exceptions rely on massive investment in surveillance, censorship, and severe limitation of freedom of expression, as in China.
content, and many seem to suggest that any content that is potentially harmful should be removed. While the former is usually defined in legal texts and practice, the latter is typically ill-defined and lies at the networks' discretion. The ensuing discussions of democratic parliaments and nongovernment think tanks then concern freedom of expression as a basic or human right, censorship, regulation, etc. (Cowls et al. 2020). While these are important and difficult discussions, a more essential line of thinking is required, namely, the question of what the essential qualities of online discourse should be. It is another typical characteristic of digital technologies that we can rarely do away with them once they have been rolled out. We therefore need to have productive, forward-looking discussions. This can include a debate about how much "harm" a discourse may have to include to be productive, to stimulate, or to provoke. We need to discuss not only the formal qualities of discourse but also what its purpose should be, who should partake, and whom it should serve.
2 Scaffolding Discourse
The challenges of digital technologies are not rooted exclusively in the fact that they are digital as opposed to analogue, nor in their ubiquitous nature and the ease with which digital technologies can manage large numbers. They are rooted in how these technologies affect our basic conceptions of the world. Although the technical characteristics are important, the digital currently facilitates commercial gains of a specific character at an unprecedented scale. We mentioned how online discourse provides a basis for targeted advertising, for data harvesting, and for the construction of predictive behavioral models. This exploitation of online discourse lets discussions appear as a resource in the digital sphere. The digital (platform) perspective thus regards human language from the standpoint of observability and predictability. The resulting digital online sphere consists of (mostly) humans, who provide linguistic resources and their online presence, and of businesses that require humans to be predicted and targeted with advertising. In this discourse, humans become a resource in the digital realm.
Such a resource-focused perspective is not unique to digital technology. As early as 1954, Heidegger suggested that this specific way of letting everything appear as a resource lies in the very nature of modern technology (Heidegger 1954). In his terminology, technology lets everything become part of an enframing ("Ge-stell") as resource ("Bestand"). Heidegger uses the example of the river that appears as a power source once we start building electricity stations. Digitization not only provides such enframing for various objects and phenomena in our environment; it additionally, and much more than previous, older technologies, enframes us as humans. It is perplexing that in the digital realm, the human is both the source and the sink. Humans constitute the economic end and at the same time provide the means to fulfill that end. Humans stand in reserve to the extent that they are simply either data or money generators. From an economic viewpoint, Zuboff (2019) identified a similar concept drift underlying surveillance capitalism. It is a strong
References
Cowls J., Darius P., Golunova V., Mendis S., Prem E., Santistevan D., Wang W. (2020) Freedom of Expression in the Digital Public Sphere [Policy Brief]. Research Sprint on AI and Content Moderation. Retrieved from https://graphite.page/policy-brief-values
de Saint-Exupéry A. (1939) Wind, Sand und Sterne (Terre des hommes). Düsseldorf: Karl Rauch.
Deibert R.J. (2020) Reset. Toronto: House of Anansi Press.
Doridot F. (2008) Towards an 'engineered epistemology'? Interdisciplinary Science Reviews, 33:3, 254-262. DOI: https://doi.org/10.1179/174327908X366941
Heidegger M. (1954) Die Frage nach der Technik. (The question concerning technology.) Vorträge und Aufsätze (1990), Pfullingen: Neske.
Stalder F. (2016) Kultur der Digitalisierung. (Culture of digitization.) Frankfurt/Main: Suhrkamp.
Zuboff S. (2019) Surveillance capitalism. London: Profile Books.
Part IV
Information Technology and the Arts
Fictionalizing the Robot and Artificial
Intelligence
Nathalie Weidenfeld
Abstract This text explores the contemporary fascination with robots and digitality and points out how it distorts our view of what digitization can do for us. It pleads for a realistic, non-fictionalized view of robots and artificial intelligence.
Nothing in our contemporary popular culture, so it seems, has been more fascinating than the fantasy of robots and the idea of an upcoming total digitization of our society. Hollywood blockbusters are filled with evil, sometimes beautiful robots and with dreams of eternal, or at least alternative, lives made possible through digital means.
Films, however, are not only expressions of deep-seated fears and hopes; they also create a cultural imaginary that feeds those fears and hopes, forming a more or less closed loop. Now, artistic creations are and should of course be free to do many things; they may create unrealistic settings and invent dramatic premises that make us wonder. Problems only arise when readers or viewers forget to read these films correctly, that is, metaphorically. Take a film about a society in which robots are used as personal slaves for household chores, where the protagonist must learn to overcome his prejudice toward these robots. Does this film incite us to think about our relationships to future robots? No, because this film is not about robots: it is a metaphorical tale about humans dealing with humans, in which robots represent underprivileged humans.
Digitalization and artificial intelligence pose many problems for society and culture. It is therefore of utmost importance to see them for what they are in order to judge their potentials and their dangers realistically. An inadequate "import" of fiction into reality is useless and unproductive.
In order to see more clearly what exactly has been imported from fiction into reality, let's take a closer look at narratives focused on AI. When we look at films – particularly those of the last 20 years – dealing with the topos of the robot, we can
discern two types: the good, innocent, and sometimes even spiritual robot and the bad, demonic, and evil robot. These two stereotypes are an expression of a paradigm that can be called "primitivist." The primitivist paradigm is a "cultural reflex" of a Western society that needs to construct an "Other" which can then be used as a mirror (Torgovnick 1990): a mirror onto which one can project one's own beloved or hated properties. American Indians have long served as the Other within the primitivist order – not only in the time of the European Enlightenment but also in US American culture: for centuries, Indians were portrayed either as bloodthirsty, demonic savages or as innocent and spiritually superior people. The American Indian as a topos remains an obsession of US American novels and films, starting with seventeenth-century narratives of Puritan settlers abducted by bloodthirsty Indians, through the nineteenth century, when narratives of the noble Indian became popular, up to the New Age image of the spiritually and morally idealized Indian.
Societies not only fashion an imaginary Other out of already existing real persons; from time to time, they also invent one outright. The best example of this is the Alien, which became a primitivist topos in the 1980s. Aliens were portrayed along the same lines as the Indians before them: they were either evil and bloodthirsty or good and spiritually highly developed (Weidenfeld 2007a, b). Today, robots have taken over the role of the Aliens. This primitivist mode of conceptualizing robots has a deep influence on present society. When Elon Musk or even Stephen Hawking warns us of lurking dangers and of the threats that robots pose to humanity, they serve the same old primitivist cliché of the evil Other. What they are doing is not an adequate or realistic description but rather an inscription into an already existing narrative, which is at the same time re-introduced into the world.
When one looks into narratives and images of digitization, one encounters similar mechanisms. Digitization is seen as bringing about either a digital paradise or a digital hell. The idea of a digital heaven is often invoked by images of clarity, a blue sky, and an overwhelming universe of all-connectedness. Digital heaven is a puritan heaven: it is based on the idea of transparency, clarity, and purity. In a digital universe, things are either "0" or "1"; there is no ambiguity. The idea of a "natural" teleological development is also often suggested in visual imagery which tells us that man has evolved from ape, to homo sapiens, to homo digitalis – often represented as a man holding a portable phone in hand. The digital future becomes a millenarian prophecy filled with utopian hopes and desires, such as the desire for salvation and eternal life.
In past centuries, technological developments have often been accompanied by unrealistic utopian visions. When Henry Ford pioneered the mass production of the automobile, he was convinced that this new technology would bring about peace and prosperity for all: "We are about to enter a new era because when we have everything, freedom and paradise on earth will come. Our new way of thinking and acting will bring us a new world, a new heaven and a new earth, as prophets have longed for since age old times" (Ford 1919). Roughly a hundred years after the automobile, it is digitization that will supposedly bring salvation. CEOs in Silicon Valley fill their presentations with images that suggest as much.
References
How to Be a Digital Humanist
in International Relations: Cultural Tech
Diplomacy Challenges Silicon Valley
C. Blume and M. Rauchbauer
Technology1 has become the most pressing question of our time. Who creates it, who controls it, who has access to it, and who doesn't are the new parameters that determine emerging power structures around the world. Until recently, most of these decisions were taken unilaterally by a handful of tech entrepreneurs in Silicon Valley and supported by the laws of a free and unregulated market, the obscure inner workings of which regulators around the world neither understood nor wanted to inhibit. Built upon this nurturing humus, Silicon Valley has experienced a long period of economic boom fueled by advances in technology. Globally admired for its ability to attract a seemingly limitless influx of human talent and venture capital, the region has become a symbol of youthful utopian optimism and
1 "Technology" in this article refers to frontier technologies, such as artificial intelligence, quantum computing, blockchain, and biotechnology, among others, that are considered main drivers of the Fourth Industrial Revolution and have created a diverse ecosystem of technology companies. Information technology (IT) companies are a subset of these tech companies.
international relations based on a new digital humanism want to create and advocate for global frameworks for technology that preserve our universal human rights.
International diplomacy is practiced through its own language code. In multilateral fora such as the United Nations, where new ethical principles, norms, standards, and even instruments of international law are negotiated, different nations rely on different words to camouflage their underlying agendas and sometimes to embellish reality. Authoritarian states, for instance, rely heavily on the words sovereignty (Applebaum 2020) and non-interference to push back when pressed on recent online human rights violations. It is through these dog whistles that human rights are increasingly under attack, not only in practice but also conceptually. We are also facing an increasingly polarized and fragmented world, in danger of creating parallel ideological universes, conceptual frameworks, and a separate digital infrastructure for new technologies (Lee 2018). Against this emerging geopolitical divide, diplomats often struggle to address common concerns that are based on our shared humanity. Digital humanism lends itself to becoming a universally accepted concept for international diplomacy, one that can rally countries with different political systems, cultures, and histories around a certain set of values without compromising on universally recognized human rights. As a tech policy compass, it could guide nation-states and tech companies to work together on the digital transformation of our international system. Digital humanism could be the safe space where even ideological adversaries and competitors could find common ground. It is in everyone's interest to develop technologies that are trusted by consumers around the world. Even digital authoritarian countries, which engage in an arms race for global influence and power, have no interest in creating rogue autonomous weapons systems that they themselves may one day lose control over. Tech companies are also in dire need of a compass that values culture over strategy as a necessary environment for innovation and growth. Digital humanism could very well provide the tool kit for a new corporate culture of big and small tech companies and provide orientation in times of global tectonic shifts.
Digital humanism can also serve as a blueprint for nation-states that want to engage with the global tech industry. Since some big players in technology boast annual revenues that easily match the GDP of smaller nation-states, diplomacy needs to rethink what it means to be an international actor in the digital age. One such innovative approach in international relations is the emerging field of tech diplomacy, applied by an increasing number of nation-states and other actors in Silicon Valley. The Danish initiative of "techplomacy" was a milestone for international diplomacy and received global attention. In 2020, Austria followed suit as the second country in the world to appoint a Tech Ambassador to Silicon Valley. By mediating between governments and big tech, tech diplomacy addresses an imbalance in information and competence. Traditionally, tech companies have lobbied for their interests in political centers of influence. Today, governments are sending diplomats to lobby for the interests of their citizens in the technological centers of influence. Of course, tech diplomacy is reciprocal and therefore also attractive for tech companies. They can see their prestige and influence on the international system formally recognized by being included in a practice traditionally reserved for sovereign
governments. But tech companies have other incentives as well. They employ their own tech diplomats to show the world that their long-term interests go beyond the immediate goal of making a profit. One might argue that their business depends on a stable, rules-based international system and is strengthened by a shared set of values and by their technology being developed through the lens of common ethical principles.
Tech diplomacy is only one way that governments and tech companies interact with each other. CEOs of big platforms often meet and consult with government leaders directly. But the impact of these encounters seldom trickles down to the institutional level of government; such meetings lack continuity and often a clear agenda. Tech diplomacy, on the other hand, institutionalizes the relationship between the tech industry and governments by creating long-lasting relations and mechanisms that can be activated in times of crisis and need. To be clear, tech companies are not states. However, in some areas, their growing power challenges traditional realms of government (Wichowski 2020). Sovereign countries may control their territories, but big tech platforms control the "digital territories" of their online communities and can define rules that have a spillover effect on public life in general. Tech diplomats need to function as advocates of their citizens and hold tech companies accountable. However, tech diplomacy is also about seeking alliances and common ground where interests and values align. In this context, a new digital humanism in diplomacy can also lead democratic governments and tech companies who share a set of principles and beliefs to collaborate in the fight against digital authoritarianism globally.
In addition, foreign policy actors need to take into account that there is an inherent difference between human beings and their digital avatars. The consequences of merging the real and online worlds, while we gradually turn into digital humans, are not yet clear. In order to understand the many layers that constitute a digital human, policy makers, tech companies, and technologists need to break out of their respective silos and start working together to ensure that basic rights are met regardless of these differences. The novel interdisciplinary concept of cultural tech diplomacy was pioneered by Open Austria, the official Austrian government representation in Silicon Valley. Traditional cultural diplomacy uses art and culture as a soft power tool (Nye 1990, pp. 153–171) to influence other nation-states and their citizens by means of attraction rather than coercion. Think of the successes of Hollywood, Rock 'n' Roll, and David Hasselhoff inspiring young Germans to "look for freedom," which helped to end the Cold War. By combining the immense potential of cultural soft power with tech diplomacy, the Austrian diplomatic mission in Silicon Valley creates a safe space for dialogue with tech partners that is rooted in artistic experimentation. Art lends itself to exploring topics that in other arenas would be prone to conflict and would trigger political controversies and partisan agendas. Being the "fool" who is able to speak the truth has a lot of merit, especially in the delicate inner workings of the international system.
A recent study commissioned by The Grid, an art + tech network led by Open Austria, highlights the asymmetry of power between artists and the tech companies that limit access to their technologies. But this asymmetry "obscures the foundational role that the region's counterculture has played in the emergence of
big tech – and the values of techno-utopianism, flattened hierarchies, flexibility, and so on that have guided the industry," as the report's author Vanessa Chang (2020) points out. Artists have historically contributed greatly to Silicon Valley's world-conquering success formula. It is not only fair but imperative that artists once again be included in the creative process that constitutes the development of new tech. It is essential to reassign value to artistic practices inside the technology sphere. Complementary to the top-down regulatory approach that reacts to existing technology, cultural tech diplomacy wants to proactively shape technology bottom-up during the conception, research, and development of new tech products and services. For this model to be successfully implemented, tech companies need to make themselves vulnerable to the arts by opening up their R&D labs to artists and philosophers who can provide a whole new perspective on an old set of problems. Artists as digital humanists are uniquely equipped to explore the potential and pitfalls of frontier technologies in an unconventional and experimental way that has the additional benefit of educating not only technologists but also the broader population about what it means to be a digital human in the age of artificial intelligence.
At a time of shifting core values, an acceleration of digitalization, and a changing
international system, diplomats today need to be agile and transform into digital
humanists in order to successfully navigate the challenges of their trade. Cultural
tech diplomacy could be a winning proposition that paves the way for the future of
international relations in the digital age.
References
Applebaum, Anne (2020) How China outsmarted the Trump administration. While the U.S. is distracted, China is rewriting the rules of the global order. Washington, DC: The Atlantic, Nov 2020
Chang, Vanessa (2020) The Grid: Art + Tech Report 2020. San Francisco: EUNIC Silicon Valley, December 2020
Kurzweil, Ray (2005) The Singularity Is Near: When Humans Transcend Biology. New York: Penguin
Lee, Kai-Fu (2018) AI Superpowers: China, Silicon Valley, and the New World Order. Boston: Houghton Mifflin Harcourt
Markoff, John (2005) What the Dormouse Said: How the Sixties Counterculture Shaped the Personal Computer Industry. New York: Viking Press
Moravec, Hans (1998) Robot: Mere Machine to Transcendent Mind. Oxford University Press
Nida-Rümelin, Julian, Weidenfeld, Nathalie (2018) Digitaler Humanismus – Eine Ethik für das Zeitalter der Künstlichen Intelligenz. München: Piper Verlag GmbH
Nye, Joseph S. Jr (1990) Soft Power. Foreign Policy, No. 80, Twentieth Anniversary, Autumn 1990
O'Mara, Margaret (2020) Silicon Politics. Communications of the ACM, December 2020, Vol. 63, No. 12
Wichowski, Alexis (2020) The Information Trade: How Big Tech Conquers Countries, Challenges Our Rights, and Transforms Our World. New York: HarperCollins
We Are Needed More Than Ever:
Cultural Heritage, Libraries, and Archives
Anita Eichinger and Katharina Prager
The Corona crisis has given digitization a huge boost in many areas, which now need to be developed further in a strategic, co-designing way – without subordinating ourselves to technologies but shaping them. The City of Vienna currently encourages and orchestrates innovative ideas, collaborative action, and social engagement in this field through its framework initiative "digital humanism." With Anita Eichinger, the Vienna City Library was involved from the very beginning and took on the mentorship of the working group on arts and culture.1
The Vienna City Library,2 often described as “memory of the city,” has devel-
oped over the last 170 years from an inconspicuous administrative library (founded
in 1856) into a representative city library, a municipal cultural institution, and one of
the most important scholarly archives with a special focus on Vienna. It holds around
6,500,000 volumes of books on Vienna (Viennensia and Austriaca); 1600 bequests
of important inhabitants of the city (such as Franz Grillparzer, Marie von Ebner-
Eschenbach, Franz Schubert, Johann Strauss, Karl Kraus, or Friederike Mayröcker),
some of them part of the UNESCO Memory of the World; and one of the world's largest poster collections. Apart from these central holdings, it also collects newspaper clippings, historical photographs, sermons, leaflets, travel reports, cookbooks, and much more. Over the past decade, there has been a strong focus on retro-digitization – and while digital accessibility of materials remains a priority, the library is concentrating on research, innovation, and digital humanism in the years to come.
1 The working group consists of Gerald Bast, Daniel Löcker, and Carmen Aberer (MA 7); Irina Nalis, Elfriede Penz, Erich Prem, Eveline Wandl-Vogt, and Hannes Werthner; as well as Anita Eichinger and Katharina Prager (MA 9). It formulated a position paper on "Digital Humanism and Arts/Culture."
2 See https://www.wienbibliothek.at/english and https://www.digital.wienbibliothek.at/.
Although the Vienna City Library is not an art and cultural institution per se, it is an important interface in the cultural field, with expertise in safeguarding and mediating cultural assets.
In our current global situation – where we are dealing with the monopolization of the web, the spread of extremist attitudes, anti-factualism, filter bubbles and echo chambers, the loss of privacy, and many other problems – the importance of archives and libraries cannot be overstated. The historian Jill Lepore recently traced these connections and the history of evidence, proof, and knowledge in her podcast "The Last Archive."3
3 See https://www.thelastarchive.com/.
Keeping track, filing, and cataloguing are important tools of bibliographic control. However, it is also essential to understand creativity, imagination, social and critical competence, changes of perspective, inclusion, diversity, and much more as central contents of cultural and artistic activity. The abovementioned working group on arts and culture made it clear in its position paper that art, culture, and the competences of creative practitioners must, as fundamental factors, help shape digital humanism – a matter that has long been obvious to the technology monopolies, for example.
It can be argued that libraries and archives also have some experience in combining creativity with order, or chaos with systematics, and have been adapting their practices to the logics of human art and knowledge production over centuries.
The Vienna City Library keeps the historical records of a city renowned world-
wide as a city of culture and – in the last decades – also as a center of social,
scientific, and technological innovation. In this respect, it can also refer to its
legendary intellectual history: Before and after World War I, fin-de-siècle Vienna
and Red Vienna achieved international significance in the cultural and social sphere.
After the end of the Austrian monarchy in 1918, the city’s leaders, together with its
intellectuals, boldly “imagined a new society that would be economically just,
scientifically rigorous, and radically democratic. ‘Red Vienna’ undertook experi-
ments in public housing, welfare, and education while maintaining a world-class
presence in science, music, literature, theater, and other fields of cultural production”
(McFarland et al. 2020). The roots of the ideas that came to life in the first Austrian
republic were often already established in fin-de-siècle Vienna. Mostly, they were a
reaction to profound societal, technological, and media-related changes. The fields of medicine, economics, art, and philosophy reacted to this turmoil and sought new ways of living – Freud's psychoanalysis, the philosophers of the Vienna Circle, or the epochal innovations in music driven by Schönberg and the Vienna School are still known to this day. Today, we are in a similar situation of upheaval. Digital
humanism aims to encourage people to think and to shape the digital future in a new
way. So much for the broader context – but what do digital transformation and
digital humanism involve in the context of the Vienna City Library, whose duty it is
to preserve the cultural heritage of the city – and also for libraries and archives in
general?
Digital humanism calls for a "third way" of digitization: an alternative to Silicon Valley and China, a way that aims neither at profit nor at authoritarianism but at the benefit of humanity. In 2004, the Google Books project started. Google worked together with libraries and publishers around the world to preserve books and to make the information in them accessible to people worldwide. Prominent and huge university libraries have cooperated with Google since that time. Jean-Noël Jeanneney, head of the Bibliothèque nationale de France from 2002 to 2007, cautioned against Google and "Americanization" and argued for a European digital library (Jeanneney 2006). Although Jeanneney's book was polemical, one important conclusion can be drawn from it: cultural heritage is a public good and should therefore remain public property. Digitizing books and other sources in libraries and making them accessible to the public and the scientific world must be the responsibility of public, non-commercial institutions. In parallel to Google, archives, libraries, and museums around the world also started massive programs not only to digitize their collections but also to contextualize them and thereby gain additional value and new insights (e.g., citizen science projects, digital history platforms, digital humanities projects). Libraries are at a turning point. They have good prerequisites and qualifications, but they need to change their perspective on what they have been doing for thousands of years. The Vienna City Library takes digital humanism as a chance to reposition itself. In the following, we outline four important new tasks as part of its strategy to answer the challenges of the digital age.
1 Self/Education
This term is meant to combine two aspects, namely, the mission to educate a wider public as well as oneself as part of a library. First, it should be a central task of libraries to systematically counteract the digital gap and to train critical, responsible users as well as designers of our digital life. Archives and libraries have habitually and for centuries dealt with masses of data, disorder, gaps, and search and selection processes. The systematic, precise handling of data is one of their core competences, and they should learn to impart this knowledge, which is so often a desideratum elsewhere, to a wider audience. In the spirit of digital humanism, this should also increasingly involve marginalized and disadvantaged groups, to whom the culturally transmitted knowledge of a community has often not been easily accessible for various reasons (language, educational background, etc.). The role of librarians and archivists is changing from that of reclusive gatekeepers of hidden treasures to that of guides who help users navigate contradictory data systems. In this context, it is necessary for librarians to acquire the skills of the future and to try out new cultural techniques. These include, among others, dealing with ambiguity and uncertainty; imagination and association; intuition; thinking in terms of alternatives; establishing unconventional contexts; challenging the status quo; and changing perspectives.
2 Participatory Turn
To get this self/education started and to initiate a "participatory turn" in archives and libraries, the Vienna City Library aims at launching a pilot project in the realm of digital humanism. Under the working title "WE make history," the first step will be to enter into cooperation with peer institutions documenting the city's history and to link and visualize all digitized and digitizable sources. A second, essential step will then be to inquire who remains invisible and why. The aim is to make it possible to enrich, supplement, and retell the city's multi-perspective history, in order to offer groups that have been excluded from the creation of cultural heritage the opportunity to contribute their stories and their versions of history. For instance, in the historical address books from 1859 to 1942, only the heads of the households were listed, and they were mostly male. Combining layers of resources and research, we will not only make all the other persons – especially the female half – visible but also take a step further when we ask the Viennese public to enrich these layers of data with life stories, photographs, documents, or other sources. This model project will not only lead to a rethinking of how cultural heritage is conveyed – it will also help to rethink another important area, namely, the question of what a library collects and how.
3 Inclusive Collections
librarians and archivists – from “guardians of the past” to actors who are concerned
with the present, the future, and the construction of social memory (see Self/
Education). Secondly, radical repositioning needs the support and participation of
a broad community (see Participatory Turn). Altogether, it might be useful to
remember that the Vienna City Library always took curatorial liberties when
establishing its special collections. It is important to be aware that changes take
time and can only be achieved one step after another, project by project.
While navigating through these uncertain times of digital transformation in the spirit of digital humanism – by self/education, fostering participation, and reframing our collections – it is helpful to keep our feet grounded and remain physically in place. The closure of archives and libraries due to the COVID-19 restrictions not only led to debates about the value of archival research but also caused a longing for the special atmosphere of these places. In this context, the reading room is not just an arbitrary workplace – although these, too, were nostalgically transfigured in the lockdown – but a special place of exchange that establishes a connection not only between the material and the researcher but also between the archive and the researcher, among the researchers themselves, and, ultimately, between current collective issues and collective memory.
The fundamental question is what will remain of archives and libraries once it has actually become possible to make cultural heritage digitally and barrier-free accessible to all. Will they dissolve as locations, or can they gain new significance as venues for analogue debate and human encounters – and if so, how? This field of tension is opened up by digitization, but digital humanism at least hints at answers as to why spaces of human encounter will remain essential – and the experience of a pandemic confirms this.
5 Conclusions
Libraries, archives, and our overall attitude toward our cultural heritage are at a crucial turning point in times of digital transformation.
On the one hand, the more digital our lives become, the more we need places like libraries to discuss, interact, and invent new, innovative solutions together. On the other hand, the profession of the librarian is often perceived as being in decline and compared to a soon-to-be-extinct species. The contrary is the case. Without librarians, we would give our cultural heritage and, therefore, our cultural identity out of our hands. The question of what to collect in the future and how to preserve and protect the digital heritage can only be discussed in participatory exchange and cooperation of librarians with (citizen) scientists and – last but not least – computer scientists. In this context, librarians need IT experts to understand which spaces they are opening and closing in the digital realm, how they can position themselves meaningfully at the interfaces, and where data is secure. But IT also needs librarians, more than ever, to recognize that key practices for dealing with cultural heritage are already in place and that in many cases they can be transferred into the digital realm. The most crucial thing is to realize that all the challenges ahead can only be met through multidisciplinary exchange and mutual understanding.
References
Jeanneney, Jean-Noël (2006) Googles Herausforderung. Für eine europäische Bibliothek. Berlin: Wagenbach
McFarland, R., Spitaler, G., and Zechner, I. (eds.) (2020) The Red Vienna Sourcebook. London: Camden House
Rivard, C. (2014) Archiving Disaster and National Identity in the Digital Realm: The September 11 Digital Archive and the Hurricane Digital Memory Bank. In: Rak, J. and Poletti, A. (eds.) Identity Technologies: Producing Online Selves. Madison: University of Wisconsin Press, pp. 132–143
Humanism and the Great Opportunity
of Intelligent User Interfaces for Cultural
Heritage
Oliviero Stock
Abstract In the spirit of the modern meaning of the word humanism, if technology aims at the flourishing of humans, it is of the greatest value to empower each human being with the capability of appreciating culture in an inclusive, individually adaptive manner. In this brief chapter, the case is made for the opportunity that intelligent user interfaces can offer specifically in the area of culture, beyond the obvious infrastructural advantages we are all familiar with. Insight is provided into research aimed at the continuous personal enrichment of individuals at cultural sites, approaching the ancient humanistic vision of connecting us to our cultural past, now made possible for all, not just for an elite.
Humanism puts humans at the center of interest for all aspects of life, in philosophical as well as practical terms. Its roots lie in Cicero's term humanitas, which in substance meant the development of all forms of human virtue. Humanism became an important movement in Italy in the fourteenth century, including outstanding figures of culture and art, such as the poet Francesco Petrarca, before spreading to other areas of Europe. Humanism emphasized the connection to classical culture and, in a way, offered to overcome the limits of time. It was not only a passive tribute to ancient culture but an active connection: authors like Petrarca gave meaning to the concept of cultural heritage and went so far as to write letters directly to classical authors.
I really believe we are now at a historical point, one that can steer the human relation to cultural heritage and other cultural aspects in the spirit of a modern, digital humanism. If technology aims at the flourishing of humans, it is of the greatest value to empower each human being with the capability of appreciating culture in an inclusive, individually adaptive manner. In particular, in this brief chapter, I would like to make the case for the opportunity that intelligent user interfaces can offer
specifically in the area of culture, beyond the obvious infrastructural advantages we are all familiar with.
In fact, when it comes to the actual visit, the key element is that nothing should take the place of the emotional experience of being in front of the original artifact. The computer interface to culture, and in general any interface, must not pose limits on a natural experience but should augment it. In addition, the computer-based intervention should connect the current experience to a learning model. We want the interface to take into account the cognitive, emotional, and physical state of the visitor (where he/she is, tiredness, etc.), to be guided by his/her own motivation, tastes, and preferences, and, if need be, to negotiate what should not be missed, but not to impose a rigid agenda. This flexibility and user adaptation require the visitor model, which must be as accurate as possible, to be continuously updated in the course of the visit. Presentation strategies then depend on the media and modalities available but again should take into account movement in space and what was explored previously, so that the presentation, which is necessarily language-based, is appropriately personalized and can refer to what drew the visitor's attention before (Kuflik et al. 2011). Various techniques have been studied, based on hearing, combinations with reading, combinations with seeing images, all the way to dynamic documentaries with a multimedia narration produced automatically for the current situation (Stock et al. 2007). It is also only natural in this context to be able to provide cultural information from different points of view. Cultural descriptions may be controversial, and criticism and diverse viewpoints add to our understanding. On the visual side, various forms of augmentation of what is being perceived have been proposed, for instance, the reconstruction of a building superimposed on the view of the existing fragment (Aliprantis and Caridakis 2019).
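To make the notion of a continuously updated visitor model concrete, here is a minimal, purely illustrative sketch; it is not the model of Kuflik et al. (2011) or of any cited system, and every name, threshold, and the update rule are invented for this example. Observed dwell times update per-topic interest estimates, which a presentation component then uses to rank what to offer next.

```python
from dataclasses import dataclass, field

@dataclass
class VisitorModel:
    """Toy interest model: one score per topic, updated during the visit."""
    interests: dict = field(default_factory=dict)  # topic -> score in [0, 1]
    seen: set = field(default_factory=set)         # exhibit ids already visited
    learning_rate: float = 0.3

    def observe(self, exhibit_id, topics, dwell_seconds):
        """Update topic interests from how long the visitor lingered."""
        signal = min(dwell_seconds / 120.0, 1.0)   # ~2 minutes = strong interest
        for topic in topics:
            old = self.interests.get(topic, 0.5)   # 0.5 = neutral prior
            self.interests[topic] = old + self.learning_rate * (signal - old)
        self.seen.add(exhibit_id)

    def rank_presentations(self, candidates):
        """Order candidate presentations by estimated interest, skipping seen ones."""
        def score(c):
            return sum(self.interests.get(t, 0.5) for t in c["topics"]) / len(c["topics"])
        fresh = [c for c in candidates if c["exhibit_id"] not in self.seen]
        return sorted(fresh, key=score, reverse=True)

# Usage: the visitor lingers at a painting, rushes past an armor display.
model = VisitorModel()
model.observe("klimt_kiss", ["painting", "secession"], dwell_seconds=150)
model.observe("armor_17c", ["military", "craft"], dwell_seconds=10)
candidates = [
    {"exhibit_id": "schiele_portrait", "topics": ["painting"]},
    {"exhibit_id": "cannon_hall", "topics": ["military"]},
]
for c in model.rank_presentations(candidates):
    print(c["exhibit_id"])  # suggests the painting-related exhibit first
```

Real systems add position tracking, richer user features, and negotiation with the visitor, but the loop is the same: observe, update the model, adapt the presentation.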
People tend to visit museums and historical sites in small groups of family or friends. Ethnographers have shown that conversation is a fundamental factor in the success of the cultural experience. Sharing the emotion, discussing, criticizing, and enlarging the perspective help one go deeper, learn, and develop a taste for the cultural experience. For this aspect, too, intelligent interfaces can help. Just to mention one example: while theater in the museum had been proposed for some time, inspired by a mobile theater tradition that we can date to the Middle Ages, an original smartphone-based drama technology for the museum was recently created and initially experimented with. The intelligent technology-based drama system gives an active role to visitors and subtly induces conversation about the exhibit contents among the group members while they move along in their visit (Callaway et al. 2012). This approach is based on dynamically adapted scenes and requires, as enabling technologies, accurate positioning as well as proximity and group behavior detection. It involves distance communication, and it can be exploited to allow elderly or handicapped members of the small group who cannot leave home to participate in the visit.
Another possibility that technology affords is some form of interaction across time: for instance, leaving traces of a visit in the form of spoken comments that could be heard by your grandchildren when they happen to be at the very site you are visiting now. Or, more sophisticated: entertaining a dialog with someone no longer here, through the interpretation of the visitor's utterances by a dialog management system that cleverly understands and composes interview fragments of the departed one (Artstein et al. 2014).
The "after the visit" setting is important for sharing and reflecting on the visit in the group and for consolidating the group experience. At this point, it is obvious that intelligent technology, having a record of the users' competence and current experience, of what each one has seen, of what drew his/her attention, etc., can support the conversation, help reinforce memory, and provide new stimuli for the individual and for specific insights.
So much for the visit to the museum or art site as the focal point of "material" culture. A challenge for the future is also to connect all cultural experiences. The idea is, in the first place, that a system that accompanies you on a visit knows about your previous visits to the present museum or to other ones and about what attracted you; it may have a model of how much you may recall and of what the novel knowledge should be integrated with. More ambitious is the idea of a ubiquitous cultural experience: in all circumstances, the fact that you are nearing a certain site may lead a proactive system to negotiate with your individual model, so as to promote some local cultural aspects related to your interests and to find the best way to get your attention and the time for exploring the site and receiving a personalized presentation (Kuflik et al. 2015). In this spirit, for instance, sites of historical events can be connected to what was learned in a museum, or history narration can be expressed not only in relation to the locations of big events but also as "bottom-up" history. To complete this picture, it could be up to local residents, and especially school projects, to design content that showcases their territory.
Having spoken of the opportunity for cultural heritage appreciation, I would like to mention a different but socially important theme, still related to cultures, in this case mostly in the sense of ethnic aspects. I refer to the use of technology to facilitate overcoming a conflict. Attention has been given to technology for helping solve conflicts by addressing the basic needs of the two sides, in this way supporting decision-makers. Yet, there is a fundamental question concerning laypeople involved in a conflict, a question of recognizing the other and shifting attitudes. Intelligent experimental systems have been designed to facilitate the joint creation of a narrative acceptable to participants in the conflict, and studies have been conducted showing that the experience with such systems can help change attitudes toward the other (Zancanaro et al. 2012).
A final note concerns ethics in interfaces. In most situations I have tried to describe, the key goal is to motivate individuals and have them find pleasure and interest in going deeper into cultural heritage. Even more obvious is the case of group activity, including the last-described goal of nudging participants toward a shared narrative. The question of which subtle means of influencing and nudging through the interface are ethically acceptable must be posed for interfaces and communication. Ethical studies on the acceptability of machine persuasive communication (Stock et al. 2016), possibly taking into account different cultures, are an important research theme to be pursued.
In conclusion, I think that we have an extraordinary opportunity with the establishment of intelligent user interfaces for cultural heritage appreciation. They require
References
Part V
Data, Algorithm, and Fairness
The Attention Economy and the Impact
of Artificial Intelligence
Ricardo Baeza-Yates and Usama M. Fayyad
Abstract The growing ubiquity of the Internet and the information overload created a new economy at the end of the twentieth century: the economy of attention. While difficult to size, we know that it exceeds proxies such as the global online advertising market, which is now worth over $300 billion and reaches 60% of the world population. A discussion of the attention economy naturally leads to the data economy and to collecting data from large-scale interactions with consumers. We discuss the impact of AI in this setting, particularly of biased data, unfair algorithms, and a user-machine feedback loop tainted by digital manipulation and the cognitive biases of users. The impact includes loss of privacy, unfair digital markets, and many ethical implications that affect society as a whole. The goal is to show that a new science for understanding, valuing, and responsibly navigating and benefiting from attention and data is much needed.
1 Introduction
We hear frequently of the information overload and its stressful impact on humanity. The growth of networks, of digital means of communication (most prominently email and now chat), and of the plethora of information sources with instant and plentiful access has created a situation in which we are often faced with information of poor, or at best questionable, quality. This information is often misleading and potentially harmful. But the phenomenon is not limited to those seeking information or connection; the passive entertainment channels are so plentiful that even understanding what is available is a challenge for normal humans: parents, children, educators, and professionals in general.
With hundreds of billions of web pages available (Google 2021a, b), how do you keep up? Eric Schmidt, then CEO of Google, was famously quoted in 2011 as claiming that humanity was by then creating as much information every two days as it did in the entire history of civilization up to 2003: "There were 5 Exabytes of information created between the dawn of civilization through 2003, but that much information is now created every 2 days" (Huffington Post 2011).
While the accuracy of this information was questionable, it proved to be prophetic
of a much more dramatic future that became manifest in the next few years. An
article in Forbes (Marr 2018) claimed: “There are 2.5 quintillion bytes of data
created each day at our current pace, but that pace is only accelerating with the
growth of the Internet of Things (IoT). Over the last two years alone 90% of the data
in the world was generated.”
A quintillion is 10^18 bytes; hence, this is 2.5 Exabytes per day, which roughly matches the rate referenced by Eric Schmidt. Hence, today we far exceed the claims made in 2011. Disregarding the fact that most of this "data" should not be counted as data (e.g., copying a video to a distribution point does not constitute data creation in the authors' opinion), it is unquestionable that recorded information is being generated faster than at any time in the history of civilization. Our ability to consume this information as humans is extremely limited and is getting more challenged by the day.
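As a quick consistency check (our arithmetic, not the cited sources'), the two figures describe essentially the same rate:

```latex
\[
\underbrace{2.5 \times 10^{18}\ \tfrac{\text{bytes}}{\text{day}}}_{\text{Marr (2018)}}
= 2.5\ \tfrac{\text{EB}}{\text{day}}
\qquad\text{vs.}\qquad
\underbrace{\frac{5\ \text{EB}}{2\ \text{days}}}_{\text{Schmidt quote (2011)}}
= 2.5\ \tfrac{\text{EB}}{\text{day}}.
\]
```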
The real problem is not consuming all this information, as most of it is likely to be of little value. The now chronic problem is one of focusing attention on the right information. We can see this problem even in the world of science and academic publications. Keeping up with the quantity of publications in any field has become an impossible task for any particular researcher. Yet the growth continues. Finding the value in the right context is now much harder. Reliance on curated sources—such as journals and conferences—is no longer sufficient. Authors have many more outlets for publishing papers, including sharing openly on the Web. While this appears to be a great enabler, it creates a serious problem in determining what is "trusted information." The problem is compounded by the fact that anyone can write a "news-like" article and cite these various sources. Whether real or "fake" news, such articles find a welcoming environment in social media that enables rapid spread across the Web.
So how do we deal with this growing volume of available information? While search engines have been a reasonable approach to coping with casual usage, we are far from having truly powerful search technology. Understanding the meaning and context of a short query is difficult. Search engines that have a good understanding of content and are less reliant on statistical keyword search are not yet practical. As we explain in the next section, there are many areas of the new "attention economy" that still need practical solutions. We believe that all the demand points to the need for search engines that understand the semantics, context, intent, and structure of the domain being searched: from health records, to legal documents, to scientific papers, to drug discovery, to even understanding the reliability of the sources. Finally, with social media driving the crowdsourcing of user-generated content, a growing role is played by monitoring and reliably assessing the type of content being shared and whether it is appropriate according to social, legal, privacy, and safety criteria and policies. These are all wide-open areas needing new research.
Fig. 1 Global annual online advertising spend by category (Source: Statista 2021)
cloud services to enable secure access and management, this is likely to become the big economy of the future.
Attempts at sizing the data economy have faced many challenges. Li (2020) found that data use enables firms to derive firm-specific knowledge, which can be measured by their organizational capital: the accumulated information or know-how of the firm (Prescott and Visscher 1980). The more the data, the greater the potential firm-specific knowledge derived. They estimated the organizational capital for each of the top seven global online platform companies, Apple, Amazon, Microsoft, Google, Facebook, Alibaba, and Tencent, and compared their combined organizational capital with the global data flow during the same period of time. This provides evidence that large online platform companies have been aggressively investing capital in order to tap the economic opportunities afforded by explosive global data growth, which leads to some estimates of the size of the data economy.
No accepted methodology currently exists to measure the value of the market for data. Apart from significant conceptual challenges, the biggest hurdle is the dearth of market prices from data exchanges; most data are collected for firms’ own use.
Firms do not release information relating to transactions in data, such as the private
exchange or the sharing of data that occurs among China’s big tech companies. Li
et al. (2019) examine Amazon’s data-driven business model from the perspectives of
data flow, value creation for consumers, value creation for third parties, and mon-
etization. Amazon uses collected consumer data and, through recommendations,
insights, and third-party services, creates new products and services for its users, for
example, personalized subscription services and recommendations for all products
and services sold on its marketplace. “In 2019, Amazon had more than 100 million
Prime subscribers in the U.S., a half of U.S. households, with the revenue from the
annual membership fees estimated at over US$9 billion” (Tzuo and Weisert 2018).
By taking advantage of insights on interactions and transactions, Amazon can
capture much of the social value of the data they have accumulated. This is why any attempt at sizing the data economy is challenging: the many forms of data refinement and reuse constitute a huge, poorly understood, yet rapidly growing space.
What role does AI play in helping us deal with these information overload
problems? We believe that since technology was the great enabler of the information overload, technology also holds the keys to taming it. Because what we seek is an approach that filters information and focuses attention, we naturally gravitate to approaches based on what is typically referred to as AI. The reason is that generating and collecting data and information does not necessarily require “intelligence,” but the inverse problem, determining meaning and relevance, does.
In order to filter, personalize, and meaningfully extract information or “reduce” it,
there needs to be an understanding of the target consumer, the intent of the con-
sumer, and the semantics of the content. Such technologies typically require either
designing and constructing intelligent algorithms that encode domain knowledge or employing machine learning to infer relevance from positive and negative feedback data. We discuss this approach further in Sect. 3.
The main theme is that AI is necessary and generally unavoidable in solving this
inverse problem. This brings complications: the ability to correctly address the problem, the ability to gather feedback efficiently, and, assuming the algorithms work, issues of algorithmic bias and fairness that we discuss in Sect. 4.
3 The Interaction Feedback Loop
Much of the attention economy is constructed upon the interaction feedback loop of
users and systems. The components of such a loop are discussed here, and in the next
section we detail the biases that poison them.
The overall setup of this problem as an AI problem is entirely analogous to the operation of a search engine. Web search requires four major activities:
1. Crawling: web search engines employ an understanding of websites and content
to decide what information to crawl, how frequently, and where to look for key
information.
2. Content modeling: requires modeling concepts and equivalences, stemming, canonicalization (e.g., understanding which phrases are semantically equivalent), and reducing documents to a normalized representation (e.g., a bag of words).
3. Indexing and retrieval: figuring out how to look up matches and how to rank
results.
4. Relevance feedback: utilizing machine-learned ranking (MLR) to optimize the matching and the ranking based on user feedback, either given directly or inferred from information like click-through rates.
Each of these steps above requires some form of AI. The problem of capturing
relevance and taming the information overload in general requires us to solve the
equivalent components regardless of the domain of application: be it accessing a corpus of scientific publications, tracking social media content, determining what
entertainment content is interesting, or retrieving and organizing healthcare
information.
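To make these components concrete, the following minimal sketch (our illustration, not any production engine's code; the corpus, names, and boost formula are invented) models documents as bags of words, builds an inverted index, ranks with a TF-IDF-style score, and folds click feedback back into the ranking:

```python
import math
from collections import Counter, defaultdict

# 2. Content modeling: reduce each document to a normalized bag of words.
def bag_of_words(text):
    return Counter(text.lower().split())

docs = {  # a toy corpus standing in for crawled content (step 1)
    "d1": "vaccine safety information for adults",
    "d2": "celebrity news and vaccine opinions",
    "d3": "clinical study on vaccine safety outcomes",
}
models = {d: bag_of_words(t) for d, t in docs.items()}

# 3. Indexing: inverted index mapping each term to the documents containing it.
index = defaultdict(set)
for d, bow in models.items():
    for term in bow:
        index[term].add(d)

def idf(term):
    # Inverse document frequency: rarer terms carry more weight.
    return math.log((1 + len(docs)) / (1 + len(index.get(term, ()))))

click_boost = defaultdict(float)  # 4. per-document signal learned from clicks

def search(query):
    q = bag_of_words(query)
    candidates = set().union(*(index.get(t, set()) for t in q))
    def score(d):  # blend content relevance with the feedback signal
        return sum(models[d][t] * idf(t) for t in q) * (1.0 + click_boost[d])
    return sorted(candidates, key=score, reverse=True)

print(search("vaccine safety"))  # ranking from content alone
click_boost["d3"] += 0.5         # users consistently click d3
print(search("vaccine safety"))  # d3 now ranks first
```

Real engines replace each step with far more sophisticated machinery, but the division of labor is the same: model the content once, index it, and let user feedback continually reshape the ranking.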
We note that this framework has a human-in-the-loop component for capturing the scarce resource, attention, accurately: is this content interesting for the target task? This natural loop is an example of leveraging AI either through direct feedback or through the construction of reliably labeled training data. The next three elements deserve serious consideration as we design “human-in-the-loop” solutions.
3.1 Digital Identity
The first key element is the digital identity of the user. A digital identity can range from fully anonymous to a real personal identity. However, in practice, you are never completely anonymous.
3.2 Algorithms
Any software system is built on top of many algorithms. Some key components are
the algorithms used to gather and store data about the environment and the users,
profile the users to personalize and/or nudge their experience, and monetize their
interaction. Tracking and profiling users can be done at an individual level or can be
coarser, such as assigning a particular persona to each user. A “persona” is a
construction of a realistic exemplar or actual specific user that represents a segment
of users having certain preferences or characteristics. Apple, for example, uses
differential privacy to protect individual privacy, and in the next release of its mobile
operating system, iOS 14, users will decide whether they can be tracked (O’Flaherty 2021). At the same time, Google is planning to move away from cookies by using FLoC, or Federated Learning of Cohorts (Google 2021b). Nudging means manipulating the behavior of the user, from suggesting where to start reading to placing the elements that we want the user to click.
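The core idea behind differential privacy can be shown compactly. The sketch below is a generic Laplace-mechanism example, not Apple's actual implementation: a count is released with noise calibrated so that any single user's report has only a small, provable effect on the output.

```python
import numpy as np

def private_count(reports, epsilon=1.0):
    """Release a count under epsilon-differential privacy via the Laplace
    mechanism. A count has sensitivity 1 (adding or removing one person
    changes it by at most 1), so noise of scale 1/epsilon suffices."""
    return sum(reports) + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# 10,000 users each report (say) whether they used a tracked feature today.
reports = np.random.binomial(1, 0.3, size=10_000)
print("true:", reports.sum(), "released:", round(private_count(reports, 0.5), 1))
# The aggregate stays useful while any individual's report is masked.
```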
The attention economy has created a specific data economy that refers to tracking
and profiling users (as discussed above). For this reason, although talking about television, Serra and Schoolman said in 1973 that “It is the consumer who is consumed, you are the product of T.V.” So, we are the product, and the data economy is, at bottom, an economy of our personal data and attention.
4 Biases
In this section, we cover many relevant biases that exist in software systems,
particularly ML-based systems. Bias is a systematic deviation from a reference value, so in principle the word is neutral. However, we usually think about bias negatively because the news covers only harmful biases (gender, race, etc.). We can distinguish among statistical bias, the product of measurement; cultural or social bias; and cognitive biases, which are particular to each person. We organize them below by source, data and algorithms, including the interaction of users with the system.
4.1 Data
This is the main source of bias as data may encode many biases. In Table 1, we show
examples of different types of generic data sets crossed with key examples of social
bias that might be present, where Econ represents wealth-based discrimination.
However, biases might be subtle and not known a priori. They can be explicit or
implicit, direct or indirect. In addition, sometimes it is not clear what the right reference value or distribution should be (e.g., gender or age in a given profession).
One important type of data that encodes many biases is text. In addition to gender
or racial bias, text can encode many cultural biases. This can even be seen when it is
used to train word embeddings, high-dimensional spaces where every word is
encoded by a vector. There are examples of gender bias (Caliskan et al. 2017), race bias (Larson et al. 2016), religious bias (Abid et al. 2021), etc., and their impact has many ramifications (Bender et al. 2021).

Table 1 Examples of biases found on different types of data sets

Data set      Gender  Race  Age  Geo  Econ
Faces            ✓      ✓    ✓    ✓    ✓
News             ✓      ✓    ✓    ✓    ✓
Resumes          ✓      ✓    ✓    ✓    ✓
Immigration             ✓    ✓    ✓    ✓
Criminality             ✓         ✓    ✓
Recidivism              ✓    ✓         ✓
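How such bias is measured can be illustrated with a toy computation in the spirit of Caliskan et al. (2017): project words onto a gender direction and compare the associations. The 2-D vectors below are invented stand-ins for real trained embeddings, where such directions emerge from the training text itself.

```python
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

emb = {  # invented vectors standing in for word2vec/GloVe embeddings
    "he":       np.array([ 1.0, 0.1]),
    "she":      np.array([-1.0, 0.1]),
    "engineer": np.array([ 0.8, 0.9]),
    "nurse":    np.array([-0.7, 0.9]),
}
gender_direction = emb["he"] - emb["she"]

for word in ("engineer", "nurse"):
    # Positive: closer to "he"; negative: closer to "she".
    print(word, round(cosine(emb[word], gender_direction), 2))
```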
There can also be biases in how we select data. The first is the sample size: if the sample is too small, we bias the information (say, events) in the sample toward the most frequent ones. This is very important for Internet data, as the probability of an event, say a click, is very small, and the standard sample size formula will underestimate the real value. Hence, we need to use adapted formulas to avoid discarding events that we truly want to measure (Baeza-Yates 2015). In addition, on the Internet, event distributions usually follow a power law with a very long tail and hence are very skewed. For this type of distribution, standard sampling does not reproduce the same distribution in the sample, particularly in the long tail. Hence, it is important to stratify the sampling on the event of interest to capture the correct distribution, as sketched below.
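A small simulation makes the point; the parameters and the threshold defining the "tail" are arbitrary choices for illustration. Uniform sampling of a Zipf-distributed event log can easily miss rare events entirely, while stratifying on the event of interest guarantees they are represented:

```python
import numpy as np

rng = np.random.default_rng(42)

# A skewed event log: event ids follow a Zipf law, so a few head events
# dominate and tail events (e.g., clicks on niche items) are rare.
n = 1_000_000
events = rng.zipf(2.0, size=n)
tail = events >= 1000                    # arbitrary "tail of interest"
print("true tail rate:", tail.mean())

# Uniform sample of 1,000 events: the tail can shrink or vanish entirely.
uni = rng.choice(n, size=1_000, replace=False)
print("uniform sample tail rate:", tail[uni].mean())

# Stratified sample: draw from tail and head separately, in proportion,
# so the stratum of interest is represented by construction.
tail_idx, head_idx = np.flatnonzero(tail), np.flatnonzero(~tail)
k = max(1, round(1_000 * tail.mean()))   # at least one tail event
strat = np.concatenate([
    rng.choice(tail_idx, size=k, replace=False),
    rng.choice(head_idx, size=1_000 - k, replace=False),
])
print("stratified sample tail rate:", tail[strat].mean())
```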
4.2 Algorithms
One example is position bias in ranked results: users tend to click results only because they are at the top. To counteract this bias, search engines debias clicks to avoid fooling themselves. In addition, ratings from other users create social pressure bias.
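One published family of techniques for such click debiasing, offered here as a sketch rather than what any particular engine deploys, is inverse propensity weighting: each click is divided by the estimated probability that its rank position was examined at all. The examination probabilities and click log below are invented:

```python
from collections import defaultdict

# Estimated probability that a user examines each rank position
# (invented numbers; real systems estimate these experimentally).
examine_prob = {1: 0.68, 2: 0.36, 3: 0.25, 4: 0.18}

# Click log entries: (document, rank at which it was shown, clicked?).
log = [("a", 1, 1), ("a", 1, 1), ("a", 1, 0), ("b", 4, 1), ("b", 4, 0)]

raw, ipw, shown = defaultdict(float), defaultdict(float), defaultdict(int)
for doc, rank, clicked in log:
    shown[doc] += 1
    raw[doc] += clicked
    ipw[doc] += clicked / examine_prob[rank]  # weight up rarely-seen ranks

for doc in sorted(raw):
    print(doc, "raw CTR:", round(raw[doc] / shown[doc], 2),
          "debiased:", round(ipw[doc] / shown[doc], 2))
# Naive CTR favors "a" (0.67 vs 0.5), but "b" is clicked half the time
# while being examined rarely at rank 4; the correction reverses the order.
```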
5 Societal Impact
There are several areas of impact on society, which can be summarized as follows:
1. How the data economy creates a loss of privacy, a loss that many people are not aware of as they normally do not read the terms of use. Shoshana Zuboff calls this surveillance capitalism (Zuboff 2019), or the surveillance economy, to distinguish it from government surveillance, as it is mainly carried out by large Internet multinationals. Carissa Véliz (2021) argues that “whoever has the most personal data will dominate society.” Hence, privacy is a form of power: if companies control it, the wealthy will rule, while if governments control it, dictatorships will rule. The conclusion is that society can only be free if people keep their power, that is, their data. She goes on to argue that data is a toxic substance that is poisoning our lives and that economic systems based on the violation of human rights are unacceptable, not only because of ethical concerns but because the “surveillance economy threatens freedom, equality, democracy, autonomy, creativity, and intimacy.”
2. Digital manipulation of people. This goes beyond the digital nudging explained
earlier, say to entice you to click an ad without your noticing it. The main example
is social media and fake news. This new social age is ruled by what Sinan Aral
calls The Hype Machine (Aral 2020), a machine that has disrupted elections,
stock markets, and, with the COVID-19 pandemic, also our health. There are
many examples of country-wide manipulation by governments, such as in Brazil, Myanmar, and the Philippines, or by companies, such as Cambridge Analytica using Facebook data, which affected the 2016 US presidential election. Harari is much more pessimistic, as some fake news lasts forever and, “as a species, humans prefer power to truth” (Harari 2018). The future danger, for him, is the combination of AI with neuroscience: the direct manipulation of the brain. In all of the above, AI is the key component for predicting which person is most susceptible to a given nudge and how to perform it.
3. Unfair digital markets (monopolistic behavior and the biased feedback loop
previously mentioned). During 2020, the US government started antitrust cases
against Facebook (US FTC 2020) and Google (US DoJ 2020). However, the
popularity bias also discriminates against the users and items in the tail, creating
unfairness and also instability. Better knowledge of the digital market should
imply optimal long-term revenue and healthier digital markets, but the current
recommendation systems are optimized for short-term revenue. In some sense,
the system is inside a large echo chamber that is the union of all the echo
chambers of its users (Baeza-Yates 2020).
4. Ethical implications of all the above. These include discrimination (e.g., against persons, products, or businesses), phrenology (e.g., prediction of personality traits based on facial biometrics (Wang and Kosinski 2018)), data used without consent (e.g., faces scraped from the Web to train facial recognition systems (Raji and Fried 2020)), etc.
There are also impacts in specific domains such as government, education, health,
justice, etc. Just to exemplify one of them, we analyze the impact on healthcare.
While machine-driven and digital health solutions have created a huge opportunity
for advancing healthcare, the information overload has also affected our ability to
leverage the richness of all the new options: genome sequencing, drug discovery,
and drug design vs. the problem of understanding and tracking what this data means
to our lives. The healthcare industry’s trouble in dealing with all this data manifests itself in these examples:
• Lack of standards created data swamps in electronic health records (EHRs)
• Lack of ability to leverage data for population healthcare studies and identify
panels on demand
• Lack of ability to leverage collected data to study drug and treatment impacts
systematically
• Inability of healthcare providers to stay on top of the data for an individual
• Inability of individuals to stay on top of data about themselves
6 Conclusions
As the tide of information overload rises, we believe that it makes the traditional long tail problem even harder. The reason is that information coming from the head of the distribution, the most common sources of new data or information, is not growing as fast as information coming from the less popular sources that are now enabled to produce more. This exacerbates the long tail of the distribution and makes information retrieval, search, and the focusing of attention much more difficult.
As a final concluding thought, one may wonder if the only way out is through the use of AI algorithms. The answer is a qualified yes. There may be better solutions through design, through proper content tagging, classification, and modeling as content is generated. But the reality is that most of the time we live in a world where we have to react to new content, new threats, new algorithms, and newly discovered biases. As such, we will always need a scalable approach that solves the “inverse design” problem: inferring from observations what is likely happening. This drives “understanding,” especially semantic modeling, to the forefront. And that, in turn, drives us to look for algorithms to solve such inference problems, and hence to AI.
There are other related issues that we did not cover. Those include cybersecurity
and the growing dark web economy, as well as other emerging technologies that
create synergy with AI. The same holds for the future impact of the proposed regulation for AI in the European Union, which was just published (EU 2021).
References
Abid, Abubakar; Farooqi, Maheen; Zou, James. (2021) Persistent Anti-Muslim Bias in Large
Language Models. https://arxiv.org/pdf/2101.05783v1.pdf
Aral, Sinan. (2020) The Hype Machine, Currency Press.
Baeza-Yates, Ricardo and Saez-Trumper, Diego. (2015) Wisdom of the crowd or wisdom of a few?
An analysis of users’ content generation. In Proceedings of the 26th ACM Conference on
Hypertext and Social Media (Guzelyurt, TRNC, Cyprus, Sept. 1–4). ACM Press, New York,
69–74.
Baeza-Yates, Ricardo. (2015) Incremental sampling of query logs. In Proceedings of the 38th ACM
SIGIR Conference (Santiago, Chile, Aug. 9–13). ACM Press, New York, 1093–1096.
Baeza-Yates, Ricardo. (2018) Bias on the Web. Communications of the ACM 61(6), 54-61.
Baeza-Yates, Ricardo. (2020) Bias in Search and Recommender Systems. ACM RecSys 2020, Rio de Janeiro. https://www.youtube.com/watch?v=8zetbdx4_08
Bender, Emily M.; Gebru, Timnit; McMillan-Major, Angelina; Mitchell, Margaret. (2021) On the
Dangers of Stochastic Parrots: Can Language Models Be Too Big?. ACM FAccT 2021. https://
faculty.washington.edu/ebender/papers/Stochastic_Parrots.pdf
Caliskan, Aylin; Bryson, Joanna J. and Narayanan, Arvind. (2017) Semantics derived automatically
from language corpora contain human-like biases. Science 356, 6334, 183–186.
European Union. (2016) General Data Protection Regulation 2016/679.
European Union. (2021) Proposed Regulation for a European Approach to AI. https://digital-
strategy.ec.europa.eu/en/library/proposal-regulation-european-approach-artificial-intelligence
Forbes. (2019) https://www.forbes.com/sites/greatspeculations/2019/06/11/how-has-the-u-s-
online-advertising-market-grown-and-whats-the-forecast-over-the-next-5-years/
Forbes. (2021) https://www.forbes.com/sites/jonathankeane/2021/01/05/italian-court-finds-
deliveroo-rating-algorithm-was-unfair-to-riders/
Google. (2021a) How Search Works. https://www.google.com/search/howsearchworks/crawling-
indexing/
Google. (2021b) Federated Learning of Cohorts. https://github.com/WICG/floc
Harari, Yuval Noah. (2018) 21 Lessons for the 21st Century. Spiegel & Grau.
Huffington Post. (2011) Google CEO Eric Schmidt: ‘People Aren’t Ready for The Technology
Revolution’. https://www.huffpost.com/entry/google-ceo-eric-schmidt-p_n_671513
Johansen, Johanna; Pedersen, Tore; Johansen, Christian. (2020) Studying the Transfer of Biases
from Programmer to Programs. arXiv, https://export.arxiv.org/pdf/2005.08231
Kleinberg, Jon; Lakkaraju, Himabindu; Leskovec, Jure; Ludwig, Jens and Mullainathan, Sendhil.
(2018) Human Decisions and Machine Predictions, The Quarterly Journal of Economics,
Oxford University Press, vol. 133(1), 237-293.
Larson, Jeff; Mattu, Surya; Kirchner, Lauren; Angwin, Julia. (2016) How We Analyzed the
COMPAS Recidivism Algorithm. https://www.propublica.org/article/how-we-analyzed-the-
compas-recidivism-algorithm
Li, Wendy C.Y.; Nirei, Makoto; Yamana, Kazufumi. (2019) Value of Data: There’s No Such Thing
as a Free Lunch in the Digital Economy, U.S. Bureau of Economic Analysis, working paper,
https://www.bea.gov/system/files/papers/20190220ValueofDataLiNireiYamanaforBEAworkingpaper.pdf
Li, Wendy C.Y. (2020) Online Platforms’ Creative “Disruption” in Organizational Capital – the
Accumulated Information of the Firm, U.S. Bureau of Economic Analysis working paper.
Marr, B. (2018) How Much Data Do We Create Every Day? Forbes. https://www.forbes.com/sites/
bernardmarr/2018/05/21/how-much-data-do-we-create-every-day-the-mind-blowing-stats-
everyone-should-read/
Nielsen, Jakob. (2016) The 90-9-1 Rule for Participation Inequality in Social Media and Online
Communities. https://www.nngroup.com/articles/participation-inequality/
O’Flaherty, Kate. (2021) Apple’s Stunning iOS 14 Privacy Move. Forbes. https://www.forbes.com/
sites/kateoflahertyuk/2021/01/31/apples-stunning-ios-14-privacy-move-a-game-changer-for-
all-iphone-users/
Prescott, E. and Visscher, M. (1980) Organization Capital. Journal of Political Economy, vol. 88, issue 3, 446-61.
Raji, Inioluwa Deborah and Fried, Genevieve. (2020) About Face: A Survey of Facial Recognition
Evaluation, AAAI 2020 Workshop on AI Evaluation.
Statista (2021) https://www.statista.com/statistics/276671/global-internet-advertising-expenditure-
by-type/
Tzuo, Tien and Weisert, Gabe. (2018) Subscribed: Why the Subscription Model Will be Your
Company’s Future – And What to Do about It, Publisher: Portfolio/Penguin.
United Nations. (1948) Declaration of Human Rights, Article 12.
United States Department of Justice (2020). https://www.justice.gov/opa/pr/justice-department-
sues-monopolist-google-violating-antitrust-laws
United States Federal Trade Commission (2020). https://www.ftc.gov/news-events/press-releases/
2020/12/ftc-sues-facebook-illegal-monopolization
Véliz, Carissa. (2021) Privacy is Power. Bantam Press, Second edition.
Wang, Yilun; Kosinski, Michal. (2018) Deep neural networks are more accurate than humans at
detecting sexual orientation from facial images. J. of Personality and Social Psychology;114
(2):246-257.
Zuboff, Shoshana. (2019) The Age of Surveillance Capitalism. Public Affairs.
Did You Find It on the Internet? Ethical
Complexities of Search Engine Rankings
Cansu Canca
Abstract Search engines play a crucial role in our access to information. Their search rankings can amplify certain information while making other information virtually invisible. Ethical issues arise regarding the criteria that the ranking is based on, the
structure of the resulting ranking, and its implications. Critics often put forth a
collection of commonly held values and principles, arguing that these provide the
needed guidance for ethical search engines. However, these values and principles are
often in tension with one another and lead us to incompatible criteria and results, as I
show in this short chapter. We need a more rigorous public debate that goes beyond
principles and engages with necessary value trade-offs.
1 Introduction
In our digitalized life, search engines function as the gatekeepers and the main
interface for information. Ethical aspects of search engines are discussed at length
in the academic discourse. One such aspect is the search engine bias, an issue that
encompasses ethical concerns about search engine rankings being value-laden,
favoring certain results, and using non-objective criteria (Tavani 2020). The aca-
demic ethics debate has not (yet) converged to a widely accepted resolution of this
complex issue. Meanwhile, the mainstream public debate has mostly ignored the hard ethical trade-offs, opting instead for a collection of commonly held principles and
values such as accuracy, equality, efficiency, and democracy. In this short chapter, I
explain why this approach leads to unresolvable conflicts and thus ultimately a
dead end.
C. Canca (*)
AI Ethics Lab, Datca, Turkey
e-mail: cansu@aiethicslab.com
Search engines are invaluable. Without them, it is impossible to navigate the massive
amounts of information available in the digital world. They are our main mediator of
information (CoE 2018). Every day over 7 billion searches are conducted on Google
alone,1 accounting for over 85% of all searches worldwide.2 In addition to that,
various searches are conducted on specialized search engines such as Amazon and
YouTube. Even academic and scientific research relies on Google Scholar, PubMed,
JSTOR, and similar specialized search engines, meaning that not only do we rely on search engines to access information but we also rely on them while creating further knowledge.
We defer to search engines’ ranking of information so much that most people do
not even check the second page of the search results.3 On Google, 28.5% of users
click the first result, while 2.5% click on the tenth result, and fewer than 1% click
results on the second page.4 This means that the ranking has a great effect on what
people see and learn.
Search engine ranking is never value-neutral or free from human interference. It is
optimized to reach a goal. Even if we can agree that this goal is relevance to the user,
defining what is relevant involves guesswork and value judgments. By definition, in
most search queries, users do not know what result they are looking for. On top of
that, the search engine has to guess from the search keywords what kind of infor-
mation the user is interested in. For example, a user searching for “corona vaccina-
tion” could be looking for vaccine options, vaccine safety information, anti-vaccine
opinions, vaccination rates, or celebrities who are vaccinated, and they might be
looking for these on a global or local scale. More importantly, they might be equally
satisfied with well-explained vaccine safety information or anti-vaccine opinions
since they might not have prior reasons to differentiate these two opposing results.
Here, the value judgment comes into play in designing the system. Should the
system first show vaccine safety information to ensure that the user is well-informed
or the anti-vaccine opinions since they are often more intriguing and engaging?
Should the system tailor results to user profiles (e.g., scientifically or conspiracy oriented)? Should it sort the results by click rates globally or locally,
by personal preferences of the user, by accuracy of the information, by balancing
opinions, or by public safety? Deciding which ranking to present embeds a value
judgment into the search engine. And this decision cannot fully rely on evaluating
user satisfaction about a search query, because the user does not know the full range
of information they could have been shown. Moreover, user satisfaction might still lead to unethical outcomes.
1. https://www.internetlivestats.com/google-search-statistics/
2. https://www.statista.com/topics/1710/search-engine-usage/-dossierSummary
3. https://www.forbes.com/sites/forbesagencycouncil/2017/10/30/the-value-of-search-results-rankings/
4. https://www.sistrix.com/blog/why-almost-everything-you-knew-about-google-ctr-is-no-longer-valid/
Imagine basing your decision whether to get vaccinated on false information because
that is what came up on your search (Ghezzi et al. 2020; Johnson et al. 2020).
Imagine deciding whom to vote for based on conspiracy theories (Epstein and
Robertson 2015). Imagine having your perception of other races and genders pushed
to negative extremes because that is the stereotype you are presented with online
(Noble 2018). Imagine searching for a job but not seeing any high pay and high
power positions because of the search engine’s knowledge or estimate of your race,
gender, or socio-economic background.5 Imagine having your CV never appear to
the employers for those jobs that you easily qualify for because of search engine
profiling (Deshpande et al. 2020). These are all ethical issues. They stem from value judgments embedded in search engine processing and, as a result, impact individual autonomy, well-being, and social justice.
We base our decisions on what we know. By selecting which information to
present in what order, search engines affect and shape our perception, knowledge,
decision, and behavior both in the digital and physical sphere. As a result, they can
manipulate or nudge individuals’ decisions, interfere with their autonomy, and affect
their well-being.
By sorting and ranking what is out there in the digital world, search engines also
impact how benefits and burdens are distributed within the society. In a world where
we search online for jobs, employees, bank credits, schools, houses, plane tickets,
and products, search engines play a significant role in the distribution of opportuni-
ties and resources. When certain goods or opportunities are systematically concealed
from some of us, there is an obvious problem of equality, equal opportunity and
resources, and, more generally, fairness. More to the point, once certain information
is not accessible to some, they often do not even know that it exists. If we cannot recognize the injustice, we also cannot demand fair treatment.
A running example for search engine bias has been the image search results for the
term “professor.”6 When searching for “professor,” search engines present a set of
overwhelmingly white male images. In the USA, for instance, women make up 42%
of all tenured or tenure-track faculty.7 In search engine results, the ratio of female images has been about 10–15% and only recently went up to 20–22%.8 When searching specifically for “female professors,” the image results are accompanied by unflattering suggestions: Google’s first suggested term is “clipart” (Fig. 1), whereas Bing’s top four suggestions include “crazy,” “cartoon,” and “clipart” female professors (Fig. 2).9
5. https://www.nytimes.com/2018/09/18/business/economy/facebook-job-ads.html
6. https://therepresentationproject.org/search-engine-bias/
Why is this an ethical problem? Studies show that girls are more likely to choose a
field of study if they have female role models (Lockwood 2006; Porter and Serra
2020). Studies also show that gender stereotypes have a negative effect on hiring
women for positions that are traditionally held by men (Isaac et al. 2009; Rice and
Barth 2016). By amplifying existing stereotypes, search engine results contribute to
the existing gender imbalance in high-powered positions. It is reasonable to think
that this gender imbalance in real life has its roots in unjustified discrimination
against women in the workplace as well as discriminatory social structures, both of which prevent female talent from climbing the career ladder. The search engine bias
can contribute to the perpetuation of this gender imbalance.
In fact, this is not special to image search results for “professor.” Women are
underrepresented across job images and especially in high-powered positions (Kay
et al. 2015; Lam et al. 2018). Take this problem one step further, and we end up with issues such as LinkedIn, a platform for professional networking, autocorrecting female names to male ones in its search function10 and Google showing far fewer prestigious job ads to women than to men.11
Most mainstream reactions criticize search engine rankings by appealing to commonly held values and principles: search engines should reflect the truth and be fair; they should
promote equality and be useful; they should allow users to exercise agency and
prevent harm; and so on. On close inspection, however, these commonly held values
and principles fail to provide guidance and may even conflict with one another. In
the next paragraphs, I briefly go over three values—accuracy, equality, and
agency—to show how such simple guidance is inadequate for responding to this
complex problem.
Accuracy One could argue that search engines, being a platform for information,
should accurately reflect the world as it is. This would imply that the image results
should be revised and continuously updated to reflect the real-life gender ratio of professors. In contrast, the current search results portray social perceptions of gender roles and prestigious jobs. Note that implementing accuracy in search results would require determining the scope of the information: Should the results accurately reflect the local or the global ratio? And which other variables, such as race, ability, age, and socio-economic background, should the results reflect accurately, and in what ratios?
7. https://www.aaup.org/news/data-snapshot-full-time-women-faculty-and-faculty-color
8. Calculated from search engine results of Google, Bing, and DuckDuckGo in May 2021. Note that these numbers fluctuate depending on various factors about the user, such as their region and prior search results.
9. In comparison, Bing’s suggestions when searching for “male professor” remain within the professional realm: “black professor,” “old male teacher,” “black female professor,” “black woman professor,” “professor student,” and such. However, Google fails equally badly in its suggestions for “male professor.” Its first two suggestions are “stock photo” and “model.”
10. https://phys.org/news/2016-09-linkedin-gender-bias.html
11. https://www.washingtonpost.com/news/the-intersect/wp/2015/07/06/googles-algorithm-shows-prestigious-job-ads-to-men-but-not-to-women-heres-why-that-should-worry-you/
Equality Contesting prioritization of accuracy, one could argue that search engines
should promote equality because they shape perception while presenting informa-
tion, and simply showing the status quo despite the embedded inequalities would be
unfair. If we interpret equality as equal representation, this would imply showing an equal number of male and female professors. Implementing equal representation in search results would also require taking into account the other variables mentioned above, such as race, ability, age, and socio-economic background, and equally representing all their combinations. A crucial question would then be: what would the search results look like if all possible identities were represented, and would these results still be relevant, useful, or informative for the user?
Agency Contesting both accuracy and equality, one could argue that the system
should prioritize user agency and choice by ranking the most clicked results at the
top. This is not a strong argument. When conducting a search, users do not convey an
informed and intentional choice through their various clicks. One could, however, incorporate user settings into the search engine interface to encourage user agency and provide users with a catalogue of ranking options. Yet, since most people have psychological inertia and a status quo bias, most users would still end up using the default (or easiest) setting, which brings us back to the initial question: What should the default ranking be?
An additional consideration must be the content of web pages that these
images come from. It is not sufficient to have an ethically justifiable combination
of images in the search results. It is also important that the web pages that link to
these images follow a similar ethical framework. If, for example, search results show an equal number of female and male images but the pages with the female images dispute gender equality, this would not be a triumph of the principle of equality.
We could continue with other values such as well-being, efficiency, and democracy. They would yield yet more divergent and conflicting ranking outcomes.
Therein lies the problem. These important and commonly held values do not provide
a straightforward answer. They are often in tension with each other. We all want to
promote our commonly cherished and widely agreed-upon values. This is apparent
from the Universal Declaration of Human Rights, from the principlism framework,
and from the common threads within various sets of AI principles published around
the world.12 But simply putting forth some or all of these values and principles does not resolve the conflicts among them.
12. https://aiethicslab.com/big-picture/
Thus far, I have focused on the end product: What is the ethically justified composition of search engine results? But we also need to focus on the process: How did we end up with the current results, and what changes can or should be implemented?
Search engines use a combination of proxies for “relevance.” In addition to the
keyword match, this might include, for example, click rates, tags and indexing, page
date, placing of the term on the website, page links, expertise of the source, and user
location and settings.13 Search is a complex task, and the way that these proxies fit
together changes all the time. Google reports that in 2020 they conducted over
600,000 experiments to improve their search engine and made more than 4500
changes.14 Since search engine companies compete in providing the most satisfac-
tory user experience, they do not disclose their algorithms.15 Returning to our example, when we compare image search results for “professor” in Google, Bing, and DuckDuckGo, we see that while Google prioritizes the Wikipedia image as the top result, Bing and DuckDuckGo draw on news outlet and blog images for their first ten results, excluding the image from the Wikipedia page.16
Value judgments occur in deciding which proxies to use and how to weigh them.
Ensuring that the algorithm does not fall into ethical pitfalls requires navigating
existing and expected ethical issues. Going back to our example, content uploaded
and tagged by users or developers is likely to carry their implicit gender biases.
Therefore, it is reasonable to assume that the pool of images tagged as “professor”
would be highly white-male oriented to start with. Without any intervention, this imbalance is likely to get worse when users click on the results that match their implicit bias and/or when an algorithm tries to predict user choice and, thereby, user bias.
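A hypothetical scoring function makes the locus of these value judgments visible. Every proxy name and weight below is invented for illustration; the point is only that some such numbers must be chosen, and no choice of them is value-neutral:

```python
# Each weight below encodes a value judgment. Weighting click_rate highly
# optimizes engagement (and can amplify the user biases described above);
# weighting source_expertise highly prioritizes accuracy instead.
WEIGHTS = {                   # illustrative only, not any real engine's
    "keyword_match": 0.4,
    "click_rate": 0.3,        # raise this: engagement (and bias) wins
    "freshness": 0.1,
    "source_expertise": 0.2,  # raise this: accuracy wins
}

def score(page):
    return sum(w * page[signal] for signal, w in WEIGHTS.items())

pages = [
    {"name": "expert article", "keyword_match": 0.9, "click_rate": 0.2,
     "freshness": 0.5, "source_expertise": 0.9},
    {"name": "viral post", "keyword_match": 0.8, "click_rate": 0.9,
     "freshness": 0.9, "source_expertise": 0.2},
]
for p in sorted(pages, key=score, reverse=True):
    print(p["name"], round(score(p), 2))
# With these weights the viral post outranks the expert article (0.72 vs
# 0.65); swap the click and expertise weights and the order flips.
```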
13. https://www.google.com/search/howsearchworks/algorithms/; https://www.bing.com/webmasters/help/webmasters-guidelines-30fba23a; https://dl.acm.org/doi/abs/10.1145/3361570.3361578
14. https://www.google.com/search/howsearchworks/mission/users/
15. Even if they did disclose their algorithms, this would likely be both extremely inefficient and ethically problematic. See Grimmelmann (2010) for a more detailed discussion of transparency in search engines.
16. At the height of these discussions in 2019, the image in the Wikipedia entry for “professor” was switched to Toni Morrison, a black female professor, thereby making this the first image to appear in Google image search results. Currently (May 2021), the Wikipedia image has been changed to a picture of Einstein.
6 Conclusions
A comprehensive ethical approach to search engine rankings must take into account
search engines’ impact on individuals and society. The question is how to mitigate
existing ethical issues in search engine processing and prevent amplifying them or
creating new ones through user behavior and/or system structure. In doing so, values
and principles can help us formulate and clarify the ethical problems at hand, but
they cannot guide us to a solution. For that, we need to engage in deeper ethics analyses, which provide insight into the value trade-offs and competing demands that we must navigate to implement any solutions to these problems. These ethics
analyses should then feed into the public debate so that the discussion can go beyond
initial arguments, reveal society’s value trade-offs, and be informative for decision-
makers.
References
Canca, C. (2019) ‘Human Rights and AI Ethics – Why Ethics Cannot be Replaced by the UDHR’,
in AI & Global Governance, United Nations University Centre for Policy Research [online].
Available at: https://cpr.unu.edu/publications/articles/ai-global-governance-human-rights-and-
ai-ethics-why-ethics-cannot-be-replaced-by-the-udhr.html (Accessed: 1 May 2021).
Canca, C. (2020) ‘Operationalizing AI Ethics Principles’, Communications of the ACM, 63(12),
pp.18-21. https://doi.org/10.1145/3430368
Council of Europe (CoE) (2018) Algorithms and Human Rights [online]. Available at: https://rm.coe.int/algorithms-and-human-rights-en-rev/16807956b5 (Accessed: 1 May 2021)
Deshpande, K.V., Pan, S. and Foulds, J.R. (2020) ‘Mitigating Demographic Bias in AI-based
Resume Filtering’, in Adjunct Publication of the 28th ACM Conference on User Modeling,
Adaptation and Personalization. https://doi.org/10.1145/3386392.3399569
Epstein, R. and Robertson R.E. (2015) ‘The search engine manipulation effect (SEME) and its
possible impact on the outcomes of elections’, Proceedings of the National Academy of
Sciences, 112(33). https://doi.org/10.1073/pnas.1419828112
Grimmelmann, J. (2010) ‘Some Skepticism About Search Neutrality’ [online]. Available at: http://
james.grimmelmann.net/essays/SearchNeutrality (Accessed: 1 May 2021)
Ghezzi, P., Bannister, P.G., Casino, G., Catalani, A., Goldman, M., Morley, J., Neunez, M., Prados-
Bo, A., Smeesters, P.R., Taddeo, M., Vanzolini, T. and Floridi, L. (2020) ‘Online Information of
Vaccines: Information Quality, Not Only Privacy, Is an Ethical Responsibility of Search
Engines’, Frontiers in Medicine, 7(400). https://doi.org/10.3389/fmed.2020.00400
Isaac, C., Lee, B. and Carnes, M. (2009) ‘Interventions that affect gender bias in hiring: a systematic
review’, Academic medicine: Journal of the Association of American Medical Colleges, 84(10),
pp.1440–1446. https://doi.org/10.1097/ACM.0b013e3181b6ba00
Johnson, N.F., Velásquez, N., Restrepo, N.J., Leahy, R., Gabriel, N., El Oud, S., Zheng, M.,
Manrique, P., Wuchty, S. and Lupu, Y. (2020) ‘The online competition between pro- and anti-
vaccination views’, Nature, 582, pp.230–233. https://doi.org/10.1038/s41586-020-2281-1
Kay, M., Matuszek, C. and Munson, S.A. (2015) ‘Unequal Representation and Gender Stereotypes
in Image Search Results for Occupations’, in Proceedings of the 33rd Annual ACM Conference
on Human Factors in Computing Systems. https://doi.org/10.1145/2702123.2702520
Lam, O., Wojcik, S., Broderick, B. and Hughes, A. (2018) ‘Gender and Jobs in Online Image
Searches’, Pew Research Center [online]. Available at: https://www.pewresearch.org/social-
trends/2018/12/17/gender-and-jobs-in-online-image-searches/ (Accessed: 1 May 2021)
Lockwood, P. (2006) ‘Someone Like Me can be Successful: Do College Students Need Same-
Gender Role Models?’, Psychology of Women Quarterly, 30(1), pp.36–46. https://doi.org/10.
1111/j.1471-6402.2006.00260.x
Noble, S.U. (2018) Algorithms of Oppression: How Search Engines Reinforce Racism, New York:
New York University Press.
Porter, C. and Serra, D. (2020) ‘Gender Differences in the Choice of Major: The Importance of
Female Role Models’, American Economic Journal: Applied Economics, 12(3), pp.226–254.
https://doi.org/10.1257/app.20180426
Rice, L. and Barth, J.M. (2016) ‘Hiring Decisions: The Effect of Evaluator Gender and Gender
Stereotype Characteristics on the Evaluation of Job Applicants’, Gender Issues 33, pp.1–21.
https://doi.org/10.1007/s12147-015-9143-4
Tavani, H. (2020) ‘Search Engines and Ethics’, The Stanford Encyclopedia of Philosophy, Fall
2020 Edition, Zalta, E.N. (ed.) [online]. Available at: https://plato.stanford.edu/archives/fall2020/entries/ethics-search/ (Accessed: 1 May 2021)
Personalization, Fairness, and Post-Userism
Robin Burke
1 Introduction
The turn toward questions of fairness in machine learning (Barocas and Selbst 2016;
Dwork et al. 2012; Mitchell et al. 2021) raises some important issues for the
understanding of personalized systems. Researchers studying these systems and
organizations deploying them present a common narrative highlighting the benefits
of personalization for the end users for whose experience such systems are opti-
mized. This narrative in turn shapes users’ expectations and their folk theories
(working understandings) about the functionality and affordances of personalized
systems (DeVito et al. 2017). Fairness requires a different kind of analysis. Rather
than focusing on the individual, fairness is understood in terms of distribution: how
is harm or benefit from a system distributed over different individuals and/or
different classes of individuals? These distributional concerns take the focus at
least partly away from the end user, and thus the implementation of fairness concerns
in personalized systems requires a re-thinking of fundamental questions about what
personalized systems are for and what claims should be made about them.
R. Burke (*)
University of Colorado, Boulder, CO, USA
e-mail: robin.burke@colorado.edu
objectives, have entered into practical recommender systems designs from their first
deployments. However, businesses that employ such recommendation objectives
have generally been very reluctant to identify anything other than user benefit as a
driver for their technical decisions. An explicit consideration of multistakeholder
objectives and one that specifically incorporates fairness is much more recent
(Abdollahpouri et al. 2020).
What might a post-userist take on personalization look like? We examine
multistakeholder considerations that can help answer this question.
First, consider the recruiter-oriented recommender system outlined above. The
challenge here is to achieve provider-side fairness (R. Burke 2017): fair represen-
tation across those providing items to be recommended, in this case the job appli-
cants. So, a desired system design is one in which there are limits to the degree of
personalization that can be performed, even for a single user.
The system would need to ensure that each recommendation list has at least a
minimal degree of diversity across different protected group categories. One would
expect that an end user/recruiter would need to know going in (and might even
require as a matter of law or company policy) that the recommender is enforcing
such fairness constraints.
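One way such a list-wise constraint could be operationalized, sketched here under simplifying assumptions rather than describing any deployed system, is a greedy re-ranking: first guarantee each protected group its minimum number of slots using that group's best-scoring candidates, then fill the rest of the list purely by score:

```python
def rerank(candidates, k, min_per_group):
    """Top-k by score, subject to each group receiving at least
    min_per_group slots (assuming the minimums fit within k and each
    group has enough candidates). A sketch, not a deployed system."""
    ranked = sorted(candidates, key=lambda c: c["score"], reverse=True)
    picked = []
    # First reserve each group's minimum slots for its best candidates.
    for g in {c["group"] for c in candidates}:
        picked += [c for c in ranked if c["group"] == g][:min_per_group]
    # Then fill the remaining slots purely by score.
    for c in ranked:
        if len(picked) >= k:
            break
        if c not in picked:
            picked.append(c)
    return sorted(picked[:k], key=lambda c: c["score"], reverse=True)

applicants = [
    {"name": "A", "group": "g1", "score": 0.95},
    {"name": "B", "group": "g1", "score": 0.92},
    {"name": "C", "group": "g1", "score": 0.90},
    {"name": "D", "group": "g2", "score": 0.70},
    {"name": "E", "group": "g2", "score": 0.65},
]
print([a["name"] for a in rerank(applicants, k=3, min_per_group=1)])
# -> ['A', 'B', 'D']: the unconstrained top-3 would be all g1; the
#    constraint guarantees g2's best candidate a slot, displacing C.
```

The sketch trades a small amount of raw score for the representation guarantee, which is exactly the kind of tension between personalization and fairness discussed here.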
A more relaxed version of this provider-side constraint might appear in a con-
sumer taste domain, such as the recommendation of music tracks in a streaming
music service. The organization might have the goal of fair exposure of artists across
different demographic groups or across different popularity categories (Mehrotra
et al. 2018). List-wise guarantees might not be important, as there may be some users
with very narrow musical tastes and others who are more ecumenical. As long as the
goal of equalizing exposure is met, the precise distribution of that exposure over the
user population might be unimportant. In this case, it may be desirable to differen-
tiate between types of users for the purposes of fair exposure as in Liu et al. (2019) or
to more precisely target the types of diversity of interest to individual users (Sonboli
et al. 2020). A system that worked in this manner might need to inform users that
non-personalization objectives such as fairness are operative in the recommenda-
tions it produces.
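Exposure in this sense is often formalized with a position discount: higher-ranked items receive more attention, and exposure is summed per group across all users' lists. A minimal sketch with invented lists, artists, and groups:

```python
import math
from collections import defaultdict

def position_weight(rank):
    # DCG-style assumption that attention decays logarithmically with
    # rank; other decay curves are equally possible choices.
    return 1.0 / math.log2(rank + 1)

artist_group = {"a1": "g1", "a2": "g1", "a3": "g2", "a4": "g2"}

# Recommendation lists served to three users (best rank first).
lists = [["a1", "a2", "a3"], ["a1", "a3", "a2"], ["a2", "a1", "a4"]]

exposure = defaultdict(float)
for lst in lists:
    for rank, artist in enumerate(lst, start=1):
        exposure[artist_group[artist]] += position_weight(rank)

total = sum(exposure.values())
for g in sorted(exposure):
    print(g, round(exposure[g] / total, 2))  # each group's exposure share
# If the target is equal group exposure, the gap printed here is what an
# exposure-fair recommender would try to close across the whole user base.
```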
An important distinction between the cases above is that music tracks are
non-rivalrous goods: a music track can be played for any number of users, and its
utility is unaffected by the number of recommendations or their occurrence in time.
A job candidate is different. A highly qualified candidate may be present in the job
market for a very limited period of time. A recruiter who is recommended such a
candidate as soon as their resume appears in the system gets greater utility from the
recommendation than does a user who gets it later. A situation in which a highly
qualified candidate appears only to a limited number of recruiters is more valuable to
them than a situation in which their recommendations are shared with a larger group.
One could imagine that the recruiter-users would be rightly concerned that the
inclusion of multistakeholder considerations in recommendation might put them at a disadvantage relative to the purely personalized status quo. I say “might” here
because the close study of multistakeholder recommendation is sufficiently new that
it is unclear what the interactions are between recommendation quality as
experienced by users and the fairness properties of the associated results. Prelimi-
nary results in some applications indicate that simultaneous improvements in both
dimensions may be possible (Mehrotra et al. 2018). Where there is a tradeoff
between accurate results and fair outcomes, we may need to consider the distribution
of accuracy loss as a fairness concern across the users of the system (Patro et al.
2020).
The picture changes when we consider consumer-side fairness. Here we are
interested in fairness considerations across the end users themselves, and this
requires a community orientation in how the recommendation task is understood.
We can draw on employment as an example yet again, now in terms of
recommending jobs to users.
The tension between personalization and other goals becomes complex when we
consider that users’ behaviors, the raw data over which personalization operates, may themselves be subject to measurement inconsistency. For example,
female users are known to be less likely to click on ads that contain masculinized
language about job performance (e.g., “rock star programmer”), but this says nothing
about their underlying capabilities for such jobs (Hentschel et al. 2014). Even more
fundamentally, there may be differences among users in their experience of the data
gathering required by personalized systems; individuals experiencing disempower-
ment may identify the surveillance capacities that enable personalization as yet
another avenue of unwanted external control (V. I. Burke and R. D. Burke 2019).
Even if we postulate that profiles can be collected in a fair and acceptable manner,
it still may be the case that a system performs better for some classes of users than
others. Improving fairness for disadvantaged groups may involve lower performance
for others, especially in a rivalrous context like employment, where, as noted above,
recommending something to everyone is not desirable. For example, a recommender
system might optimize for fairness in such a way that a particularly desirable job is
shown more often to members of a disadvantaged group and less often to others.
How should a user think about bearing some of the burden, in the form of lower utility, of providing fairer recommendations to other users in the system? Accepting such
behavior in a system requires adopting a pro-social orientation toward the commu-
nity of fellow platform users, something that may not be easy to cultivate in a
multistakeholder recommendation context.
Finally, we should note that the perspectives of recommendation consumers and
item providers do not exhaust the set of stakeholders impacted by a recommender
system. Authors such as Pariser (2011) and Sunstein (2018) have noted the way in
which algorithmic curation of news and information has potential far-reaching
impacts on society and politics. Incorporating this wide set of stakeholders draws
the focus of a recommender system even further from personalization as a defining
characteristic.
3 Conclusion
This discussion shows that the need to incorporate fairness into personalization
systems requires more than just technical intervention. The way users think about
these systems will need to change radically, and the onus lies on technologists to
provide new terminology and new narratives that support this change. A key step
may be to acknowledge the multistakeholder nature of existing commercial appli-
cations in which recommendation and personalization are embedded and to chal-
lenge the simplistic user-centered narratives promulgated by platform operators.
Acknowledgments This work was supported by the National Science Foundation under Grant
No. 1911025.
References
Pariser, E. (2011). The filter bubble: How the new personalized web is changing what we read and
how we think. Penguin.
Patro, Gourab K et al. (2020). “Fairrec: Two-sided fairness for personalized recommendations in
two-sided platforms”. In: Proceedings of The Web Conference 2020, pp. 1194–1204.
Sonboli, Nasim et al. (July 2020). “Opportunistic Multi-aspect Fairness through Personalized
Re-ranking”. In: Proceedings of the 28th ACM Conference on User Modeling, Adaptation
and Personalization. UMAP ’20. Genoa, Italy: Association for Computing Machinery,
pp. 239–247.
Sunstein, C.R. (2018). #Republic: Divided democracy in the age of social media. Princeton
University Press.
Part VI
Platform Power
The Curation Chokepoint
James Larus
Abstract A key rationale for Apple and Google’s app stores was that they curate
apps to ensure that they do not contain malware. Curation has gone beyond this goal
and now unduly constrains the apps that you can use on your smartphone. It needs to
stop. App quality should be ensured with other techniques and by a wider range of
organizations than just Apple and Google.
Imagine, if you can, a dystopia in which your landlord decides what food you can bring home to your apartment, either to cook or to eat. Or that a manufacturer decides whether a
movie will play on your TV. Further imagine that your landlord and TV manufac-
turer demanded 30% of your grocery budget and Netflix subscription price for this
“service.”
This scenario seems absurd. However, it is the current situation on your
smartphone. Apple and Google are a duopoly that wrote the low-level software (the operating systems iOS and Android) that controls virtually all smartphones
throughout the world. Apple and Google decide which apps can be installed on
your smartphone, and they charge for this service.
Both companies operate an online “app store” that is the chokepoint in the
distribution of apps. Apple iPhones can only install apps distributed by Apple’s
App Store.1 Google permits alternative app stores but heavily promotes and favors
its Play Store (e.g., it is pre-installed and using another app store may require
changing a setting). Both companies’ app stores heavily curate the apps they accept,
and both charge a 30% commission on purchases of apps and on subsequent purchases performed within the apps themselves.
Neither company’s app store makes any pretense of being a neutral marketplace.
Listing an app in either store requires agreeing to a contract with precise terms
1. Apple makes an exception to allow companies to write and distribute software on smartphones that they own.
J. Larus (*)
EPFL, Lausanne, Switzerland
e-mail: James.larus@epfl.ch
2. Apple, in its Proposed Findings of Fact and Conclusions of Law in Epic’s lawsuit, asserts: “the iPhone platform accounted for just 0.85% of malware infections. DX-3141 at 15. By contrast, Android accounted for 47.15% and Windows/PC accounted for 35.82%” (Apple 2021).
3. Underwriters Labs (UL) is a company that provides safety testing and certification of products, particularly for the USA. CE certification indicates that a product sold in the European Union conforms to EU health, safety, and environmental protection standards.
allow a phone’s user control over its operation. A sandbox can prevent an app
from accessing the user’s private information, such as their address book, using
privacy-revealing mechanisms such as GPS, or communicating outside of the phone
with a radio or WiFi. It is a powerful mechanism for controlling what runs on
smartphones, too powerful to be left entirely in Apple and Google’s hands.
A phone’s owner should use these mechanisms to impose restrictions that
conform to their desired risk level and expected behavior. A 15-year-old teenager
may want to try edgy new apps and not be particularly concerned about personal
privacy. A 45-year-old CEO is likely to be genuinely concerned about their work
phone’s security but more relaxed about their personal phone. One size does not
fit all.
Apple and Google provide a valuable service by offering carefully inspected
apps. This initial motivation made the app stores and smartphones successful and
provided both companies with powerful commercial leverage to control and finan-
cially exploit the companies that write software for their phones. As governments
increasingly examine these practices, it is essential not to lose sight of these app
stores’ stated motivation. Curation to exclude malware can be done in many ways
and by many parties, and curation by content is only justified if alternative distribu-
tion mechanisms exist and are equally accessible.
References
Apple. 2021. "Apple Inc.'s Proposed Findings of Fact and Conclusions of Law." Case 4:20-cv-05640-YGR. https://www.scribd.com/document/502037049/21-04-07-Apple-Proposed-Findings-of-Fact-and-Conclusions-of-Law#from_embed.
Apple Developer. n.d. "App Store Review Guidelines." Accessed February 19, 2021. https://developer.apple.com/app-store/review/guidelines/.
Chung, Jean. 2021. "Apple Engineer Likened App Store Security to 'Butter Knife in Gunfight.'" Financial Times, April 9, 2021.
Epic Games. 2021. "Findings of Fact and Conclusions of Law Proposed by Epic Games, Inc." https://www.scribd.com/document/502036985/21-04-08-Epic-Games-Proposed-Findings-of-Fact-and-Conclusions-of-Law#from_embed.
Espósito, Filipe. 2020. "Apple Rejects 3rd-Party Tesla App Update as It Strictly Enforces Written Consent for Third-Party API Use." 9to5Mac, August 27, 2020. https://9to5mac.com/2020/08/27/apple-rejects-watch-for-tesla-app-as-it-starts-requiring-written-consent-for-third-party-api-use/.
Kirchgaessner, Stephanie, and Michael Safi. 2020. "Dozens of Al Jazeera Journalists Allegedly Hacked Using Israeli Firm's Spyware." The Guardian, December 20, 2020, sec. Media. http://www.theguardian.com/media/2020/dec/20/citizen-lab-nso-dozens-of-aljazeera-journalists-allegedly-hacked-using-israeli-firm-spyware.
Lee, Timothy B. 2020. "Apple Backs Down on Taking 30% Cut of Paid Online Events on Facebook." Ars Technica. https://arstechnica.com/gadgets/2020/09/apple-backs-down-on-taking-30-cut-of-paid-online-events-on-facebook/.
Nicas, Jack. 2021. "Apple Fortnite Trial Ends with Pointed Questions and a Toast to Popeyes." https://thecollegesave.com/2021/05/25/apples-fortnite-antitrust-trial-ends-with-pointed-questions-2/.
O'Neill, Patrick Howell. 2021. "How Apple's Locked Down Security Gives Extra Protection to the Best Hackers." MIT Technology Review, March 31, 2021. https://www.technologyreview.com/2021/03/01/1020089/apple-walled-garden-hackers-protected/.
Satariano, Adam, and Jack Nicas. 2019. "Spotify Accuses Apple of Anticompetitive Practices in Europe." The New York Times.
Wolff, Josephine. 2019. "Whatever You Think of Facebook, the NSO Group Is Worse." The New York Times, November 6, 2019. https://www.nytimes.com/2019/11/06/opinion/whatsapp-nso-group-spy.html.
Yen, Andy. 2020. "The App Store Is a Monopoly: Here's Why the EU Is Correct to Investigate Apple." ProtonMail Blog, June 22, 2020. https://protonmail.com/blog/apple-app-store-antitrust/.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0
International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing,
adaptation, distribution and reproduction in any medium or format, as long as you give appropriate
credit to the original author(s) and the source, provide a link to the Creative Commons license and
indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative
Commons license, unless indicated otherwise in a credit line to the material. If material is not
included in the chapter's Creative Commons license and your intended use is not permitted by
statutory regulation or exceeds the permitted use, you will need to obtain permission directly from
the copyright holder.
Business Model Innovation and the Rise
of Technology Giants
Geoffrey G. Parker
The rise of giant technology firms such as Amazon, Alibaba, Microsoft, and others
has been widely observed and commented upon in the popular press as well as the
academic literature. While such firms were once Silicon Valley and Shenzhen
phenomena, in recent years they have exploded in their power and reach. The drivers
of this rise stem first from dramatic improvements in the physics and materials science
underlying computation, digital storage, and communications technology. These
improvements have facilitated the growth of the information economy, with
implications for nearly every sector of society.
Beyond the raw improvements in information and communications technology
(ICT) hardware and associated software improvements, there have been
corresponding changes in the ways that organizations use their ICT investments.
G. G. Parker (*)
Dartmouth College, Hanover, NH, USA
e-mail: geoffrey.g.parker@dartmouth.edu
At an individual level, the willingness to connect with one another using tools such
as email and social media took years to emerge but has now become part of the fabric
of everyday life. At the firm level, it took many years for organizations to incorporate
improved ICT into their processes. Arguably, the productivity boom of the 1990s
was driven by firm-level learning as described by Brynjolfsson and Hitt (2000): “As
computers become cheaper and more powerful, the business value of computers is
limited less by computational capability and more by the ability of managers to
invent new processes, procedures and organizational structures that leverage this
capability.” This process of learning to use ICT effectively was foreshadowed by the
length of time that it took society to adapt to electric power. This was described by
Harford (2017) as follows. “Factories could be cleaner and safer—and more effi-
cient, because machines needed to run only when they were being used. But you
couldn’t get these results simply by ripping out the steam engine and replacing it
with an electric motor. You needed to change everything: the architecture and the
production process.” A quote from Roy Amara suggested that this delayed adapta-
tion is general: “We tend to overestimate the effect of a technology in the short run
and underestimate the effect in the long run.”1
Similar to the way that firms had to learn to incorporate electricity in the late
nineteenth and early twentieth centuries and then 100 years later learn to integrate
ICT into their business processes in the 1990s, we posit that the giant platforms have
learned to orchestrate the efforts of actors outside the firm so that they were able to
grow much more quickly than they otherwise would have. The dramatic rise of the
tech giant platform firm is as much a business model innovation as it is a technical
innovation. This can be seen in the way that firms such as Airbnb are able to compete
with traditional hotel chains such as Marriott despite owning no hotels and directly
employing very few people. Similarly, firms such as Lyft and Uber entered taxi and
limousine markets despite employing few people and owning minimal physical
assets. In particular, platforms are able to facilitate value-creating interactions
between external producers, content providers, developers, and consumers
(Constantinides et al. 2018).
The ecosystem innovation required new managerial capabilities as well as new
economic thinking (Helfat and Raubitschek 2018). The decision of whether to
vertically integrate as firms or to instead use markets has been a long running
topic of the “markets and hierarchies” literature (Williamson 1975). Parker et al.
(2017) introduced the term “inverted firm” to describe the shift from value creation
inside the firm to outside. Importantly, platform firms are an intermediate organizational form that lies between pure market and pure hierarchy. The platform organizational form echoes the organizational forms that rely heavily on outsourcing. The automotive industry provides some of the best documented examples of the heavy use of outsourcing, especially in the descriptions of Toyota and its tightly integrated supply chain partners (Womack et al. 1990).

1 https://www.oxfordreference.com/view/10.1093/acref/9780191826719.001.0001/q-oro-ed4-00018679
Platforms have taken this idea of working with external partners and made it
possible for many more to participate at much lower cost. To do this, they employ
open architectures with published standards to enable different types of users to
interact to exchange physical and digital goods as well as services (Constantinides
et al. 2018). Platforms also invest in organizational capabilities to provide gover-
nance rules and the capacity to enforce those rules. The goal is to maintain control
over the platform while maintaining incentives for ecosystem partners to participate.
In addition, investments in governance help to foster market safety so that users feel
secure that they will not be taken advantage of when making transactions on the
platform (Roth 2007).
Platforms are also able to facilitate, and benefit from, network effects built out of
contributions that are individually too small to matter. The widely discussed concept
of network effects describes the situation where systems become more valuable to
users as the number of users increases (Shapiro and Varian 1998). For example, the
ratings that users provide on Netflix or YouTube create benefits for other users that
are too small for any one individual to try to reward or capture; transaction costs
would overwhelm the benefits from such interactions. By using common systems,
platforms are able to aggregate and analyze ratings information and make it available
to all users in the form of direct rankings and better matching and filtering of content
to consumers.
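A toy sketch of this aggregation logic (invented data; no platform's actual ranking algorithm is this simple) shows how many individually negligible rating signals become a collectively valuable ranking:

from collections import defaultdict

# (user, item, stars): each signal alone is worth too little to trade on.
ratings = [("ana", "film_a", 5), ("ben", "film_a", 4),
           ("ana", "film_b", 2), ("ben", "film_b", 1), ("cat", "film_b", 3)]

totals = defaultdict(lambda: [0, 0])          # item -> [star sum, count]
for _user, item, stars in ratings:
    totals[item][0] += stars
    totals[item][1] += 1

# The platform captures the aggregate and serves it back as a ranking.
ranking = sorted(totals, key=lambda i: totals[i][0] / totals[i][1], reverse=True)
print(ranking)  # ['film_a', 'film_b']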
Almost inevitably, the rapid growth of the largest technology firms has spurred calls
for the regulation of those firms. Concerns range over issues such as antitrust and
abuse of dominance, privacy, fair compensation for data provided, the dissemination
of false information, and the regulation of speech. The European Union has led the
world by implementing regulations, such as GDPR (General Data Protection Reg-
ulation) and PSD2 (Revised Payment Services Directive), and has recently proposed
the DMA (Digital Markets Act). A recent panel report from the European Commis-
sion Joint Research Centre analyzes the different issues that the DMA is designed to
address (Cabral et al. 2021). The panel’s goal was to explain the economics behind
the DMA and to comment on the proposals. Under the proposal, all online interme-
diaries offering services and content in the EU, whether they are established in the
EU or not, have to comply with the new standards. Small companies will have
obligations proportionate to their ability and size but will remain accountable.
One critical impact that the Cabral report notes is the creation of "black lists" of
prohibited platform activities and "white lists" of allowable practices. The panel
proposed a "grey list" of activities that platforms might be able to argue are more
likely to benefit consumers than to do harm. The Cabral panel also focused
considerable attention on data sharing and the obligation for gatekeeper platforms to
allow equal data access to all market entrants at the point of collection—known as
“in situ.” Such a mechanism avoids the need for consumers, or firms that are
authorized to access consumer data, to download data. Instead, an in situ mechanism
brings algorithms to the data where it is collected and stored. This should foster both
security and better access for potential entrants.
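The in situ idea can be sketched as follows (a conceptual illustration only; neither the DMA proposal nor the Cabral report prescribes an API, and all names here are invented): the entrant ships an algorithm to the data, and only a derived result crosses the boundary.

# Entrant-supplied algorithm runs where the data is collected and stored.
def run_in_situ(algorithm, gatekeeper_records):
    """Return only the derived result; the raw records never leave."""
    return algorithm(gatekeeper_records)

records = [{"user": "u1", "bought": True},   # stays inside the gatekeeper
           {"user": "u2", "bought": False}]

conversion = run_in_situ(lambda rows: sum(r["bought"] for r in rows) / len(rows),
                         records)
print(conversion)  # 0.5 -- an aggregate, not the underlying personal data

The design goal is that security review and access control happen around the algorithm's execution rather than around bulk data transfers.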
4 Conclusions
The rise of technology firms and platform business models has created considerable
benefits for consumer and business users. However, the rise also poses enormous
challenges for regulators, incumbent firms, and individuals. Much of what econo-
mists and business scholars have studied derives from the first wave which was
largely a story of business-to-consumer (B2C) platforms. There are now signs that
the next wave of investment will be directed toward the growth of business-to-
business (B2B) platforms. This raises important questions over the ways in which B2C
and B2B platforms will have similar characteristics and along what dimensions they
are most likely to differ.
References
Brynjolfsson, E. and Hitt, L.M., 2000. Beyond computation: Information technology, organiza-
tional transformation and business performance. Journal of Economic Perspectives, 14(4),
pp.23-48.
Cabral, L., Haucap, J., Parker, G., Petropoulos, G., Valletti, T. and Van Alstyne, M., 2021. The EU
Digital Markets Act, Publications Office of the European Union, Luxembourg, ISBN 978-92-
76-29788-8, doi:https://doi.org/10.2760/139337, JRC122910.
Constantinides, Panos, Ola Henfridsson, and Geoffrey G. Parker. 2018. “Introduction—Platforms
and Infrastructures in the Digital Age.” Information Systems Research 29, no. 2: 381–400.
https://doi.org/10.1287/isre.2018.0794.
Harford, T. “Why didn’t electricity immediately change manufacturing.” BBC World Service
50 (2017). Retrieved on 5-April-2021 from https://www.bbc.com/news/business-40673694.
Helfat, Constance E., and Ruth S. Raubitschek. 2018. “Dynamic and integrative capabilities for
profiting from innovation in digital platform-based ecosystems.” Research Policy 47.8: 1391-
1399.
Parker, Geoffrey, Marshall Van Alstyne, and Xiaoyue Jiang. 2017. “Platform Ecosystems: How
Developers Invert the Firm.” MIS Quarterly 41, no. 1: 255–66. https://doi.org/10.25300/misq/
2017/41.1.13.
Roth, Alvin E. 2007. “The art of designing markets.” Harvard Business Review 85.10: 118.
Shapiro, Carl, and Hal R. Varian. 1998. Information Rules: A Strategic Guide to the Network
Economy. Harvard Business Press.
Williamson, O.E., 1975. Markets and hierarchies: analysis and antitrust implications: a study in the
economics of internal organization. New York: Free Press.
Womack, J. P., Jones, D. T., & Roos, D. 1990. The Machine that Changed the World. Simon and
Schuster.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0
International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing,
adaptation, distribution and reproduction in any medium or format, as long as you give appropriate
credit to the original author(s) and the source, provide a link to the Creative Commons license and
indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative
Commons license, unless indicated otherwise in a credit line to the material. If material is not
included in the chapter's Creative Commons license and your intended use is not permitted by
statutory regulation or exceeds the permitted use, you will need to obtain permission directly from
the copyright holder.
Scaling Up Broken Systems?
Considerations from the Area of Music
Streaming
Peter Knees
1 Introduction
P. Knees (*)
Vienna University of Technology, Vienna, Austria
e-mail: peter.knees@tuwien.ac.at
Overall, this system favored the record companies over the individual artists and
creators, as controlling the means of music production allowed them to exert power
over artistic decisions and aggregate various licensing rights while maintaining
strong ties to the media and other distribution channels. In practice, the market
was controlled by the major record labels and a number of gatekeepers, establishing
an oligopoly that works for a few at the core of the industry at the expense of the
many contributors. As a side effect of this model, however, record companies could
also afford to finance the development of new artists, i.e., to invest in artists and
repertoire (A&R). For independent artists, the diversity of media outlets and distribution
channels nonetheless allowed them to find niches and address potential
audiences.
The end of the twentieth century brought new technological developments, again
impacting the music industry—and again, before all others. The adoption of the
Internet as communication medium and the invention of audio data compression
algorithms and formats, specifically MP3, led to active exchange of digital music
files among users via peer-to-peer networks such as Napster, outside
all established channels, infrastructures, and monitoring mechanisms built by the
music industry. Needless to say, the resulting, almost instant availability of compre-
hensive music repositories to every peer of the network posed a threat to the core
business model of the industry. Instead of embracing the new technological
opportunities, the industry criminalized the developers and users of these networks
and foretold the downfall of popular musicians, and of music as an art form itself,
as it would not be able to keep up the monetization strategy that supported all
stakeholders, related businesses, and intermediaries involved. As a reaction, the
entertainment industry successfully lobbied for changes in copyright and intellectual
property rights laws to further strengthen its position.
A few years into these developments, which indeed brought declining profits for
the music ecosystem in comparison to the peak of CD sales in 2001 (IFPI 2021), new
players and services explored, in alternative business models, the potential of the
informatization of music reproduction and its decoupling from physical media.
These ranged from traditional e-commerce models of digital stores for music files to
newer paradigms for unlimited music streaming, such as flat-rate subscriptions or
freemium models, typically with an ad-based entry level. Together with the rise of
smartphones that pushed an “always online” philosophy, new forms of access to
music were enabled.
Generally, this framework of digital distribution via online services offers many
advantages over the historic approach, along with the potential to mitigate its
shortcomings for the various stakeholders. For music consumers, instant, cheap, and legal
access to large amounts of music becomes feasible. For music creators, including
those who contribute to the production process, it can instantly provide traceable
delivery of their music to their audiences without having to deal with the various
gatekeepers and distribution hubs. For the industry in between, the logistics of physical
retail are no longer necessary. For the music platform itself, being the central hub for
delivering music to users allows it to track music listening behavior and build
personalized services such as recommender systems and advanced interfaces for
music discovery (Knees et al. 2020). In this scenario, however, record labels might
lose their position of power and be bypassed easily. Their bargaining chip to retain
relevance in this development is the control over the back catalogues, i.e., licensing
access to records by well-known and popular artists whose availability is imperative
for broad adoption. Ultimately, the disruption of the business introduced streaming
platforms as additional central market players, next to record companies, whereas
brick-and-mortar retail has largely vanished.
While sales of physical media have been declining since 2001, the rising revenue
from streaming has almost compensated for the losses and delivered an overall
turnaround in 2015, today constituting the largest source of revenue (IFPI 2021).
With more than 440 million paid subscription users worldwide (ibid.), streaming is
the dominant modality of (traceable) music consumption. Current catalogues of
the market-leading streaming platforms such as Spotify, SiriusXM Pandora, Apple
Music, Amazon Music, or YouTube Music offer up to 70 million music tracks
(Spotify 2021). To sum up, music streaming has managed to scale up the music
business both in terms of content accessible and users reached.
But how is this situation still (or even more?) profitable with more players to
support? The payments made by streaming companies for individual streams are
extremely small, on average about USD 0.004 per stream (see Pastukhov 2019),
and often even negligible in sum per artist. Moreover, this amount is paid to the
owners of the master recordings, which are typically the record labels, which then
pass on only a small fraction to the artists. A reason for this practice is that revenue
from streaming is often not explicitly negotiated in contracts, particularly in those
signed before the technology existed. Hence, record companies benefit from licensing
their back catalogues, as it allows them to keep the largest share of the royalty
payments to themselves. Also, recent efforts to present fairer money distribution
models, e.g., by distributing subscribers' fees according to their individual listening
preferences, as implemented in Deezer's user-centric payment system (Deezer 2021),
are not improving the situation, as again the money is not paid directly to the artists
but to the producers and publishers.
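A small numerical sketch makes the contrast between the prevailing pro-rata model and a user-centric model such as Deezer's concrete (invented figures: the subscription price, the 70% payout share, and the stream counts are assumptions, and under both models the money still flows to rights owners rather than directly to artists):

# One subscriber pays 10.00; assume the platform pays out 70% of revenue.
payout_pool = 10.00 * 0.70

subscriber_streams = {"superstar": 5, "indie_artist": 45}            # this user
platform_streams = {"superstar": 9_000_000, "indie_artist": 10_000}  # all users

# Pro-rata: the pool is split by global stream share, so big catalogues dominate.
total = sum(platform_streams.values())
pro_rata = {a: payout_pool * n / total for a, n in platform_streams.items()}

# User-centric (Deezer-style): each subscriber's pool follows their own listening.
own = sum(subscriber_streams.values())
user_centric = {a: payout_pool * n / own for a, n in subscriber_streams.items()}

print(round(pro_rata["indie_artist"], 4))      # 0.0078: negligible in sum
print(round(user_centric["indie_artist"], 2))  # 6.3: follows the fan's listening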
Furthermore, despite the technical possibility to trace all plays of individual tracks,
information about play counts across the various platforms remains opaque for music
creators, especially on platforms driven by user-generated content, like YouTube,
where content identification again focuses on the needs of big rights owners and
neglects independent artists. These shortcomings themselves led again to the
establishment of disruptive
5 . . . And Beyond?
While in the music business the old industry reinvented its role and ultimately
managed to shape the new music industry based on its assets and to absorb the
disruptive element, in other domains, disruption has led to a replacement of the intermediaries
and gatekeepers. In the movie domain, Netflix, first as a recommender-driven mail-in
service, then as a movie streaming pioneer, completely eliminated the established
video and DVD rental business and opened up a market that is now a battleground
for several competitors following the same model. In the transportation area, Uber
entered a long-time undisputed market and took large shares by storm. In the tourism
domain, platforms like Booking and Expedia pushed the traditional business model
of travel agencies close to the edge of the cliff (while the COVID-19 crisis seemingly
finally pushed it over).
While Web-based services are at the center of all these examples and provide a
new dimension of user experience in each of these domains, it is never the user side
that makes the business profitable. In music, the changed industry landscape is carried by
shortchanging the composers and creators of music through old contracts or reduced benefits.
In transportation, the pressure is put on drivers to make up for reduced wages by
increasing customer throughput. In the tourism domain, hotels are put under further
pressure to reduce their own profit margins and undersell their capacities to get
visibility on platforms. In the movie domain, on the other hand, the competition of
streaming services and their newly founded studios seems to increase the opportu-
nities within the movie and TV industry rather than forcing the producers and
workers to produce more for less compensation. In this context, it should be noted
that the movie industry, especially in the USA, is strongly unionized.
The user is not spared from possible negative consequences either. Returning
to the music example, the status quo of having access to virtually all music all
the time is a utopian situation for music aficionados, and the technical developments
that have enabled this situation over the last decades are a shining example of the
blessings of information technology. At the same time, users become strongly
dependent on and locked into such platforms. On one hand, "having access" does
not mean "to own," and "personal collections" consisting of playlists created on
platforms might become incomplete or go missing due to changed circumstances in
licensing. On the other hand, such services become more indispensable the more
they are used. This might lead to the more general observation that “the rise in
market concentration is greater in industries that are more intensive users of digital
technologies” (Qureshi 2019); i.e., the disruption of industries by means of digital
technologies itself promotes the emergence of mono- or oligopolies. In the long run,
this is disadvantageous for the user and most other stakeholders involved.
To conclude, technology drives and “sells” disruptive business models. The
increased profit of these models is often and largely generated by exploiting the
people who provide the actual value behind the product, especially if they are not
organized to fight for their joint interests. The users further subsidize these
businesses simply through usage. From the perspective of Digital Humanism, this
situation is unsatisfactory. We should first use technology to overcome the many
broken systems before helping to scale them up.
References
Pastukhov, D. (2019). What Music Streaming Services Pay Per Stream (And Why It Actually
Doesn’t Matter). Soundcharts. Online: https://soundcharts.com/blog/music-streaming-rates-
payouts, retrieved: 05/02/2021.
Qureshi, Z. (2019). Inequality in the Digital Era. Work in the Age of Data. BBVA OpenMind
Collection (12), BBVA.
Spotify (2021). Company Info. Online: https://newsroom.spotify.com/company-info/, retrieved:
05/02/2021.
Zuboff, S. (2018). The Age of Surveillance Capitalism: The Fight for the Future at the New Frontier
of Power. London: Profile Books.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0
International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing,
adaptation, distribution and reproduction in any medium or format, as long as you give appropriate
credit to the original author(s) and the source, provide a link to the Creative Commons license and
indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative
Commons license, unless indicated otherwise in a credit line to the material. If material is not
included in the chapter's Creative Commons license and your intended use is not permitted by
statutory regulation or exceeds the permitted use, you will need to obtain permission directly from
the copyright holder.
The Platform Economy After COVID-19:
Regulation and the Precautionary Principle
Cristiano Codagnone
Abstract Online platforms are two-sided or multisided markets whose main function
is matching different groups (of producers, consumers, users, and advertisers, i.e.,
hosts and guests in Airbnb, audiences and advertisers in Google, etc.) that might
otherwise find it more difficult to interact and possibly transact. Some of the potential
critical issues associated with the platform economy include the relationship
between personhood (the quality and condition of being an individual person with a
protected sphere of privacy and intimacy) and personal data, on which the platform
economy thrives by extracting behavioral surplus; scale to dominance and market
power; and lock-in for businesses. In this chapter, I first briefly review how the
pandemic crisis has impacted the platform economy and which problems are being
exacerbated. I then focus the core part of my analysis on the issue of
regulation and particularly on the merits and limits of applying the precautionary
principle when addressing the online platform economy.
1 Introduction
C. Codagnone (*)
Dipartimento di Scienze Sociali e Politiche, Università degli studi di Milano, Milano, Italy
e-mail: Cristiano.codagnone@unimi.it
Central features of platforms are direct and/or indirect network effects. In platforms,
more users beget more users, a dynamic which in turn triggers a self-reinforcing
cycle of growth. Platforms represent a new structure for organizing economic and
social activities and appear as a hybrid between a market and a hierarchy. They are
matchmakers like traditional markets, but they are also companies, heavy in assets and
often quoted on the stock exchange. Platforms, therefore, are not simply technological
corporations but a form of "quasi-infrastructure." Indeed, in their coordination
function, platforms are as much an institutional form as a means of innovation.
In my previous work on the topic (Bogliacino et al. 2020; Codagnone et al. 2019;
Mandl and Codagnone 2021), I discussed some of the potential critical issues
associated with the platform economy. First, there is the relationship between
personhood (the quality and condition of being an individual person with a protected
sphere of privacy and intimacy) and personal data, on which the platform economy
thrives by extracting behavioral surplus (Zuboff 2019). The extraction of personal
behavioral data, including data about things one may consider very personal and secret,
is a violation of personhood as defined above. In particular, "loss of control over
personal information creates a variety of near-term and longer-term risks that are
difficult for individuals to understand – and, importantly for antitrust purposes,
therefore impossible for them to value” (Cohen 2019, p. 175). Access to such
personal data enables companies to “hyper-nudge” consumers (Yeung 2017).
Short-circuiting behavioral data and algorithmic learning, online platforms enact
very powerful nudges guiding decisions and reducing consumers’ autonomous
choices. Second, there is the problem of competition and the potential monopolistic
or oligopolistic outcomes of the platform economy, considering also the number of
mergers and acquisitions (M&A) completed by the most powerful of them (the
so-called GAFAM: Google, Amazon, Facebook, Apple, and Microsoft) between 2013
and 2019 (see, for instance, Lechardoy et al. 2021, pp. 46–47). Third, the platform
economy, and especially online labor platforms, is contributing to the fragmentation
of work and to the rise of new precarious work forms (Mandl and Codagnone 2021).
In this chapter, using information mostly from the European Commission
"Observatory on the Online Platform Economy" (https://platformobservatory.eu/) and
the analytical paper by Lechardoy et al. (2021), I first briefly review how the pandemic
crisis has impacted the platform economy and which problems are being exacerbated.
I then focus the core part of my analysis on the issue of regulation and
particularly on the merits and limits of applying the precautionary principle when
addressing the online platform economy.
systemic surveillance. Furthermore, it has been observed that the pandemic has
helped incumbents such as Google to regain legitimacy and momentum (Cinnamon
2020).
As reviewed, the effects of COVID-19 have increased some of the policy concerns
surrounding the online platform economy. These concerns were already high on
regulators' agendas before the pandemic outbreak, with a debate polarized basically
between two positions.
The first is the libertarian and impossibility-statement position. According to this
view, any attempt to regulate online platforms and, more broadly, the current digital
transformation (including Artificial Intelligence, AI) would stifle innovation and
produce undesirable side effects. In extreme fashion, this discourse can be summa-
rized with the view that regulation is the mortal enemy of innovation (Cohen 2019,
p. 178). Hence, for the sake of economic growth and innovation, matters should be
deregulated, and/or their governance should be devolved to the private sector
through various forms of self-regulation and de facto standardization. A corollary
of this discourse is that attempts at regulation are touted as new forms of protection-
ism. A second discourse takes the form of an “impossibility statement.” Regulation
of current development is and will remain technically complex and beyond the reach
of the cognitive tools and processes available to regulators. The impossibility
statement's implication is that, in an age when algorithmic governance is emerging as a
new form of business strategy, regulators cannot keep up and should only hope and
wait until algorithms improve and better regulate themselves.
The most sustained counterargument has been developed by the law scholar Julie
Cohen, who argues that the current digital transformation requires regulatory innovation
not only on the "what" (new rubrics of activities needing regulation) but also
on the "how," meaning entering the domain of algorithmic governance (2019,
pp. 182–185 and 200–201). Cohen and other scholars of the digital transformation are
in favor of applying the precautionary principle to regulate, for instance, the way
platforms gather personal data that generates a behavioral surplus. To a large extent,
the difference between a precautionary approach to regulation and a cost-benefit one
is how the object of analysis is positioned between the two poles of uncertainty and
risk. Under high uncertainty, and where the absence of regulation could produce
serious harms, a precautionary approach would favor introducing regulation a
priori to preempt such harms. The best analogy is the introduction of lockdowns
across entire populations, without any cost-benefit analysis, based on the uncertain
but serious danger of wider spread and more deaths from COVID-19.
So, given the complexity and uncertainty surrounding the development of platforms
and related technologies, Cohen argues for moving from a risk perspective
backing a cost-benefit approach to introducing policy and regulation based on the
precautionary principle. Competition regulation in the context of the digital
4 Conclusions
Without entering into the merits of the two opposing positions characterized above, I
am going to conclude this chapter with a discussion of the application of the
precautionary principle versus a case-by-case cost-benefit analysis as different
approaches to regulation.
The application of the precautionary principle turns the complexity and uncertainty
argument, often used by libertarian and/or tech lobbyists, on its head.
According to Cohen (2019), for instance, platformization and algorithmic governance
create such a level of complexity and uncertainty that it warrants moving away
from a risk perspective backing a cost-benefit approach to policy and regulation,
toward an uncertainty perspective backing a precautionary approach that would
prescribe intervening with regulation. From this perspective, a precautionary approach
should be adopted, and more stringent regulation enacted, when uncertainties concern
crucial and value-relevant issues. On the other hand, the precautionary approach
has been criticized as "the law of fear" and considered inferior to a cost-benefit
analysis approach to policy issues on a case-by-case level (Sunstein 2005).
Although reasonable a priori, the precautionary principle is usually contested on
two grounds: (a) if regulation is defended on the principle of the worst scenario, then
a lack of regulation can be defended by the same argument when the consequences
of strict regulations are potentially very negative; and (b) the precautionary principle
claims that dangers should not be downplayed, but this exposes the risk of building a
negative public discourse that would block innovators.
There is a point in Sunstein’s critique of the precautionary principle, in that by
reacting to uncertainty and complexity with across-the-board regulation may end up
stifling true innovation without cutting the nails of the incumbents. There are many
innovative platforms, and not all of them are or will become as GAFAM. The latter
and the concerns they raise can only be dealt with new competition policy instru-
ments and cases and with political will to do so. On the other hand, regulators should
incentivize relevant actors to adopt governance standards and procedures that will
support their efforts to operationalize a trustworthy digital transformation and online
platform economy. Furthermore, they should support the development of technologies,
systems, and tools to help relevant actors identify and mitigate relevant risks.
On this view, incentivizing organizations to adopt robust internal governance, and
equipping them with tools to identify and mitigate risk, is more effective
than a regulatory regime that mandates specific outcomes. New regulation should
support ongoing efforts to build best practices, rather than risk cutting them short
with inflexible rules that may not be able to adapt to a rapidly changing field of
technology.
In conclusion, regulators should carefully weigh the pros and cons of policy
responses that adopt the precautionary principle against those that support a case-by-case
cost-benefit analysis before introducing any new piece of legislation.
References
Bogliacino, F., Codagnone, C., Cirillo, V., & Guarascio, D. (2020). Quantity and quality of work in
the platform economy. In K. Zimmermann (Ed.), Handbook of Labour, Human Resources and
Population Economics. London: Springer Nature.
Cinnamon, J. (2020). Platform philanthropy, 'public value', and the COVID-19 pandemic moment.
Dialogues in Human Geography, 10 (2), pp. 242-245.
Codagnone, C., Karatzogianni, A., & Matthews, J. (2019). Platform Economics: Rhetoric and
Reality in the ‘Sharing Economy’. Bingley, United Kingdom: Emerald Publishing Limited.
Cohen, J. (2019). Between Truth and Power: The Legal Constructions of Informational Capitalism.
Oxford: Oxford University Press.
Craglia, M. et al. (2020). Artificial Intelligence and Digital Transformation: early lessons from the
COVID-19 crisis. Luxembourg: Publications Office of the European Union.
Lechardoy, L., Sokolyanskaya, A. & Lupiáñez-Villanueva, F. (2021). Analytical paper on the
structure of the online platform economy post COVID-19 outbreak. Brussels: Observatory on
the Online Platform Economy, European Commission.
Mandl, I., & Codagnone, C. (2021). The Diversity of Platform Work— Variations in Employment
and Working Conditions. In H. Schaffers, M. Vartiainen, & J. Bus (Eds.), Digital Innovation
and the Future of Work (pp. 177-195). Gistrup, Denmark: River Publishers.
Sunstein, C. (2005). The Law of Fear. Cambridge: Cambridge University Press.
Yeung, K. (2017). ‘Hypernudge’: Big Data as a mode of regulation by design. Information,
Communication & Society, 20(1), 118-136.
Zuboff, S. (2019). The Age of Surveillance Capitalism. London: Profile Books Ltd.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0
International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing,
adaptation, distribution and reproduction in any medium or format, as long as you give appropriate
credit to the original author(s) and the source, provide a link to the Creative Commons license and
indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative
Commons license, unless indicated otherwise in a credit line to the material. If material is not
included in the chapter's Creative Commons license and your intended use is not permitted by
statutory regulation or exceeds the permitted use, you will need to obtain permission directly from
the copyright holder.
Part VII
Education and Skills of the Future
Educational Requirements for Positive
Social Robotics
Johanna Seibt
Abstract Social robotics does not create tools but social ‘others’ that act in the
physical and symbolic space of human social interactions. In order to guide the
profound disruptive potential of this technology, social robotics must be
repositioned—we must reconceive it as an emerging interdisciplinary area where
expertise on social reality, as physical, practical, and symbolic space, is constitu-
tively included. I present here the guiding principles for such a repositioning,
“Integrative Social Robotics,” and argue that the path to culturally sustainable
(value-preserving) or positive (value-enhancing) applications of social robotics
goes via a redirection of the humanities and social sciences. Rather than creating
new educations by disemboweling the humanities and social sciences, students need
to acquire full disciplinary competence in these disciplines, as well as the new skill of
directing these qualifications toward membership in multidisciplinary developer teams.
So-called social robots are artificial agents designed to move and act in the physical
and symbolic space of human social interactions—as automated guides, reception-
ists, waiters, companions, tutors, domestic assistants, etc. According to current
projections, already by 2025 there will be a US$100 billion market for service
robots, and by 2050 we might have automated 50% of all work activities
(McKinsey 2017). As economists gladly usher in the “automation age” (ibid.), it
is crucial to be clear on a decisive difference between digitalization and
unembodied AI on the one hand and embodied social AI on the other. The ‘as if’
of simulated sociality in embodied AIs (social robots) captivates us with discon-
certing ease. For the first time in human cultural history, we produce, for economic
reasons, technological artifacts that are no longer tools for us—we are building
“social others”.
J. Seibt (*)
Aarhus University, Aarhus, Denmark
e-mail: filseibt@cas.au.dk

More than a decade of human-robot interaction (HRI) research reveals how
willingly humans engage with social robots, practically but also at the affective
level, and these research results raise far-reaching theoretical and ethical questions.
Should the goings-on between humans and robots really count as social actions?
Will we come to prefer the new “friends” we made to human friendships we need to
cultivate? If robots display emotions, which increases the fluidity of social interac-
tions (Fischer 2019), will we be able to learn not to respond with moral emotions
(sympathy)? Or should robots have rights? Will social robots de-skill us for
interacting authentically with other people?
Decisions pertaining to the use of social robots are not only highly complex—as
the example of sex robots may illustrate most strikingly—but also bound to have
momentous socio-cultural repercussions. However, research-based policy making
on social robots is currently bogged down in a “triple gridlock of description,
evaluation, and regulation” (Seibt et al. 2020a, b) combining descriptive and pre-
scriptive uncertainty. Currently, we do not know precisely how to describe human
reactions to robots in non-metaphorical ways; the lack of precise and joint termi-
nology hampers the comparability of empirical studies; and the resulting predictive
uncertainties of our evaluations make it impossible to provide sufficiently clear and
general regulatory recommendations.
Robotics engineers increasingly appreciate that their creations require decisional
competences far beyond their scientific educations. Single publications (see, e.g.,
Nourbakhsh 2013; Torras 2018) and the recent IEEE "Global Initiative on Ethics of
Autonomous and Intelligent Systems" document impressive efforts "to ensure every
stakeholder involved in the design and development of autonomous and intelligent
systems is educated, trained, and empowered to prioritize ethical considerations so
that these technologies are advanced for the benefit of humanity” (IEEE n.d.).
While we should wholeheartedly endorse these efforts, the question arises
whether new educational standards, requiring obligatory ethics modules in engineer-
ing educations, will be sufficient. More precisely, these efforts may not suffice as
long as we retain the current model of the research, design, and development process
(RD&D model) in social robotics.
According to the current RD&D model, roboticists, supported by some relevant
expertise from other disciplines, create an object (robot) which is supposed to
function across application contexts. What these objects mean in a specific applica-
tion context hardly comes into view. Even if engineering students were to acquire
greater sensitivity for the importance of ethical considerations—e.g., as a first step,
the insight that “ethical considerations” go beyond research ethics (data handling,
consent forms, etc.)—it is questionable that even a full year of study (using the
European nomenclature: a 60 ECTS module) could adequately impart competences
for responsible decision-making about the symbolic space of human social interac-
tions. The symbolic space of human interactions is arguably the most complex
domain of reality we know of—structured not only by physical and institutional
conditions but also by individual and socio-cultural practices of “meaning making,”
with dynamic variations at very different time scales. Even a full master’s study (4–5
years) in the social sciences or the humanities is barely enough to equip students with
professional expertise (analytical methods and descriptive categories) necessary to
understand small regions or certain aspects of human social reality.
In short, given that the analysis of ethical and socio-cultural implications of social
robotics applications requires professional expertise in the social sciences or human-
ities, which short ethics modules cannot provide, our current RD&D model for the
development of social robotics applications places responsibilities on the leading
engineers that they cannot discharge.
The way forward is thus to modify the RD&D model for social robotics applica-
tions. In line with design strategies such as “value-sensitive design” (Friedman et al.
2002), “design for values” (Van den Hoven 2005), “mutual shaping” (Šabanović
2010), and “care-centered value-sensitive design” (Van Wynsberghe 2016), the
approach of “Integrative Social Robotics” (ISR) (Seibt et al. 2020a, b) proposes a
new developmental paradigm or RD&D model that is tailor-made for our current
situation. As a targeted response to the triple-gridlock and the socio-cultural risks of
social robotics, ISR postulates an RD&D process that complies with the following
five principles (for details, see ibid.):
(P1) The Process Principle: The products of an RD&D process in social robotics are not
objects (robots) but social interactions.
This principle makes explicit that social robotics generates not instruments but new
sorts of interactions that (1) are best understood as involving forms of asymmetric
sociality and, even more importantly, (2) belong to a complex network of human
social interactions. This shift in our understanding of the research focus of social
robotics immediately motivates the following principle:
(P2) The Quality Principle: The RD&D process must involve, from the very beginning and
throughout the entire process, expertise of all disciplines that are directly relevant for the
description and evaluation of the social interaction(s) involved in the envisaged application.
Since, according to ISR Principle 2, the Quality Principle, researchers from the
humanities and, in particular, ethicists are included in the RD&D process, the
Context Principle ensures that both the individual preferences of the stakeholders
and the interests of society at large are taken into account. The Context Principle
acknowledges the complexity of social reality, but also expresses a commitment to a
combined empirical (bottom-up) and normative (top-down) determination of “what
matters” in the given application context. This is reinforced by the following
principle:
(P5) The Values-First Principle: Target applications of social robotics must comply with a
specification of the Non-Replacement Maxim: social robots may only do what humans
should but cannot do. (More precisely: robots may only afford social interactions that
humans should do, relative to value V, but cannot do, relative to constraint C.) The
contextual specification of the Non-Replacement Maxim is established by joint deliberation
of all stakeholders. Axiological analyses and evaluations are repeated throughout all stages
of the RD&D process.
Claim 1, postulating the constitutive inclusion of the humanities and social sciences,
is a direct consequence of the second principle of ISR, the “Quality Principle”: it is
scientifically irresponsible to build high-risk applications without involving those
disciplines that can professionally assess the risks of value corruptions (the corrup-
tion of public discourse and fact-finding practices by social media algorithms
illustrates the consequences of irresponsible technology development).
Claim 3 follows from the fifth principle of ISR, the "Values-First Principle,"
which offers a gradual, piecemeal exit from the current triple gridlock of
research-based regulation. If developer teams concentrate on positive, value-enhancing
applications, we can gradually learn more about human encounters with social
robots while reducing the potential risks—building the applications “we could
want anyway.” However, as my commentaries on the Values-First Principle may
convey, value discourse and the analysis of values require a certain mindset, a
tolerance for ambiguity, complexity, and deep contextuality, that cannot be learned
by top-down rule application or as “how-to” lateral transfer. While an education in
the humanities cultivates the development of such special mindsets, this is not so in
other disciplines. A preparatory course in applied ethics with well-chosen examples
(see, for instance, course material in connection with Torras 2018) can gradually
train students from non-humanities disciplines to understand more about epistemic
attitudes and standards of reasoning in complex normative domains, but it will likely
not suffice for them to acquire these. Vice versa, as currently explored in a supplementary
educational module on “Humanistic Technology Development” at Aarhus Univer-
sity, in order to create epistemological and communicational interfaces from the
other side, humanities students should learn a bit of programming and design and
build a rudimentary robot.
Finally, let us consider claim 2, the claim that we had better not push for new
educations, at least not now. This claim might be surprising, in view of
interdisciplines like bioinformatics or nanotechnology where the identification of a
new research field led quickly to the introduction of new educations. On the other
hand, as Climate Research and Systems Biology illustrate, there are interdisciplinary
areas the complexity of which, relative to our current understanding, requires full
disciplinary competences. If we begin to interfere with a domain as intricate as social
reality, and care for more than money, we cannot afford cheap solutions with cross-
disciplinary educations stitched together cut-and-paste. Given the complexity and
contextuality of social reality, given the demand for value-driven applications based
on quality research (see ISR Principles 2–5), social robotics needs developer teams
with full expertise rather than mosaic knowledge in order to create futures worth
living.
References
Calvo, R.A., Peters, D. (2014). Positive computing: technology for wellbeing and human potential.
MIT Press.
Druckman, D., Adrian, L., Damholdt, M.F., Filzmoser, M., Koszegi, S.T., Seibt, J., Vestergaard,
C. (2020). Who is Best at Mediating a Social Conflict? Comparing Robots, Screens and
Humans. Group Decis. Negot. https://doi.org/10.1007/s10726-020-09716-9
Fischer, K. (2019). Why Collaborative Robots Must Be Social (and even Emotional) Actors.
Techné Res. Philos. Technol. 23, 270–289.
Friedman, B., Kahn, P., Borning, A. (2002). Value sensitive design: Theory and methods. Univer-
sity of Washington technical report 02–12.
IEEE, n.d. IEEE SA – The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems
[WWW Document]. URL https://standards.ieee.org/industry-connections/ec/autonomous-systems.html (accessed 10.28.20).
McKinsey Global Institute, A Future that Works, Automation, Employment and Productivity, 2017,
https://www.mckinsey.com/mgi/overview/2017-in-review/automation-and-the-future-of-work/
a-future-that-works-automation-employment-and-productivity
Nourbakhsh, I.R. (2013). Robot futures. MIT Press.
Šabanović, S. (2010). Robots in society, society in robots. Int. J. Soc. Robot. 2, 439–450.
Seibt, J., Damholdt, M.F., Vestergaard, C. (2020a). Integrative social robotics, value-driven design,
and transdisciplinarity. Interact. Stud. 21, 111–144.
Seibt, Johanna, Vestergaard, C., Damholdt, M.F. (2020b). Sociomorphing, Not Anthropomorphiz-
ing: Towards a Typology of Experienced Sociality, in: Culturally Sustainable Social Robotics--
Proceedings of Robophilosophy 2020, Frontiers of Artificial Intelligence and Its Applications.
IOS Press, Amsterdam, pp. 51–67.
Skewes, J., Amodio, D.M., Seibt, J. (2019). Social robotics and the modulation of social perception
and bias. Philos. Trans. R. Soc. B Biol. Sci. 374, 20180037.
Torras, C. (2018). The Vestigial Heart. MIT Press.
Van den Hoven, J. (2005). Design for values and values for design. Information age 4, 4–7.
Van Wynsberghe, A. (2016). Service robots, care ethics, and design. Ethics Inf Technol
18, 311–321.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0
International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing,
adaptation, distribution and reproduction in any medium or format, as long as you give appropriate
credit to the original author(s) and the source, provide a link to the Creative Commons license and
indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative
Commons license, unless indicated otherwise in a credit line to the material. If material is not
included in the chapter's Creative Commons license and your intended use is not permitted by
statutory regulation or exceeds the permitted use, you will need to obtain permission directly from
the copyright holder.
Informatics as a Fundamental Discipline
in General Education: The Danish
Perspective
Michael E. Caspersen
M. E. Caspersen (*)
It-vest – Networking Universities, Department of Computer Science, Aarhus University,
Aarhus, Denmark
e-mail: mec@it-vest.dk
To conceive the proper place of informatics in the curriculum, it is natural to compare with
subjects of similar character. One will then realise, that languages and mathematics are the
closest analogies. Common for the three is also their character as tools for many other
subjects.
Once informatics has become well established in general education, the mystery sur-
rounding computers in many people’s perceptions will vanish. This must be regarded as
perhaps the most important reason for promoting the understanding of informatics. This is a
necessary condition for humankind’s supremacy over computers and for ensuring that their
use do not become a matter for a small group of experts, but become a usual democratic
matter, and thus through the democratic system will lie where it should, with all of us.
computational model system – as well as the relation between the two: representation
and interpretation (see Fig. 1).
The equal inclusion of problem domain and interpretation, complementing com-
putational model system and representation, is rather unique and embodies the
Danish curriculum’s perspective on digital humanism.
In 2016, then US President Obama launched CS for All as a bold new initiative to
empower all American students from kindergarten through high school to learn
computer science and be equipped with the computational thinking skills they
need to be creators in the digital economy, not just consumers, and to be active
citizens in our technology-driven world (White House 2016).
In 2018, major European and international organizations formed the coalition
Informatics for All (Informatics for All 2018). In many ways, the Informatics for All
initiative mirrors Obama’s CS for All initiative. A crucial element of the European
approach, which distinguishes it from the CS for All initiative, is the two-tier
strategy at all educational levels: informatics as a discrete subject, that is, a funda-
mental and independent subject in school (like language and mathematics), and the
integration and application of informatics with other school subjects, as well as with
study programs in higher education. Perhaps overly simplified, the two tiers may be
characterized as Learn to Compute (discrete subject) and Compute to Learn (inte-
gration); see Caspersen et al. (2019).
Various flavors of informatics have been a topic in Danish upper secondary schools
for more than 50 years (Caspersen and Nowack 2013).
In late 2008, the Ministry of Education established a task force to conduct an
analysis of informatics in upper secondary schools and provide recommendations for
a revitalization of the subject, not as a niche specialty but as a general subject
relevant for all. Subsequently, a new general, coherent, and uniform informatics
subject was developed, tested, and finally made permanent in 2016 – though not yet as a compulsory subject for all upper secondary education.
A distinct aspect of the Danish informatics curriculum is the focus on digital
empowerment. We define digital empowerment as a concern for how students, as
individuals and groups, develop the capacity to understand digital technology and its
effect on their lives and society at large and their ability to engage critically and
curiously with the construction and deconstruction of digital artifacts (Dindler et al.
2021).
An approach embracing digital empowerment was already present in the Danish upper secondary informatics curriculum developed in 2009 (Caspersen 2009). One
of the six key competence areas was Use and impact of digital artifacts on human activity. The purpose of this competence area was that students should understand that digital artifacts and their design have a profound impact on people, organizations, and social systems. Design of a system is not just design of the digital artifact and its interface; it is also design of the use and workflow that unfold around the artifact. The purpose is that students understand the interplay between the design of a digital artifact and the behavioral patterns that intentionally or unintentionally unfold (Caspersen and Nowack 2013).
The informatics curriculum for primary and lower secondary education was developed by mandate of the Danish Ministry of Education in 2018 and is running as a trial until 2021 in about 5% of primary and lower secondary schools across Denmark.
The author of this chapter and a colleague from the Department of Digital Design and Information Studies at the Faculty of Arts were invited to serve as chairs of the group developing the curriculum. In this choice of chairs, the Minister of Education signaled the importance of integrating a digital humanism perspective in the design of the curriculum.
The informatics curriculum for primary and lower secondary school consists of
four competence areas (Danish Ministry of Education 2018):
• Digital empowerment
• Digital design and design processes
• Computational thinking and modeling
• Technological knowledge and skills
An overview of the four competence areas is provided in Fig. 2.
Fig. 3 Mapping of the four competence areas to the four processes in computational modeling of a
problem domain
The computational model system (computer systems, networks, security and programming languages, etc.) is a classic component of an informatics curriculum. No surprises here.
Inclusion of Digital design and design processes recognizes the bipartite nature
of all computation that is directed at purposes in the real world. Thus, we embrace
both problem domain and solution domain: the entire bipartite system – both the
software machine and the physical (or imaginary) world whose behavior it governs.
This is not generally embraced in informatics curricula for general education. The
focus on design process is inspired by the Scandinavian school of Participatory
Design, which originated in the 1970s with subsequent development and prolifera-
tion beyond Scandinavia (Greenbaum and Kyng 1991). It is also inspired by Donald Schön's philosophy of design as a reflective practice from the 1980s (Schön 1983). The particular notions of problem framing and reframing are essential parts of this and are also inspired by the seminal work of the British computer scientist Michael Jackson in the 1990s (Jackson 2000).
However, the focus is not only on the two parts of “the bipartite system” –
problem domain and solution domain – but also on the relations between the two
parts: representation and interpretation.
Most aspects of the physical world that we attempt to capture and represent in computational models and artifacts are blurred, uncertain, and nondeterministic.
The computational models we construct, on the other hand, are fundamentally strict, certain, and deterministic.
The challenge has two faces. One is the representational challenge: How can we
model the blurred, uncertain, and nondeterministic aspects of the world in compu-
tational artifacts?
The other is the interpretational challenge: How do we avoid constraining and eventually dehumanizing our understanding of phenomena and concepts in the real world when our worldview is increasingly defined through the lenses of strict, certain, and deterministic computational models and artifacts?
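To make the representational challenge concrete, consider a minimal sketch (an invented illustration of my own, not part of the Danish curriculum; the property, threshold, and values are hypothetical) of how a strict, deterministic program must draw a sharp boundary through an inherently blurred property of the world:

    # A minimal sketch (invented example) of the representational challenge:
    # a blurred, continuous property of the world is forced into a strict,
    # deterministic representation.

    RIPENESS_THRESHOLD = 0.7  # an arbitrary design choice made by a human

    def is_ripe(ripeness_score: float) -> bool:
        # Deterministic model: a continuous, uncertain property becomes a hard
        # yes/no answer. Fruits scoring 0.69 and 0.70 are indistinguishable in
        # the world, but not in the model.
        return ripeness_score >= RIPENESS_THRESHOLD

    for score in (0.69, 0.70):
        print(score, "->", is_ripe(score))

The threshold is a human design decision; choosing it differently yields a different "reality" inside the model, which is precisely why the interpretation of such models must remain critical.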
The representational challenge is addressed by the competence area Computa-
tional thinking and modeling (data, algorithms, structuring, etc.), which is again a
self-evident component in an informatics curriculum.
The interpretational challenge is addressed by the competence area Digital
empowerment, which represents the ability to analyze and evaluate digital artifacts
with a focus on intention and use through a critical, reflexive, and constructive
examination and understanding of consequences and possibilities of a digital artifact.
This competence area is for digital artifacts what literature analysis is for novels, but
with the additional liberating component of reframing and redesign – realizing that
digital artifacts are human-made and could have been designed differently if other
perspectives had been applied.
7 Conclusions
Acknowledgment I would like to thank the anonymous reviewers for valuable feedback on an earlier version of the manuscript.
References
Caspersen, M.E. (2009). Kernekompetencer i informationsteknologi (in Danish). Notes for minis-
terial working group. Accessed 21st April 2021.
Caspersen, M.E. & Nowack, P. (2013). Computational Thinking and Practice — A Generic
Approach to Computing in Danish High Schools, Proceedings of the 15th Australasian Com-
puting Education Conference, ACE 2013, Adelaide, South Australia, Australia, pp. 137-143.
Caspersen, M.E., Gal-Ezer, J., McGettrick, A.D. & Nardelli, E. (2019). Informatics as a Fundamental Discipline for the 21st Century. Communications of the ACM 62 (4). https://doi.org/10.1145/3310330.
CECE (2017). Informatics Education in Europe: Are We All In The Same Boat?, Report by the
Committee on European Computing Education, Informatics Europe and ACM Europe.
Accessed 12th March 2021.
CECE’s Map (2017). CECE's Map of Informatics in European Schools, ACM Europe and Infor-
matics Europe. Accessed 12th March 2021.
Danish Ministry of Education (2018). Indholdet i forsøgsfaget teknologiforståelse (in Danish). The
Danish Ministry of Education. Accessed 14th March 2021.
Dindler, C., Iversen, O.S., Caspersen, M.E. & Smith, R.C. (2021). Computational Empowerment. In Kong, S.-C. & Abelson, H. (Eds.), Computational Thinking Education in K-12: Artificial Intelligence Literacy and Physical Computing. MIT Press.
European Commission (2020a). Digital Education Action Plan (2021-2027) – Resetting education and training for the digital age. European Commission. Accessed 12th March 2021.
European Commission (2020b). Commission Staff Working Document. European Commission. Accessed 12th March 2021.
Greenbaum, J. & Kyng, M., Eds. (1991). Design at Work: Cooperative Design of Computer Systems. CRC Press.
Informatics for All (2018). Informatics for All. Accessed 12th March 2021.
Jackson, M. (2000). Problem Frames: Analyzing and structuring software development problems.
Addison-Wesley.
Kissinger, H. (2018). How the Enlightenment Ends. The Atlantic, June 2018. Accessed 21st April
2021.
Madsen, O.L., Møller-Pedersen, B. & Nygaard, K. (1993). Object-Oriented Programming in the
BETA Programming Language. Addison-Wesley.
Naur, P. (1967). Datalogi – læren om data (in Danish). The second of five Rosenkjær Lectures in
Danish Broadcasting Corporation 1966-67 published as Datamaskinerne og samfundet,
Munksgaard. Accessed 21st April 2021.
Naur. P. (1992). Computing: A Human Activity, ACM Press.
Schön D.A. (1983). The Reflective Practitioner: How professionals think in action. Temple Smith.
White House (2016). Computer Science For All. The White House. Accessed 12th March 2021.
The Unbearable Disembodiedness
of Cognitive Machines
Enrico Nardelli
Abstract Digital systems nowadays make up the communication and social infrastructure and fill every parcel of space and time, affecting our lives both professionally and personally. However, these "cognitive machines" are completely detached from human nature, whose comprehension is beyond their capabilities. It is therefore our duty to ensure that their actions respect human rights and the values of a democratic society. Education is one of the main tools to attain this goal, and a generalized preparation in the scientific basis of digital technologies is a required element. Moreover, it is fundamental to understand why digital automation differs completely in nature from traditional industrial automation, and to develop an appreciation for human and social viewpoints in the development and deployment of digital systems. These are the key issues considered in this chapter.
E. Nardelli (*)
Università di Roma “Tor Vergata”, Rome, Italy
e-mail: nardelli@mat.uniroma2.it
These are automatic systems that – by manipulating signs whose meaning they ignore, according to instructions whose meaning they ignore – transform data that have meaning for human beings, in ways that are significant to them.
The comprehension of this aspect is crucial to educate for Digital Humanism appropriately. Informatics can be aptly defined as the science of automated processing of representations, since it does not deal with "concrete" objects but with their representations, which are built by means of a set of characters taken from a finite alphabet. When human beings look at a representation, they usually tend to associate it with a meaning, which very often depends on the subject and is shaped by shared social and cultural conventions. For example, the sequence of characters "camera" will evoke one meaning for English speakers and a different one for Italians. Devices produced by the technological developments of informatics, instead, deal with representations without any comprehension of their meaning. Moreover, they process these representations, that is, they receive input representations and produce output representations (perhaps through the production of a very long series of intermediate or ancillary representations) by acting in a purely automatic (or mechanical) way (Denning 2017; Nardelli 2019). Again, they do not have any comprehension of the meaning of the transformation processes they execute. However, in the end, these machines carry out operations that, considered from the viewpoint of human beings, are of a cognitive nature and meaningful to them.
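A minimal sketch can make this vivid (a hypothetical illustration of my own, not drawn from the chapter; the substitution table is invented): the program below maps one character sequence to another by pure table lookup, and whatever meaning the sequences carry lives entirely in the human observer.

    # A minimal sketch (hypothetical example) of automated processing of
    # representations: character sequences in, character sequences out; any
    # meaning exists only for the human observer, never for the machine.

    SUBSTITUTIONS = {"camera": "room"}  # "camera" means "room" in Italian -- to us, not to it

    def process(representation: str) -> str:
        # Transform an input representation into an output representation by
        # table lookup -- pure mechanical sign manipulation.
        return SUBSTITUTIONS.get(representation, representation)

    print(process("camera"))  # prints "room": meaningful to readers, opaque to the machine

The machine's operation is identical whether or not anyone understands the strings involved.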
These machines are now pervading the entire society, and this represents a real revolution, the "informatics revolution" – called the third "revolution of power relations" (Nardelli 2018) because, for the first time in the history of humanity, cognitive functions are carried out by machines. This third revolution "breaks" the exclusive power of human intelligence, creating artifacts that can mechanically replicate cognitive actions, which until now were a characteristic of people alone.
Every scholar in our field is aware that these cognitive machines are characterized by a meaningless process of transformation (meaningless from the point of view of the mechanical subject carrying out the transformation) which produces a meaningful outcome (meaningful in the eyes of the human observer of the transformation). However, outsiders and laypeople usually overlook this. To properly understand the challenges that education for Digital Humanism faces, a full and deep comprehension of this aspect is absolutely required. That is why it is important to discuss this revolution in the perspective of other equally disruptive revolutions in the history of humankind, to understand the similarities and differences among them.
Let us consider what happened previously: the first two revolutions in power relations, bringing radical changes in both the technical and the social spheres, were the printing revolution and the Industrial Revolution.
The invention of movable type printing in the fifteenth century caused both a technical and a social revolution: a technical one, because it made it possible to produce texts faster and more cheaply, and a social one, because it made possible a more widespread circulation of knowledge. Ultimately, what happened
was the first revolution in power relations: authority was no longer bound to the spoken word; it was no longer necessary to be in a certain place at a certain time in order to know and learn from the living voice of the master. Knowledge always remains a great power, but this power was no longer confined to the people who possess it or to those able to be close to them in time and space. The replicability of the text implies the replicability, at a distance of time and space, of the knowledge contained in it. All those who can read can now have access to knowledge. This set in motion epochal social changes: the diffusion of scientific, legal, and literary knowledge gave a huge impetus to the evolution of society, which became increasingly democratic.
In fact, in the space of two and a half centuries, almost 800 million books printed
in Europe caused an irreversible process of social evolution. Scientific knowledge, revolutionized by the Galilean method, spread throughout Europe thanks to the printing press and constituted one of the enablers of the subsequent revolution, the industrial one, identified as the second revolution in power relations.
Started in the eighteenth century, this was equally disruptive: the availability of industrial machines made the physical work of people replicable. Human arms are no longer needed, because machines operate in their place. A technical revolution is achieved, because artifacts are replicated faster and in the absence of human beings. Machines can produce day and night without getting tired, and they can even produce other machines. They are amplifiers and enhancers of the physical capabilities of human beings. A social revolution follows: physical limitations
to movement and action are broken down. A single person can move huge quantities
of dirt with a bulldozer, travel quickly with a car, and talk to anyone in the world
through a telephone. Evolution and progress of human society are therefore further
accelerated by the possibility of producing physical objects faster and more effec-
tively, not to mention the consequences in terms of transporting people and things.
The power relation that is revolutionized in this case is that between man and nature:
humanity subjugates nature and overcomes its limits. One can quickly cross the seas,
sail the skies, harness water and fire, and move mountains.
The printing press revolution had given humanity an extra gear on the immaterial
level of information; the Industrial Revolution did the same for the material sphere.
The world is filled with “physical artifacts” (the industrial machines) that begin to
affect the nature of the planet in an extensive and deep way.
Then, in the middle of the twentieth century, after the realization of about 800 billion industrial machines (the author's estimate of the overall number of industrial machines of various kinds produced from around 1700 until roughly 1950), the third revolution in power relations, that of information technology (IT), slowly begins. At first, it seems to be nothing more than an evolved variant of the automation of physical work and production processes caused by the Industrial Revolution, but after a few decades, we begin to understand
that it is much more than that, because it affects the cognitive level and not the
physical one. We are no longer replicating the static knowledge brought by spoken
words and the physical strength of people and animals, but that “actionable knowl-
edge” which is the real engine of development and progress. This expression denotes
that kind of knowledge which is not just a static representation of facts and relation-
ships but also a continuous processing of data exchanged dynamically and interac-
tively between a subject and the surrounding context.
Because of the informatics revolution, this actionable knowledge (i.e., knowledge
ready to be put into action) is reproduced and disseminated in the form of software
programs, which can then be adapted, combined, and modified according to specific
local needs. The nature of the artifacts, of the machines we produce, has changed.
We no longer have concrete machines, made of physical substances: we now produce immaterial machines, made of abstract concepts, ultimately boiling down to configurations of zeroes and ones. We have the "digital machines," born – due to the seminal work of Alan Turing (Turing 1936) – as pure mathematical objects, capable of computing any function a person can compute, and which can be made concrete by physically representing them in some form, no matter which. Indeed,
beyond the standard implementation of digital machines by using levels of voltage
that, in some physical electric circuit, give substance to their abstract configurations,
we also have purely mechanical implementations, with levers and gears, or
hydraulic ones.
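As a tangible illustration, here is a minimal sketch (an invented example of my own, not drawn from the chapter) of such a machine defined purely as a table of rules; nothing in the definition says whether its configurations are realized by voltages, levers, or water.

    # A minimal sketch (invented example) of a digital machine as a pure
    # mathematical object: a Turing-style machine given as a rule table,
    # independent of any physical substrate.

    # Rules: (state, symbol) -> (symbol to write, head move, next state).
    # This machine replaces every 0 with 1 and vice versa, then halts.
    RULES = {
        ("scan", "0"): ("1", +1, "scan"),
        ("scan", "1"): ("0", +1, "scan"),
        ("scan", "_"): ("_", 0, "halt"),  # "_" is the blank symbol
    }

    def run(tape: str) -> str:
        # Execute the rule table mechanically until the halting state.
        cells = list(tape) + ["_"]
        head, state = 0, "scan"
        while state != "halt":
            cells[head], move, state = RULES[(state, cells[head])]
            head += move
        return "".join(cells).rstrip("_")

    print(run("100110"))  # -> "011001", however the table is physically realized

The same table could drive an electronic circuit, a contraption of gears, or a hydraulic device; the machine itself is the mathematics, not the substrate.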
Even though these digital machines clearly require some physical substrate to be able to operate, they are no longer physical artifacts. They are "dynamic cognitive artifacts": frozen action that is unlocked by its execution in a computer and generates knowledge as a result of that execution. The static knowledge of books becomes dynamic knowledge in programs – knowledge capable of automatically producing new knowledge without human intervention. Therefore, most appropriately, they have been defined as "cognitive machines" (Nardelli 2018).
2 Cognitive Machines
These machines are reminiscent of those that, in the course of the Industrial Revolution, made possible the transformation from the agricultural society to the industrial one. Actually, they are different and much more powerful. Industrial
machines are amplifiers of the physical strength of man; digital machines produced
by the informatics revolution are cognitive machines (or “knowledge machines”),
amplifiers and enhancers of people’s cognitive functions. They are devices that boost
the capabilities of that organ whose function is the distinctive trait of the human
being.
On the one hand, we have a technical revolution, that is, faster data processing; on
the other hand, we also have a social revolution, that is, the generation of new
knowledge. In this scenario, what changes is the power relation between human
intelligence and machines. Humankind has always been, throughout history, the master of its machines. For the first time, this supremacy is challenged.
Cognitive activities that only humans, until recently, were able to perform are
now within the reach of cognitive machines. They started with simple things, for
example, sorting lists of names, but now they can recognize if a fruit is ripe or if a
fabric has defects, just to cite a couple of examples enabled by that part of infor-
matics that goes under the name of artificial intelligence. Certain cognitive activities
are no longer the exclusive domain of human beings: it has already happened in a large set of board games (checkers, chess, Go, etc.), standard testbeds for measuring intelligence, where the computer now regularly beats the world champions. It
is happening in many work activities that were once the exclusive prerogative of
people and where now the so-called bots, computer-based systems based on artificial
intelligence techniques, are widely used.
There are at least two issues, though, whose analysis is essential in the light of the
educational viewpoint discussed in this chapter.
The first issue is that these cognitive machines have neither the flexibility nor the
adaptability to change their way of operating when the context they work in changes.
It is true that modern “machine learning”-based approaches give some possibility for
them to “sense” changes in their environment and to “adapt” their actions. However,
this adaptation space has its own limits. Designers must somehow have foreseen all
possible future scenarios of changes, in a way or another. People are inherently
capable of learning what they do not know (whereas cognitive machines can only
learn what they were designed for) and have learned, through millions of years of
evolution, to flexibly adapt to changes in the environment of unforeseen nature
(while knowledge machines can – once again – only adapt to changes of foreseen
types). We cannot therefore let them work on their own, unless they operate in
contexts where we are completely sure that everything has been taken into account.
Games are a paradigmatic example of these scenarios. These cognitive machines are
automatic mechanisms, giant clocks that tend to behave more or less always in the
same way (or within the designed guidelines for “learning” new behaviors). This is
why digital transformation often fails: because people think that, once they have
built a computer-based system, the work is completed. Instead, since no context is
static and immutable, IT systems not accompanied by people able to adapt them to
the evolution of operational scenarios are doomed to failure.
The second problem is that these cognitive machines are completely detached from what it means to be human. Some may see this as a virtue; on the contrary, it is a huge flaw. There is no possibility of determining a single best way of making decisions. Those who think that algorithms can govern human society in a way that is best for everyone are deluded (or have hidden interests). Since the
birth of the first forms of human society, the task of politics has been to find a
synthesis between the conflicting needs that always exist in every group of people.
Moreover, the production of this synthesis requires a full consideration of our human
nature. The only intelligence that can make decisions appropriate to this context is
the embodied intelligence of people, not the incorporeal artificial intelligence of
cognitive machines. This does not imply there is not a role for cognitive machines.
Their role should remain confined to that of powerful personal assistants: relieving us of the more repetitive intellectual work, helping us avoid mistakes due to fatigue or oversight, and not leaking our personal data into the wild. People must always remain in control, and the final decisions, above all those affecting directly or indirectly other individuals and their relations, should always be taken by
human beings. To discuss a current topic: it is understandable to think that, to some degree, the final decisions of judges in routine cases may be affected by extrajudicial elements, unconscious bias, and contingent situations and emotions. After all, even well-trained judges are fallible human beings. However, speculating on the basis of data correlation, as done in Danziger et al. (2011), that judges tend to be more benevolent after lunch is wrong: a more careful analysis highlighted other organizational causes for this effect (Weinshall-Margel and Shapard 2011). A
cognitive machine just learning from data without a thorough understanding of the
entire process would have completely missed the point. That is why the decision in
France to forbid analytics on judges’ activity is well taken (Artificial Lawyer 2019).
This is because incorporeal decision systems convert a partial description of what happened in the past into a rigid prescription of how to behave in the future, robbing human beings of their most precious and most characteristic quality: free will.
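The pitfall can be made concrete with a toy simulation (entirely invented numbers that do not reproduce the cited studies): suppose easier cases are systematically scheduled earlier in each session. A system that learns only from (position in session, outcome) pairs will "discover" that decisions grow harsher as a session wears on, even though scheduling, not judicial mood, drives the pattern.

    # A toy simulation (invented numbers, not the data of the cited studies) of
    # how a confounder -- case scheduling -- can masquerade as a "judicial mood"
    # effect for a system that learns only from correlations.
    import random

    random.seed(0)
    records = []
    for session in range(1000):
        for position in range(10):             # ten cases per session
            easy = position < 5                # confounder: easy cases come first
            p_grant = 0.8 if easy else 0.2     # outcome depends on difficulty only
            records.append((position, random.random() < p_grant))

    # Naive "learning from data": grant rate by position within the session.
    early = [granted for pos, granted in records if pos < 5]
    late = [granted for pos, granted in records if pos >= 5]
    print(f"grant rate early in session: {sum(early) / len(early):.2f}")  # ~0.80
    print(f"grant rate late in session:  {sum(late) / len(late):.2f}")   # ~0.20

The rates differ sharply, yet no "mood" exists anywhere in the model; conditioning on case difficulty would make the apparent effect vanish. A cognitive machine without an understanding of the whole process cannot tell the two explanations apart.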
Cognitive machines are certainly useful for the development of human society.
They will spread more and more, changing the kind of work executed by people.
This has already happened in the past: in the nineteenth century, more than 90% of
the workforce was employed in agriculture; now it is less than 10%. It is therefore of
the utmost importance that each person is appropriately educated and trained in the
conceptual foundations of the scientific discipline that makes possible the construc-
tion of such cognitive machines. Only thus will everyone be able to understand the difference between what they can do and what they cannot and should not do. Education in the principles of informatics should start from the early years of school (Caspersen et al. 2019). The key insight that a digital computing system operates without any comprehension – on the system's own part – of what is processed and how it is processed needs to accompany the entire educational process. Moreover, it should always go hand in hand with the reflection that the process of modeling reality in terms of digital data, and of processing them by means of algorithms, is a human activity. As such, it may be affected by prejudice and ignorance, both of which may, at times, be unconscious or unknown.
Only in this way, in fact, will it be possible to understand that every choice – from the very first ones regarding which elements to represent and how to represent them, to those deciding the rules of the processing itself – is the result of a human decision process and is therefore devoid of the absolute objectivity that is too often associated with algorithmic decision processes.
These are the so-called red-green Turing machines. See van Leeuwen and Wiedermann (2012) for their description and a list of other equivalent models.
become a relevant and important part of it. As the health emergency of 2020 has
unfortunately taught us, we cannot disregard them any longer. They have become an
integral and constitutive component of our personal and social life. Hence the necessity of protecting people's rights, not only as regards their body and their spirit but also as regards their digital projections (Nardelli 2020).
As a side note, the disembodiedness of cognitive machines has a dual counterpart in the fact that this digital dimension of our existence is populated by "life forms" which we lack the senses to perceive. Digital viruses and worms, which are not benign towards our "digital self," much like their biological counterparts are not benevolent to our physical bodies, continue to spread at an alarming rate without us being able to counter them effectively. Indeed, we would need the digital counterpart of those hygiene rules that played such a big role in the improvement of living conditions in the twentieth century (Corradini and Nardelli 2017). Once again, it is only through education that we can make the difference, and it has to start as early as possible.
While the general education of citizens in school should focus on the above principles, at the tertiary level something more is needed.
We need to prepare our students in a way similar to how we train medical
doctors. In the early years, they study the scientific basis of their field: physics,
chemistry, and biology. In this context, universal and deterministic laws apply.
Then, as they progress in their educational path, aspiring doctors begin the study
of the “systems” that make up human beings (motor, nervous, circulatory, etc.), thus
learning to temper and combine mathematical determinism with empirical evidence.
Finally, when they “enter the field,” they face the complexity of a human being in its
entirety, for whom, as general practitioners well know, a symptom can be the
manifestation not only of a specific disease but also of a more general imbalance.
At this point, the physician will no longer be able to simply apply one of those universal laws she learned in the early years. This does not mean abdicating the scientific foundations to return to magical rites or apotropaic formulas, but acting to solve the specific problem of the specific patient she is facing, in the light of the science that she has internalized. The informatician, like the doctor, must have his feet firmly planted in science, but his head clearly aimed at making people and society feel better.
More specifically, informatics students should acquire a good basis in mathematics, algorithmics, semantics, systems, and networks, but they should then be able to solve automation problems regarding data processing (in its widest meaning) without making people appendices of IT systems. To support the goals of Digital Humanism, they should merge their "engineering" design capabilities with attention to human-centered needs. They should be educated to develop an appreciation for human and social viewpoints regarding digital systems. They have to tackle the challenges of digital transformation while improving the social well-being of people, not only enriching the "owner of the steam," who has every right to an adequate remuneration of his capital, but not at the price of dehumanizing the end users of digital systems.
That is why we need to broaden the educational horizon of our study courses, complementing the traditional areas of study with interdisciplinary and multidisciplinary education, coming above all from the humanistic and social areas. Only in this way will it be possible to recover the holistic vision of technological scenarios that is characteristic of a humanism-based approach, where respect for people and the values of a democratic society are the guiding forces.
Acknowledgments The author thanks the anonymous reviewers for their insightful comments, which helped improve the presentation of this chapter.
References
Caspersen, M.E., Gal-Ezer, J., McGettrick, A., and Nardelli, E. (2019) 'Informatics as a fundamental discipline for the 21st century'. Communications of the ACM, 62(4).
Corradini, I. and Nardelli, E. (2017) Digital hygiene: basic rules of prevention. Link&Think,
[Online]. https://link-and-think.blogspot.com/2017/11/digital-hygiene-basic-rules-of.html
Danziger, S., Levav, J., and Avnaim-Pesso, L. (2011) ‘Extraneous factors in judicial decisions’,
Proceedings of the National Academy of Sciences, 108 (17).
Denning, P.J. (2017) 'Remaining trouble spots with computational thinking', Communications of the ACM, 60(6).
Artificial Lawyer (2019) France Bans Judge Analytics, 5 Years In Prison For Rule Breakers, [Online]. https://www.artificiallawyer.com/2019/06/04/france-bans-judge-analytics-5-years-in-prison-for-rule-breakers/
van Leeuwen, J. and Wiedermann, J. (2012) ‘Computation as an unbounded process’, Theoretical
Computer Science, 429:202-212, 2012.
Longo, G. (2018) 'Letter to Alan Turing', Theory, Culture & Society, Special Issue on "Transversal Posthumanities" (M. Fuller, R. Braidotti, eds.), 2018.
Nardelli, E. (2010) The maintenance is the implementation OR Why people misunderstand IT
systems, 6th European Computer Science Summit, [Online]. https://www.informatics-europe.
org/ecss/about/past-summits/ecss-2010/conference-program.html
Nardelli, E. (2014) Senza la cultura informatica non bastano le tecnologie, (in Italian) [Online].
https://www.ilfattoquotidiano.it/2014/02/01/senza-la-cultura-informatica-non-bastano-le-
tecnologie
Nardelli, E. (2016) La RAI che vorrei: diffondere la “conoscenza in azione” per far crescere
l’Italia, (in Italian), Key4Biz, [Online]. https://www.key4biz.it/la-rai-che-vorrei-e-nardelli-
contribuisca-allalfabetizzazione-digitale/156540/
Nardelli, E. (2018) The third “power revolution”, Link & Think, [Online]. https://link-and-think.
blogspot.com/2018/05/informatics-third-power-revolution.html
Nardelli, E. (2019) 'Do we really need computational thinking?', Communications of the ACM, 62(2).
Nardelli, E. (2020) Does our “digital double” need constitutional rights? Link&Think, [Online].
https://link-and-think.blogspot.com/2020/12/does-our-digital-double-need-constitutional-
rights.html
Poincaré, H. (1892) Méthodes Nouvelles. (in French), Paris, 1892.
Schwab, K. (2015) The Fourth Industrial Revolution. Foreign Affairs, [Online]. https://www.
foreignaffairs.com/articles/2015-12-12/fourth-industrial-revolution
Turing, A. (1936) 'On Computable Numbers, with an Application to the Entscheidungsproblem'. Proceedings of the London Mathematical Society, 42, 1936.
Weinshall-Margel, K. and Shapard, J. (2011) ‘Overlooked factors in the analysis of parole deci-
sions’, Proceedings of the National Academy of Science, 108 (42), Oct 2011.
Wolfram, S. (2013) 'The Importance of Universal Computation'. In Alan Turing: His Work and Impact (B. Cooper, ed.), Elsevier, Amsterdam, 2013.
Part VIII
Digital Geopolitics and Sovereignty
The Technological Construction
of Sovereignty
Paul Timmers
Around the year 2000, Lawrence Lessig, a Harvard law professor, put forward his famous statement "code is law" (Lessig 2000). In brief, this was the observation that the way the internet is technologically constructed ("code" in the sense of software code) to a large extent determines the rules of behavior on the internet. Code acts like law.
P. Timmers (*)
University of Oxford, Oxford, UK
European University Cyprus, Engomi, Cyprus
e-mail: paul.timmers@iivii.eu
Recently, however, we would rather say: "law is code." "Law" is here understood as the requirements that governments would like to impose on the digital world. Nowadays, these requirements are ever more driven by concerns about sovereignty. Governments want more control over cybersecurity in 5G and want to open up access to gatekeeper digital platforms (for example, as reflected in the EU in the 5G Cybersecurity Recommendation and in the Digital Markets Act). States feel that they have to act to protect their national economic ecosystem and are worried about the erosion of society's values such as privacy. They fear that the very authority of government is being undermined.
Clearly, technology as given does not safeguard sovereignty and has even become a threat. Sovereignty and strategic autonomy have become Chefsache (for instance, GAIA-X, the European cloud initiative, takes (data) sovereignty by design as a guiding principle for the development of software and services; see the Franco-German Position on GAIA-X, 18 Feb 2020).
What is happening here from a conceptual point of view? I will stress two ideas, without claiming any originality in doing so (for the origins of the underlying idea of constructivism, see Immanuel Kant's Critique of Pure Reason, 1781). The first is that the technological construction of reality is as valid a notion as the social construction of reality. The second is that there is a strong interplay between social and technological construction (Fig. 1).
The corollary is that design of social constructs such as law and design of
technological constructs can and may go hand in hand. Even stronger: if we ignore that interplay, exploitative powers (dictators, populists, criminals, unscrupulous companies) will step into the void and gamble with our economies, societies, and democracy.
The idea of social construction of reality rose to prominence from 1966 onward,
thanks to Peter Berger and Thomas Luckmann (1967). Since that time, we accept
that much of what we consider real in everyday life, such as money, values, citizens,
or state, is a social construct. This holds for state sovereignty as well. Indeed,
30 years after Berger and Luckmann’s The Social Construction of Reality, the
excellent book State Sovereignty as a Social Construct was published (Biersteker
and Weber 1996).
Can reality also be technologically constructed – in the sense of the reality of technological artifacts, technology mediating reality, and technology shaping or conditioning social reality? Pretty obviously "yes" when we just consider the many technological physical artifacts around us. These are the tangible technological reality "as we know it."
Such technological reality can even shape artifacts in our mind such as our
perception of reality. Jean Baudrillard, a French sociologist and philosopher, argued
in his provocative set of essays “The Gulf War Did Not Take Place” that this war was
presented to us through technology with a specific imagery (Baudrillard 1991).
Remember the cockpit images of bombing targets in the cross-hairs? These became for many the reality of the Gulf War (as long as you were not on the ground...). Technology-generated perception becomes ever more part of reality. Some young people have an unhealthy obsession with their image on social media (McCrory et al. 2020).
But can social reality, social artifacts, also be technologically constructed? The
answer is affirmative here too. Consider Lessig’s “code is law” as mentioned before.
Lessig focused on the interplay of technology and law. Law is of course a social
construct par excellence. Julie Cohen, in her book Between Truth and Power, built on Lessig and on Michel Foucault's 1970s concept of governmentality (Cohen 2019). She analyzed the interplay of technological and social construction in the
governance of law development by governments and tech companies. One conclu-
sion: technology may be malleable, but such social constructs are malleable as well.
Secondly, technology can redefine core privileges of the state such as the
identification of citizens (the French call it une fonction régalienne). Electronic
identity or eID raises the question of control. Can only a government-issued identity
be an official eID? Could it also be a self-sovereign identity? Or even an identity
owned by a platform like Facebook or Google? Should the fonction régalienne lose its state anchor? The technological choice, in combination with social constructs
such as law and market power, can redefine a core aspect of sovereignty.
Thirdly, technology, in Baudrillard's sense of an intermediary to reality, unlocks cultural heritage, which is clearly a sovereign asset. Technology, properly designed,
protects and strengthens our values. Privacy by design is an illustration.
What then about digital technologies shaping internal and external legitimacy,
those core qualities of sovereignty? Internal legitimacy implies accountability and
transparency of the legitimate authority. As citizens, we may wonder: is my court case
treated fairly? Why have I been singled out for income tax scrutiny? Which civil
servant is looking at my personal data?
On the one hand, transparency can be enabled by an appropriate technology
architecture. Estonia has chosen to base its e-government platform on blockchain—
which cannot be tampered with—for that purpose. On the other hand, internal
legitimacy can also be undermined by technology that intentionally or unwittingly
does not respect fundamental and human rights. In the Netherlands, recently “smart”
but discriminatory technology for detecting misuse of child support in combination
with strict bureaucracy and blind politics led to serious injustice for thousands of
citizens. The Dutch government fell over the case. It lost its internal legitimacy.
The counterpart of technology-defined control of government is technology-
defined control of citizens. Already today, even in free societies, ever-smarter
cameras are ubiquitous. COVID-19 apps have raised concerns about surveillance
creep (Harari 2020). Democratic processes everywhere are heavily shaped by social
media, which stimulate by their very design the formation of echo chambers and
thereby give raise to polarization. Hostile states seek to undermine the very legiti-
macy of incumbent governments by making use of the architecture of social media
platforms to spread misinformation. Alternatively, social media are put under gov-
ernment control in order to suppress any citizen movement that may contest the state.
This is a main motivation of online censorship in China (King et al. 2014).
External legitimacy can equally be shaped by technology. Kings and castles have fallen at the hands of new technologies such as the trebuchet and the cannon. The
nuclear bomb prompted France to develop its own atomic capacity to safeguard its
sovereignty. Asserting legitimacy in cyberspace has become a technological war
where the power of one nation vis-à-vis others is increasingly being defined by
militarizing artificial intelligence. One may wonder, though, what the nature is of
such AI. How will it interpret aggression, and will it counterstrike autonomously or
not? AI is a technology that can take over agency from the state, shaping external and
internal legitimacy and thereby redefining sovereignty in the digital age.
Technological construction starts to reshape social constructs such as sover-
eignty. The writing is on the wall. The rise of cryptocurrencies challenges central
banks as a sovereign institution. The rise of interoperable data spaces worries some
data holders who fear that their autonomy is threatened. Data hoarding by digital platforms makes governments realize that their presumed sovereignty lies ever more in the hands of a few global corporates and foreign governments. Technology is
re-allocating legitimacy between the extremes of massive decentralization—such
as with blockchain or personal data pods—and massive centralization in the hands of
a few actors that escape democratic control.
4 Conclusions
The reader, having come all this way to read about something she or he already knew or at least intuited, may be left with the question: "so what?" The answer is that technology fundamentally shapes sovereignty, and it is we who can influence the shaping of such technology.
Policy-makers who are concerned about strategic autonomy do not need to accept technology "as is." Technology is neither a force of nature, nor something to be left simply to the market, nor to be taken for granted as an exogenous factor. Policy-makers can insist
that (digital) technology is designed in such a way that internal and external
legitimacies are strengthened. Digital technology can be required to be designed so as to grow assets "that belong to us" and to protect our values, human rights, and humanism (TU Wien 2019).
Sure, we then sacrifice one holy cow, as we must conclude that technology is not neutral. Fine. But there is a more radical proposition here, namely, that during the design of law, policy-makers would sit together with technology designers. They
would engage in a dialogue about technology requirements such as sovereignty
safeguards. They would not be satisfied until there is mutual re-assurance of the
compatibility of technology and law (or policy).
There is also no need to take the law and the organization of government and administration as a given. Sure, we want stability with law. But if technology can
do a better job, law-as-is should not stand in the way. This then leads to a second
radical proposition: to consider in the design of any law whether promotion of
technological disruption should be included in that same law. The intent would be
to enable replacement of the social constructs in that law by technological constructs.
Of course, only provided the end result is better.
An example would be to include in future laws that seek to safeguard sovereignty
(such as on data or AI or cloud) a chapter on R&D for sovereignty-respecting
technology, with corresponding budget and objectives. That same law should then
provide for scaling back human oversight following a proper and successful assessment of the resulting technology.
Co-design of law and technology in the way proposed here is not yet found anywhere, as far as the author is aware. It would likely be seen as a radical change. But hopefully this chapter has convinced the reader that this change is thinkable, enlightening, and, above all, necessary today in order to construct the sovereignty to which we aspire. We have a choice.
A Crucial Decade for European Digital
Sovereignty
George Metakides
Abstract The current decade will be critical for Europe’s aspiration to attain and
maintain digital sovereignty so as to effectively protect and promote its humanistic
values in the evolving digital ecosystem. Digital sovereignty in the current geopo-
litical context remains a fluid concept as it must rely on a balanced strategic
interdependence with the USA, China, and other global actors. The developing
strategy for achieving this relies on the coordinated use of three basic instruments: investment, regulation, and completion of the digital internal market. Investment, in addition to the multiannual financial framework (2021–2027) instruments, will draw upon the 20% of the €750 billion recovery fund devoted to digital. Regulation, in addition to the Data Governance Act and the Digital Markets Act, will include the Data Act, the new AI regulation, and more that is in the pipeline, leveraging the so-called Brussels effect. The timing and "dovetailing" of the particular actions taken remain of key importance for the success of this effort.
It looks increasingly likely that the decade of the 2020s will be crucial for the future
of the economic, political, and social impact of digital technologies worldwide.
It will be particularly critical for Europe’s aspiration to attain and maintain digital
sovereignty while protecting and promoting cherished values that humanity honed
and filtered through fifth century BC Athens and the eighteenth century AD Enlight-
enment to our days. Digital sovereignty remains a fluid concept as analyzed in
Moerel and Timmers (2021). Here, it is not intended to mean digital autarky or
absolute autonomy but rather a strategic positioning that ensures a globally balanced
interdependence where a contestant’s attempt to damage others risks self-damage.
At the beginning of the last decade, the 2010s, the digital landscape looked decidedly... rosier. It was still a time of "digital innocence" and of great optimism about how, on balance, digital technologies would transform the world.
G. Metakides (*)
Digital Enlightenment Forum, Zoetermeer, The Netherlands
e-mail: george@metakides.net
There was the Arab Spring of 2011 and the credible promise that digital technol-
ogies in general and the internet and its “social machines” in particular would help to
create an informed citizenry that would, in turn, foster and strengthen democracy and
social cohesion.
Then came the Snowden revelations in 2013, the Cambridge Analytica scandal, the Brexit referendum and the US elections in 2016, and an avalanche of related developments, so that now, at the start of 2021, we are looking at misinformation, conspiracy theories, and fake news, turbo-charged by the social media platforms, having circled the globe many times over by the time truth is checked and outed. The 2011 Arab Spring case is now studied as a "Precursor of the Disinformation Age"!
We are also looking at fast-evolving, data-gobbling, non-transparent AI algo-
rithms which, combined with the business models used by the social media plat-
forms, push people to the extremes of the political spectrum, thus emptying the
Aristotelian “middle” which is crucial for both democracy and social cohesion.
The rosy digital landscape, and its geopolitical context of the start of the decade of
the 2010s, has been replaced at the start of the 2020s with a very different one where
all kinds of red lights are flashing and warning bells are ringing.
The era of digital innocence and unbridled hope is over, replaced by a mitigated belief that the accelerating advances in digital technologies can still be a force for good overall – but only provided that governments, industry, academia, and individual citizens take actions that ensure this belief is realized.
Most of these actions currently proposed and debated are controversial and
viewed very differently in the USA, China, Europe, and other parts of the world.
This is at the heart of a complex developing global power play in the context of
which Europe is trying to position herself.
Before going further to see how this new decade might play out and why it might
well be critical in shaping perhaps several decades that will follow, let us first take a
look at the evolving geopolitical context.
Practically all recent studies conclude with the same array of world superpowers as the February 2021 Gallup report (Gallup International 2021), which polled perceptions on this very issue worldwide.
That is, by 2030, there will be two major superpowers combining economic and
military prowess that will be jockeying for first place, the USA and China.
Quite far behind (further than now) will be the EU closely followed by Russia.
The EU's falling further behind does not come as a surprise, as the EU is not expected to come close to the two big powers without a common foreign and defense policy, which, in turn, is not expected to emerge during this decade.
But there is an additional finding in the aforementioned Gallup report. The second key question (besides which will be the major superpowers by 2030) asked which superpowers would be "stabilizing" factors and which would be destabilizing ones in the world (Fig. 1).
Fig. 1 Poll results for the US, China, Russia, and the EU (percentage scale, 0–60%)
Fig. 2 Presentation of Dr. Yvo Volman at the Digital Humanism panel discussion of February
23, 2021 (DigHumTUWien 2021)
It was the EU that led the way in creating a regulatory framework that would level the playing field and create a more contestable digital economy, while the USA relied on voluntary self-regulation and China on her authoritarian power.
During the last decade, the USA was, to put it mildly, skeptical about such regulation, seeing it as risking the stifling of innovation, but policies are now changing across the Atlantic as well.
Anti-competition issues, including the possibility of break-ups, are now on the table in the USA, and broader regulatory measures are being proposed. Decisions taken (or not taken) there will have a crucial impact on how the entire digital ecosystem evolves during the decade of the 2020s.
The EU and the USA remain at odds on a number of digital issues concerning citizen privacy, the related issue of data transfer across the Atlantic, and the ongoing spat over how to tax the tech oligarchs.
At the same time, especially after the 2020 US elections, both the USA and
Europe are developing a common concern about China’s use of digital technologies
unhindered by ethical, privacy, and human rights concerns and the comparative
advantage this could give China in leading the race to more and more advanced,
data-dependent, AI technologies.
Following the example of the GDPR, the EU has proposed or is about to propose an impressive, in scope at least, arsenal of regulatory measures including the AI regulation, the Digital Services Act, the Digital Markets Act, the Data Governance Act, and the Data Act (see also Fig. 2).
Regulatory global influence constitutes for Europe a necessary, though by no
means sufficient, step toward attaining and maintaining digital sovereignty.
A Crucial Decade for European Digital Sovereignty 223
Quick decisions and follow-up steps are not easy for the European Union, an incomplete construct of 27 countries – small in the global context – that do not have a common foreign and defense policy that could provide a tool to help face China's state-supported industrial policy and the defense-procurement innovation pull of the USA.
This presents a real challenge for European digital sovereignty during this decade.
To counter this inherent handicap, in some strategically selected cases, a subset of EU countries could move forward together quickly in a particular technology area where they are very advanced (e.g., quantum computing), with the possibility left open for other fellow member states to join later.
Perhaps selected quick, decisive moves during this decade in Europe could be
more effective than the unavoidably slow process of attempting to tease out a full
industrial policy equivalent first.
A truly unhindered digital internal market, regulation, and investment constitute powerful building blocks of a potentially effective digital sovereignty strategy for Europe.
Part of this strategy is the timing of new regulation and new investment in a
particular area so that this investment and the players involved can draw maximum
benefit as the regulation’s “first users.”
This is why the timeline of impact – the dynamic dovetailing of the specific actions emanating from the aforementioned building blocks, so as to maximize the added value of combining them in a timely and effective fashion while dynamically factoring in developments in the geopolitical context – will be of "make or break" significance during the decade.
Success for this strategy complemented by the continued nurturing of humanistic
values and leaders that embrace them would make the 2020s a very good decade for
Europe.
References
Moerel, L., Timmers, P. (2021). Reflections on Digital Sovereignty, EU CYBER DIRECT, January
2021.
Gallup International. (2021). Superpowers in the World in Y2030. February 2021.
Bradford, Anu. (2020). The Brussels Effect, Oxford University Press.
DigHumTUWien. (2021). Preventing Data Colonialism without resorting to protectionism – The European strategy [Online video]. 23 February 2021. Available at: https://www.youtube.com/watch?v=O7tSwB_3a1w (Accessed: 9 June 2021).
Ministerial Council of the EU. (2020). Berlin Declaration on Digital Society and Value-based
Digital Government at the ministerial meeting during the German Presidency of the Council of
the European Union on 8 December 2020. Available at: https://ec.europa.eu/newsroom/dae/
document.cfm?doc_id¼75984 (Accessed: 3 May 2021)
Geopolitics and Digital Sovereignty
Ciaran Martin
Abstract The geopolitical dialogue about technology has, for a quarter of a century,
essentially revolved around a single technological ecosystem built by the American
private sector. An assumption took hold that, over time, clearer “rules of the road” for this digital domain would emerge. But progress toward this has been surpris-
ingly slow; we sometimes refer to “grey zone” activity, because the rules, insofar as
they exist, are fuzzy.
In the meantime, the digital climate is changing. China’s technological ambitions
are not to compete on the American-built, free, open Internet, but to design and build
a completely new, more authoritarian system to supplant it. This is forcing a
bifurcation of the Internet, and organizations like the European Union and countries
across the world have to rethink whether the regulation of American technology is
really where the focus should be, rather than working with the USA to contest
China’s ambitions.
C. Martin (*)
Blavatnik School of Government at the University of Oxford, Oxford, UK
e-mail: ciaran.martin@bsg.ox.ac.uk
China now seeks dominance of, or at least a competitive position with, that model. So before the governance of the
American-led model properly settled down, a new geopolitical contest is underway.
Let us first look at the perhaps surprisingly slow progress in the governance of the
American-led model. The absence of international rules and standards in many areas
of digital life remains a concern for many of America’s Western allies. Tax remains
problematic: in the summer of 2020, talks between the USA and the European Union
on the taxation of digital services broke down; in 2021, the new Biden administra-
tion was still challenging the attempts of the UK, now outside the European Union,
to introduce a digital services tax which would apply mostly to American compa-
nies. The UK, the other non-US Five Eyes, and the EU institutions remain at
loggerheads with Silicon Valley over the security and law enforcement implications
of end-to-end encryption, with increasingly vocal frustration at the powerlessness of
Western governments to reverse the move toward its ubiquity. There have been
some movements toward mutually recognized standards in digital trade and data
protection. But overall it is hard to claim that the governance of the “free” Internet
pioneered on the US West Coast in the late 1990s has progressed much.
Moreover, there are few if any common understandings, let alone rules, about
acceptable and unacceptable conduct on that free and open Internet. The first phase
of online geopolitical competition has been played on Western or, more precisely,
American terms: the USA has most of the infrastructure, the companies, the influ-
ence in standards bodies, and much else. So hostile forces, like Russia, when seeking
to undermine the USA, are doing so within the digital environment created by the
Americans, rather than competing with it. They have exploited its ambiguities and
vulnerabilities in a series of what have become known as “grey zone” operations.
The zone is “grey” precisely because norms have not been established.
True, the United Nations Open-Ended Working Group on cyber norms unexpect-
edly reached a unanimous consensus in its third and final round in early 2021. But it
is too early to tell whether this compromise will have any lasting impact in terms of
the quest for rules of the road in cyberspace. In the meantime, the characterization of
a supply chain intrusion for espionage purposes (the so-called Holiday Bear cam-
paign carried out against the SolarWinds company and others) as an act of war by
senior members of the US Congress—the sort of activity routinely carried out by
Western intelligence services for information gathering purposes—demonstrates the
chasm in understanding when it comes to norms. There remains no Western con-
sensus on acceptable activity in cyberspace: for example, Microsoft’s quest for a
Digital “Geneva Convention” has never attracted serious support from its own
government in Washington or any of the USA’s most important allies. Western governments, particularly the Five Eyes, show no particular appetite for such rules.
So the rules of the road remain largely absent. But in the meantime, the geopolitics have evolved significantly. A decade ago, one almost universal assumption was that the geopolitics of the new technological age would remain a contest on America’s terms. Silicon Valley had no strategic competitor. Moreover,
the apparent success of the open and free model was seen as a grave threat to
authoritarian regimes seeking to challenge the USA, so shaping the global rules
would extend liberal, democratic values.
[1] For an account of how the Chinese Communist Party wrought control over online communications in China, see Consent of the Networked: The Worldwide Struggle for Internet Freedom (Rebecca MacKinnon, 2013).
[2] For a superb account of this trend, see Dan Wang, Annual Letter, 2019, https://danwang.co/2019-letter/
Europe, meanwhile, remains very far from digital sovereignty. Two telecommunications equipment giants aside,
the European continent of half a billion of the world’s wealthiest Internet users has
precious little home-grown technological capability. Insofar as it is a tech super-
power, it is only a regulatory one. There is obviously an attempt to ground this
regulatory posture in digital humanism, but without the industrial capability, this
already complicated task is even harder.
Brussels policy is currently facing in two different directions. In July 2020, the German government, in its official program for its presidency of the Council of the European Union, announced its intention “to establish digital sovereignty as a leitmotiv of European digital policy” (Pohle and Thiel 2020). In its cybersecurity strategy at the end of that year, the Commission set out, for the first time, some serious ideas on how the development of European technology might take root. But in the same
month, the Commission published an overture to the incoming Biden administration
proposing economic cooperation on technology, including the establishment of a
Transatlantic Trade and Technology Council. This seemed to be a recognition that
digital sovereignty in Europe could not be achieved quickly (if it can be achieved at
all). Therefore, given the need to align with one of the two genuine technology
superpowers, the Americans were the only option, particularly after the departure of
President Trump who famously cared little for the USA’s alliances in Europe.
Three other, more disparate groups of countries will be important in this great geopolitical contest. One, likely to be of great interest to the Biden administration, not least because of Europe’s ambivalence toward US technology and its current administrative paralysis over coronavirus vaccines, is a set of rich, hi-tech democracies across the Five Eyes partnership and Asia (Japan, Singapore, South Korea).
None of these have any serious aspirations for digital sovereignty, though Japan and
Korea have some serious techno-industrial clout. But they will be keen to align with
Washington to counter China’s technological ambitions. The challenge is that such
an effort does not work like a security alliance: aligning commercial strategies is
harder than forming a military pact.
Then there is a group of authoritarian countries that dislike the US technological model every bit as much as China does, but have no capabilities of their own. Russia and Iran are examples of this. They may pursue a version of digital sovereignty
which is, in effect, Chinese-style control over the use of the technology without the
increasingly Chinese control over the ownership of it. Russia has held some at least
partially successful experiments in “disconnecting” itself from the Internet, though a
recent test seems to have backfired and hit the Russian government’s own infra-
structure. In time, such countries may become enthusiastic champions of China’s
digital model (Russia is already showing signs it may be heading in this direction).
Then, finally, there is the rest of the world: mostly middle- and lower-income
countries with no serious expectation of digital sovereignty. Many of these countries
increasingly fear being caught between the USA and China and having to choose.
For this section of the globe’s population, where much digital growth is expected as gaps in digital inclusion close, the term “digital sovereignty” rings hollow amidst the struggle of the two technological heavyweights. As Deborah
M. Lehr of the Paulson Institute put it in 2019, “if a new economic Iron Curtain is
to fall, it will be in areas like the Middle East and Africa” (Lehr 2019). Such
countries will worry less about the West’s ongoing struggles to write rules of the
road for America’s Internet and more about the implications of this splintering of the
technological globe.
References
Pohle, Julia and Thiel, Thorsten (2020). ‘Digital Sovereignty’. Internet Policy Review 9(4).
Lehr, Deborah M. (2019). How the US-China Tech War Will Impact The Developing World.
Published in The Diplomat, 23 February 2019.
Cultural Influences on Artificial
Intelligence: Along the New Silk Road
Lynda Hardman
While 20 years ago China was still learning from the international AI community,
investments and policies have led to the current situation where China is rapidly
overtaking the USA and EU in expertise in AI research, education, and innovation.
China has ambitions to become a world leader in AI by 2030 (CISTP 2018),
elevating the phrase “made in China” to a data-driven, hi-tech ecosystem for
manufacturing goods and technology. China developed an AI strategy in the
“China AI Development Report 2018” (CISTP 2018) in conjunction with partners
representing both academic and commercial interests in the country, overseen by the
China Institute for Science and Technology Policy (CISTP) at Tsinghua University.
The opinions in this chapter are the author’s own and do not necessarily reflect those of her employers or the organizations she represents. This chapter is based on the Amsterdam Data Science blog item “AI Research with China: to Collaborate or not to Collaborate – is that the Question?” (https://amsterdamdatascience.nl/ai-research-with-china-to-collaborate-or-not-to-collaborate-is-that-the-question/) and on Hardman (2020).
L. Hardman (*)
CWI, Amsterdam, The Netherlands
Utrecht University, Utrecht, The Netherlands
e-mail: Lynda.Hardman@cwi.nl
Among the policy goals stated in the report are:
• To increase public awareness
• To promote the development of the AI industry (e.g., in retail, agriculture, logistics, and finance, and in reshaping production)
• To act as a reference for policy makers
Societal goals for the use of AI are:
• Helping with an aging population
• Supporting sustainable development
• Helping the country transform economically—toward China as hi-tech developer
and supplier, rather than consumer
The first two of these societal goals are shared by Europe and the USA, leading to benefits from collaboration. The transformation of China into a hi-tech supplier, a valid endeavor in its own right, is more likely to lead to competition, including global competition for talent.
The report is realistic about the Chinese context, stating “Even recognized
domestic AI giants such as Baidu, Alibaba and Tencent (BAT) don’t have an
impressive performance in AI talent, papers and patents, while their
U.S. competitors like IBM, Microsoft and Google lead AI companies worldwide
in all indicators” (CISTP 2018, p. 6).
The executive summary concludes with “Currently, China’s AI policy has
emphasized on promoting AI technological development and industrial applications
and hasn’t given due attention to such issues as ethics and security regulation”
(CISTP 2018, p. 7).
China takes a long-term view, and this can be seen in its investments in AI
research and innovation, and particularly its tech talent. Huge efforts have been made
to attract successful Chinese AI researchers back to their home country to continue
their internationally competitive research and to educate new generations of talent.
China’s presence in the international AI research community is growing, as demon-
strated by the increasing percentage of papers in the top international AI conferences
that are co-authored by Chinese colleagues, working from China or from abroad
(Elsevier 2018).
The CISTP report also observes that the priorities of the USA are economic
growth, technological development, and national security (CISTP 2018, p. 5),
whereas the concerns of Europe are the ethical risks caused by AI on issues such
as security, privacy, and human dignity (CISTP 2018, p. 5). These different regional
policies seem aligned with underlying cultural differences among the regions.
The EU started developing national and European strategies around 2018, for example, establishing the European-wide High-Level Expert Group on Artificial Intelligence [1], which has produced Ethics Guidelines for Trustworthy AI and corresponding Policy and Investment Recommendations toward sustainability, growth, competitiveness, and inclusion (Craglia et al. 2018), and later publishing a Coordinated Plan on Artificial Intelligence [2]. This drive for AI investment is probably
also fueled by the huge investments in China, creating a “fear of losing out” if
Europe is not able to remain competitive. The European investment drive is not
solely from an economic perspective, as illustrated by the different aspects of the
report, such as sustainability and inclusion.
Around the same time that the High-Level Expert Group was developing its
report, many computer science academics and professionals were themselves
concerned by the growing impact of AI technology and potential unintended nega-
tive implications. Informatics Europe and the ACM Europe Council published the joint report “When Computers Decide: European Recommendations on Machine-Learned Automated Decision Making” [3] in February 2018. The report sets out the utility and dangers of decisions made by automated processes and provides recommendations for policy leaders.
While having better, and more, researchers across the globe is generally good news for academic research, in AI we need to remain cautious. China’s enormous investments
in AI have led to domination in a narrow set of sub-fields around machine learning,
with an emphasis on computer vision and language recognition. This domination
could be perceived as cause for concern from an international standpoint. For example,
computer vision techniques can be developed for facial recognition to track the
movements of citizens. Different cultures perceive the benefits and dangers of these
applications differently. Using these same techniques for other applications, such as
distinguishing cancerous from benign cells, is, however, widely perceived as good.
The international community has closely aligned objectives in application areas
of AI such as climate change, transport, energy transition, and the health and well-
being of their aging citizens.
[1] https://ec.europa.eu/digital-single-market/en/high-level-expert-group-artificial-intelligence
[2] https://ec.europa.eu/knowledge4policy/publication/coordinated-plan-artificial-intelligence-com2018-795-final_en
[3] https://www.informatics-europe.org/component/phocadownload/category/10-reports.html?download=74:automated-decision-making-report
This brings us to the difficult political and scientific choices that need to be made
as to when and how to collaborate with China and when to politely decline. Do AI
researchers need to completely halt all collaboration with Chinese academics and
companies? Cessation of collaboration would be counter to the established interna-
tional research culture of openness and dialogue.
European AI researchers are unlikely to want to work with Chinese colleagues on topics that may aid the Chinese state in actions that do not conform to European civil rights and values. The cultural difference in how those living in China and in the West weigh the trade-offs between privacy and security is hard to understand in and of itself, and even harder to grasp when researchers are not familiar with Chinese culture. AI researchers are not the most knowledgeable about global cultural differences, nor do many European researchers spend extended periods of time in China to learn first-hand.
Given the relatively small amount of research funding in European countries and
from the European Union, the welcome addition of funds from abroad would seem
like a golden opportunity. But things are not always as easy as they may appear.
Firstly—do European AI researchers want to work with Chinese colleagues? Sec-
ondly—do European academic institutions want to be funded by Chinese companies?
It is currently more common for European researchers to collaborate with large
US-based corporations. They fund research collaborations and attract high-profile
staff to work with them. At the same time, they have created the data economy that
led to the passing of EU law to give European citizens at least some control of the
data that they (often unknowingly) hand over to these corporations. There is, to my
knowledge, little discussion in my academic field as to whether we should think
carefully about collaborations with these US-based companies.
Given the shared scientific foundation of the AI and computer science fields, the same talent can be attracted to China, Europe, or the USA.
There are, however, differences in the willingness of students to leave their
continent to seek their fortune. It is likely that Chinese AI and computer science
students will study for some time abroad before returning to China where research
resources are currently (anno 2021) plentiful. On the other hand, the attraction of the European work/life balance may also be a factor.
European students are much less familiar with Chinese cultures and language
than Chinese students are with the English language and American and European
cultures. This creates a larger barrier to moving to a region where the currently perceived academic benefits are few. This may change as awareness of the speed of technological change and of the research resources available in China increases, creating a stronger pull for both European and American students.
While China is attractive to young hi-tech talent, working weeks are long, giving
little opportunity to spend time with family and friends. In Europe, it is not just one’s
standard of living that is important but also one’s quality of life. Development of AI
technologies provides hope that both can be achieved through more efficient use of
the limited resources available.
Huge efforts have been made to attract successful Chinese AI researchers back to
their home country to continue their internationally competitive research and to
educate new generations of “home-grown” talent. Both generations of researchers
bring with them the competitive, individualistic risk-driven culture learned abroad.
Just as in winning in top sport—be it gymnastics, football, or table tennis—two
things are essential: individuals with intrinsic potential and motivation and an
environment that polishes and hones the required internationally competitive skills.
A characteristic of the successful Western research culture is questioning received
wisdom, which goes against the grain of Chinese and many other Asian cultures
where authority is highly respected.
Getting to know your Chinese colleagues is a much easier and definitely more pleasant task. Read Lee’s (2018) book, which gives insights into taking the Silicon Valley start-up culture and transferring it to China while at the same time adapting it to the rules of a new “Wild East.” Learn Chinese and visit your colleagues in China.
AI research and innovation is taking place along the New Silk Road. European
researchers are already accustomed to global collaborations across different cultures.
One of the characteristics of international research culture is the independent exchange of critical feedback while remaining aware of the implications of the research outcomes. Chinese researchers are developing their own research and innovation strategies, making strategic investments in academic education and research that enable them to become an influential partner on the global stage in research as well as innovation. From both European and Chinese perspectives, AI
researchers need to develop a better understanding of the cultures in which we
operate. Learning from each other’s cultural perspectives is something that our AI
systems are not yet able to do for us.
References
Bekkers F., Oosterveld W. and Verhagen P. (2019) Checklist for Collaboration with Chinese
Universities and Other Research Institutions. The Hague Centre for Strategic Studies, January.
https://hcss.nl/report/checklist-for-collaboration-with-chinese-universities-and-other-research-
institutions/
CISTP (2018) China AI Development Report 2018. China Institute for Science and Technology
Policy (CISTP) at Tsinghua University. http://www.sppm.tsinghua.edu.cn/eWebEditor/
UploadFile/China_AI_development_report_2018.pdf
Craglia, M. et al. (2018). ‘Artificial Intelligence: A European Perspective’. Publications Office of the European Union, March. https://ec.europa.eu/jrc/en/publication/artificial-intelligence-european-perspective
Elsevier (2018) Artificial Intelligence: How knowledge is created, transferred, and used. Trends in
China, Europe, and the United States. https://www.elsevier.com/research-intelligence/resource-
library/ai-report
Hardman L. (2020) Artificial Intelligence along the New Silk Road: Competition or Collaboration?
Chapter in: (Van der Wende, 2020). https://ir.cwi.nl/pub/29940
Lee K.-F. (2018) AI Superpowers: China, Silicon Valley, and the New World Order. Houghton
Mifflin Co., USA. https://www.aisuperpowers.com
Van der Wende M.C., Kirby W.C., Liu N.C. and Marginson S. (eds.) (2020) China and Europe on
the New Silk Road: Connecting Universities across Eurasia. Oxford University Press. https://
global.oup.com/academic/product/china-and-europe-on-the-new-silk-road-9780198853022
Geopolitics, Digital Sovereignty... What’s in a Word?
Hannes Werthner
1 The Context
Nations (and groupings thereof), industry, and international organizations have spent megabillions over the last decades to connect the globe. Submarine cables, satellite communications, fixed and mobile networks, and the Internet have enabled global connectivity and access in even the most remote places.
To enable this, geopolitical dynamics shaped a world of multilateral collaboration, with the WTO [1] as the arbiter of frictionless global trade. The global village was indeed not a rose garden, but multilateral regulatory cooperation would fix it, and global trade would make the world a safer and wealthier place for all. Such was the narrative, for example, when China became a member of the WTO in 2001.
[1] The World Trade Organization is an intergovernmental organization that regulates international trade. The WTO officially commenced on 1 January 1995 under the Marrakesh Agreement, signed by 123 nations in 1994, replacing the General Agreement on Tariffs and Trade (GATT), which commenced in 1948 (www.wto.org).
H. Werthner (*)
Vienna University of Technology, Vienna, Austria
e-mail: hannes.werthner@tuwien.ac.at
Fast forward to 2021: “second life” has become our life, digital connectivity has infused the economy and society worldwide, M-Turks from Nigeria or India work in real time for Silicon Valley corporations, and geography has come to an end (so has privacy, but that is a different subject). And yet, as technology, economics, and politics have shaped this global cyber-reality, globalization is being challenged by the (re)formation of more or less antagonistic trade blocs. After three decades of moving toward a single global market governed by the rules of the WTO, the international order has undergone a fundamental change, and an open, unified, global market may indeed become a thing of the past (Fischer 2019).
With different narratives, each regional trading bloc is developing its own roadmap
to achieve global digital success and indeed global supremacy. Be it President Biden
signing an executive order strengthening the “Buy American” provisions, China
asserting its primacy in digital matters and global trade, India promoting a techno-
nationalistic agenda, or Russia developing its offensive cyber capabilities, it is as if
global trade in the twenty-first century were bound to be a discordant zero-sum game. What’s more, this global competition is not only industrial, technological, or economic but also about visions, values, and methods. Whether trade irritants can dissolve into good intentions remains to be seen. [2]
At a conference of the Centre for Economic Policy Research in February 2021, Dr. Christian Bluth [3] argued that the most important challenge for the EU today is the increasingly charged geopolitics of trade: trade policy is increasingly used for projecting power rather than generating prosperity, and several countries are “weaponizing” the trade dependence that others have on them.
Be it tea from China, spices from the Malabar Coast, gold from South America, coffee or rubber from Africa, or oil from the Middle East, Europe did not have a problem with global supply chains or national sovereignty when it was dominant in worldwide trade and industry. It even got support from moral or political authorities (the Treaty of Tordesillas in 1494 [4], the Berlin Conference of 1885 [5]) and carved out the political concepts to lean on, e.g., Westphalian sovereignty. Today’s situation is indeed slightly different.
[2] Some call for “differentiated digital sovereignty.”
[3] https://www.bertelsmann-stiftung.de/en/about-us/who-we-are/contact/profile/cid/christian-bluth-1
Once an economic giant (but a political dwarf), Europe’s ambitions for the “digital decade” are caught between the USA and China, whose duopoly dominates the global digital economy. The Old Continent appears to have already lost the artificial intelligence battle—to name but one. We need to wake up to the fact that we are falling behind in 5G development and its application in service and industrial verticals, and so run the risk of becoming a minor player in the global contest. [6]
GAFAM (short for Google, Amazon, Facebook, Apple, and Microsoft) and BAT (Baidu, Alibaba, Tencent) are not only global platforms with revenues much larger than many countries’ GDP. They also integrate vertically and horizontally, absorbing potential competition and shaping the whole economy, including strategic sectors and the provision of public services. Incidentally, they also alter the fundamentals of the labor market. This market is not the EU’s strong suit, as shown in Fig. 1.
[4] An agreement between Spain and Portugal aimed at settling conflicts over lands newly discovered by Christopher Columbus and other late fifteenth-century voyagers.
[5] The conference at which the major European powers negotiated and formalized claims to territory in Africa.
[6] IDATE, DigiWorld 2020, https://en.idate.org/the-digiworld-yearbook-2020-is-available/
[7] EU Commissioner for industrial policy, internal market, digital technology, defense, and space.
[8] https://bit.ly/3l0MBH4
[9] https://bit.ly/3aWrv94
The digitalization of the world adds a meta layer on top of political authority. To set the rules of the game in its own jurisdiction, EU policy makers devised a series of regulatory measures: the General Data Protection Regulation [10], the Cybersecurity Act, the Directive on Network and Information Security [11], the Digital Services Act, and the Digital Markets Act [12].
The legal framework is set, and this is not trivial. But does it suffice to build the EU’s capacity to be sovereign in the digital competition? Many EU companies that play in the global league are not particularly fervent about the concept of sovereignty, as most of their operations and revenue are overseas. And how about the indecision within the GAIA-X [13] constituency, or ARM (a leading British chip maker) being acquired by its US rival NVIDIA (“a disaster for Cambridge, for the UK and for Europe,” H. Hauser on BBC Radio 4, September 2020), raising the prospect of subsidizing a US company to build EU chip industry capacity?
As defined by F.H. Hinsley (1986, p. 1), sovereignty is “the idea that there is a final and absolute political authority in the political community [...] and no final and absolute authority exists elsewhere.” This implies, on the one hand, that no political authority can be half sovereign and, on the other hand, that the entity from which sovereignty emanates should be monolithic, or at least sufficiently integrated to project “final and absolute political authority.” Both characteristics contradict the way the EU is constructed and the breakdown of jurisdiction and competence between the EU and its Member States.
Do the exclusive competences of the EU [14] grant EU lawmakers the means to walk the talk? What does sovereignty mean without jurisdiction? How sovereign is the EU when, for example, 15 EU countries, representing over 50% of the entire EU membership, sign up to China’s Belt and Road Initiative, or European automotive brands ink deals with the GAFAs for data analytics, machine learning, and artificial intelligence?
[10] https://bit.ly/3nKbzvW
[11] For the Cybersecurity Act and the NIS Directive, see https://bit.ly/3gYlE6M.
[12] For the Digital Services Act and the Digital Markets Act, see https://bit.ly/2QIfHR2.
[13] GAIA-X is a project for the development of a competitive, secure, and trustworthy federation of data infrastructure and service providers for Europe, supported by representatives of business, science, and administration from Germany and France, together with other European partners (https://bit.ly/3nD241q).
[14] Customs union, competition rules for the internal market, monetary policy for the euro area, common fisheries policy, common commercial policy, and the conclusion of international agreements.
This might partly explain the bids from the European Commission to seize
activities in areas the Treaties allot in principle to Member States, such as radio
spectrum allocation, health, or e-identity. Maybe “life on life’s terms” will change
this breakdown of competence; for the time being, the EC’s bids have not been
welcomed with open arms in EU capitals.
4 Where Next?
The EU may not be a leading player in several areas of the digital economy, e.g., platforms. Yet it has a series of assets to build on, such as:
• The largest GDP in the world and a market of 500 million people
• Leadership in several domains (e.g., aeronautics, cryptography, banking, automotive retail)
• A very dynamic SME scene
• R&D and intellectual capital
• High-end connectivity and networks (transport, energy, telecoms)
• Cultural diversity and a fundamental rights charter
“Europe is a deep wellspring of talent, with a tremendous capacity to rebound, and a rare power of innovation (...). Europe is also synonymous with actions and projects driven by exacting values and a commitment to positive and progressive construction.” [15]
We argue that leadership in digital does not mean leading in all segments of the tech
industry, but rather the capacity to digitalize industry and services in a safe, secure,
and trustworthy way. With this come the questions of how to combine those assets,
which battles to choose, which allogenous bricks can be part of the plan, who to
partner with on what terms, etc.
In other words: select strategic sectors and, within those, make one’s own rules and plans on one’s own terms. Strategic autonomy, literally, as in auto-nomos.
This implies acting in several directions and selecting what to do and what NOT to
do—which is often the forgotten part.
[15] IDATE, DigiWorld 2020, op. cit.
This is not new. To a large extent, the EC (and several capitals) are headed in this
direction with the recent Regulations mentioned above, the increased powers granted
to ENISA [16] in cybersecurity, or the rules to participate in EU-funded research
projects.
Contrary to the concept of digital sovereignty, the concept of strategic autonomy does not hint at any notion of protectionism, but rather at the idea that “you’re most welcome to operate in my jurisdiction as long as you play by the rules I set.” It is also much more operational, as it almost self-contains the notion of dynamic planning.
In a webinar of a Brussels think tank in February 2021, Anthony Gardner, former US Ambassador to the EU, put it in a very poetic manner: “Digital sovereignty as sometimes heard in EU circles is chasing moonbeams.” Strategic autonomy, on the contrary, is very down to earth and operational.
[16] European Union Agency for Cybersecurity, https://www.enisa.europa.eu/
Acknowledgment This chapter was written after a long discussion and exchange with a third
person who prefers to remain anonymous.
References
Hinsley, F.H. (1986). Sovereignty. 2nd edition. Cambridge: Cambridge University Press.
Fischer, J. (2019). ‘The End of the World As We Know It’. Project Syndicate, 3 June [online]. Available at: https://www.project-syndicate.org/commentary/us-china-break-europe-by-joschka-fischer-2019-06 (Accessed: 15 June 2021)
Part IX
Systems and Society
Work Without Jobs
Daniel K. Samaan
Abstract Technology has always had an impact on the world of work. This chapter
compares the transformation of our societies during the Industrial Revolution with
potential transformations that digitalization may bring about today. If digitalization
is truly disruptive, more may be at stake than job losses in some sectors and job gains
in others. Identifying several key features of digitalization, this chapter sketches a
future of work in which not jobs but work itself stands in the center of economic
activity. Such a development could open a pathway to more humanistic, more
democratic, and more sustainable societies but would require rethinking entirely
how we organize and reward work on a societal level.
Around 200 years ago, many societies in Europe and North America fundamentally
altered the way in which work was organized and remunerated. Facilitated by
technological advances like the steam and combustion engines, as well as expedited
by regulatory changes, mass production and standardization of goods became the
prevailing modes of production. This newly emerging factory system also entailed changes in work organization: it was characterized by a high physical concentration of labor in production facilities and a hitherto unseen division of labor, orchestrated by hierarchical organizations. Both changes, the mechanization and standardization of production processes and the corresponding new work organization, led to unprecedented productivity gains to which we owe much of today’s living standards. In his famous example of the pin factory, Adam Smith illustrated the magnitude of such productivity increases more than two centuries ago: output per worker could be increased from fewer than 20 pins to 4,800 (Smith 1776).
We all know today that historians would later refer to this decades-long period of
continuous and fundamental changes to the world of work as the Industrial Revo-
lution (IR). Closely tied to this revolution is what Frithjof Bergmann (2019) calls the “job system”: we bundle the vast majority of our work activities (“tasks”) into “jobs.” We call standardized descriptions of such jobs “occupations.” These jobs are then bought and sold on the (labor) market for a supposedly fair share of society’s overall output (the wage). Hence, the functioning of industrial society is centered not on work that we do for ourselves but on obtaining and performing jobs for others. The question I want to pursue in this chapter is whether in a “digital society” this interdependence will be any different.
D. K. Samaan (*)
International Labour Organization (ILO), Geneva, Switzerland
e-mail: samaan@ilo.org
The importance of the “job system” for our societies can hardly be overestimated. It is at the center of how we act and how we conduct our lives:
We educate ourselves, predominantly, in order to “learn an occupation” and to “get a
job.” We want to spend our lives being “employed” and not “unemployed.” Being
“unemployed” and without a “real job” are social stigmata and lead to loss of income
and social standing. Political competition in every Western democracy is critically concerned with creating new jobs or proposing suitable conditions for companies to crank out more jobs. We are prepared to accept all kinds of unpleasant trade-offs,
like destroying our environment or heating up the climate, if only job creation is not
harmed. Because without jobs, we have no work, no income, no taxes, no public
services, no social security systems, no more support for democratic votes, and
finally no more society, as we currently know it.
This way of thinking has not changed much since the IR. In 2021, we do reflect on the future of work, but our imagination of the future is restricted and domi-
nated by the “job system” and by all the institutions and terminology that we created
around it: “the labor market,” “re-skilling,” “unemployment,” “part-time work,”
“contingent work,” etc. This list could be easily expanded and filled with the
respective literature on the future of work. In other words, with some exceptions
(e.g., Precht 2018), most of the discussion on the future of work sees the job system
as a given centerpiece of our societies.
The job system has not always existed. In pre-industrialization times, working
from home or in small community shops on one’s own terms, self-controlled and
owning the means of production, was the norm. Several factors drove us into the
creation of the “job system” at the time.
How does digitalization figure in this debate? It has awakened old fears among
workers, politicians, corporate leaders, middle managers, and others. Specifically,
they worry that this most recent wave of digitalization will lead to unprecedented
automation and hence a massive loss of jobs (Frey and Osborne 2017). And as we
have seen above, once the jobs are gone, the downward spiral (no work, no income,
etc.) is triggered. So, this fear is justified.
Yet, I would like to look at this question from a slightly different angle in this
chapter: A society might run out of jobs, but it can never run out of work. The real
question that we face today is therefore whether or not digitalization and its powerful
offspring, big data and artificial intelligence (AI) [1], are going to eradicate the “job
system” and, if so, how we can live without it.
There are three reasons why digitalization, understood as a technology, has the
potential to destroy the job system. Firstly, artificial intelligence is a general-purpose
technology (Brynjolfsson and McAfee 2014). It is not an invention, like the radio or
many others, which have a confined impact on certain economic sectors and societal
domains, like the radio has had on mass media, the printing press, and perhaps the
military sector. AI, and digitalization more broadly, is more comparable to electri-
fication. We can find applications and devices in virtually all economic sectors for
consumers and producers, workers, management, governments, and many other
actors alike. This qualification as a general-purpose technology is a major ingredient
for a revolutionary change. The economic system is shocked from many different
contact points at the same time.
Secondly, big data provides economic actors [2] with information on the “states of the world” and facilitates decentralized decision-making and decentralized action.
Most of economic activity on the societal level (often also on the individual level) is
about making decisions under uncertainty to allocate resources efficiently: A priori,
we do not know who needs which goods and services under what conditions at a
certain time. Neither do we know who can supply which resources under which
conditions and who has which capabilities. We do not know and cannot directly
observe in which “state” the world is. This was the price we had to pay for the high
division of labor. Traditionally, this lack of information and this problem of coor-
dination have been solved by adding middlemen and by mass-producing a standard-
ized good or service for the average consumer. Those “middlemen” can be persons
inside an organization, like middle managers, who pass information from the top
management to the workers and make sure the orders are carried out. The middlemen
can also be outside an organization (say a firm) and facilitate contact to the right
customers or carry out marketing surveys. The information flow and the feedback are
typically coordinated through specified channels that follow hierarchical structures. These are remnants of the factory system. Digitalization makes much of this framework unnecessary. Production plants and workers do not need to be concentrated, either spatially or temporally. Output does not have to be standardized but can be customized for a specific individual. We can think about the industrial economic
world as a picture of islands of producers, customers, workers, and managers,
whereby the middlemen are connecting the islands. The whole picture (“state of
the world”) is not fully visible. Now big data is rapidly filling the empty spaces with
many small dots and establishing direct connections among them.
[1] Digitalization encompasses more than the massive amounts of digital data and the processing of information through AI systems, but I will have mainly these two aspects in mind when I refer to digitalization in this chapter.
[2] In fact, not all economic actors have the same access to information, and the accumulation of digital data over the last years has already led to a power shift across enterprises.
The job system rarely lets us do the work that we “really want” (Bergmann 2019), even though the desire to work and contribute to the
well-being of society in one way or another exists in virtually every human being. Is
digitalization not giving us the tools to devise a better, more human coordination and remuneration mechanism? According to some estimates, the percentage of jobs in “transaction industries” [3] rose from about 15% in 1900 to about 40% in 1970 in the US economy. Unfortunately, I do not have more recent, comparable numbers available, but these estimates date from before the big wave of “servicification” hit our economies. I would therefore estimate that today more than two-thirds of jobs are “transaction jobs,” including a large overlap with what Graeber (2018) calls “bullshit jobs.” Karl Marx (1885) referred to labor that is essentially spent to circulate capital and goods as “unproductive labor,” in contrast to “productive labor” that is concerned with the creation of use values for society. [4] If we adopt this notion of
productivity, only a small proportion of all performed labor is still productive for
society. Interestingly, going back to the more modern works of Frey and Osborne
(2017) and Brynjolfsson et al. (2018), we can see that the jobs/tasks with a high risk
of automation and high suitability for machine learning are exactly these jobs/tasks
in the “transaction industries.” Finally, the job system, conjointly with mass pro-
duction, has a terrible record in terms of resource productivity. It is a waste producer.
A society with more meaningful and satisfying work is possible. Most of us want
to spend more time on care work, education and cultural work, and preserving and
protecting our environment. The digital revolution is not threatening our work, it is
threatening the job system, and this is good news.
References
Bergmann, Frithjof (2019): ‘New work, new culture – work we want and a culture that strengthens us’, Zero Books, Winchester, UK, and Washington, USA.
Brynjolfsson, Erik, and Andrew McAfee (2014): ‘The Second Machine Age: Work, Progress, and
Prosperity in a Time of Brilliant Technologies’. Reprint. W. W. Norton & Company.
Brynjolfsson, E., T. Mitchell, and D. Rock (2018): ‘What Can Machines Learn and What Does It
Mean for Occupations and the Economy?’ AEA Papers and Proceedings, 108, 43-47.
Frey, Carl Benedikt, and Michael A. Osborne (2017): ‘The Future of Employment: How Suscep-
tible Are Jobs to Computerisation?’, Technological Forecasting and Social Change 114:
254–80. https://doi.org/10.1016/j.techfore.2016.08.019.
[3] Sectors and professions concerned with processing and conveying information (accountants, clerks, lawyers, insurance agents, foremen, guards, etc.).
[4] Marx did not use the terms “productive” and “unproductive labor” consistently throughout all three volumes of Capital.
Graeber, David (2018): ‘Bullshit Jobs – The rise of pointless work and what we can do about it’,
Penguin Books, UK, USA.
Marx, Karl (1885): ‘Capital’, Volume 2, Penguin Edition of 1978, 1885.
Precht, Richard David (2018): ‘Jäger, Hirten, Kritiker – Eine Utopie für eine digitale Gesellschaft’,
Goldmann Verlag, 1. Auflage, München.
Smith, Adam (1776): ‘An Inquiry into the Nature and Causes of the Wealth of Nations’, Modern Library edition (1993), New York.
Why Don’t You Do Something to Help Me?
Digital Humanism: A Call for Cities to Act
Michael Stampfer
Abstract Cities across the globe face the challenge of managing massive digitiza-
tion processes to meet climate goals and turn urban agglomerations into more livable
places. Digital Humanism helps us to see and define how such transformations can
be done through empowerment of citizens and administrations, with a strong
political agenda calling for inclusion, quality of life, and social goals. Such an
approach appears to be much more promising than top-down technological fantasies
as often provided by large companies in fields like housing, transport, the use of
public space, or healthcare. The title refers to a question put to Stan Laurel by Oliver Hardy in countless movies; here, the latter stands for a city calling on industry for help. The delivery, as we know, can lead straight to disaster, but in real life it is less funny than with the two great comedians.
M. Stampfer (*)
Vienna Science and Technology Fund (WWTF), Vienna, Austria
e-mail: michael.stampfer@wwtf.at
Cities across the world face a number of pressing and long-term challenges, includ-
ing massive urbanization with growth in size and density as well as the need to
de-carbonize the whole urban metabolism. This includes transport, construction, and
consumption as well as implementation of climate mitigation strategies. Further, and
in many different ways, cities play an important role in key policy areas like social
cohesion, education, housing, and health, with the aim to provide for an affordable
and high quality of life. Finally, as most real politics is local, cities are pivotal for
further developing democracy, sourcing the creative potential, fostering innovation,
and increasing the political participation of their inhabitants.
In the last decades, digitalization has been entering through all kinds of doors,
with great promises and massive power to transform traditional forms of evidence-
gathering, business models, governance structures, communication patterns, and
decision-making. Ubiquitous optical and sensor systems and broadband networks
provide for massive and reliable data which can be analyzed with powerful methods
ranging from machine learning to complex systems analysis. In fields like health or public transport, the power to collect, own, combine, and interpret data has become as important as the ownership of operating theaters or subway lines. Cities across Europe are therefore speeding up their digital policies and actions. Speed, however, is a relative term, as many cities face huge obstacles: endless layers of bureaucracy, laws cementing the status quo, conflicting interests, and limited budgets.
Now we can close our eyes for a moment and think of a stressed-out, impatient
Oliver Hardy turning to Stan Laurel, inconspicuously standing close by: “Why don’t
you do something to help me?” This is how many cities have acted: Most conve-
niently, industry also happens to wait already on the doorstep with beautifully
rendered turnkey solutions or mobile apps to solve most wicked societal problems.
Unfortunately, for miracles like thumb-as-lighter, finger-wiggling, and kneesy-
earsy-nosey, the transfer of skills to the unprepared mind has its consequences:
We let our imagination still flow for a moment, to uphill piano transports, escalating
cream pie fights, or the sweet dynamism of destroyed kitchen porcelain. At the end,
we listen to a sobbing Stan Laurel pleading innocent and to the famous Oliver Hardy
line: “That’s another nice mess you’ve gotten me into.” However, contrary to Stan,
industry often does not end on the losers’ side.
“Smart City” has become a key term for urban policies dealing with data-driven,
often large-scale solutions to better manage urban agglomerations. For cities to enter
the next steps of digitalization means facing a number of challenges; the following
five points can be seen as examples:
A first one might be just termed “turnkey”—like supplying whole neighborhoods with readymade solutions for de-centralized energy production and storage, with smart grids and meters. This is a very good idea (seriously, although at the same time we hear Oliver Hardy’s voice again), a big trend, and it helps transform the energy system, but it is also extremely tricky and in need of long and patient co-development between private and public actors.
A second challenge is “you may keep the hardware,” as the tram network or bus
fleet stays with the public utilities, but the data solutions managing the user interface
are being serviced by private providers. That one might not be such a good idea, as
data today defines strategies and directs revenue streams. Therefore, cities try to
establish their own data management structures.
A third one can be termed “improved, electrified, without driver, no change of mind required,” which is a nice combination of current user habits and future industry profits. Take as an example the effort to help cities get rid of the car pandemic ... by providing technically enhanced cars: as if, without drivers, the cars would miraculously disappear somewhere after their trips. A better idea might be to establish demand-based co-creation processes with citizens on how to redefine and regain urban space and develop broader mobility concepts.
A fourth challenge is the “law-overriding platform economy,” successfully ignor-
ing taxation, employment laws, or sector regulations, just because they can. When
cities or regions hit back with their still powerful old-world instruments like taxi
license ordinances, regulations for touristic accommodation, or labor standards, they
might fend off the tech giants for a while, albeit at the price of stifling innovation and
paying rents to incumbents.
A fifth example is “give us your city and make us happy,” with, again, tech giants collecting and siphoning off all kinds of data without leaving sustainable profits for the city and its inhabitants. Many examples start with city actors having low capabilities to frame, organize, manage, and capitalize on digital platforms and services, which are therefore handed over to industrial actors. They then provide street view, health diagnosis, or pan-optical surveillance, without telling what they do with the data collected and how they re-use and sell them.
Such examples hopefully show how important it is for cities to develop and
implement an active political approach toward data and data policies. This starts with
focusing on data protection and privacy issues as well as with building up in-house
competencies, extending to the creation and nurturing of local networks and knowl-
edge hubs, as well as cooperation with academia and civil society. Strategies are
important, and even more so are scores of individual projects, ranging from supported
small citizens’ initiatives to large-scale change efforts in health, transport, energy, or
participation.
The bottom-up approach is especially important: Cities have to become active
and experiment, for three reasons: First, in many fields connected with large-scale
data collection, the platform economies, and their digitally driven influencing
strategies on our future behavior, we see national, European, or global regulation
still at an infant stage, with large companies successfully battling legislative efforts.
Second, for changes in the way cities work and resources are being used, the ideas
and needs of citizens are often the best way to reclaim public space, data sovereignty,
and carbon neutrality. Data sovereignty is of specific importance here, as city
administrations are also hungry for data and should not misuse them. Third, living
labs and trial and error are often much better suited to lasting innovation than
top-down, turnkey solutions. Since we of course also need general laws and regulations,
such an approach can serve as a valuable learning space.
Digital Humanism plays a central role in shaping what cities do and how they do
it, as a state of mind and as a guiding principle. City politics and administration do
not have to re-invent the wheel, as history offers many successful examples of how
to create public goods in areas like housing or public transport. The "state of mind"
issue appears to be of specific importance as the question is: Who shall prosper?
How to include all—or at least most—inhabitants in decision-making and have them
share the benefits? What makes a good life in cities, in key elements like social interaction,
health, resource consumption, or space to live?
These are not really new questions; take the example of a city like Vienna in
pre-digital times: Powerful public infrastructures, a huge communal housing pro-
gram, top-class health for all, and strong social networks have been the political
priorities for more than 100 years. As one consequence, Vienna in the last decade
could successfully take the next step and become one of the leading Smart Cities
across the globe, combining goals and measures for social inclusion, for innovation,
and for reduction in resource consumption and carbon emissions.
The next, ongoing step is to frame digitalization politically and in all policies to
respond to the Digital Humanism number one question: How can we preserve and
constructively transform the best of our current civilization into the digital world?
We have to build laws, norms, structures, and knowledge bases that allow our key
institutions to thrive in the future as well: representative democracy, the
welfare state, the rule of law, and the social market economy. All of them can unfold
even better in a strongly digitized world. However, if we leave this transformation
without strong political action, we invite big market actors to siphon off our data, to
not pay taxes, or to manipulate elections and our individual actions. Without such
frameworks, we write an invitation letter to illiberal democracy and to the surveil-
lance state, as unfortunately the libertarian equation of greater freedom through
unregulated digital interactions has proved to be a two-edged sword at best.
Digital Humanism therefore means active politics and regulatory frameworks in
addition to ethics, knowledge bases, and infrastructures. Cities can do a lot in all
these respects:
• Active politics is about setting an agenda, finding majorities, and making issues
visible through priority setting and sometimes loud, bold statements. Mayors and
cities are more powerful than one might assume. They can mobilize an
electorate and are in charge of big decisions in city development. A number of
cities like Barcelona or Amsterdam have already implemented policies requiring that
certain data sets collected by big companies be made available to citizens,
small companies, and initiatives. Cities can play an important role in guaranteeing
individual and collective data sovereignty and help make data a resource for
social and economic purposes on a local level. Cities can form coalitions and
pressure groups, as with the Digital Services Act on the European level.
• As in former industrial revolutions, regulatory frameworks in many ways still
wait to be developed. In the meantime, policy makers also on the city level have
to respond by shifting the objective: As already stated, such regulations might not
always be rooted in digitalization as such. Instead, we find them in good old
labor laws, transport licenses, or accommodation rules. As a consequence, Uber-
style transport might come to an abrupt halt; and Airbnb hosts find unexpected
obstacles. Such measures help stop the erosion of justified standards in labor
relations or fair competition. They might at the same time also stifle competition,
feed lazy incumbents, and prevent customer value from materializing. Therefore,
we need new forms of regulation, and cities can be strong actors between the
Charybdis of the good old times and the Scylla of unregulated global platforms
devouring scores of workers and businesses.
• On the basis of strong political and regulatory will, ethics issues come in through
many doors. A first door is education, with new curricula to help close the gap between
the "lots of ethics but no idea about technology" approach in the more humanistic
colleges and the "give us points to connect and gratifications and progress at any
price" ideology of technical education. Such an integration should start early on,
and regional authorities can play a role as they often run the more basic education.
A next door is the purchasing power of cities asking for high ethical standards in
all kinds of digital goods and services. A third one is supporting grassroots and
civil society actors concerned with privacy, data sovereignty, or creative work;
cities can subsidize and promote movements, festivals, neighborhood initiatives,
and a score of other activities. One final point goes in the same direction: supporting
a critical discourse. Here, the ethics part means taking the side of the less fortunate
part of the population by countering a narrative like "we all have to be techno-optimists"
with the reply that, without such care, mainly the strong and powerful will collect the
benefits when this narrative materializes.
• Knowledge bases—besides schools—include strong universities and research
providers. Many cities employ active policies to support their local research
base. Cities like Berlin have created cross-disciplinary centers, while in Vienna
we find a research funding program called Digital Humanism to link Computer
Sciences with Social Sciences and the Humanities. Here the idea is to collabora-
tively create theories, methods, approaches, and practices as well as to find a
common understanding of new technological and social phenomena. Such initiatives
shall help wake up the soft sciences to the challenge of the digital revolution, while
providing engineers with a framework for how a good, inclusive society can further
unfold with the help of their models and artifacts. One guidepost for such activities is the Vienna Manifesto for Digital
Humanism that has been co-developed by local and international researchers of
various backgrounds with support from Vienna policy makers.
• Infrastructure is another broad topic where cities can play an important role: first
by helping to provide top-class broadband networks; second by strongly supporting
their own departments and utilities, as well as private industries, in coming forward
with up-to-date solutions, both technical and non-technical. Third, and perhaps most
important, is the ability of the public sector to deal effectively with its data. Currently,
public actors collect loads of data but often lack proper policies and practices for how
best to store, validate, connect, and share them.
Within the framework of the recent European data protection regulations, there
are many ways to better steer policy, deliver results, and allow research to access
data. Unfortunately, many public actors including cities currently face a dilemma:
As they cannot always effectively transform traditionally high standards of
service into the digital sphere, they have to give carte blanche to all kinds of
companies including the global platform firms by letting them collect, analyze,
and capitalize the data. Examples from the health and public transport sector
show that such an approach can be dangerous: Public actors should at least have
the competence to govern public domain data, being able to decide what shall
remain in the public domain and what can be handed over to the private sector.
As we see, Digital Humanism is a mindset and a tool for cities, a mindset in
emergence and a tool in the making. We can all be part of this process. "Why don't
you do something to help us?" is, for now, a serious question.
Ethics or Quality of Life?
Hubert Österle
For decades, machine intelligence has changed companies and the economy. Now it
is affecting our lives in significantly more direct ways and generating hopes and
fears. Ethical initiatives such as Digital Humanism want to align machine intelli-
gence with the quality of life (happiness and unhappiness) of all humans.
For highly developed societies at least, technology and capitalism have brought
enormous material prosperity and satisfied needs such as food, security, and health,
i.e., the needs of self-preservation and preservation of the species.
But the affluent society can do more than satisfy basic needs (yellow background
in Fig. 1). The needs of selection (light-blue background) come to the fore (Österle
2020, p. 68–80) and drive human beings onto a treadmill in which, consciously or
unconsciously, everyone is constantly working on their status.

H. Österle (*)
University of St. Gallen, St. Gallen, Switzerland
e-mail: hubert.oesterle@unisg.ch
Phrases such as “for the benefit of humanity” have become a common element of
corporate mission statements. But who actually believes in such grandiose state-
ments? What has ethics, especially business ethics, as formulated by Max Weber
100 years ago (Weber 1915), actually achieved? It is certainly helpful to ask what
kind of interests guide ethics.
At the US Business Roundtable, nearly 200 CEOs of leading companies
signed a “fundamental commitment to all of our stakeholders.”1 Many media articles
have described it as an attempt to sugarcoat the social ills of digitalization through
simple declarations of intent. Interestingly, the statement of these business represen-
tatives does not even mention the much more concrete international standard ISO
26000 on Corporate Social Responsibility (Schmiedeknecht and Wieland 2015),
which was adopted 10 years ago. Digitalization requires many corporate leaders to
demonstrate, among other things, the responsible handling of personal data. Indi-
vidual management consultants have reacted to this with offers for data ethics, aimed
primarily at maintaining company ratings.
Avoiding the dangers of digitalization and seizing the opportunities for the benefit of
human beings is a task for all citizens. Everyone must consider how they use digital
services and what they expect from companies and politicians, for example, what
personal data they give to Facebook, and where politicians should protect them from
abuse. The danger arises when the discussion is dominated by do-gooders, who often
1. https://www.businessroundtable.org/business-roundtable-redefines-the-purpose-of-a-corporation-to-promote-an-economy-that-serves-all-americans
2. https://www.msci.com/documents/10199/123a2b2b-1395-4aa2-a121-ea14de6d708a
3. https://www.inrate.com/index.cfm
4. https://www.oecd.org/finance/Investment-Governance-Integration-ESG-Factors.pdf
argue purely emotionally, usually represent a very narrow partial view, and use vocal
debate to compensate for their lack of knowledge and thus influence politics. Typical
“enemies” are the greed of shareholders, the totalitarian manipulation in China, the
taxation of foreign corporations, and the “zombification” of mobile phone users.
Do-gooders altruistically stand up for the good of the community but demand
sacrifices mostly from others. In many cases, their commitment is a search for
recognition for their efforts and a striving for self-esteem, which is often described
as a “meaningful life” or similar phrases.
Politicians need votes or the trust of their constituents. So they pick up on the popular
mood and translate it into pithy catchphrases. A good example is the European
Union’s announcement of the digital future of Europe5 with populist values such as
fairness, competitiveness, openness, democracy, and sustainability. In addition to
emphasizing fashionable topics such as artificial intelligence, the document focuses on
the regulation of digitalization, while it hardly presents any concepts for how Europe
should keep pace with the USA and China and therefore actively contribute toward
shaping digital services. The focus is on restricting entrepreneurial activity, not on
exploiting potentials such as the Internet of Things (5G, sensor, and actuator
technology). The citizens addressed do not know these technologies, or know too
little about them, and have neither the time nor the motivation and prerequisites to
understand the technologies and their consequences. It is therefore much easier to
evoke the previously mentioned “bogeymen” than to arouse enthusiasm for misun-
derstood technologies.
This is also confirmed by the discussion on the use of mobile phone users’
location data to curb the spread of COVID-19. The data that has long been used,
for example, for planning public transport, is virtually negligible compared to the use
of data voluntarily submitted to Google, Apple, or Facebook. Even classic personal
data such as the traffic offenders’ register in Flensburg, credit scorings, and customer
data in the retail sector allow for far more dangerous misuse. Ethical values culti-
vated by do-gooders and attention-grabbing media hamper any serious discussion on
how the rapidly growing collections of personal and factual data could help to make
human coexistence healthier, less conflictual, and more enjoyable,6 rather than
concentrating on tightening criminal law.
5. https://ec.europa.eu/commission/presscorner/detail/de/ip_20_273
6. https://www.lifeengineering.ch/post/social-scoring-the-future-of-economy-and-society
Ethics is looking for rules that should bring the highest possible quality of life for
everyone. If we accept that digitalization cannot be stopped and that it will bring
about massive socio-cultural change, we need mechanisms, now more than ever, to
guide this change for the benefit of humankind. But do ethics and the underlying
interests provide the tools? Two essential prerequisites are missing: First, ethics does
not determine what actually constitutes quality of life. Second, there is a lack of
procedures for objectively measuring quality of life.
A discipline called Life Engineering should start right there. It should develop a
robust quality of life model based on the findings of psychology, neuroscience,
consumer research, and other disciplines and validate this model using the increas-
ingly detailed and automatically collected personal and factual data. The network of
needs can be a starting point if each of the needs, like health, is broken down into its
components, such as age, pain, weight, strength, and sleep quality, and the causal
relationships are statistically recorded.
Once the factors of quality of life are better understood, it will be possible to
better assess the opportunities and risks of digital services. The sensors of a
smartwatch can measure possible influencing factors on health so that individualized
correlations between physical activity and sleep behavior or heart rhythm distur-
bances can be recognized and the wearers of smartwatches can thus increase their
health and well-being by taking simple measures. Such concrete, statistically sound
evaluations of digital services currently remain the exception. However, a quality of
life model, even in such a rudimentary form as the network of needs outlined above,
provides at least a framework for discussion in order to evaluate technical develop-
ments in terms of arguments, as shown by the example of Instagram.
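To make this concrete, here is a minimal sketch of such an individualized analysis, using synthetic data; the variable names, distributions, and the 0.3 threshold are illustrative assumptions, not clinical standards.

```python
import numpy as np

rng = np.random.default_rng(0)
days = 90

# Synthetic smartwatch data: daily step counts and a sleep-quality score.
steps = rng.normal(8000, 2500, days).clip(min=0)
sleep_quality = (0.5 * steps / 10000 + rng.normal(0.5, 0.15, days)).clip(0, 1)

# Individualized correlation between activity and sleep, as described above.
r = np.corrcoef(steps, sleep_quality)[0, 1]
print(f"correlation(steps, sleep quality) = {r:.2f}")

if r > 0.3:  # illustrative threshold, not a clinical standard
    print("For this wearer, more daily activity tends to go with better sleep.")
```

A real evaluation would of course need far more care (confounders, time lags, statistical significance), which is precisely the kind of work a Life Engineering discipline would systematize.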
Ethics is based on values such as dignity, respect, trust, friendship, responsibility,
transparency, and freedom. However, such values are only relevant to people if they
meet their needs and thus trigger positive or negative feelings. What does the ethical
value trust mean for needs like security, power, or energy?
It very quickly becomes clear how far away we are from a quality of life model
that combines behavior, perceptions, needs, feelings, and knowledge. However,
looking at the tasks of ethics, it is hardly justifiable not to at least try what is feasible.
Right now, we are leaving this development to the Internet giants (Google, for
instance, with its knowledge graph), which try to better understand and model these
connections, while these companies and their management are being measured by
their economic success, not by human quality of life. It is therefore almost inevitable
that they will have to persuade customers to make the decisions that generate the
most revenue.
Never before in the history of humankind have we had such comprehensive and
automatically recorded datasets that allow statements about behavior and quality of
life. The Internet and sensors are documenting our lives more and more seamlessly,
as Melanie Swan discovered as early as 2012 under the banner of the “quantified
self” (Swan 2012, p. 217–253). The instruments of machine learning and modeling
in neural networks offer us the chance to recognize quality of life patterns and to
make them effective in digital assistants of all kinds, from shopping to nutrition, for
the benefit of human beings. Never before has such intensive support been provided
for people by machines in all areas of life through digital services. Never before has
it been possible to give people such well-founded and well-targeted help and advice,
to guide them in a recognizable but subtle way. The thought of this frightens the
pessimists and excites joyful expectation among the utopians.
With the methods of data analytics, health insurance companies evaluate the
personal and factual data of their policyholders in order to better calculate the
individual risks. They adjust the individual premiums in line with the individual
risks and ultimately reduce claim costs for the same income. For some policyholders,
this leads to savings, but for those who are disadvantaged in terms of health, and in
most cases therefore financially less well-off as well, it means higher payments. The
redistribution of risk in the sense of solidarity is lost.
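A toy calculation, with made-up numbers, shows the mechanism: once premiums track individual risk, the cross-subsidy from healthy to ill policyholders disappears.

```python
# Made-up numbers: expected annual claims per group and group shares.
expected_claims = {"healthy": 500.0, "chronically_ill": 4500.0}
shares = {"healthy": 0.8, "chronically_ill": 0.2}

# Solidarity pricing: everyone pays the population-average expected cost.
solidarity = sum(expected_claims[g] * shares[g] for g in shares)  # 1300.0

# Risk-adjusted pricing: each group pays its own expected cost.
for group in shares:
    print(f"{group}: solidarity premium {solidarity:.0f}, "
          f"risk-adjusted premium {expected_claims[group]:.0f}")
```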
If an insurance company succeeds in better understanding the influences on health
and—what is even more difficult—in guiding the insured to health-promoting
behavior through digital services, then this machine intelligence helps both the
insured and the insurers.
7. https://standards.ieee.org/content/ieee-standards/en/industry-connections/ec/autonomous-systems.html
Apart from these obvious rules, which do not have to be derived from scientific
studies, it would be helpful if ethics could be based on an operational quality of life
model. It is positive that version 2 of the IEEE guidelines on Ethically Aligned
Design, unlike the first version, attempts to do just that. It is based on approaches and
metrics for well-being. Its recommendations on the different aspects of ethics for
machine intelligence ultimately provide a comprehensive agenda for Life
Engineering.
In order to ever be able to meet such requirements, a Life Engineering discipline
needs the following, in addition to financial resources:
• Access to the digital personal and factual data
• Exchange of knowledge about behavior patterns and their effects on quality of life
• Ability to influence the development of digital services
• Political incentives for positive developments and prohibitions of negative
developments
Life Engineering offers the chance to transfer ethics from the stage of a religion to
a stage of science, just as the Enlightenment did in the eighteenth century. This has
brought about a human development that probably only few people today would like
to reverse.
References
Österle, H. (2020). Life Engineering – Machine Intelligence and Quality of Life. https://doi.org/10.
1007/978-3-030-31482-8, p. 68-80.
Swan, M. (2012). ‘Sensor Mania! The Internet of Things, Wearable Computing, Objective Metrics,
and the Quantified Self 2.0’. Journal of Sensor and Actuator Networks, 1(3), 217–253. https://
doi.org/10.3390/jsan1030217
Schmiedeknecht, M. H. & Wieland, J. (2015). ISO 26000, 7 Grundsätze, 6 Kernthemen. In
Corporate Social Responsibility. Verantwortungsvolle Unternehmensführung in Theorie und
Praxis. Berlin, Heidelberg: Springer Gabler.
Weber, M. (1915) Die Wirtschaftsethik der Weltreligionen. Jazzybee Verlag.
Zuboff, S. (2019). The Age of Surveillance Capitalism. New York: PublicAffairs.
Responsible Technology Design:
Conversations for Success
Abstract Digital humanism calls for new technologies that enhance human dignity
and autonomy by educating, controlling, or otherwise holding developers responsi-
ble. However, this approach to responsible technology design paradoxically depends
on the premise that technology is a path to overcoming human limitations while
assuming that developers are themselves capable of super-human feats of prognos-
tication. Recognizing developers as subject to human limitations themselves means
that responsible technology design cannot be merely a matter of expecting devel-
opers to create technology that leads to certain desirable outcomes. Rather, respon-
sible design involves expecting the technologies to be designed in ways that provide
for active, meaningful, ongoing conversations between the developer and the tech-
nology, between the user and the technology, and between the user and the devel-
oper—and expecting that designers and users will commit to engaging in those
conversations.
Digital humanism calls for new technologies that enhance human dignity and
autonomy by infusing ethics into the design process and into norms and standards.
These calls are even echoed by politicians in the international arena (Johnson 2019):
the mission ... must be to ensure that emerging technologies are designed from the outset for
freedom, openness and pluralism, with the right safeguards in place to protect our peoples.
. . . we need to agree on a common set of global principles to shape the norms and standards
that will guide the development of emerging technology (Boris Johnson, Address to the UN,
2019)
While there are many nuances and variations, one of the simplest ways to
understand the foundational premise of digital humanism is to consider a digital
humanist in contrast with a digital technologist. Both want to create a better world
and improve the human condition. Both see technology as a way of overcoming
human limitations and making things better.
Where they differ is that digital technologists see the creation of technologies that
eliminate the need for human involvement through automation as the primary path to
sustained, substantial improvement in the human condition. Self-driving cars seek to
remove the need for a human driver. AI-based facial recognition seeks to remove the
human from the recognition process. In contrast, digital humanists pursue change by
encouraging development of technologies that place humans at the center,
empowering them to enact their own well-being. Wikis enable humans to create
collections of useful information for efficient learning. Social platforms allow people
with similar goals to create mutually supportive communities of practice.
However, while they differ significantly with respect to the role of humans in the
application of technology to improve the human condition, one thing that many
digital technologists and humanists share is an assumption about the relationship
between developers and the technologies they create. Whether it is implicit or
explicit, it is often assumed that developers create technologies which in turn
shape the actions and choices of the users (Gibson 1977). The actions of developers
lead to the different technologies existing (or not), having particular features (or not),
and creating affordances which enable (or prevent) users from taking particular
actions or making particular choices (e.g., Anderson and Robey 2017).
It is this assumed power of the developer to shape the technology and subsequent
user behavior that is the basis of efforts to bring about responsible technology design
by educating, controlling, or otherwise holding developers responsible. Efforts to
infuse ethics into Computer Science curricula reflect this assumption, for example,
Embedding Ethics™ @ Harvard, which:
embeds philosophers directly into computer science courses to teach students how to think
through the ethical and social implications of their work. (https://embeddedethics.seas.
harvard.edu/)
how to balance the desires of readers with the inclusion of silenced voices and
peoples. Social platforms must track the volume of posts and types of content, but
must also continually consider trade-offs between the economic goals of the pro-
viders and the civic goals of the larger society. Engaging in dialog with a system
requires that developers engage with these issues as well while balancing the needs
for privacy and security.
These conversations can occur at multiple levels and in diverse forms. An agile or
co-design approach creates a direct dialog between users and designers that is much
richer than is possible with the waterfall method of development. Regulation also
puts users, developers, and technologies in conversation with one another. Users can
also use the marketplace to express their preferences. Of course, responsible tech-
nology design cannot mean that a developer is responsible for all of the outcomes of
the technology. Rather, we argue that they are responsible for creating and engaging
in systems that support ongoing dialog, engagement, and adaptation between devel-
opers, technological elements, and other stakeholders.
By their very nature, digital technologists necessarily set themselves up with a
fundamentally harder problem with respect to enabling responsible design. By
setting their sights on eliminating meaningful involvement of humans in systems
through automation, they necessarily make the dialog between developers and those
systems more difficult to support and achieve. Building in traceability, detailed logs,
exception reports, and an extensive investigative operation to review and respond to
this data in a timely fashion become essential. At best, building this capability for a
more responsible system requires substantial additional cost and effort. At worst, it
requires developers to incorporate features and functions which are counter to the
goals of automation, setting responsible design up in opposition to what a digital
technologist considers to be an effective design process.
In contrast, a digital humanist who seeks to improve the human condition by
empowering people is already predisposed to enabling dialog between the human
and technical element of a socio-technical system because that dialog is integral to
their approach. Incorporating additional features and functions that enable devel-
opers to participate in this dialog is therefore a more straightforward proposition and
less likely to be seen as counter to the goals of the design process. Digital humanism
can help us recognize the limitations of humans and the role that technology can play
in empowering humans to overcome those limitations. This is a significant contri-
bution that digital humanism can make. Recognizing the limitations of developers
and users and adopting models of responsible design and use that accommodate
those limitations by putting these communities in continual conversation could be an
even more powerful contribution of digital humanism.
Whether it is self-driving cars enabled by the internet of things, artificial intelli-
gence for facial recognition, self-monitoring wikis, online community platforms, or
some other application of emerging technology, it is not the consequences of
designers who failed to anticipate the impact of their creations that we should fear.
Such failures we must expect even with extensive training in ethical decision-making;
no other outcome is possible. Instead, the irresponsible party who should be the object
of concern is the designer who has the hubris to believe that they can fully anticipate
the outcomes of their creations, and who as a result fails to allow for and participate
in the conversations needed to adaptively engage the technology and its implications.
References
Anderson, C. and Robey, D. (2017). Affordance potency: Explaining the actualization of technol-
ogy affordances, Information and Organization, 27(2), 100-115.
Embedding Ethics™ @ Harvard, https://embeddedethics.seas.harvard.edu/, retrieved April
15, 2021.
Gibson, J. J. (1977). The Theory of Affordances. In R. Shaw and J. Bransford (Eds.) Perceiving,
Acting and Knowing: Toward an Ecological Psychology, Hillsdale, NJ: Lawrence Erlbaum
Associates, Inc. pp. 67-82.
Johnson, B. (2019). Prime Minister's speech to the UN General Assembly, Sept. 24, https://www.
gov.uk/government/speeches/pm-speech-to-the-un-general-assembly-24-september-2019.
Navigating Through Changes of a Digital
World
N. Hauk (*)
Fraunhofer Institute for Open Communication Systems (FOKUS), Berlin, Germany
e-mail: nathalie.hauk@fokus.fraunhofer.de
M. Hauswirth
Technical University of Berlin, Berlin, Germany
Weizenbaum Institute, Berlin, Germany
e-mail: manfred.hauswirth@tu-berlin.de
the issues of morality, ethics, and legality in the development of technologies since
the ultimate limit of technologies must be the ethical and moral limits. For more
details on the topic, see chapter “Our Digital Mirror” in “Ethics and philosophy of
technology.”
A prime example of the increasing reliance on technology in modern society
is Artificial Intelligence (AI). AI algorithms and technologies have already found
their way into everyday life. Hence, the question of whether AI technologies should
be employed no longer arises. The performance of routine tasks, such as using web
search engines, opening a smartphone with face ID, or running automatic spell
checks when writing an email, relies on AI, often unnoticed by the user. Neverthe-
less, as with any new technology, the use of AI brings both opportunities and risks.
While AI can help with protecting citizens’ security, improving health care, promot-
ing safer and cleaner transport systems, and enabling the execution of fundamental
rights, there are also justified concerns that AI technologies can have unintended
consequences or can even be used for malicious purposes.
Good demonstrations of these fundamental problems can be found in
the area of Machine Learning (ML). ML is used to discover patterns in data, e.g.,
identifying objects in images for medical diagnoses. The big advantage is clear: An
ML algorithm never gets tired and performs the tedious analysis task for enormous
numbers of images, at high frequencies and speeds. With the advent of quantum
computers, even larger amounts of data could be analyzed in real time, tackling
problems that have been out of reach until now. However, ML algorithms are often "black
boxes”—capable of performing a learned or trained behavior without offering
insight into how or why a decision is made (for a brief overview on explainable
AI, see Xu et al. 2019). For the ML training process, the appropriate selection of
training data sets is of crucial importance. The deployment of inappropriate or biased
data sets often only becomes apparent after a training process has already been
completed, as the following three examples illustrate (see also the code sketch after the list):
1. Automated decision-making processes are deployed increasingly in recruiting
and human resources management. In 2018, Amazon had to cease an AI
recruiting tool after discovering that the underlying algorithms of their software
discriminated against women. Presumably so because the initial training data set
contained more male applicants than female applicants. Hence, the algorithm
learned that the best job candidate was more likely to be a male.
2. Tay, a chatbot developed to research conversational understanding, released on
Twitter by Microsoft in 2016, started using abusive language after receiving vast
quantities of racist and sexist tweets from which Tay learned how to conduct a
conversation.
3. An image recognition feature, developed by Google in 2015, miscategorized
photos of two black people as gorillas. The fact that the company failed to solve the problem
(but rather blocked the categories “gorilla,” “chimp,” “chimpanzee,” and “mon-
key” in the image recognition entirely) demonstrates the extent to which ML
technologies are still maturing.
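The mechanism behind the first example can be illustrated in a few lines. The sketch below uses invented synthetic data and assumes scikit-learn is available; it shows that a model fitted to historically biased labels reproduces the bias for otherwise identical candidates.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 2000
gender = rng.integers(0, 2, n)   # 0 = male, 1 = female (synthetic)
skill = rng.normal(0, 1, n)      # the attribute that *should* matter

# Biased historical labels: past hiring favored men regardless of skill.
hired = ((skill + 1.5 * (gender == 0) + rng.normal(0, 0.5, n)) > 1.0).astype(int)

model = LogisticRegression().fit(np.column_stack([gender, skill]), hired)

# Two candidates with identical skill, differing only in gender:
p_male, p_female = model.predict_proba([[0, 0.5], [1, 0.5]])[:, 1]
print(f"P(hired | male)   = {p_male:.2f}")
print(f"P(hired | female) = {p_female:.2f}")  # noticeably lower: learned bias
```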
A product must ensure the same standard of safety and respect for fundamental
rights, regardless of whether its underlying decision-making processes are human-
based or machine-based. Moreover, we create AI systems that are able to write texts
and can communicate with us in natural language. Some of them do this so
eloquently that we can no longer tell whether a real person or a system is
communicating with us. This fundamental problem was demonstrated by Joseph
Weizenbaum with his simple natural language processing system ELIZA
(Weizenbaum 1967). Since then, such systems, e.g., virtual assistants, have evolved
significantly, while the problem still remains unsolved. The potential danger is that
we do not know whether a given piece of information comes from a human or a
machine. Thus, we cannot infer the reliability of a given piece of information, or we may
have to re-define the concept of reliability altogether. These issues become even
more delicate and pressing when fundamental rights of citizens are directly affected,
for example, by AI applications for law enforcement and jurisdiction. Traceability of
how an AI-based decision is taken and, therefore, whether the relevant rules are
respected is of utmost importance.
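The mechanism Weizenbaum demonstrated can be sketched in a few lines of keyword matching and canned reflections; the rules below are illustrative stand-ins, not his original DOCTOR script.

```python
import re

# Each rule: a keyword pattern and a reflection template (illustrative only).
RULES = [
    (re.compile(r"\bi am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.*)", re.I), "How long have you felt {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]
FALLBACK = "Please go on."

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return FALLBACK

print(respond("I am unhappy with my computer"))
# -> "Why do you say you are unhappy with my computer?"
```

There is no understanding anywhere in this loop, yet the replies can feel eerily attentive, which is exactly the problem described above.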
2 Conclusions
References
Ars Electronica (2019) Out of the Box – the Midlife Crisis of the Digital Revolution. Ars
Electronica Festival. Linz, Austria, September 5th-9th 2019.
Bannister, F., & Connolly, R. (2011) Trust and transformational government: A proposed frame-
work for research. Government Information Quarterly, 28, pp. 137-147. https://doi.org/10.
1016/j.giq.2010.06.010
De Visser, E.J., Pak, R., & Shaw, T.H. (2018) From ‘automation’ to ‘autonomy’: the importance of
trust repair in human–machine interaction. Ergonomics, 61(10), pp. 1409-1427. https://doi.org/
10.1080/00140139.2018.1457725
Dietz, G., & Den Hartog, N.D. (2006) Measuring trust inside organisations. Personnel Review, 35
(5), pp. 557-588. https://doi.org/10.1108/00483480610682299
European Commission (2020) White Paper on Artificial Intelligence: a European approach to
excellence and trust. Brussels, February 19th 2020. Available at: https://ec.europa.eu/info/
sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf (accessed:
March 3rd 2021)
Følstad A., Nordheim C.B., Bjørkli C.A. (2018) What Makes Users Trust a Chatbot for Customer
Service? An Exploratory Interview Study. In: Bodrunova S. (eds) Internet Science. INSCI 2018.
Lecture Notes in Computer Science, 11193, pp. 194-208. Springer, Cham. https://doi.org/10.
1007/978-3-030-01437-7_16
Hancock, P.A., Kessler, T.T., Kaplan, A.D., Brill, J.C., & Szalma, J.L. (2020) Evolving Trust in
Robots: Specification Through Sequential and Comparative Meta-Analyses. Human Factors,
pp. 18720820922080-18720820922080. https://doi.org/10.1177/0018720820922080
Lankton, N.K., McKnight, D.H., & Tripp, J. (2015) Technology, Humanness, and Trust: Rethink-
ing Trust in Technology. Journal of the Association for Information Systems, 16(10),
pp. 880-918. https://doi.org/10.17705/1jais.00411
Mayer, R.C., Davis, J.H., & Schoorman, F.D. (1995) An Integrative Model Of Organizational
Trust. Academy of Management Review, 20(3), pp. 709-734. https://doi.org/10.5465/amr.1995.
9508080335
Ryan, M. (2020) In AI We Trust: Ethics, Artificial Intelligence, and Reliability. Science and
Engineering Ethics, 26(5), pp. 2749-2767. https://doi.org/10.1007/s11948-020-00228-y
Schäfer, K.E., Chen, J.Y., Szalma, J.L., & Hancock, P.A. (2016) A Meta-Analysis of Factors
Influencing the Development of Trust in Automation: Implications for Understanding Auton-
omy in Future Systems. Human Factors, 58(3), pp. 377-400. https://doi.org/10.1177/
0018720816634228
Siau, K., & Wang, W. (2018) Building Trust in Artificial Intelligence, Machine Learning, and
Robotics. Cutter Business Technology Journal, 31(2), pp. 47-53.
Toreini, E., Aitken, M., Coopamootoo, K., Elliott, K., Zelaya, C.G., & van Moorsel, A. (2020) The
relationship between trust in AI and trustworthy machine learning technologies. In Conference
on Fairness, Accountability, and Transparency (FAT* ’20), January 27th –30th 2020, Barce-
lona, Spain. ACM, New York, NY, USA, pp. 272-283. https://doi.org/10.1145/3351095.
3372834
Xu, F., Uszkoreit, H., Du, Y., Fan, W., Zhao, D., & Zhu, J. (2019) Explainable AI: A Brief Survey
on History, Research Areas, Approaches and Challenges. In: Tang, J., Kan, M.Y., Zhao, D., Li,
S., & Zan, H. (eds) Natural Language Processing and Chinese Computing. NLPCC 2019.
Lecture Notes in Computer Science, (11839), pp. 563-574. Springer, Cham. https://doi.org/10.
1007/978-3-030-32236-6_51
Weizenbaum, J. (1967). Contextual Understanding by Computers. Communications of the ACM, 10
(8), pp. 474-480. https://doi.org/10.1145/363534.363545
Part X
Learning from Crisis
Efficiency vs. Resilience: Lessons from
COVID-19
Moshe Y. Vardi
Abstract Why was the world not ready for COVID-19, in spite of many warnings
over the past 20 years of the high likelihood of a global pandemic? This chapter
argues that the economic goal of efficiency, focused on short-term optimization, has
distracted us from resilience, which is focused on long-term optimization. Comput-
ing also seems to have generally emphasized efficiency at the expense of resilience.
But computing has discovered that resilience is enabled by redundancy and distrib-
utivity. These principles should be adopted by society in the “after-COVID” era.
By March 2020, COVID-19 (Coronavirus disease 2019) was spreading around the
world. From a local epidemic that broke out in China in late 2019, the disease had
turned into a raging pandemic, the likes of which the world had not seen since the
1918 Spanish flu. By then, thousands had already died, with the ultimate
death toll growing into the millions. Attempting to mitigate the pandemic, individ-
uals were curtailing travel, entertainment, and more, as well as exercising “social
distancing,” thus causing an economic slowdown. Businesses hoarded cash and cut
spending in order to survive a slowdown of uncertain duration. These rational
actions by individuals and businesses were pushing the global economy into a
deep recession.
Observing the economic consequences of this unexpected crisis, William
A. Galston asked in a March 2020 Wall Street Journal column:1 “What if the relentless
pursuit of efficiency, which has dominated American business thinking for decades,
has made the global economic system more vulnerable to shocks?” He went on to
argue that there is a tradeoff between efficiency and resilience. “Efficiency comes
through optimal adaptation to an existing environment,” he argued, “while resilience
requires the capacity to adapt to disruptive changes in the environment.”
1. https://www.wsj.com/articles/efficiency-isnt-the-only-economic-virtue-11583873155
M. Y. Vardi (*)
Rice University, Houston, TX, USA
e-mail: vardi@cs.rice.edu
A similar point was made by Thomas Friedman in a May 2020 New York Times
column:2 “Over the past 20 years, we’ve been steadily removing man-made and
natural buffers, redundancies, regulations and norms that provide resilience and
protection when big systems—be they ecological, geopolitical or financial—get
stressed. . . . We’ve been recklessly removing these buffers out of an obsession
with short-term efficiency and growth, or without thinking at all.”
Both Galston and Friedman were pointing out that there is a tradeoff between
short-term efficiency and long-term resilience. This tradeoff was also raised, in a
different setting, by Adi Livnat and Christos Papadimitriou (2016). Computational
experience has shown that simulated annealing, which is a local search—via a
sequence of small mutations—for an optimal solution, is, in general, superior
computationally to genetic algorithms, which mimic sexual reproduction and natural
selection. Why then has nature chosen sexual reproduction as almost the exclusive
reproduction mechanism in animals? Livnat and Papadimitriou’s answer is that sex
as an algorithm offers advantages other than good performance in terms of approx-
imating the optimum solution. In particular, sexual reproduction favors genes that
work well with a greater diversity of other genes, and this makes the species more
adaptable to disruptive environmental changes, that is to say, more resilient.
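The contrast can be made concrete on a toy problem. The sketch below is only schematic (it is not Livnat and Papadimitriou's actual model): it runs a simulated-annealing-style local search and a small genetic algorithm on the task of maximizing the number of 1-bits in a string.

```python
import math
import random

N, BUDGET = 40, 3000  # bit-string length and evaluation budget (arbitrary)

def fitness(bits):
    return sum(bits)  # "OneMax": count the 1-bits

def mutate(bits):
    i = random.randrange(N)  # flip one random bit: a small mutation
    return bits[:i] + [1 - bits[i]] + bits[i + 1:]

# Simulated annealing: local search that sometimes accepts downhill moves.
x, temp = [random.randint(0, 1) for _ in range(N)], 2.0
for _ in range(BUDGET):
    y = mutate(x)
    delta = fitness(y) - fitness(x)
    if delta >= 0 or random.random() < math.exp(delta / temp):
        x = y
    temp = max(temp * 0.995, 1e-3)

# Genetic algorithm: selection plus recombination ("sex as an algorithm").
pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(30)]
for _ in range(BUDGET // 30):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]  # truncation selection
    children = []
    for _ in range(30):
        a, b = random.sample(parents, 2)
        cut = random.randrange(N)
        children.append(mutate(a[:cut] + b[cut:]))  # one-point crossover
    pop = children

print("SA best:", fitness(x), "| GA best:", max(map(fitness, pop)))
```

On this toy task both methods do well; Livnat and Papadimitriou's point is that recombination additionally favors genes that work well in many genetic contexts, which pays off when the environment changes.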
The tradeoff between efficiency and resilience can thus be viewed as a tradeoff
between short-term and long-term optimization. Nature seems to prefer long-term to
short-term optimization, focusing on the survival of species. Indeed, Darwin sup-
posedly said: “It’s not the strongest of the species that survives, nor the most
intelligent. It is the one that is most adaptable to change.”
And yet, we have educated generations of computer scientists on the paradigm
that analysis of algorithms means only analyzing their computational efficiency. As
Wikipedia states:3 “In computer science, the analysis of algorithms is the process of
finding the computational complexity of algorithms—the amount of time, storage, or
other resources needed to execute them.” In other words, efficiency is the sole
concern in the design of algorithms. (Of course, the algorithm has to meet its
intended functionality.) The Art of Computer Programming,4 a foundational text
in computer science by Donald E. Knuth, is focused solely on efficiency. What about
resilience? Quoting Galston again: “Creating resilient systems means thinking hard in
advance about what could go wrong and incorporating effective countermeasures
into designs.” How can we make our algorithms more resilient?
Of course, fault tolerance has been part of the canon of computing-system
building for decades. Jim Gray’s 1998 Turing Award citation5 refers to his invention
of transactions as a mechanism to provide crash resilience to databases. Leslie
Lamport’s 2013 Turing Award citation6 refers to his work on fault tolerance in
2. https://www.nytimes.com/2020/05/30/opinion/sunday/coronavirus-globalization.html
3. https://en.wikipedia.org/wiki/Analysis_of_algorithms
4. https://en.wikipedia.org/wiki/The_Art_of_Computer_Programming
5. https://amturing.acm.org/award_winners/gray_3649936.cfm
6. https://amturing.acm.org/award_winners/lamport_1205376.cfm
distributed systems. Nevertheless, computer science has yet to fully internalize the
idea that resilience, which includes reliability, robustness, and more, must be
pushed down to the algorithmic level. Case in point is search-result ranking.
Google’s original ranking algorithm was PageRank,7 which works by counting the
number and quality of links to a page to determine how important the website is. But
PageRank is not resilient to link manipulation, hence “search-engine optimization.”
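A minimal power-iteration PageRank makes the vulnerability visible: adding a small "link farm" pointing at a page inflates its rank. The graph and damping factor below are illustrative, not Google's production settings.

```python
import numpy as np

def pagerank(links, n, d=0.85, iters=100):
    # Column-stochastic transition matrix; dangling pages jump uniformly.
    M = np.zeros((n, n))
    for src, dsts in links.items():
        for dst in dsts:
            M[dst, src] = 1 / len(dsts)
    M[:, M.sum(axis=0) == 0] = 1 / n
    r = np.full(n, 1 / n)
    for _ in range(iters):
        r = (1 - d) / n + d * M @ r
    return r

honest = {0: [1], 1: [0, 2], 2: [0]}               # pages 0-3; page 3 unloved
print("honest ranks:", pagerank(honest, 4).round(3))

farm = {**honest, 3: [4], 4: [3], 5: [3], 6: [3]}  # spam pages 4-6 boost page 3
print("with farm:  ", pagerank(farm, 7).round(3))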
As pointed out by Friedman and Galston, the relentless pursuit of economic
efficiency prevented us from investing in getting ready for a pandemic, in spite of
many warnings over the past several years, and pushed us to develop a global supply
chain that is quite far from being resilient. Does computer science have anything to
say about the relentless pursuit of economic efficiency? Quite a lot, actually.
Economic efficiency means8 that goods and factors of production are distributed
or allocated to their most valuable uses and waste is eliminated or minimized. Free-
market advocates argue9 that through individual self-interest and freedom of pro-
duction and consumption, economic efficiency is achieved and the best interests of
society, as a whole, are fulfilled. But efficiency and optimality should not be
conflated. The First Welfare Theorem,10 a fundamental theorem in economics, states
that under certain assumptions a market will tend toward a competitive, Pareto-
optimal equilibrium; that is, economic efficiency is achieved. But how well does
such an equilibrium serve the best interest of society?
In 1999, Elias Koutsoupias and Papadimitriou set out (Koutsoupias and
Papadimitriou 1999) to study the optimality of equilibria from a computational
perspective. In the analysis of algorithms, we often compare the performance of
two algorithms (e.g., optimal vs. approximate or offline vs. online) by studying the
ratio of their outcomes. Koutsoupias and Papadimitriou applied this perspective to
the study of equilibria. They studied systems in which non-cooperative agents share
a common resource and proposed the ratio between the worst possible Nash equi-
librium and the social optimum as a measure of the effectiveness of the system. This
ratio has become known as the “Price of Anarchy,”11 as it measures how far from
optimal such non-cooperative systems can be. They showed that the price of anarchy
can be arbitrarily high, depending on the complexity of the system. In other words,
economic efficiency does not guarantee the best interests of society, as a whole, are
fulfilled.
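Pigou's classic two-road network is the textbook illustration of this ratio (the model in Koutsoupias and Papadimitriou's paper differs in its details); a few lines suffice to compute it.

```python
# One unit of traffic. Road A always costs 1; road B costs x, where x is
# the fraction of drivers using B.
def total_cost(x):
    return (1 - x) * 1 + x * x  # load times per-driver cost, summed over roads

# Nash equilibrium: as long as B's cost x < 1, every driver prefers B,
# so all traffic ends up on B and everyone pays 1.
nash = total_cost(1.0)

# Social optimum: minimize (1 - x) + x^2 over [0, 1]; the minimum is at x = 1/2.
opt = min(total_cost(x / 1000) for x in range(1001))

print(f"Price of Anarchy = {nash / opt:.3f}")  # 4/3 for this network
```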
A few years later, Constantinos Daskalakis, Paul Goldberg, and Papadimitriou
asked (Daskalakis et al. 2006) how long it takes until economic agents converge to
an equilibrium. By studying the complexity of computing mixed Nash equilibria,
they provided evidence that there are systems in which convergence to such equilibria
can take an exceedingly long time. The implication of this result is that economic
7. https://en.wikipedia.org/wiki/PageRank
8. https://www.investopedia.com/terms/e/economic_efficiency.asp
9. https://www.investopedia.com/terms/i/invisiblehand.asp
10. https://en.wikipedia.org/wiki/Fundamental_theorems_of_welfare_economics
11. https://en.wikipedia.org/wiki/Price_of_anarchy
12. https://www.chronicle.com/article/covid-19-is-just-the-warm-up-act-for-climate-disaster
13. https://www.washingtonpost.com/opinions/2020/10/06/fareed-zakaria-lessons-post-pandemic-world/
14. https://www.wired.com/story/who-will-we-be-when-the-pandemic-is-over/
References
Daskalakis, C., Goldberg, P.W., and Papadimitriou, C.H. (2006) The complexity of computing a
Nash equilibrium. STOC 2006: 71-78
Livnat, A. and Papadimitriou, C.H. (2016) Sex as an algorithm: the theory of evolution under the lens of
computation. Commun. ACM 59(11), pp. 84-93
Koutsoupias, E. and Papadimitriou, C.H. (1999) Worst-case Equilibria. Proc. 16th Annual Sym-
posium on Theoretical Aspects of Computer Science, Lecture Notes in Computer Science 1563,
Springer, pp. 404-413
Yoo, C.S. (2018) Paul Baran, Network Theory, and the Past, Present, and Future of the Internet.
Colo. Tech. LJ 17, p.161.
Contact Tracing Apps: A Lesson in Societal
Aspects of Technological Development
Walter Hötzendorfer
Abstract Overall, there might be more important aspects of the COVID-19 pan-
demic and the global fight against it than contact tracing apps. But the case of the
contact tracing apps tells us an interesting story in the context of Digital Humanism.
It shows us that the principle of privacy by design has reached practice and what it
can mean and achieve. Unfortunately, it is also a lesson about the societal limitations
of privacy by design in practice. It is a good thing that people are skeptical and ask
questions about privacy and data protection. However, it is necessary to differentiate
and try to make educated decisions or trust experts and civil society.
In the app stores of Google and Apple, we can find many popular apps that
track their users and trample on data protection. Most people do not question this at
all and use these apps heedlessly. In the face of the COVID-19 pandemic, we have
developed some of the most privacy-friendly and best-scrutinized apps, and people
have questioned them widely—which is a good thing. In the resulting public
discussion, it turned out to be difficult to explain a privacy-by-design solution to
the public. Clearly, it is hard to understand how tracing of individual contacts and
anonymity (or pseudonymity) can be possible at the same time.
One particularly perfidious characteristic of SARS-CoV-2 is that an infected
person can already infect others while still feeling perfectly healthy. Therefore, if
Alice has been infected by Bob, Alice must be warned immediately, and must stay
at home, as soon as Bob learns that he has been infected. Those whom Alice would
otherwise have met later are then spared. Contact tracing is a measure to achieve
such warnings of contact persons.
Almost immediately after we realized that the virus had reached Europe in early
2020, various projects started in different countries to develop an app which could
supplement the usual contact tracing carried out by health authorities. Out of these
developments, soon a broad and qualified international discussion emerged about
W. Hötzendorfer (*)
Research Institute AG & Co KG, Vienna, Austria
e-mail: walter.hoetzendorfer@researchinstitute.at
1. It must be noted that ease of implementation not only shortens time to market but is also an important security feature. The more complex a system is, the more difficult it is to secure, and this relation is definitely not linear.
2. For example, a Bluetooth-based solution is both temporally and geographically more precise and more privacy-friendly than a GPS-based solution. It would go beyond the scope to explain all the details here; see Troncoso et al. (2020).
3. Admittedly, there may be additional parameters, but it is apparent that these three parameters are pivotal for bringing an effective, lawful, and trusted app into the field as quickly as possible.
4. See https://github.com/DP-3T.
5. See, for example, Contact Tracing Joint Statement (available at https://www.esat.kuleuven.be/cosic/sites/contact-tracing-joint-statement/); CCC, “10 Prüfsteine für die Beurteilung von ‘Contact Tracing’-Apps” (available at https://www.ccc.de/de/updates/2020/contact-tracing-requirements); European Data Protection Board, “Guidelines 04/2020 on the use of location data and contact tracing tools in the context of the COVID-19 outbreak” (available at https://edpb.europa.eu/our-work-tools/our-documents/guidelines/guidelines-042020-use-location-data-and-contact-tracing_en); Bayer et al., “Technical and Legal Review of the Stopp Corona App by the Austrian Red Cross” (available at https://epicenter.works/document/2497); and Neumann, “‘Corona-Apps’: Sinn und Unsinn von Tracking” (available at https://linus-neumann.de/2020/03/corona-apps-sinn-und-unsinn-von-tracking).
6. https://covid19.apple.com/contacttracing
dominant and powerful that the world is practically unable to implement such a
system without their good will (Veale 2020).
However, Google and Apple may not have chosen the decentralized approach out of noble privacy considerations. In light of the potential information power that comes with the centralized approach, one can hypothesize that they did not want to decide which governments were trustworthy enough to be granted such powers and which were not. In any case, they implemented the decentralized approach, and so the most privacy-friendly solution prevailed in practice.
As many people may not know, the GDPR, which ushered in a new era of data protection when it came into force in 2018, did not significantly change the substantive data protection law in Europe. Rather, its fundamental impact results from the penalties it imposes for conduct that, for the most part, was already unlawful before, and from the momentum and focused public discussion it created all over Europe and beyond. However, the GDPR did introduce a new principle: data protection by design. This fundamental principle requires us to build privacy into the design of systems from the start. An understanding of what this requirement means in practice, and of how it can be systematically fulfilled, is only slowly developing. DP-3T and related concepts, and their implementation by Apple and Google, can be seen as one of the first widespread real privacy-by-design solutions, in the sense that they demonstrated that, with a privacy-first attitude, the key functional requirements of a piece of software can be fulfilled without compromise.
However, it made us realize that we are not there yet. The step we took here, from
having a sound body of data protection law which should theoretically protect users
installing an app to having the app implemented as a data protection by design
solution and making that transparent in every detail, was not enough to gain the trust
of the users.
This is particularly noteworthy since the quality and depth of the public technological discussion were remarkable. For example, in Austria, not only the Data
Protection Authority but also the broader data protection community was involved
in the development of the app very early, and the nationally and internationally
recognized NGOs epicenter.works and NOYB and the information security research
center SBA Research carried out a technical and legal review and published a report
containing a list of recommendations which were immediately implemented.7 The
important realization, that we must actively participate in shaping technology if we
are to exercise political control over it, seemed to have suddenly taken effect in civil
society and the scientific community. The European Data Protection Supervisor
stated: “The public discussion about specific privacy features for a new application,
which was only in the early phases of development, was a completely new
phenomenon.”8
7 https://epicenter.works/document/2497
8 https://edps.europa.eu/press-publications/press-news/blog/what-does-covid-19-reveal-about-our-privacy-engineering_en
9 See, e.g., Schneier, "Me on COVID-19 Contact Tracing Apps" (available at https://www.schneier.com/blog/archives/2020/05/me_on_covad-19_.html).
10 I am convinced that the number of active users of contact tracing apps could easily have been boosted. One option would be to integrate other functionalities into the app, such as extending the contact tracing functionality by an anonymous check-in option (e.g., QR code based) for meetings, meeting rooms, restaurants, and other kinds of locations. There are developments in this direction in some countries. Another option would be a lottery, i.e., to announce that one random user per week will be informed by a push notification that she has won 1,000 or even 10,000 euros, a rather cheap measure compared to the cost of the pandemic (cf. https://logbuch-
that the decentralized approach and the apps based on it are in fact harmless in terms
of privacy and data protection. The European Data Protection Supervisor concludes: "From all reactions, it appears that the biggest inhibitor to wide uptake and use of tracing apps is the lack of trust in their confidentiality."11
Has the world become so complicated that a broad majority cannot take qualified (democratic) decisions in a growing number of domains? One way forward is to strengthen trust in experts and science. But, to be honest, we have to realize that this cannot fully succeed when it comes to explaining privacy-by-design solutions to a broader public. This might be related to the fact that privacy by design is, in a way, an attempt to control technology with more technology. That at least makes things more complex and hence harder to explain.
Clearly, there are domains where digital solutions are conceptually completely
inappropriate, and the paper-based solution fulfils the essential requirements appro-
priately, e.g., a secret ballot.12 But the domain of contact tracing apps is a good
example where only technology enables a suitable solution, i.e., tracing and ano-
nymity at the same time, which is conceptually impossible to achieve with any
paper-based approach.
In many other domains, we might not be able to find such elegant privacy-by-
design solutions that fulfil all functional requirements as DP-3T does here. As I am
writing these lines exactly 1 year after the Austrian Stopp Corona App was released,
another app-based “solution” in the context of the pandemic is around the corner: the
“green” app-based pass for demonstrating the fact of being tested, vaccinated, or
immune due to a past infection. Unfortunately, here the perfect privacy-by-design
concept for implementing such a system does not (yet) suggest itself. At the same
time, this is a much more crucial domain than contact tracing because people will be
under much more factual pressure to use such a system if they want to participate in
public life again. However, it seems that wherever the “green pass” is discussed, it is
much clearer than it was a year ago that such a solution must meet the highest
standards in privacy and data protection.
To conclude, I think this is the positive legacy of the contact tracing apps in the
context of Digital Humanism: We can expect that applied privacy by design will
become more common. Also, the public debate about privacy and data protection
was elevated onto a new level. Mankind needs to find ways to actively shape technological progress for the greater good, and therefore civil society and the scientific community must involve themselves, as happened here. However, we
also learned that this is not enough: As technological development is making the
world more difficult to understand every day, we need to find ways to explain
“good” technology to the people, including intellectuals and experts in other fields,
while maintaining a sound and productive skepticism toward technological devel-
opments that influence our lives.
References
Hötzendorfer, W. (2020) ‘Zum Verhältnis von Recht und Technik: Rechtsdurchsetzung durch
Technikgestaltung’ in Hötzendorfer, W., Tschohl, C., Kummer, F. (eds.) International Trends
in Legal Informatics, Festschrift for Erich Schweighofer. Bern: Editions Weblaw, 419–437.
Troncoso C. et al. (2020) ‘Decentralized Privacy-Preserving Proximity Tracing’, Whitepaper,
DP-3T Consortium, 25 May. Available at: arXiv:2005.12273
Veale, M. (2020) ‘Privacy is not the problem with the Apple-Google contact-tracing toolkit’, The
Guardian, 1 July. Available at: https://www.theguardian.com/commentisfree/2020/jul/01/apple-
google-contact-tracing-app-tech-giant-digital-rights.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0
International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing,
adaptation, distribution and reproduction in any medium or format, as long as you give appropriate
credit to the original author(s) and the source, provide a link to the Creative Commons license and
indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative
Commons license, unless indicated otherwise in a credit line to the material. If material is not
included in the chapter's Creative Commons license and your intended use is not permitted by
statutory regulation or exceeds the permitted use, you will need to obtain permission directly from
the copyright holder.
Data, Models, and Decisions: How We Can
Shape Our World by Not Predicting
the Future
Niki Popper
Abstract Modelling and simulation can be used for different goals and purposes.
Prediction is only one of them, and, as this chapter highlights, it might not be the
main goal—even if it was in the spotlight during the COVID-19 crisis. Predicting the
future is a vanity. Instead, we aim to prevent certain events in the future by
describing scenarios, or, even better, we try to actively shape the future according
to our social, technological, or economic goals. Thus, modellers can contribute to
debate and social discourse; this is one of the aims of Digital Humanism.
“I don’t try to describe the future. I try to prevent it.” This Ray Bradbury quote from
1977 was cited by Theodore Sturgeon in Emtsev and Parnov (1977, p. viii): “In a
discussion of (Orwell’s) book 1984, Bradbury pointed out that the world George
Orwell described has little likelihood of coming about—largely because Orwell
described it. 'The function of science fiction is not (only) to predict the future, but to prevent it,' Bradbury said."1
The use of modern simulation methods also often falls prey to the misunder-
standing that prediction is its main goal. In my opinion, it is not our purpose to
predict the future. Instead, we aim to prevent certain events in the future by
describing scenarios or—even better—try to actively shape the future according to
our social, technological, or economic goals. We can thereby contribute to discus-
sions and social discourse; this is one of the intentions of Digital Humanism. One of
the most important scientific contributions to achieving this aim has been the
development of innumerable types of models that are fed with all kinds of data.
This wealth rather complicates things. . .
1 Theodore Sturgeon wrote this in the preface to a Russian science fiction book. One can, of course, concede that during the Cold War era the idea of preventing disaster was probably more prevalent, relative to that of bringing about positive change, than in recent decades.
N. Popper (*)
Vienna University of Technology, Vienna, Austria
e-mail: nikolas.popper@tuwien.ac.at
At first glance, the notions of shaping the future and of preventing undesired
events from happening do not differ from each other. They both revolve around the
fact that we usually want to not only predict the future but to actually change it (for
the better). Some kinds of models, like the weather forecast, focus on prediction. We
mostly just want to know how likely it is that we will need an umbrella and not why.
Thus, the design of those kinds of models differs greatly from the ones that will be
described in this chapter, i.e., those being used by my group in the ongoing COVID-
19 crisis. These are models that can show different possible outcomes for different
decisions and are used to support the discussion of the available variety of strategies.
In January 2020, my group at TU Wien began applying our model of the virtual
Austrian population, its interactions and connections to infrastructure, measures, and
policies in order to model the COVID-19 crisis (Bicher et al. 2021a). We are now
able to map different aspects of the COVID-19 crisis that interact with each other on
an individual level. These are, among others, strategic aspects such as (A) setting and
cancelling measures; (B) testing, screening, and isolation strategies; (C) vaccination;
and (D) the development of new therapeutic concepts. From a systemic perspective,
it is possible to implement (E) changes in viral properties (such as mutations),
(F) changes in the population (e.g., through natural immunization or vaccination),
or (G) changes in environmental influences.
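The model of Bicher et al. (2021a) is of course far richer than anything that can be shown here, but a toy agent-based sketch may convey the principle: individual agents carry states and meet contacts, and a strategic aspect such as (B) isolation can be switched on or off to compare outcomes. All parameters and simplifications below are purely illustrative and are not taken from the actual model.

    import random

    S, I, R = "susceptible", "infected", "recovered"

    def simulate(n=1000, contacts_per_day=8, p_transmit=0.05,
                 days_infectious=7, days=120, isolation=False):
        # Toy agent-based SIR model: agents meet random contacts every day.
        random.seed(1)
        state = [S] * n
        timer = [0] * n
        for seed_agent in random.sample(range(n), 5):
            state[seed_agent] = I
        peak = 0
        for _ in range(days):
            infected = [i for i in range(n) if state[i] == I]
            for i in infected:
                # A crude stand-in for strategy (B): isolation halves contacts.
                k = contacts_per_day // 2 if isolation else contacts_per_day
                for j in random.sample(range(n), k):
                    if state[j] == S and random.random() < p_transmit:
                        state[j] = I
                timer[i] += 1
                if timer[i] >= days_infectious:
                    state[i] = R
            peak = max(peak, sum(s == I for s in state))
        return peak  # peak number of simultaneously infected agents

    print("peak without isolation:", simulate(isolation=False))
    print("peak with isolation:   ", simulate(isolation=True))

The point of such a model is precisely not a point prediction: it lets us compare how different strategies change the shape of the outbreak under otherwise identical assumptions.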
We have been contributing our work to the Austrian government since March 2020.
Initially, this support took the form of short-term predictions together with other
research groups as joint action in the Austrian Prognosis Consortium (Bicher et al.
2021b). Subsequently, in addition, we started to communicate the relationships
between measures, dynamics, decisions, and social and epidemiological outcomes.
The main topics have been vaccination programs (Jahn et al. 2021) and screening strategies, and how these can shape society for better or worse.
During the COVID-19 crisis, we have learned that there are no technological solutions without integrating people's needs, weaknesses, hopes, and ambitions. Simulation, models, and decision tools have to be integrated into processes based on the foundations of Digital Humanism. We also need solutions in order to cope with the lack of transparent and reliable data that can be used for our models in accordance with the European General Data Protection Regulation (GDPR).
Bradbury’s statement refers to the fact that prediction can prevent a thing because
we have become aware of it and take countermeasures. That is something where
modelling can contribute. Modelling and simulation have to do more than sketch an
“outcome,” as science fiction does so well. Ideally, we also want to describe feasible
ways to improve something in the future. To be able to do so, we need to understand
interrelationships and describe causalities without ever leaving the safe ground of
steady data. This is the fundamental concept of “decision support” as sought by
politicians, managers, and others with the decision-taking powers.
Do we need accurate predictions in order to generate reliable decision support?
Not necessarily. In fact, prediction can even be a hindrance in the process of change
because it reinforces the impression that the future is already decided. Might
Laplace's demon2 (Laplace 1825, p. 4) have long since been refuted? While it is
necessary to think and work in scenarios, we then also need to integrate these
thoughts into the change process. Models can only be one piece of the puzzle of
“decision support.” They need to be embedded into the bigger picture, i.e., other
processes.
In my experience, it is unavoidable to make use of a range of different established methodological approaches and, moreover, to combine these into tailor-made processes that balance the respective advantages and disadvantages of each method, laying the groundwork for "better decisions." I like to think of
it as a gradual process with feedback loops:
(1) Get your data straight. In this first step, modellers collect and analyze data. Hypotheses are generated on the basis of data with the help of different methods,
such as statistics or artificial intelligence. This approach makes it possible to
make forecasts. During the COVID-19 crisis, early models allowed us to make
basic statements about the current situation or to compare international devel-
opments. However, these concepts often mislead us into believing that we can
continue to extrapolate developments. Moreover, we are still lacking valid and
quality-assured data.
(2) Establish correlations and causalities and describe relationships. In this step,
macroscopic models are used to couple the formalized data with causal or
relational hypotheses. Examples of this approach have existed for many decades.
The earliest works came from Norbert Wiener (Wiener 1961), and Jay W. Forrester was a pioneer (Forrester 1973). System dynamics is one representative of the linkage between model representation through differential equations and a modelling process accessible to non-mathematicians. The Limits to Growth. A Report for the Club
of Rome’s Project on the Predicament of Mankind by Meadows (Meadows et al.
1972) was an early example of the impact such approaches can have. It has
enabled us to use system dynamics in order to develop our understanding of
feedback loops and regulated systems in economics and ecology. With
approaches such as system dynamics and differential equations, we can describe
relationships and explain, for example, exponential behavior and logistic
growth. While these aspects have particularly come to the fore in the COVID-
19 crisis, they have in fact been helping us to better understand the mechanisms
of action in therapy analysis for many years. Causal analysis has further facilitated this understanding since 2000 (Pearl 2000) by addressing issues such as treatment-confounder feedback.3
2 Laplace wrote: "We ought then to regard the present state of the universe as the effect of its anterior state and as the cause of the one which is to follow. Given for one instant an intelligence which could comprehend all the forces by which nature is animated and the respective situation of the beings who compose it—an intelligence sufficiently vast to submit these data to analysis—it would embrace in the same formula the movements of the greatest bodies of the universe and those of the lightest atom; for it, nothing would be uncertain and the future, as the past, would be present to its eyes."
3 This means that there are variables that are simultaneously confounders (common causes of treatment and outcome) and intermediate steps, i.e., they lie on the causal chain from treatment to outcome. In other words: confounders are also affected by treatment. It is difficult to determine when one has identified "sufficient" causal chains.
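A small numerical sketch (with purely illustrative values) bridges steps (1) and (2): a purely data-driven exponential extrapolation fitted to early data explodes, whereas a logistic model, the simplest system-dynamics view with a feedback loop through the shrinking susceptible pool, saturates.

    import math

    # A logistic epidemic: growth is damped by the shrinking susceptible pool.
    def logistic(t, capacity=100_000, r=0.2, x0=10):
        return capacity / (1 + (capacity / x0 - 1) * math.exp(-r * t))

    # Step (1): fit a pure exponential to the first 20 days of "observed" data.
    early = [logistic(t) for t in range(20)]
    growth = (math.log(early[-1]) - math.log(early[0])) / 19  # log-linear fit

    def extrapolate(t):
        return early[0] * math.exp(growth * t)

    # Step (2): the model with feedback stays bounded; the extrapolation does not.
    for t in (20, 60, 120):
        print(f"day {t:3}: extrapolated {extrapolate(t):>15,.0f}   "
              f"logistic {logistic(t):>10,.0f}")

By day 120 the naive extrapolation exceeds the entire population many times over, while the logistic curve levels off near its capacity; this is the misleading belief "that we can continue to extrapolate developments" mentioned above, made visible.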
The gathering and handling of raw data, including a true understanding of the
patterns and relationships therein, are undoubtedly vital to establish a solid basis for
models. However, when a model aims to provide decision support, we must also be
able to identify, represent, and reproduce the dynamics of a system—i.e., the
behavior of the population. That is what makes it possible to not only predict future
events but actually understand their underlying reasons.
Like other philosophical problems, Laplace’s demon will continue to occupy our
thoughts. We might even want this particular demon to act as our sparring partner.
We will continue to ponder what purpose models can serve exactly and how
stochasticity plays out in different models and “if-then” predictions. Also, we
might want to have a closer look at those who gather the data, finding correlations
and describing emergent behaviors.
In the end, we have to be aware of our limits and always keep in mind what our form of decision support can actually contribute:
A few years ago, I presented an early version of the agent-based network model
that we are currently using in the context of COVID-19 at a major meeting on the
issue of influenza vaccination strategy. A medical professional approached me after
the presentation and asked triumphantly: “Can you tell me now how many patients
with the flu we can expect to present next year on the 17th of March?" My answer
was: “No, I can’t—you’ve got me there! But that knowledge would be utterly
pointless, anyway. What we can do with our model, however, is to tell you which
strategy you can use in order to minimise the number of infected people, or, indeed,
to produce maximum damage.”
References
Bicher M., Rippinger C., Urach C., Brunmeir D., Siebert U., Popper N. (2021a) Evaluation of Contact-Tracing Policies Against the Spread of SARS-CoV-2 in Austria – An Agent-Based Simulation. Accepted in Medical Decision Making. https://doi.org/10.1101/2020.05.12.20098970
Bicher M., Zuba M., Rainer L., Bachner F., Rippinger C., Ostermann H., Popper N., Thurner N., Klimek P. (2021b) Supporting Austria through the COVID-19 Epidemics with a Forecast-Based Early Warning System. medRxiv 2020.10.18.20214767; https://doi.org/10.1101/2020.10.18.20214767
Emtsev M. and Parnov E. (1977) World Soul, Introduction by Theodore Sturgeon, page viii,
translated from Russian by Antonina W. Bouis, Macmillan Publishing Co., New York,
researched at https://quoteinvestigator.com/2010/10/19/prevent-the-future/
Forrester J.W., (1973) World Dynamics. Cambridge, Mass.: Wright-Allen Press.
Jahn B., Sroczynski G., Bicher M., Rippinger C., Mühlberger N., Santamaria J., Urach C.,
Schomaker M., Stojkov I., Schmid D., Weiss G., Wiedermann U., Redlberger-Fritz M.,
Druml C., Kretzschmar M., Paulke-Korinek M., Ostermann H., Czasch C., Endel G., Bock
W., Popper N. and Siebert U. (2021) “Targeted COVID-19 Vaccination (TAV-COVID) Con-
sidering Limited Vaccination Capacities—An Agent-Based Modeling Evaluation,” Vaccines,
vol. 9, no. 5, p. 434, Apr. 2021, https://doi.org/10.3390/vaccines9050434
Laplace P.S. (1825) Essai philosophique sur les probabilités. A Philosophical Essay on Probabilities, translated from French by Frederick Wilson Truscott and Frederick Lincoln Emory. New York: John Wiley & Sons.
Meadows D.H., Meadows D.L., Randers J., Behrens W.W. (1972) The Limits to Growth. A Report for the Club of Rome's Project on the Predicament of Mankind. New York: Universe Books.
Pearl J. (2000) Causality: Models, Reasoning, and Inference. Cambridge: Cambridge University Press.
Wiener N. (1961) Cybernetics: Or Control and Communication in the Animal and the Machine, 2nd edn. Cambridge, MA: MIT Press.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0
International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing,
adaptation, distribution and reproduction in any medium or format, as long as you give appropriate
credit to the original author(s) and the source, provide a link to the Creative Commons license and
indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative
Commons license, unless indicated otherwise in a credit line to the material. If material is not
included in the chapter's Creative Commons license and your intended use is not permitted by
statutory regulation or exceeds the permitted use, you will need to obtain permission directly from
the copyright holder.
Lessons Learned from the COVID-19
Pandemic
Alfonso Fuggetta
Abstract The COVID-19 pandemic is having a tragic and profound impact on our
planet. Thousands of lives have been lost, millions of jobs have been destroyed, and
the lives of billions of people have been changed and disrupted. In this dramatic turmoil, digital technologies have been playing an essential role. The Internet and all its services have enabled our societies to keep working and operating; social networks have provided valuable channels to disseminate information and have kept people connected despite lockdowns and the halting of most travel; most importantly, digital technologies are key to supporting researchers, epidemiologists, and
public officers in studying, monitoring, controlling, and managing this unprece-
dented emergency.
After more than a year, it is possible and worthwhile to propose some reflections
on the strengths and weaknesses we have experienced and, most importantly, on the
lessons learned that must drive our future policies and roadmaps. This is unavoidable, not just to improve our ability to react to such dramatic situations but, most importantly, to proactively design and develop a better future for our society.
At the beginning of March 2020, as in many other countries, all of Italy was put under lockdown: stores and schools were closed, most workers started working remotely, industries were forced to operate with limited staff, and traveling was basically cancelled. The key infrastructure that kept the country alive and operating was the Internet. Nevertheless, there were (and still are) problems that could not be solved instantaneously: significant portions of the territory do not have adequate network access and services; most public administrations do not manage all their information digitally and, most importantly, operate as separate silos that do not exchange and integrate their data and processes; too many companies were unprepared to work as
A. Fuggetta (*)
Cefriel – Politecnico di Milano, Milan, Italy
e-mail: alfonso.fuggetta@cefriel.com
When the crisis erupted, Italy was unable to procure all the masks needed to deal
with the emergency. It was not a matter of a lack of resources: there were no
production and sourcing capabilities, and it took weeks and months to create them.
A country with a very high GDP, a member of the G20, was unable to procure basic
devices such as masks. Similarly, when Italy decided to launch a contact tracing app,
the real issue was not the cost of developing the app or integrating it with the national
health information system: the real issue was the time needed to put it into operation.
As an additional example, the same problem occurred in defining and deploying an interoperability strategy for the different COVID-19 apps developed in Europe.
In an emergency, the critical resource is time, not money. Consequently, the speed and efficiency of processes are of paramount importance. This depends on the level of preparedness (as discussed in the previous point) and on the efficiency and
clarity of the command chain within the different branches of the government and of
the society in general.
Time cannot be procured.
The need to trace and control the pandemic has generated a heated and turbulent
discussion about a crucial and sensitive issue: privacy preservation. Indeed, the
characteristics of the virus do require the availability of fast and pervasive mechanisms to identify infected people who must be isolated so that the infection is blocked
as soon as possible. This process can be enabled and accelerated by digital technol-
ogies and processes that trace and exploit a number of personal data and information.
In this respect, we have seen different approaches that have explored the entire
spectrum of possibilities, from light access to a minimum of personal data to a
pervasive and intrusive penetration into the private lives of citizens. Too often, the debate on this topic has been unable to strike a reasonable balance between these different tensions or to instill enough trust in public opinion.
We need new rules, policies, and mechanisms that are able to find the appropriate
balance and trade-off between promotion of societal interests and protection of
freedom and civil rights. This has to be achieved by pursuing two key directions.
First, as indicated by the EU Digital Services Act and related legislative initiatives,1
it is vital to provide transparent mechanisms to inform citizens about the policies and
rules used to collect and use their personal data and to protect their rights and
privacy. Second, it is crucial and urgent to increase our investments in education
to ensure that every European citizen uses digital technologies in an informed and
knowledgeable way.
In Italy, as well as across European countries, one of the most critical problems has been the integration of the different information and data sources that were key to monitoring and controlling the pandemic. The two main problems are:
1. Technical interoperability standards and enabling infrastructures
2. Common data schemas and semantic models to interpret and exploit data coherently and effectively (a minimal illustration follows below)
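To make the second problem concrete: even when two systems can technically exchange records, pooling is meaningless until the fields are mapped to one shared schema with agreed semantics. The following is a minimal sketch; both record formats and all field names are invented for illustration.

    from datetime import date

    # Two regions report the same fact in incompatible ways (invented formats).
    region_a = {"dt": "2021-03-05", "pos": 142, "tests": 3100}
    region_b = {"reported": "05.03.2021", "positive_rate": 4.58,
                "performed_tests": 3100}

    def normalize_a(rec):
        return {"date": date.fromisoformat(rec["dt"]),
                "positives": rec["pos"],
                "tests": rec["tests"]}

    def normalize_b(rec):
        day, month, year = map(int, rec["reported"].split("."))
        tests = rec["performed_tests"]
        return {"date": date(year, month, day),
                # Semantic mapping: region B reports a rate, not a count.
                "positives": round(rec["positive_rate"] / 100 * tests),
                "tests": tests}

    # Only after mapping to one shared schema can the data be pooled.
    pooled = [normalize_a(region_a), normalize_b(region_b)]
    print("total positives:", sum(r["positives"] for r in pooled))

The date formats are a purely syntactic issue; the rate-versus-count mismatch is a semantic one, and it is the latter kind that common data schemas and semantic models must resolve.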
Addressing these issues requires an incredible effort, similar in nature—but more
challenging as far as its scope and complexity are concerned—to the creation of the
GSM standard and related infrastructure. In Europe, there are two important initia-
tives that are trying to tackle these challenges: the Gaia-X project2 and the Interna-
tional Data Spaces Association (IDSA).3 The former aims at defining a European
industrial and market strategy to promote and exploit cloud computing and related
services; the latter aims at creating exchange standards for critical data assets.
As for the development of GSM, the Internet, and other digital technologies,
standardization is an essential and crucial foundation to promote innovation, growth,
and societal impact. The key concept here is coopetition: it is vital to cooperate in the
creation of standards and common enabling infrastructures that define a level playing field while competing to offer the best services to citizens and companies
based on the availability of these enabling assets.
1 Directorate-General CONNECT of the European Commission. "The Digital Services Act." https://digital-strategy.ec.europa.eu/en/policies/digital-services-act-package
2 https://www.data-infrastructure.eu/
3 https://internationaldataspaces.org/
One of the major failures that we have experienced in this pandemic is the inability to provide clear and convincing messages to citizens. Such an incredible crisis would have demanded an evidence-based communication strategy, which too often has been completely absent or insufficient. Even worse, we have been overwhelmed by an unmanageable amount of incoherent, confusing, and often contradictory messages.
Government, companies (e.g., big pharma), and scientific institutions need to
raise the bar in their communication strategies and practices, to provide the public
with appropriate and trustworthy information on the evolution and management of
the emergency. This has to be achieved by combining clear and coordinated com-
munication strategies and procedures (especially as far as public bodies are
concerned) with a streamlined, timely, and coherent exploitation of digital media and social networks.
Apple and Google played a key role in the development of contact tracing apps.
Even if—unfortunately—they didn’t have a significant and wide impact on the
containment of the epidemic, as Europeans, we have been basically dependent on
the decisions and strategies of foreign industries. Similarly, our working and social
activities heavily rely on technologies developed by American and Asian companies.
It would be silly simply to promote a simplistic and unfeasible technological protectionism. At the same time, as the handling of the vaccine production and
procurement processes has demonstrated, the European Union needs to define a
strategy that considers science and high-tech policy a strategic and security affair,
and not just a cultural or economic issue.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0
International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing,
adaptation, distribution and reproduction in any medium or format, as long as you give appropriate
credit to the original author(s) and the source, provide a link to the Creative Commons license and
indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative
Commons license, unless indicated otherwise in a credit line to the material. If material is not
included in the chapter's Creative Commons license and your intended use is not permitted by
statutory regulation or exceeds the permitted use, you will need to obtain permission directly from
the copyright holder.
The Need for Respectful Technologies:
Going Beyond Privacy
Elissa M. Redmiles
Abstract Digital technologies, the data they collect, and the ways in which that data is used increasingly affect our psychological, social, economic, medical, and safety-related well-being. While technology can be used to improve our well-being on all of these axes, it can also perpetrate harm. Prior research has focused nearly exclusively on privacy as a primary harm. Yet, privacy is only one of the many considerations
that users have when adopting a technology. In this chapter, I use the case study of
COVID-19 apps to argue that this reductionist view on technology harm has
prevented effective adoption of beneficial technology. Further, a privacy-only
focus risks perpetuating and magnifying existing technology-related inequities. To
realize the potential of well-being technology, we need to create technologies that
are respectful not only of user privacy but of users’ expectations for their technology
use and the context in which that use takes place.
E. M. Redmiles (*)
Max Planck Institute for Software Systems, Saarbrücken, Germany
e-mail: eredmiles@mpi-sws.org
Privacy has been shown to be a key, and growing, concern for users when
considering whether to adopt new technologies, including well-being-related tech-
nologies. However, privacy is far from the only consideration that affects whether a user will adopt a new technology. Here, I argue that we have developed a reduc-
tionist focus on privacy in considering whether people will adopt a new technology.
This focus has prevented us from effectively achieving adoption of beneficial
technologies and risks perpetuating and magnifying inequities in technology access,
use, and harms.
By focusing exclusively on data privacy, we fail to fully capture users' desire for respectful technologies: systems that respect a user's expectations for how their data will be used and for how the system will influence their life and the contexts surrounding them. I argue that users' decisions to adopt a new technology are driven by their perception of whether that technology will be respectful.
A large body of research shows that users' technology-adoption behavior is often misaligned with their expressed privacy concerns. While this phenomenon, the privacy paradox, is explained in part by the effect of many cognitive biases, including endowment and ordering effects (Acquisti et al. 2013), it should perhaps not be such a surprise that people's decision to adopt or reject a technology is based on more than just the privacy of that technology.
Privacy calculus theory (PCT) agrees, going beyond considering just privacy to
also consider benefits, arguing that “individuals make choices in which they surren-
der a certain degree of privacy in exchange for outcomes that are perceived to be
worth the risk of information disclosure” (Dinev and Hart 2006). However, as I
illustrate below, placing privacy as the sole detractor from adopting a technology and
outcomes (or benefits) on the other remains too reductionist to fully capture user
behavior, especially in well-being-related settings.
The incompleteness of a privacy-only view toward designing respectful technol-
ogies was exemplified in the rush to create COVID-19 technologies. In late 2020 and
early 2021, technology companies and researchers developed exposure notification
applications that were designed to detect exposures to coronavirus and notify app
users of these exposures. These apps were created to replace and/or augment manual
contact tracing, which requires people to call those who have been exposed to trace
their networks of contact.
In tandem with the push to design these technologies was a push to ensure that
these designs were privacy preserving (Troncoso et al. 2020). While ensuring the
privacy of these technologies was critically important for preventing government
misuse and human rights violations, and for addressing users' concerns, people rarely adopt technologies just because they are private (Abu-Salma et al. 2017). Indeed, after many of these apps were released, only a minority of people adopted them. Missing from the conversation was a discussion of users' other expectations for COVID-19 apps.
Privacy calculus theory posits that users trade off privacy against benefits and, in
so doing, make decisions about what technologies to adopt. However, empirical
research on people’s adoption considerations for COVID-19 apps finds a more
complex story (Li et al. 2020; Redmiles 2020; Simko et al. 2020). People consider
not only the benefits of COVID-19 apps—whether the app can notify them of a COVID exposure, for example—but also how the efficacy of the app (how many exposures it can actually detect) might erode those benefits. Indeed, preliminary research shows that efficacy considerations may be far more important in users' COVID-19 app adoption decisions than benefit considerations (Learning from the People:
Responsibly Encouraging Adoption of Contact Tracing Apps 2020). On the other
hand, privacy considerations are not the only potential detractors; people also
consider costs of using the system both monetary (e.g., cost of mobile data used
by the app) and usability-related (e.g., erosion of phone battery life from using
the app).
People’s adoption considerations for COVID-19 apps exemplify the idea of
respectful technologies: those that provide a benefit with a sufficient level of
guarantee (efficacy) in exchange for using the user’s data—with the potential
privacy risks resulting from such use—at an appropriate monetary and usability
cost. While COVID-19 apps offered benefits and protected user privacy, app devel-
opers and jurisdictions initially failed to evaluate the efficacy and cost of what they
had built and failed to be transparent to users about both the efficacy and costs of
these apps. As a result, people were unable to evaluate whether these technologies
were respectful and the adoption rate of a technology that had the potential to
significantly benefit individual and societal well-being during a global pandemic
remained low.
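One way to read this extended calculus is as a simple weighted tradeoff. The toy model below is my own illustrative formalization, not one proposed by the author or by PCT; it merely contrasts a privacy-only view with one that also weighs efficacy and costs.

    from dataclasses import dataclass

    @dataclass
    class AppPerception:
        benefit: float         # perceived value of exposure notifications (0..1)
        efficacy: float        # share of exposures the app actually detects (0..1)
        privacy_risk: float    # perceived privacy risk (0..1)
        monetary_cost: float   # e.g., mobile data used by the app (0..1)
        usability_cost: float  # e.g., battery drain (0..1)

    def privacy_calculus(p):
        # Classic PCT-style view: benefit traded only against privacy risk.
        return p.benefit - p.privacy_risk

    def respectful_calculus(p):
        # Efficacy erodes the benefit; all costs detract, not just privacy.
        return (p.benefit * p.efficacy
                - p.privacy_risk - p.monetary_cost - p.usability_cost)

    # A private app whose efficacy and everyday costs were never demonstrated:
    app = AppPerception(benefit=0.9, efficacy=0.3, privacy_risk=0.1,
                        monetary_cost=0.1, usability_cost=0.3)
    print(privacy_calculus(app))     # ~ 0.8   -> privacy-only view: adopt
    print(respectful_calculus(app))  # ~ -0.23 -> fuller view: reject

On such a reading, an app can look clearly worth adopting under the privacy-only view and still be rejected once low efficacy and everyday costs enter the calculation, which is consistent with the low uptake described above.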
Examining the full spectrum of people’s respectful technology-related consider-
ations is especially critical for well-being-related applications for two reasons.
First, there are a multitude of types of well-being that are increasingly being
addressed by technology—from natural disaster check-in solutions through mental
health treatment systems—each with a corresponding variety of different harms,
costs, and risks that users may consider. If we focus strictly on the privacy-benefit
tradeoffs of such technologies, we may miss critical adoption considerations such as
whether the user suspects they might be harassed while using, or for using, a
particular technology (Redmiles et al. 2019). Failing to design for and examine
these additional adoption considerations can be a significant barrier to increasing
adoption of commercially profitable and individually, or societally, beneficial
technologies.
Second, different aspects of respectful technologies are prioritized by different
sociodemographic groups (Learning from the People: Responsibly Encouraging
Adoption of Contact Tracing Apps 2020). For example, older adults focus more on
the costs of COVID-19 apps than do younger adults; younger adults focus more on
the efficacy of these apps than do older adults. Ignoring considerations aside from
privacy, and benefits, can perpetuate inequities in whose needs are designed for in
well-being technologies and, ultimately, who adopts those technologies. Such equity
considerations are especially important for well-being technologies for which equi-
table access is critical and for which inequitable distribution of harms can be
especially damaging.
Thus, to ensure commercial viability and adoption of well-being technologies,
and to avoid perpetuating and magnifying well-being inequities through the creation
References
Abu-Salma, R., Sasse, M. A., Bonneau, J., Danilova, A., Naiakshina, A., and Smith, M. (2017).
Obstacles to the Adoption of Secure Communication Tools. In: Security and Privacy (SP), 2017
IEEE Symposium on (SP17). IEEE Computer Society.
Acquisti, A., John, L. K., and Loewenstein, G. (2013). What Is Privacy Worth? The Journal of
Legal Studies, 42 (2), 249–274.
Burns, P. B., Rohrich, R. J., and Chung, K. C. (2011). The levels of evidence and their role in
evidence-based medicine. Plastic and reconstructive surgery, 128 (1), 305.
Byambasuren, O., Sanders, S., Beller, E., and Glasziou, P. (2018). Prescribable mHealth apps
identified from an overview of systematic reviews. npj Digital Medicine, 1 (1), 1–12.
Dinev, T. and Hart, P. (2006). An Extended Privacy Calculus Model for E-Commerce Transactions.
Information Systems Research, 17 (1), 61–80.
Learning from the People: Responsibly Encouraging Adoption of Contact Tracing Apps (2020).
Available from: https://www.youtube.com/watch?v=my_Sm7C_Jt4&t=366s [Accessed 17 Mar 2021].
Li, T., Cobb, C., Yang, J., Baviskar, S., Agarwal, Y., Li, B., Bauer, L., and Hong, J. I. (2020).
What Makes People Install a COVID-19 Contact-Tracing App? Understanding the Influence of
App Design and Individual Difference on Contact-Tracing App Adoption Intention.
arXiv:2012.12415 [cs] [online]. Available from: http://arxiv.org/abs/2012.12415 [Accessed
17 Mar 2021].
Redmiles, E. M. (2020). User Concerns & Tradeoffs in Technology-facilitated COVID-19
Response. Digital Government: Research and Practice, 2 (1), 6:1–6:12.
Redmiles, E. M., Bodford, J., and Blackwell, L. (2019). “I Just Want to Feel Safe”: A Diary Study
of Safety Perceptions on Social Media. Proceedings of the International AAAI Conference on
Web and Social Media, 13, 405–416.
Simko, L., Chang, J. L., Jiang, M., Calo, R., Roesner, F., and Kohno, T. (2020). COVID-19 Contact
Tracing and Privacy: A Longitudinal Study of Public Opinion. arXiv:2012.01553 [cs] [online].
Available from: http://arxiv.org/abs/2012.01553 [Accessed 17 Mar 2021].
Troncoso, C., Payer, M., Hubaux, J.-P., Salathé, M., Larus, J., Bugnion, E., Lueks, W., Stadler, T.,
Pyrgelis, A., Antonioli, D., Barman, L., Chatel, S., Paterson, K., Čapkun, S., Basin, D., Beutel,
J., Jackson, D., Roeschlin, M., Leu, P., Preneel, B., Smart, N., Abidin, A., Gürses, S., Veale, M.,
Cremers, C., Backes, M., Tippenhauer, N. O., Binns, R., Cattuto, C., Barrat, A., Fiore, D.,
Barbosa, M., Oliveira, R., and Pereira, J. (2020). Decentralized Privacy-Preserving Proximity
Tracing. arXiv:2005.12273 [cs] [online]. Available from: http://arxiv.org/abs/2005.12273
[Accessed 17 Mar 2021].
Vitak, J., Liao, Y., Kumar, P., Zimmer, M., and Kritikos, K. (2018). Privacy attitudes and data
valuation among fitness tracker users. In: International Conference on Information. Springer,
229–239.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0
International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing,
adaptation, distribution and reproduction in any medium or format, as long as you give appropriate
credit to the original author(s) and the source, provide a link to the Creative Commons license and
indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative
Commons license, unless indicated otherwise in a credit line to the material. If material is not
included in the chapter's Creative Commons license and your intended use is not permitted by
statutory regulation or exceeds the permitted use, you will need to obtain permission directly from
the copyright holder.
Part XI
Realizing Digital Humanism
Digital Humanism: Navigating the Tensions
Ahead
Helga Nowotny
H. Nowotny (*)
ETH Zürich, Zurich, Switzerland
e-mail: helga.nowotny@wwtf.at
of voluntarily handing over our data to the large corporations (Zuboff 2018). The
possibilities of abuse and malfunction, of vulnerability to hacking and other forms of cyber-insecurity, persist, while optimistic scenarios of new opportunities
continue to be acclaimed. We rightly insist that in critical situations, humans ought to
be the ones whose judgement trumps automated response and decisions and that
accountability must be built into the process in case things turn bad (Christian 2020;
Russell 2019). On this co-evolutionary journey and despite the uncertainty of its
outcome, we feel encouraged by what may turn out to be an illusion: that we have
been dealt the slightly better cards in the co-evolutionary game and that human ingenuity will prevail. This is one of the premises on which digital humanism rests: the belief
that human values can be instilled in the digital technologies and that a human-
centered approach will guide their design, use, and future development (Werthner
et al. 2019).
For digital humanism, such aspirations serve as the necessary preconditions for gaining momentum, but they ought not obscure the difficulties that lie ahead. In the long
history of technological inventions and innovations, humans always attempted to be
in control. What began as deploying tools thousands of years ago to carve out a
precarious living from the natural environment turned into massive intervention and
large-scale change of the natural environment during industrialization, with devas-
tating consequences for the latter on which we depend. The peak of the belief that
humans were in complete control of technology and mastering their future came
during modernity (Scott 1999). A turning point was reached in the mid-twentieth
century, when it became clear that we were no longer in control over the radioactive
waste left behind from the production of the atomic bomb. After the end of the war,
the world’s population began to grow dramatically, and so did GDP and living
standards. At the same time, the impact of human intervention in the earth system
and its services began to be noticeably felt. Called “The Great Acceleration,” the
convergence of these two large-scale developments has not abated since (Steffen
et al. 2015; McNeill and Engelke 2015). Today, we are faced with a major sustain-
ability crisis, while digitalization is rapidly gathering momentum with profound and
far-reaching implications for what it means to be human and what a good digital
society could be. We have arrived in the Anthropocene, and it will be a digital
Anthropocene.
Digital humanism thus emerges at a crucial moment, at the intersection of the
sustainability crisis and the opportunities offered by digitalization. In order to gauge
the challenges it faces, we ought to remind ourselves of the continuities and ruptures
it entails. It aspires to build upon some of the great cultural transformations that are
part of the European heritage, exploring human nature and adopting a human-
centered approach under rapidly changing global circumstances. But digital human-
ism also harbors a rupture that is less obvious. It marks the transition from the
linearity in thinking and understanding the world which was one of the hallmarks of
modernity toward coping with the non-linear processes of complex adaptive sys-
tems. Just as it is no longer possible to rely on the linearity of technological progress
that will inevitably lead to a future being better than the past and the present, digital
institutions, predictive algorithms extrapolate from the past to let us see further ahead
into the future. Yet, in doing so, they lure us into transferring agency onto them.
Once we start to believe that an algorithm can predict what will happen in the future
and supportive digital decision-making systems are widely adopted, the point may
be reached when human judgement seems superfluous and algorithmic predictions
turn into self-fulfilling prophecies (Nowotny 2021).
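The mechanism can be caricatured in a few lines (a deliberately crude toy, not a claim about any real system): once actors adjust their behavior toward an algorithm's forecast, the forecast validates itself.

    # Toy model: behavior drifts toward whatever the algorithm predicts,
    # so the prediction becomes true simply by being believed.
    def simulate(trust, prediction=0.8, actual=0.3, rounds=10):
        for _ in range(rounds):
            actual = (1 - trust) * actual + trust * prediction
        return actual

    print(simulate(trust=0.0))  # 0.3  -> ignored predictions change nothing
    print(simulate(trust=0.5))  # ~0.8 -> the trusted forecast comes true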
Thus, the stakes for a digital humanism are high. In order to navigate these
tensions, it will have to come up with concrete propositions that include the deeper
humanistic layers, going beyond technological solutions. Important as appeals to
ethical principles are, they will not suffice unless they can draw in very practical
terms on a widely shared set of attitudes and practices that are inspired and guided by
a humanistic ideal as a way of living together. This involves devising new ways of tackling problems that go beyond technological fixes and acknowledging that "wicked problems" exist for which no solutions are in sight; yet they too must be
confronted. Digital humanism draws its strength from the conviction that a better
digital society is possible, mustering the courage to experiment with what is needed
for shaping it.
In practice, this means to cultivate a humanistic sensitivity for the diversity of
social contexts in which digital technologies are deployed and efficacious. Currently,
neither predictive algorithms nor the data on which they train are sufficiently
context-sensitive. Digital humanism can let us discover hitherto unknown features
of who we are without determining what we will be. It can teach us the irreplaceable
value of critical human judgement when we face the illusionary promise of predic-
tive algorithms that they know the future, which is not determined by any technology
but remains uncertain and open. The major benefits of digital processes do not only
consist in being “smart,” but other potential benefits wait to be explored with an open
and curious mind. Digital humanism can sensitize us to dealing with complexity, which is closer to our intuitive understanding of what it means to be human than a linear, cause-effect way of thinking. It can attune us to emergent properties and to
what remains unpredictable—the ultimate sign of life that keeps evolving.
References
Christian, B. (2020) The Alignment Problem. Machine Learning and Human Values. New York:
Norton & Company.
Lee, E. A. (2020) The Coevolution: The Entwined Futures of Humans and Machines. Cambridge,
MA: MIT Press.
McNeill, J. R. and Engelke, P. (2015) The Great Acceleration: An Environmental History of the
Anthropocene since 1945. Cambridge, MA: Harvard University Press.
Nowotny, H. (2015) The Cunning of Uncertainty. Cambridge, UK: Polity Press.
Nowotny, H. (2021) In AI We Trust. Power, Illusion and Control of Predictive Algorithms.
Cambridge, UK: Polity Press.
Russell, S. (2019) Human Compatible: AI and the Problem of Control. London: Allen Lane.
Scott, J. (1999) Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed. New Haven: Yale University Press.
Steffen, W. et al. (2015) ‘The Trajectory of the Anthropocene: The Great Acceleration’, The
Anthropocene Review 2:1, pp. 81–98.
Susskind, D. (2020) A World Without Work: Technology, Automation, and How We Should
Respond. London: Allen Lane.
Werthner, H. et al. (2019) Vienna Manifesto on Digital Humanism. Viewed 20 June 2019, www.informatik.tuwien.ac.at/dighum/.
Zuboff, S. (2018) The Age of Surveillance Capitalism: The Fight for a Human Future at the New
Frontier of Power, New York: Hachette Book Group.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0
International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing,
adaptation, distribution and reproduction in any medium or format, as long as you give appropriate
credit to the original author(s) and the source, provide a link to the Creative Commons license and
indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative
Commons license, unless indicated otherwise in a credit line to the material. If material is not
included in the chapter's Creative Commons license and your intended use is not permitted by
statutory regulation or exceeds the permitted use, you will need to obtain permission directly from
the copyright holder.
Should We Rethink How We Do Research?
Carlo Ghezzi
Abstract Advances in digital technologies move incredibly fast from the research
stage to practical use, and they generate radical changes in the world, affecting
humans in all aspects of their life. The chapter illustrates how this can have profound
implications on the way technological research is developed. It also discusses the
need for researchers to engage more actively in public debates with society.
History shows that advances in science and technology have always produced
changes in the world in which we live. With digital technologies,1 we experience
unprecedented levels of change. They can affect human well-being, assist humans in
their individual activities, and relieve them of strenuous or dangerous jobs. More
broadly, they create an entirely new cyber-physical world in which humans live and interact with natural phenomena, other individuals, and new kinds of artificial autonomous entities. The "old world," which was known to us for centuries, in which we felt comfortable and with which we slowly evolved, has suddenly been replaced by a new one whose laws we do not know and where humans may lose control.
Digital humanism stresses that humankind must be at the center of innovation and
technology has to serve society and the environment.
Technological developments are largely driven by research. To understand the
lessons for research we can learn from the development of digital technologies, we
need to reflect on its two key properties: radicality and speed of change.
The effects of digital technologies on humankind have indeed been revolutionary.
Other radical shifts were generated by research in the past, but perhaps none is
1 The umbrella term "digital technologies" includes both hardware and software (data, algorithms, AI).
C. Ghezzi (*)
Politecnico di Milano, Milano, Italy
e-mail: carlo.ghezzi@polimi.it
affecting every human life as profoundly. For example, the shift from Ptolemaic
astronomy—which considered the earth to be at the center of the universe with the
sun, moon, and planets revolving around it—to the Copernican view had radical
effects on science and philosophy but did not affect the individual’s everyday life.
Likewise, the radical shift in physics at the beginning of the twentieth century,
caused by the developments of relativity theory and quantum mechanics, challenged
the known view of the physical world, described by the axioms and laws of
Newton’s mechanics. This was a spectacular paradigm shift, which however had
little effect on practically observable phenomena in everyday human life.
In addition, the transition of advances in digital technologies from the research
stage into everyday life occurred much faster than for previous technologies. As
observed, for example, by Harari (2014), the Industrial Revolution, ignited by the invention of the steam engine, took about two centuries to develop the modern industrial world. Digital technologies spread across the world in only a few decades, generating abrupt changes and hampering gradual and frictionless adaptation.2
The main implication of speed of change is that scientists and engineers,3 in their
exploratory work, cannot ignore the potential implications and effects of the new
technology they develop. Delaying reflection on the use of technology to a later
stage may cause serious harm. Alas, “later” is in fact “sooner” than expected; it can
be too late! Rather, technological research must proceed hand-in-hand with the
investigation of its implications. Traditionally, careful deployment strategies and
trial usage of newly developed technology could prevent major damage, through
adjustments and countermeasures. Today, however, radical innovations in image
recognition based on AI deep learning techniques have been immediately transferred
to practice without first exploring their limitations and potential implications. Adop-
tion in law trials has raised serious ethical concerns and potential violations of
human rights. Likewise, the advances that enabled mass mobile pervasive comput-
ing through personal devices and smartphones stressed mainly usability and func-
tionality, at the expense of trustworthiness and dependability. As a result,
infrastructures are open to all kinds of misuses and attacks, including privacy
violations that led to serious political consequences.4 Plenty of examples of ethically
sensitive technical issues are faced by current research on automatic vehicles.
2 The digital revolution is also very rapidly and radically changing the way we do research, in almost all areas, through the unprecedented availability of data and the invention of algorithms which can manipulate them and reason about them, leading to discovery automation. The deep consequences of this change would require further discussion.
3 In this chapter, the terms scientist and researcher are used interchangeably. Furthermore, they mainly refer implicitly to technological research.
4 For example, the Pegasus spyware [https://en.wikipedia.org/wiki/Pegasus_(spyware)].
The effect on humans and society of radical changes is a serious concern that
must be addressed while developing technological research. To deal with it, research
has to broaden its focus, moving beyond pure technical boundaries and bringing in a
focus on the potential human and societal issues involved. This requires scientists to break the rigid silos into which research is currently compartmentalized. Philosophers
and social scientists, for example, need to be integrated in research groups that
develop new technology for autonomous vehicles; environmentalists, urban plan-
ners, and social scientists need to work with computer scientists to develop traffic
management solutions in smart cities. The quest for interdisciplinarity—so far often
more a fashionable slogan than reality—becomes a necessity. We seriously and
urgently need to understand how this can be done. For example, how to achieve
breadth without sacrificing depth of research, how to evaluate interdisciplinary work
without penalizing it by applying traditional silo-based criteria, etc.
The speed and radicality of change have an important consequence: scientists need to engage with society. Traditionally, they interact and communicate almost exclusively with their silo peers and are largely shielded from direct communication and interaction with a broader public. Limited forms of engagement include innovation initiatives, such as the creation of spin-offs and collaborations with industry, government, and policy makers. Fast and radical changes require more involvement, especially in discussing potential developments and uses and in raising broad social awareness. This, however, is easier said than done.
Researchers know well how to communicate with their peers. They learn how to do it from the moment they enter a PhD program and continue to learn and improve throughout their careers. Research is an intrinsically open process that relies on communication among peers. The main ambition of scientists is to achieve novel results and communicate them to the research community through research papers, artifacts (such as data sets or software prototypes), and scientific debates at conferences. Their career progress largely depends on how successful they are in producing novel, relevant results and spreading them to their peers. Ghezzi (2020) discusses the importance of communication among peers and also stresses the need for more neglected forms of public engagement, through which scientists lead, or participate in, scientific debates with a broader audience outside the circle of peers: with government, policy and decision-makers, and the general public.
There are notable historical examples of scientists who engaged in public scientific debates, especially when progress led to radical changes. A famous case is Galileo Galilei, who strove to bring his support for the Copernican theory to the attention of the society of his time. He was well aware of the profound consequences of the shift from the Ptolemaic to the Copernican view: mankind was no longer living at the center of the universe, but instead on a planet that was just a small part of the solar system. He spoke to the informed society of his time through an essay,
“Dialogue Concerning the Two Chief World Systems,” in which a scientific conversation is carried on among three individuals: a Copernican scientist explains the new theory to an educated citizen, arguing against the statements of a Ptolemaic scientist. Galileo is considered the father of modern science. He taught us that science is not blind faith in previous beliefs, dominant orthodoxy, or ideology. It relies on rational argumentation to arrive at whatever conclusions a careful analysis of the evidence suggests, even if they do not conform to current beliefs. He developed new technology to empower humans to understand and dominate the physical world. His public engagement led him to confront official Catholic doctrine, which followed the Ptolemaic view that the earth was at the center of the solar system. Galileo appeared before the Roman Inquisition, was eventually found suspect of heresy, was forced to recant his views, and was sentenced to house arrest for the rest of his life.
Another example of a heated public debate occurred at the beginning of the twentieth century, when the revolutionary developments in physics and mathematics by giants like Einstein, Planck, Bohr, and Hilbert brought together an outstanding group of physicists, philosophers, and mathematicians, who met in Vienna in a permanent seminar from 1924 until 1936, called the Wiener Kreis (Vienna Circle). The members of the seminar aimed at founding philosophy on a modern scientific view of the world, keeping it separate from metaphysics.
Participants in the discussions included physicists; philosophers like Schlick (who chaired the group), Neurath, Popper, and Wittgenstein; and mathematicians and logicians like Carnap and Gödel. The Vienna Circle dissolved in 1936, when Schlick died and anti-Semitism caused a diaspora of the other members. A fascinating account of this highly influential movement can be found in Sigmund (2017).
More heated public discussions involved physicists at the end of World War II, when the relation between research and its direct use in the development of weapons of mass destruction became evident. The debate was able to inform and involve both governments and citizens.
These examples of public involvement in scientific debates remained mostly at the level of “educated elites.” The digital revolution is directly affecting every individual’s life and requires broader outreach. An informed debate needs to take place, involving not only scientists in almost all areas and decision-makers at all levels but also citizens, to make sure that humans and the earth on which they live are at the center of technological developments.
4 Conclusions
Effective engagement with the general public requires that researchers learn how to communicate effectively and that their efforts in doing so are recognized and rewarded. They need to understand the role they must play in this conversation, which mainly aims at explaining the advances of research and pointing to the critical issues involved in its use, issues that may require collective, informed, rational decisions.
The boundary between scientific knowledge and personal opinions and beliefs should be kept clear. Effective communication also demands a mature and competent audience. This raises serious concerns, since regrettably the level of scientific education has been decreasing in many countries. Even worse, paradoxically, in our highly technological world there is widespread mistrust of science—see Nichols (2017)—which should be countered by more investment in education. In particular, we need to ensure that every responsible citizen understands how science and technology progress, how they can be trusted, and what their limits are. The development of an open space for discussion around digital technologies is crucial for the future of our democratic societies and the realization of digital humanism.
References
Ghezzi, C. (2020) Being a researcher: An informatics perspective. Cham: Springer.
Harari, Y.N. (2014) Sapiens: A brief history of humankind. London: Harvill Secker.
Nichols, T. (2017) The death of expertise: The campaign against established knowledge and why it matters. New York: Oxford University Press.
Sigmund, K. (2017) Exact thinking in demented times: The Vienna Circle and the epic quest for the foundations of science. New York: Basic Books.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0
International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing,
adaptation, distribution and reproduction in any medium or format, as long as you give appropriate
credit to the original author(s) and the source, provide a link to the Creative Commons license and
indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative
Commons license, unless indicated otherwise in a credit line to the material. If material is not
included in the chapter's Creative Commons license and your intended use is not permitted by
statutory regulation or exceeds the permitted use, you will need to obtain permission directly from
the copyright holder.
Interdisciplinarity: Models and Values
for Digital Humanism
Sally Wyatt
Abstract This chapter starts from the recognition that the world is facing major
challenges and that these may be best addressed by people working together, across
different disciplinary domains and between universities, civil society, governments,
and industry. After sketching these problems, I provide an overview of the meanings
of discipline and of multi-, inter-, and transdisciplinarity. I then provide a brief
historical overview of how disciplines emerge. Examples from computer sciences,
social sciences, and the humanities, and collaborations between them, are used to illustrate these definitions and this overview. In the final part, I reflect on what this means
for digital humanism, drawing on different models and values of collaboration.
The 2030 Agenda for Sustainable Development, prepared by the United Nations (UN) and approved by all countries in 2015, identifies 17 goals crucial for the future of the planet. These include ending poverty, empowering women and girls, reducing inequality, and taking action to combat climate change (UN 2015). Interestingly, digital technologies are not explicitly mentioned in any of the goals, although they could be seen as part of the problem, given, for example, their enormous energy needs and the emergence of new forms of digital inequality. They could also be part of the solution, by making it easier to share data and knowledge to solve problems such as those arising from an ageing population and by expanding access to education.
The problems underlying these goals could be characterized as “wicked prob-
lems,” those political and intellectual challenges that defy easy definition or solution.
No single academic discipline can provide an adequate definition of such problems, much less a clear and feasible resolution. The UN calls for partnership and collaboration to tackle these goals. Doing so will require multi-, inter-, and transdisciplinary research. The UN is not alone in making such calls. Many research-funding
agencies and policy-making organizations emphasize the importance of engaging
S. Wyatt (*)
Maastricht University, Maastricht, The Netherlands
e-mail: sally.wyatt@maastrichtuniversity.nl
with different disciplines and stakeholders in order to tackle contemporary social and
scientific problems.
In this short chapter, I first discuss the meaning of discipline and of multi-, inter-,
and transdisciplinarity. I then provide a brief historical overview of how disciplines
emerge and conclude with different models and values of collaboration and what
these could mean for digital humanism.
Multi-, inter-, and transdisciplinarity are sometimes used interchangeably, but each captures something different, as described in the following paragraphs. First, however, it is necessary to understand what an academic discipline is. Disciplines have
their own cultures and practices and provide those trained in them with skills, tools,
methods, concepts, and ways of thinking. They come with their own notions of how
the world is organized and of what constitutes good quality research (Knorr Cetina
1999). Disciplines are usually institutionalized, in university departments and fac-
ulties, in professional associations, and in specialized conferences and journals.
Reproduction of disciplines from one generation to the next is typically done via
formal, accredited education and sometimes involves shared competence criteria
(Hackett et al. 2017). An example of the latter is the “Computing Competencies for
Undergraduate Data Science Curricula” (ACM 2021). It is less usual to find such
criteria in the humanities and the social sciences, although those disciplines often
have implicit norms and expectations of the knowledge and competences students
should possess by the end of their degree programs.
Having provided a working definition of discipline, let us now move to the ways disciplines may be combined. These are usually presented in a hierarchical form, with multidisciplinarity being the least integrated. Multidisciplinarity can be described as moving between disciplines in order to understand a topic or problem from different perspectives. This can lead to greater knowledge and may be very helpful in making policy or other decisions, but there is little integration of methods or concepts from the contributing disciplines. For example, economic modelling can be used to understand the incidence of poverty in a country, while pedagogical studies provide the basis for policies to tackle educational inequalities between children from different socioeconomic backgrounds.
Interdisciplinary education and research deliberately attempt to combine and
synthesize methodologies and specialized jargon from different disciplines in order
to produce a more comprehensive solution to a problem or to address a complex
topic. For example, this occurs when computer scientists and linguists work together
to understand changing language patterns in large text corpora.
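To give a concrete flavor of such a collaboration, consider a minimal sketch (in Python; the toy corpus, the chosen words, and the period split are invented purely for illustration) that compares word frequencies between two periods of a corpus, the kind of elementary building block on which computational studies of language change rest.

    from collections import Counter

    # A toy corpus of (year, text) pairs; a real study would use millions of documents.
    corpus = [
        (1995, "the information superhighway will change everything"),
        (1998, "surf the world wide web from home"),
        (2015, "share the post on social media"),
        (2019, "the platform recommends content via machine learning"),
    ]

    def frequencies(texts):
        """Count how often each word occurs in an iterable of texts."""
        counts = Counter()
        for text in texts:
            counts.update(text.split())
        return counts

    early = frequencies(text for year, text in corpus if year < 2000)
    late = frequencies(text for year, text in corpus if year >= 2000)

    for word in ["web", "platform", "media"]:
        print(word, early[word], late[word])  # a Counter returns 0 for absent words

Here the linguist contributes the questions and the interpretation, while the computer scientist contributes a version of such counting that scales to real corpora; neither alone produces the finding.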
Transdisciplinarity goes outside the university in order to incorporate knowledge
from other non-academic sources and stakeholders. There are many possible stake-
holders with specialized knowledge and experiences that can be valuable in the
production of knowledge. In the case of healthcare, this could include patient
organizations, the pharmaceutical industry, and nursing professional associations
or unions as well as biomedical researchers, sociologists of health, medical ethicists,
and data scientists. There are many terms in circulation for transdisciplinary knowl-
edge production, including post-normal science (Funtowicz and Ravetz 1993), the
triple helix (Leydesdorff and Etzkowitz 1998), and Mode 2 (Nowotny et al. 2001).
Having briefly defined the key terms, let us return to academic disciplines. They
can have the appearance of immutability, rather like an immutable object in some
kinds of computer programming, something that cannot be changed after it has been
created. Nonetheless, it is important to remember that academic disciplines can and
do change. Many academic disciplines now taken for granted, such as mathematics,
history, and philosophy, have very long histories, just as universities do. Others,
including engineering and social sciences, emerged in the late nineteenth century,
largely in response to the challenges posed by industrialization and urbanization in
Europe and the United States. The rise of industrialization and engineering was in no
small part the impetus behind the establishment of technical universities in many
countries. Even though change might be slow, new disciplines can and do emerge,
and the focus and emphasis in long-standing disciplines may change.
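To make the programming analogy concrete, here is a minimal sketch (in Python; the class and its fields are invented for this illustration): a “frozen” object cannot be altered once created, and change happens only by constructing a new object, much as new fields emerge alongside established disciplines rather than by rewriting them.

    from dataclasses import dataclass, FrozenInstanceError

    @dataclass(frozen=True)  # frozen=True makes instances immutable
    class Discipline:
        name: str
        established: int

    d = Discipline("sociology", 1890)
    try:
        d.name = "data science"  # attempting to mutate raises an error
    except FrozenInstanceError:
        print("cannot change an existing instance")

    # "Change" means creating a new object alongside the old one.
    d2 = Discipline("data science", 2010)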
The expansion of the university system after World War II in industrialized
countries was one catalyst for change. Growth was accompanied by an increase in
the diversity of students, staff, and (inter)disciplines. In the final third of the
twentieth century, the emergence of a new field was sometimes related to the
diffusion of a new object, such as the internet in the case of new media studies. In
other cases, the availability of new techniques and instruments could lead to a new
field, as in computer science. In yet other cases, such as women’s studies, the
emergence could be attributed to the greater diversity of people entering universities,
people who may identify new problems and ways of working (Wyatt et al. 2013).
Such new fields may find their first institutional homes in literature, electrical
engineering, or sociology. As they grow and stabilize, they can become institution-
alized in the ways mentioned above, by developing their own departments, educa-
tional programs, journals, and professional associations.
Not all attempts at creating new disciplines are successful. Some might be very strong in research and the creation of new knowledge, published and shared in specialized journals and conferences, but this is not always accompanied by widespread or strong educational profiles. For example, neuroeconomics—the study of how economic behavior affects understanding of the brain and how neuroscience might guide economic models—might be taught only in a relatively small number of universities, at an advanced level. Nonetheless, it has its own specialized jargon and its own conference and publication outlets for sharing ideas and developments.
Inter- and transdisciplinary collaborations have, as mentioned above, been
heralded by national and international organizations looking for innovative solutions
to complex and wicked problems. But collaborations are not always easy to achieve.
Not all disciplines are equal, neither in terms of available funding nor in terms of
epistemic and social legitimacy and status. Such inequalities can hinder productive
collaboration, and thus the remainder of this text focuses on different modes of
collaboration across disciplines.
In particular, I reflect on what this might mean for “digital humanism,” that “community of scholars, policy makers, and industrial players who are focused on
1 https://dighum.ec.tuwien.ac.at/
These different modes of interdisciplinarity are intended as heuristics. They do not exhaust the modes of collaboration, nor are they mutually exclusive. From my own experiences of interdisciplinary collaboration, I have identified one resource and two values: time, respect, and humility. There are many guidelines regarding collaboration, but there is no fast or simple route to success. Just as disciplinary training takes time, so too does learning to collaborate. Each project or group needs time to develop shared vocabularies, methods, and ways of working. People also need to respect other disciplinary ways of working even if they do not necessarily understand them. We all need to recognize that the disciplines in which we have been trained may not have all the answers, nor even always the right questions. This is another way of phrasing the old adage that “if your only tool is a hammer, then everything looks like a nail.”
References
ACM Data Science Task Force (2021). ‘Computing competencies for undergraduate data science
curricula’, Available at: DSTF_Final_Draft_Report (acm.org) (Accessed: 21 March 2021)
Barry, A., Born, G. and Weszkalnys, G. (2008). ‘Logics of interdisciplinarity’, Economy & Society,
37(1), 20-49.
Funtowicz, S. and Ravetz, J. (1993). ‘Science for the post-normal age’, Futures, 25, 739-755.
Gregory, K. (2021). Findable and reusable? Data discovery practices in research. PhD thesis.
Maastricht University.
Hackett, E.J. et al. (2017). ‘The social and epistemic organization of scientific work’, in Felt, U. et al.
(eds.) The handbook of science and technology studies, 4th edition. Cambridge, MA: The MIT
Press, pp. 733-764.
Knorr Cetina, K. (1999). Epistemic cultures: How the sciences make knowledge. Cambridge, MA:
Harvard University Press.
Leydesdorff, L. and Etzkowitz, H. (1998). ‘The triple helix as a model for innovation studies’,
Science & Public Policy, 25(3), 195-203.
Nowotny, H., Scott, P. and Gibbons, M. (2001). Re-thinking science. Knowledge and the public in
an age of uncertainty. London: Polity Press.
United Nations (2015). Transforming our world: The 2030 agenda for sustainable development.
New York, NY: United Nations, Department of Economic and Social Affairs.
Wyatt, S. et al. (2013). ‘Introduction to Virtual Knowledge’, in Wouters, P. et al. (eds.) Virtual
knowledge. Experimenting in the humanities and the social sciences. Cambridge, MA: The MIT
Press, pp. 1-23.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0
International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing,
adaptation, distribution and reproduction in any medium or format, as long as you give appropriate
credit to the original author(s) and the source, provide a link to the Creative Commons license and
indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative
Commons license, unless indicated otherwise in a credit line to the material. If material is not
included in the chapter's Creative Commons license and your intended use is not permitted by
statutory regulation or exceeds the permitted use, you will need to obtain permission directly from
the copyright holder.
It Is Simple, It Is Complicated
Abstract History is not a strictly linear process; our progress as a society is full of contradictions. This we have to bear in mind when trying to find answers to pressing challenges related to, and even caused by, the digital transformation. In this chapter, we reflect on contradictory aspects of Digital Humanism, an approach to foster the control and design of digital infrastructure in accordance with human values and needs. Seemingly simple solutions turn out to be highly complex when looked at more closely. Focusing on some key aspects as (non-exhaustive) examples of the simple/complicated dilemma, we argue that, in the end, political answers are required.
History is a dialectic process. This is also true for the ongoing digital transformation (probably much better named informatization, as it is informatizing nearly everything). Much of this development has already happened in the past, unobserved by mass media and most decision-makers in politics and industry. Nowadays, this transformation has come to the surface, leaving many with the impression of an automatism, of a process without human control, guided by some “external” forces. This is why the notion of Digital Humanism is so important. As an approach to counteract negative effects of the digital transformation, it aims to foster the control and design of digital infrastructure in accordance with human values and needs. While numerous people support (or at least sympathize with) the general aims and goals of Digital Humanism, subtle questions arise when one looks behind the scenes, and these need to be discussed and resolved. We should resist the temptation to offer trivial solutions that will not survive serious discussion.
Our progress as a society is full of contradictions, and sometimes it even points backwards. Bearing in mind such contradictions, and that historical processes do not move forward in a straight line, we try to approach some of these contradictory aspects of Digital Humanism by looking at it through the intertwined pair “simple” and “complicated.” What seems to be simple is complicated, and vice versa. This chapter also reflects a discussion among us, the authors. Our sometimes-controversial debate may serve as a blueprint for a future “dialectic” process to find agreement on complicated matters, also in a public debate. The following, by no means exhaustive, list represents some of the “simple/complicated” issues Digital Humanism has to address:
• Interdisciplinarity. The impact of digitalization on our lives is obvious, and everybody senses its power (both positive and negative). Negative phenomena to be addressed include the monopolization of the Web, issues related to the automatization of work, problems with respect to AI and decision-making, the emergence of filter bubbles, the spread of fake news, the loss of privacy, and the prevalence of digital surveillance. While these are all aspects of the same disruptive process, they manifest very differently. Consequently, a broad spectrum of challenges needs to be addressed. The obvious and simple conclusion is: interdisciplinarity is needed to tackle them, to understand the complicated present, and to shape the digital future.
But is it really so simple? Interdisciplinarity brings its own challenges. It is very hard, for instance, to come up with a common language in which all researchers involved use the same terminology with the same meanings. Moreover, the way the research landscape is organized still hinders interdisciplinarity. Interdisciplinary (especially young) researchers often do not obtain funding, since they touch different communities but are not specialized enough to be at their centers, which often leads to negative reviews. So how can interdisciplinarity for Digital Humanism be fostered on the level of content, methods, and institutions? Matters are further complicated because Informatics, as a key discipline, often comes with the attitude of “solving problems” while not always seeing the side effects and long-term impacts of its work. Computer scientists cannot, or even should not, be the sole driving force. But if so, the role of Informatics and its methods needs some clarification. For a solid foundation of Digital Humanism, exchange across various disciplines is needed throughout the entire process, i.e., when doing analyses, when developing new technologies, and when adopting them in practice. Looking back in history, one sees that artifacts created by computer scientists have an impact similar to (if not greater than, given their more pervasive nature) that of the steam engine in the Industrial Revolution. But it was not the engineers who organized the workers and envisioned social welfare measures; it was a much broader effort including intellectual leaders with diverse backgrounds together with the workers and their unions.
• Humans decide. As the manifesto states, “Decisions with consequences that have the potential to affect individual or collective human rights must continue to be made by humans” (Werthner et al. 2019). In a world becoming more and more complicated and diverse, it is obvious that far-reaching and fundamental decisions should be made by humans. It is about us and our society; thus it is up to us, and we are responsible for ourselves. This seems to be a simple principle.
1 Danziger et al. (2011) show that “judicial rulings can be swayed by extraneous variables that should have no bearing on legal decisions.” When examining parole boards’ decisions, the authors found that the likelihood of a favorable ruling is greater at the beginning of the work day or after a food break than later; so whether a case is heard before or after a break matters.
2 See, e.g., www.bbvaopenmind.com/en/articles/inequality-in-the-digital-era/ or knowledge.insead.edu/responsibility/how-the-digital-economy-has-exacerbated-inequality-9726 (thanks to George Metakides for these references).
3 There is even a new source of inequality that stems from bias in data, an issue which we won’t discuss here further.
4 As a result, look at the respective elections and the success of the populist right.
• Ethical technology. Being aware of the impact of our artifacts, we recognize the need to develop new technology along ethical guidelines. Informatics departments worldwide have included ethics in their curricula, either as stand-alone courses or embedded in specific technical subjects. Industry has also come along; some companies even offer specific tools, and associations such as the IEEE provide guidelines for the ethical design of systems.5 So, if we follow such guidelines, offer courses, and behave ethically, then it will work, at least in the long run. That’s simple.
But reality may again be a little more complicated. Most of the research in AI, especially with respect to machine learning, is done by the big IT platform companies; they have the data and, with ample financial resources, also outstanding expertise. These companies try to pre-empt “too much” regulation and argue for self-regulation. However, is this research really independent, or is it merely “ethics washing,” as observed by Wagner (2018)? Cases such as Google firing Timnit Gebru and Margaret Mitchell set the alarm bells ringing.6 But it is not only about the independence of research; it is also about the reproducibility of results, the transparency of funding, and the governance structure of research (see Ebell et al. 2021). And there are other subtle problems. For example, we argue for fairness in recommendation or search results. But how should fairness be defined: with respect to the providers of information or products, with respect to readers or consumers (and which sub-group), or with respect to some general societal criteria? One step further: let us assume these issues are solved and we all behave according to Kant’s categorical imperative; can we then guarantee overall ethical behavior or a good outcome? Assuming the concept of human-technology co-evolution, we have an evolutionary “optimization” process, which may lead to a local but not to a global optimum (e.g., it does not automatically prevent monopolistic structures). Moreover, this evolution does not evolve on its own but is, as in our context, governed by existing unequal societal and economic power relationships. So, ethics alone may not be enough.
• It is about the economy.7 The digital transformation, as a socioeconomic-technical process, has to be put into historical context. One could apply contemporary economic theory to understand what is going on (the “invisible hand” according to Adam Smith, i.e., by following their self-interest, consumers and firms create an efficient allocation of resources for the whole of society). However, the economic world has been substantially changed by the digital transformation. The value of labor is in the process of being reduced by increasing automatization and a possible unconditional basic income. These are simple observations, but what are the implications? What is, ultimately, the role of humans in the production
5 IEEE P7000 – IEEE Draft Model Process for Addressing Ethical Concerns During System Design. https://standards.ieee.org/project/7000.html
6 https://www.bbc.com/news/technology-56135817
7 We omit “stupid,” so as not to offend the reader.
process? Or even, what is the value of a company? Can this still be captured and understood by traditional theories?
This is again complicated, as personal data seems to be becoming the most distinctive asset people can contribute (at least on the Web) in this new world. Apparently, it is less and less the surplus (“Mehrwert”) generated by humans in the labor process that is relevant, but rather the value added by a never-ending stream of data, i.e., people’s behavioral traces on the Web. This data is used to develop, train, and optimize AI-driven services. Thus, users are permanently doing unpaid work. A user is, therefore, all three at once: a customer, a product, and a resource (Butollo and Nuss 2019). Furthermore, “instead of having a transparent market in which posted prices lead to value discovery, we have an opaque market in which consumers support Internet companies via, essentially, an invisible tax” (Vardi 2018). All this is related to the central role of online platforms. The investigation of their technological dominance and the resulting imbalances of power may require a network-analytical perspective, integrating informatics, statistics, and political science. Such novel approaches to understanding the new rules of the economic game and the mechanisms driving the data-driven digital revolution are, however, complicated. As far as we know, there is no accepted method to measure the value of the data economy, or of data itself. Data is the core of this development and is even called the “gold nugget” of today by several observers. Whereas external valuation is difficult, the large online platforms are aware of the situation and are investing heavily. We need creative people with perspectives from different disciplines in order to come up with further enlightening insights, both methodological and practical. Remember the Industrial Revolution once more: understanding the steam engine does not immediately yield Marx’s Critique of Political Economy.
• And about politics. It is already everyday knowledge that Informatics will continue to bring about profound changes. All this seems to be an automatism, almost like a force of nature. However, we neither think that there is a higher being that is responsible nor, in a similar mindset, that developments strictly follow a “historical determinism.” If we, the people, are to be the driving force, the simple approach would be for all people to participate in the decisions shaping their own future, be it via democratic elections or via participatory initiatives.
However, in practice experiences are contradictory and, thus, complicated. An example: although it was social media that claimed to lay the basis for participative processes,8 recent years have shown that their effect often goes in the opposite direction, fueling the loss of trust in policy makers (and thus, in the long run, in democracy). In addition, policy makers seem to be, at least sometimes, powerless against market automatisms, which in turn leads people to vote for “strong men.” Today it is the platforms themselves that make inherently political decisions when, for instance, banning individuals or entire opinion groups. In conclusion, with the simplistic vision that the Internet
8 In fact, social media contributed to broad political movements such as the Arab Spring.
References
Butollo, F., and Nuss, S. (2019) Marx und die Roboter. Vernetzte Produktion, künstliche Intelligenz
und lebendige Arbeit. Dietz Berlin (in German).
Danziger, S., Levav, J., & Avnaim-Pesso, L. (2011) Extraneous factors in judicial decisions.
Proceedings of the National Academy of Sciences, 108(17), 6889-6892.
Denning, P., and Johnson, J. (2021) Science Is Not Another Opinion. Communications of the ACM, 64(3).
Ebell, C., Baeza-Yates, R., Benjamins, R. et al. (2021) Towards intellectual freedom in an AI Ethics
Global Community. AI Ethics (2021). https://doi.org/10.1007/s43681-021-00052-5
Habermas, J. (1970) “Technology and Science as ‘Ideology.’” In Toward a Rational Society:
Student Protest, Science, and Politics, trans. Jeremy J. Shapiro. Boston: Beacon Press. (Original
article: Habermas, J. (1968) Technik und Wissenschaft als “Ideologie”. Man and World 1
(4):483-523.)
Kahneman, D. (2011) Thinking, fast and slow. London: Penguin Books
Latour, B. (1987) Science in Action: How to Follow Scientists and Engineers through Society.
Harvard University Press.
Meehl, P. E. (1986) Causes and effects of my disturbing little book. Journal of personality
assessment, 50(3), 370-375.
Popper, K. R., (1971) The moral responsibility of the scientist. Bulletin of Peace Proposals, 2(3),
279-283
Vardi, M. (2018) How the hippies destroyed the Internet. Communications of the ACM, 61(7).
Wagner, B. (2018) Ethics as an escape from regulation. From “ethics-washing” to ethics-shopping? In: Bayamlıoğlu, E., Baraliuc, I., Janssens, L., Hildebrandt, M. (eds.) Being Profiled: Cogitas Ergo Sum. 10 Years of ‘Profiling the European Citizen’. Amsterdam University Press, Amsterdam, pp. 84–88.
Werthner, H. et al. (2019) The Vienna Manifesto on Digital Humanism. https://dighum.ec.tuwien.
ac.at/dighum-manifesto/
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0
International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing,
adaptation, distribution and reproduction in any medium or format, as long as you give appropriate
credit to the original author(s) and the source, provide a link to the Creative Commons license and
indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative
Commons license, unless indicated otherwise in a credit line to the material. If material is not
included in the chapter's Creative Commons license and your intended use is not permitted by
statutory regulation or exceeds the permitted use, you will need to obtain permission directly from
the copyright holder.