
NEW SCIENTIST ESSENTIAL GUIDE №2

ARTIFICIAL INTELLIGENCE
THE PAST, PRESENT AND FUTURE OF MACHINES THAT THINK

WITH CONTRIBUTIONS FROM REGINA BARZILAY, GARRY KASPAROV, PETER NORVIG, MARCUS DU SAUTOY AND MORE

EDITED BY RICHARD WEBB
THE second in our series of Essential Guides examines perhaps the most revolutionary technological development in a generation, and one that is happening with bewildering speed. From small beginnings just decades ago, artificial intelligence in 2020 is already embedded in our daily lives. Feeding on vast amounts of often-personal data, it helps to determine the information we see, the news we read, the products we buy and much, much more. Technologies from driverless cars to personalised medicine form just part of its future game-changing potential.

Yet AI stands accused of all sorts of misdemeanours, from undermining democracy through the spread of misinformation to widening inequality to embedding racism and discrimination. That’s not to mention fears that machine minds might one day supersede our own.

Curated from New Scientist’s extensive archive content, this Essential Guide examines the past, present and future of a technology it is essential for all of us to understand. Feedback is welcome at essentialguides@newscientist.com. If you like it and missed the first in the series, The Nature of Reality, you can still buy it online at shop.newscientist.com.

Richard Webb

NEW SCIENTIST ESSENTIAL GUIDES
25 Bedford Street, London WC2E 9ES
+44 (0)20 7611 1200
© 2020 New Scientist Ltd, England
New Scientist Essential Guides are published by New Scientist Ltd
ISSN 2634-0151
Printed in the UK by Precision Colour Printing Ltd and distributed by Marketforce UK Ltd +44 (0)20 3148 3333

EDITOR Richard Webb
DESIGN Craig Mackie
SUBEDITOR Hannah Joshua
PRODUCTION Robin Burton
PUBLISHER Nina Wright
EDITOR-IN-CHIEF Emily Wilson
DISPLAY ADVERTISING +44 (0)20 7611 1291 displayads@newscientist.com
ADDITIONAL CONTRIBUTORS Chris Baraniuk, Michael Brooks, Daniel Cossins, Leah Crane, Alison George, Jim Giles, John Graham-Cumming, Douglas Heaven, Hal Hodson, Adam Kucharski, Michael Le Page, Donna Lu, Donald Michie, Jason Arunn Murugesu, Sandy Ong, Sean O’Neill, John Pavlus, Timothy Revell, Aviva Rutkin, Kayt Sukel, Chris Stokel-Walker, Frank Swain, Jon White, Chelsea Whyte, Mark Zastrow
COVER: CHRIS SEARLE; ABOVE: SHUOSHU/ISTOCKPHOTO



CHAPTER 1
WHAT AI IS – AND ISN’T
Like any transformative new technology, artificial intelligence brings both risks and opportunities. To truly understand what they are, we must first cut through the hype and get to grips with some very basic questions about what learning machines can and can’t do. Peter Norvig sets the scene.
PLUS
p. 10 Turing’s legacy
p. 12 What is intelligence, anyway?

CHAPTER 2
HOW MACHINES LEARN
What does it mean to say that a machine learns? The core idea of artificial intelligence sounds almost magical, but the reality as implemented today is quite mechanical, as Nello Cristianini explains – it’s all down to probability, statistics and a heck of a lot of data.
PLUS
p. 20 A timeline of AI
p. 22 What makes machine learning different?
p. 23 Data, data, data
p. 25 The power of deep learning
p. 27 What is a neural network?

CHAPTER 3
AI VS HUMANS
The story of artificial intelligence so far can be told through a series of episodes where machines have beaten us at our own game – literally. Games from chess to Go have proved excellent test beds for AI, offering rules-based challenges that are familiar to humans and mimic conditions of everyday life.
p. 30 Deep Blue 2 vs Garry Kasparov, 1997
p. 32 IBM Watson vs Jeopardy!, 2011
p. 34 AlphaGo vs Lee Sedol, 2016
p. 37 Libratus vs Texas hold’em poker, 2017
p. 38 AlphaGo Zero vs AlphaGo, 2017
p. 39 AI + human vs human, now and the future
PLUS
p. 35 INTERVIEW: Garry Kasparov “We don’t need to lose out to machines”


CHAPTER 4
FRONTIERS OF AI
Where do the limits of machine minds lie? Mathematician Marcus du Sautoy kicks off our survey of the cutting edge by discussing the potential for AI creativity, and strategy researcher Kenneth Payne closes it by looking at how AI will affect the conduct of war.
PLUS
p. 45 Who owns AI art?
p. 46 Appliance to science
p. 48 The promise of AI medicine
p. 52 INTERVIEW: Regina Barzilay “The real power comes when you put human and AI together”
p. 55 March of the intelligent robots
p. 58 The driverless car challenge
p. 60 Into fifth gear
p. 65 The military-AI complex

CHAPTER 5
AI AND SOCIETY
Artificial intelligence stands accused of all sorts of deleterious effects on society. As is often the case with AI, the questions start with data – should we be feeding it so much for free?
PLUS
p. 70 Fake news!
p. 71 The power of deepfakes
p. 72 When AI discriminates
p. 76 Big brother AI is watching you
p. 77 Do you think like a machine?
p. 78 Will AI steal our jobs...?
p. 80 ...and break the climate?
p. 82 Take back control
p. 83 Five commandments for AI
p. 85 Can algorithms be accountable?
p. 87 INTERVIEW: Iyad Rahwan “AI is a new type of agent in the world”

CHAPTER 6
WILL AI TAKE OVER?
Could superintelligences of our creation somehow turn against us or supersede us on Earth? That idea is a staple both of science fiction and discussions surrounding the future of AI. But while machines may well in the long run outsmart us, there are any number of reasons to believe they’ll never usurp us, argues Toby Walsh.



CHAPTER 1



The idea of artificial intelligence both thrills and disturbs. What
are we to make of devices that can understand and respond to our
voice commands, of super-powerful search engines that anticipate
what we want to know, of cars that can drive themselves?

Like any transformative new technology, AI brings both risks and opportunities. To truly understand what they are, we must first cut through the hype – and, as Peter Norvig sets out, get to grips with some very basic questions.



ARTIFICIAL intelligence is about
engineering machines that act
intelligently. That raises a vexing
question: what is “intelligent”?
In many ways, “unintelligent”
machines are already far smarter
than we are. But we don’t call a
computer program smart for
multiplying massive numbers or
keeping track of thousands of
bank balances; we just say it is correct.
We tend to reserve the word intelligent for
uniquely human abilities, such as understanding
and exploiting language, recognising a familiar face,
negotiating rush-hour traffic, or mastering how
to play a game or musical instrument. Developing
such abilities goes way beyond what conventional
computer programming can achieve. Traditionally,
a programmer will start off knowing what task they
want a computer to do. The knack in AI is getting a
computer to do the right thing when you don’t know what that might be.
Like our own intelligence, AI must deal with the uncertainties of the real world. That uncertainty takes many forms. It could be the unpredictable moves of an opponent trying to prevent you from reaching your goal, say. It could be that the repercussions of one decision do not become apparent until later – you might swerve your car to avoid a collision without knowing if it is safe to do so – or that new information becomes available during a task. An intelligent program must be capable of handling all this input and more. >

PROFILE
PETER NORVIG
Peter Norvig is director of research at Google, and previously director of its core search algorithms group. He is co-author of the classic textbook Artificial Intelligence: A Modern Approach.

→ Chapter 2 has more on the nuts and bolts of AI


To approximate human intelligence, a system must not only model a task, but also model the world in which that task is undertaken. It must sense its environment and then act on that, modifying and adjusting its own actions accordingly. Only when a machine can make the right decision in uncertain circumstances can it be said to be intelligent.
The roots of artificial intelligence predate the first computers by centuries. The Ancient Greek philosopher Aristotle described a method of formal, mechanical logic called a syllogism that allows us to draw conclusions from premises. One of his rules sanctioned the following argument: Some swans are white. All swans are birds. Therefore, some birds are white. That form of argument – Some S are W; All S are B; Therefore some B are W – can be applied to any S, W, and B to arrive at a valid conclusion, regardless of the meaning of the words that make up the sentence. According to this formulation, it is possible to build a mechanism that can act intelligently despite lacking an entire catalogue of human understanding.
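The mechanical nature of the rule is easy to demonstrate. Here is a minimal sketch in Python (not from the original text; the sets and the helper name are invented for illustration) that applies the pattern using nothing but set operations – the program has no idea what a swan or a bird actually is:

# Aristotle's rule applied mechanically: the conclusion follows from the
# pattern of the premises, not from the meaning of the words.
def syllogism(S, W, B):
    """Some S are W; All S are B; therefore Some B are W."""
    some_s_are_w = bool(S & W)
    all_s_are_b = S <= B
    if some_s_are_w and all_s_are_b:
        return bool(B & W)  # guaranteed True when the premises hold
    return None  # premises not satisfied; the rule says nothing

swans = {"mute swan", "whooper swan", "black swan"}
white_things = {"mute swan", "whooper swan", "snow"}
birds = swans | {"robin", "crow"}
print(syllogism(swans, white_things, birds))  # True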
Aristotle’s proposal set the stage for extensive enquiry into the nature of machine intelligence. It wasn’t until the mid-20th century, though, that computers finally became sophisticated enough to test these ideas. In 1948, Grey Walter, a researcher at the University of Bristol, UK, built a set of autonomous mechanical “turtles” that could move, react to light, and learn. One of these, called Elsie, reacted to her environment, for example by decreasing her sensitivity to light as her battery drained. This complex behaviour made her unpredictable, which Walter compared to the behaviour of animals.
In 1950, Alan Turing suggested that if a computer could carry on a conversation with a person, then we should, by “polite convention”, agree that the computer “thinks”. In the 1960s, most leading artificial intelligence researchers were confident that they would meet the goal of making a human-level thinking machine within a few decades. After all, aeronautic engineering had gone from the first jet aircraft to an astronaut on the moon in 30 years. Why couldn’t AI take off in a similar way?

↓ See the next article for more on Alan Turing’s pioneering AI contributions

The difference is that there are no simple formulas for intelligence; the discipline lacks its own F = ma or E = mc². By the 1980s, AI researchers realised that they had neither sufficient hardware nor knowledge to simulate everything a human can do, and the field fragmented. Instead of working towards a single human-equivalent computer intelligence, research groups splintered off to investigate specific aspects of the larger problem: speech recognition, computer vision, probabilistic inference – even chess.
Each of these subdisciplines saw significant successes. An early signature moment came in 1997, when IBM’s Deep Blue computer beat the world chess champion, Garry Kasparov. Deep Blue could evaluate 200 million chess positions per second in its search for the right move. This allowed it to quickly look ahead at many different sequences to see where they might lead.

→ Turn to chapter 3 for more on Deep Blue and other human vs AI contests

Deep Blue scored an impressive victory in a game that demands intellectual rigour. However, the machine had a very narrow range of expertise. It could win a game of chess, but it could not discuss the strategy it had employed, nor could it play any other game. No one would mistake its intelligence for human.
It might not be obvious, but you interact with AIs every day nonetheless. They route your phone calls, approve your credit card transactions, prevent fraud and automatically trade stocks in your mutual



fund. What’s more, they can help your doctor interpret test results. But you won’t think of these programs as having a human-like intelligence.

→ Chapter 4 discusses more cutting-edge applications of AI

More than half a century after the introduction of AI, however, three key developments could now augur the emergence of machine intelligence. New insights from neuroscience and cognitive science are leading to new hardware and software designs. The internet provides access to a vast store of global data. It may even be possible for AI to evolve on its own.
That feeds into the notion of the ultra-intelligent machine – one that can surpass human thinking on any subject – which was introduced in 1965 by mathematician I. J. Good, who worked with Alan Turing at Bletchley Park, the UK’s centre for coding and code-breaking during the second world war. Good noted that “the first ultra-intelligent machine is the last invention that man need ever make”, because from then on, the machines would be designing other, ever-better machines, and there would be no work left for humans to do.
Over the past decade or so, AI researcher Ray Kurzweil has further popularised this notion, calling it the technological singularity, or the tipping point at which ultra-intelligent machines so radically alter our society that we can’t predict how life will change afterwards. In response, some have fearfully predicted that these intelligent machines will dispense with useless humans – mirroring the plot of the movie The Matrix – while others see a utopian future filled with endless leisure. >

→ Turn to chapter 5 to read about the challenges AI poses to society today

A QUICK GLOSSARY OF AI

ARTIFICIAL INTELLIGENCE
Applying computers to tasks that normally require human-level intelligence, like reasoning, decision-making, problem-solving and learning

BIG DATA
The huge data sets that can be analysed by computers and algorithms to reveal patterns, trends and associations

MACHINE LEARNING
The capacity of an algorithm to learn from new information and modify its processing as a result, without being explicitly programmed to do so

NEURAL NETWORK
An algorithm used in deep learning that imitates the activity of layers of neurons in the brain, filtering data through tiers of virtual brain cells

DEEP LEARNING
The “black box” of AI. Unsupervised neural networks that create their own processing constraints as they learn from vast troves of training data


Focusing on these equally unlikely outcomes has distracted the conversation from the very real societal effects already brought about by the increasing pace of technological change. For 100,000 years, we relied on the hard labour of small bands of hunter-gatherers. A scant 200 years ago we moved to an industrial society that shifted most manual labour to machines. And then, just one generation ago, we made the transition into the digital age. Today much of what we manufacture is information, not physical objects – bits, not atoms. Computers are ubiquitous tools, and much of our manual labour has been replaced by calculations.
A similar acceleration is taking place in robotics. The robots you can buy today to vacuum your floor appeal mainly to technophiles. But within a decade there will be an explosion of uses for robots in the office and home. Some will be completely autonomous, others will be tele-operated by a human. Science fiction author Robert Heinlein predicted this development in 1942; remote human operators of his Waldo robots wore gloves that translated their every motion to make a distant robot perform tasks ranging from micro-surgery to maintenance.
Personally, I think that the last invention we need ever make is the partnership of human and tool. Paralleling the move from mainframe computers in the 1970s to personal computers today, most AI systems went from being standalone entities to being tools that are used in a human-machine partnership. Our tools will get ever better as they embody more intelligence. And we will become better as well, able to access ever more information and education. We may hear less about AI and more about IA, that is to say “intelligence amplification”. In movies we will still have to worry about the machines taking over, but in real life humans and their sophisticated tools will move forward together. ❚

→ Will AI ever overtake human intelligence? Turn to chapter 6 for more

TURING’S LEGACY

Alan Turing is feted as the father of computer science, as well as being a visionary thinker on artificial intelligence. He devised the Turing test, which is still the key gauge of how close machines have come to human intelligence, as well as publishing some prescient ideas about how to simulate the human brain with computers.



IT WAS in 1936, at the age of just 24, that Alan Turing laid the theoretical groundwork for the computing revolution. Up until that point, the word “computer” meant a person who did calculations either manually or with the help of a mechanical adding machine. These human computers were an essential part of the industrial revolution and performed often repetitive calculations, such as those necessary for the creation of books containing vast tables of logarithms.
Turing provided a recipe for how a machine could take over these sorts of tasks, by being provided with a set of internal rules and programmed with step-by-step procedures, or algorithms, to complete them. He also showed what such a computer could and could not be expected to do.
While the idea of this “Turing machine” underlies every modern computer’s basic workings, Turing was also intensely curious about another type of computing machine: the human brain. Unlike the Turing machine, the human brain apparently wasn’t restricted to just following preprogrammed rules, and its abilities evolved over time: it learned.
Turing believed that the infant brain could be simulated on a computer, and in 1948, while employed at the UK’s National Physical Laboratory, wrote a report arguing just that.
In it, he describes a model of the brain based on simple processing units – neurons – that take two inputs and have a single output. They are connected together in a random fashion to make a vast network of units. The signals, passing along interconnections equivalent to the brain’s synapses, consisted of 1s or 0s. Today this is called a “boolean neural network”; Turing called it an unorganised A-type machine.
The A-type machine could not learn anything, so Turing took things a step further. His B-type machine was identical to the A-type except that the interconnections between neurons had switches that could be “educated”. The education took the form of telling a switch to be on (allowing a signal to pass down the synapse) or off (blocking the signal).
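To make that concrete, here is a loose sketch in Python of a B-type-style network. The specifics – NAND-like units, the random wiring and the trial-and-error routine for flipping switches – are illustrative assumptions rather than Turing’s exact scheme, but they show how “educating” connection switches can steer a fixed network towards a target behaviour:

import random

random.seed(1)
N = 8  # number of two-input units
# Each unit reads the outputs of two randomly chosen units
wiring = [(random.randrange(N), random.randrange(N)) for _ in range(N)]
# Every connection carries an "educable" on/off switch
switches = [[True, True] for _ in range(N)]

def step(state):
    new = []
    for i, (a, b) in enumerate(wiring):
        x = state[a] if switches[i][0] else 1  # a blocked switch feeds a 1
        y = state[b] if switches[i][1] else 1
        new.append(0 if (x and y) else 1)      # NAND-like unit
    return new

target = [1, 0, 1, 0]  # desired outputs of unit 0 over four steps

def score():
    state, hits = [0] * N, 0
    for want in target:
        state = step(state)
        hits += (state[0] == want)
    return hits

best = score()
for _ in range(500):  # "education": flip a switch, keep it if no worse
    i, j = random.randrange(N), random.randrange(2)
    switches[i][j] = not switches[i][j]
    s = score()
    if s >= best:
        best = s
    else:
        switches[i][j] = not switches[i][j]  # revert the flip
print("matches", best, "of", len(target), "target outputs")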
The report was not published until 1968, years after Turing’s untimely death in 1954, in part because his supervisor, Charles Galton Darwin, described it as a “schoolboy essay”. Subsequently, it was shown that the simple binary-based neural networks he had conceived were indeed teachable, learning, for example, how to recognise simple patterns like the shapes of Os and Xs. They were a simpler version of the neural networks that form the architecture of many AI systems today.

→ Find out how today’s neural networks work on page 25

It was in 1950, however, that Turing made his most long-lasting contribution to the field of AI. His paper entitled “Computing Machinery and Intelligence” opens with the words: “I propose to consider the question, ‘Can machines think?’ ”.
It described what we now call the Turing test – although Turing himself referred to the method of determining whether a machine could be called intelligent or not as the Imitation Game. To this day, his test is a standard by which “intelligent” machines are judged, and it is remarkable in its simplicity and ingenuity.
It involves a judge communicating with both a human and a machine in written language, via a computer screen or teleprinter, so the judge can use only the conversation to assess the participants. If the judge cannot distinguish the machine and the human, the machine is deemed to be intelligent.
The concept will be familiar to anyone who has interacted with an AI such as Apple’s digital personal assistant, Siri, or a chatbot. Siri does not pass the test, and although chatbots may have fooled some individuals in recent years, none have passed the Turing test unequivocally.
In fact, the limitations of even the best modern AIs mean that they are quickly outed as machines. Turing imagined a day when AI would prove indistinguishable from the human form. Seventy years on, that day has not yet come. ❚



WHAT IS INTELLIGENCE, ANYWAY?

Intelligence has enabled humans to reach for the moon, cure disease and generally dominate this small blue dot of a planet. So if we grant machines intelligence, will it do all that? That depends on how we define the concept in the first place.

ARTIFICIAL intelligences of the sort that already exist probably wouldn’t fare badly on an IQ test. These tests are all about pattern finding and word matching – skills that the machine minds powering search engines, face-recognition technologies and the like are already getting rather good at.
But what about wisdom, social sensitivity, practical sense and other hard-to-measure qualities we might deem part of “intelligence”? “No single number captures the rich complexity of what it means to be intelligent,” says Rosalind Arden, who researches cognitive abilities at the London School of Economics.
Intelligence “reflects a broader and deeper capability for comprehending our surroundings – ‘catching on’, ‘making sense’ of things, or ‘figuring out’ what to do”, according to one oft-quoted attempt to define it. This ability to learn from experience and change behaviour accordingly lies at the heart of a concept called general intelligence.
Where does AI rank on this definition? We have developed many AIs adept at learning to solve particular problems – image recognition, say – so at first pass it would seem to tick a lot of the boxes. But in our unusually big and well-connected brains, general intelligence has morphed into special talents for abstract thinking, detailed forward planning,


understanding the minds of others and insight – those “aha!” moments when we connect cause and effect.
We shouldn’t get blown away by our supposedly superior abilities: we share virtually all our intelligence skills with at least some animals. An octopus’s ability to solve puzzles or an antelope’s talent for assessing the most nutritious grasses are examples. “Humans are limited by our size, our evolutionary history,” says Arden. “Actually we have the sort of intelligence that would have evolved for a species like us.”
It follows that our intelligence is unlikely to be the last word in grand, dot-joining thinking. Unfettered by biology’s constraints, is AI the next big thing?

→ Chapter 6 has more on the limits of AI

If it is, we’re still a way away from seeing it happen. “AI in its current version is about statistical machine learning, often from crowdsourced data,” says Ross Anderson at the University of Cambridge, UK. This type of AI processes information, identifies patterns in it, and assesses their relevance to goals defined by a human creator: beating a human at a game such as chess or Go, or setting someone’s insurance premium, or curating a Facebook feed and populating it with ads. The system’s response provides feedback on the AI’s action, which the AI uses to do a better job next time – perhaps just a microsecond later.
If that sounds boring, it is. But for boring tasks, AI is useful. Siting those adverts on your Facebook timeline is not something a human does well, even if they wanted to.
But on any reasonably intelligent definition of general intelligence, any human is far cleverer than any AI – as indeed is any octopus or antelope. The algorithms at the heart of AI systems today “learn” by altering their data-processing routines in ways that get a better result, given the goal. They don’t “know” anything afterwards in the way that you (hopefully) know more now than you did 5 minutes ago. Nor can they deliberately forget or accidentally misremember that knowledge as you can, or apply it in any way you choose – to inform someone else, make yourself look clever, or even just to decide you know enough to stop reading this article right now and go do something more interesting.
AIs in their current form have “weak” intelligence: the ability to do one thing really well. They don’t – yet – have emotional input about experiences, imagined futures and interactions with other AIs or humans. And they may never have these things. “In my view, the biggest misapprehension about AIs is that they will be something like human intelligence,” says philosopher Stephen Cave of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, UK. “The way they work is nothing like the human brain. In their goals, capacities and limitations, they will actually be profoundly different to us large-brained apes.” ❚



CHAPTER 2



On the question of whether machines can learn, at least one computing
pioneer knew well where she stood. “The Analytical Engine has no
pretensions whatever to originate anything,” said Ada Lovelace in 1843,
referring to the computing machine proposed by her collaborator Charles
Babbage. “It can do whatever we know how to order it to perform.”

Yet in 2016, a computer program developed just over a mile away from Lovelace’s house in London beat a master of the game Go. None of its programmers could have done that – or indeed even told you exactly how AlphaGo achieved this feat. What changed to make it possible? Nello Cristianini takes up the story.



IN THE summer of 1956, a remarkable
collection of scientists and engineers gathered
at Dartmouth College in Hanover, New
Hampshire. Among them were computer
scientist Marvin Minsky, information theorist
Claude Shannon and two future Nobel
prizewinners, Herbert Simon and John Nash.
Their task: to spend the summer months
inventing a new field of science called “artificial
intelligence”.
They did not lack in ambition, writing in their
funding application: “every aspect of learning or any
other feature of intelligence can in principle be so
precisely described that a machine can be made to
simulate it”. Their wish list was “to make machines use
language, form abstractions and concepts, solve kinds
of problems now reserved for humans, and improve
themselves”. They thought that “a significant advance
can be made in one or more of these problems if a
carefully selected group of scientists work on it
together for a summer”.
At the Dartmouth conference, and at various meetings that followed it, the defining goals for the field became clear: machine translation, computer vision, text understanding, speech recognition, control of robots and machine learning. The pioneers also expected that any breakthrough in AI would provide us with further understanding about our own intelligence. For the following three decades, significant resources were ploughed into research, but none of the goals were achieved.
As progress stalled, public investment dried up, and the field of machine intelligence underwent two successive slumps in the early 1970s and late 1980s, known as the AI winters. It was not until the late 1990s that many of the advances predicted in 1956 started to happen. Before this wave of success, the field had >

PROFILE
NELLO CRISTIANINI
Nello Cristianini is professor of artificial intelligence at the University of Bristol, UK, where he researches statistical approaches to AI and the implications of big data for society.



to learn an important and humbling lesson.
While the goals have remained essentially the same, the methods of creating AI have changed dramatically. The instinct of those early engineers was to program machines from the top down, and so solve the whole problem at once. They expected to generate intelligent behaviour by first creating a mathematical model of how we might process speech, text or images, and then by implementing that model in the form of a computer program, perhaps one that would reason logically about those tasks.
Over the years, it became increasingly clear that those systems weren’t suited to dealing with the messiness of the real world. Most engineers started abandoning the dream of a general-purpose top-down reasoning machine, focusing instead on specific tasks that were more likely to be solved.
Some early success came in systems to recommend products. While it can be difficult to know why a customer might want to buy an item, it can be easy to know which item they might like on the basis of previous transactions by themselves or similar customers. If you liked the first and second Harry Potter films, you might like the third. A full understanding of the problem was not required for a solution: you could detect useful correlations just by combing through a lot of data.
By the mid-2000s, success stories were piling up. It was becoming clear that, contrary to the assumptions of 60 years ago, we don’t need to precisely describe a feature of intelligence for a machine to simulate it. This experimental finding is sometimes called “the unreasonable effectiveness of data”. It was a very humbling and important lesson for AI researchers: that simple statistical tricks, combined with vast amounts of data, could deliver the kind of behaviour that had eluded the best theoreticians for decades.
At the same time, researchers were also forced to ditch the assumption that AI would provide us with further understanding of our own intelligence. Try to learn from an algorithm how a human performs a task and you are wasting your time: the intelligence is more in the data than in the algorithm. The field had undergone a paradigm shift and had entered the age of data-driven AI. Its language was no longer that of logic, but statistics.
How, then, can a machine learn? When I was growing up, my bicycle never learned its way home and my typewriter never suggested a word or spotted a spelling mistake. Mechanical behaviour was synonymous with being fixed, predictable and rigid. For a long time, a “learning machine” sounded like a contradiction, yet today we talk happily of machines that are flexible, adaptive, even curious.
In AI, we say that a machine learns when it changes its behaviour (hopefully for the better) based on experience. It sounds almost magical, but in reality the process is quite mechanical.

↓ What makes machine learning different? See page 22

To get a feel for how it works, consider the autocomplete function on your smartphone. If you activate this function, the software will propose possible completions of the word you are typing. How can it know what you were going to type? At no point did the programmer develop a model of your intentions, or the complex grammatical rules of your language.
Rather, the algorithm proposes the word that has the highest probability of being used next. It “knows” this from a statistical analysis of vast quantities of existing text. This analysis was done mostly when the autocomplete tool was being created, but it can be



“Statistical hacks can create
behaviour that looks intelligent”

augmented along the way with data from your own usage. The software can literally learn your style. The same basic algorithm can handle different languages, adapt to different users and incorporate words and phrases it has never seen before, such as your name or street. The quality of its suggestions will depend mostly on the quantity and quality of data on which it is trained. So long as the data set is sufficiently large and close in topic to what you are writing, the suggestions should be helpful. The more you use it, the more it learns the kinds of words and expressions you use. It improves its behaviour on the basis of experience, which is the definition of learning.
Note that a system of this type will probably need to be exposed to hundreds of millions of phrases, which means being trained on several million documents. That would be difficult for a human, but is no challenge at all for modern hardware.
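A toy sketch of the idea in Python (a real system would be trained on millions of documents and use far richer statistics than these invented sentences): count which word follows which, then suggest the most probable successor.

from collections import Counter, defaultdict

corpus = ("the cat sat on the mat the cat ate the fish "
          "the dog sat on the rug").split()
successors = defaultdict(Counter)  # word -> counts of what follows it
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def suggest(word, n=3):
    counts = successors[word]
    total = sum(counts.values())
    return [(w, round(c / total, 2)) for w, c in counts.most_common(n)]

print(suggest("the"))  # 'cat' comes out on top: it follows 'the' most often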
The next step up in complexity is a product recommendation agent. Consider your favourite online shop. Using your previous purchases, or even just your browsing history, the agent will try to find the items in its catalogue that have the highest probability of being of interest to you. These will be computed from the analysis of a database containing millions of transactions, searches and items. Here, too, the number of parameters that need to be extracted from the training set can be staggering: Amazon, the world’s largest online retailer, has hundreds of millions of customers and tens of millions of product lines.
Matching users to products on the basis of previous transactions requires statistical analysis on a massive scale. As with autocomplete, no traditional understanding is required – it does not need psychological models of customers or literary criticism of novels. Each of the basic mechanisms is simple enough that we might call it a statistical hack, but when we deploy many of them simultaneously in complex software, and feed them with millions of examples, the result might look like highly adaptive behaviour that feels intelligent to us. Yet, remarkably, the agent has no internal representation of why it does what it does.

→ Chapter 4 discusses cutting-edge AI technologies

The underlying algorithms can get more complicated. Online retailers keep track not just of purchases, but also of any user behaviour during a visit to the site. They might track information such as which items you have added to your basket but later removed, which you have rated and what you have added to your wish list. Yet more data can be extracted from a single purchase: time of day, address, method of payment, even the time it took to complete the transaction. And this, of course, is done for millions of users.
As customer behaviour tends to be rather uniform, this mass of information can be used to constantly refine the agent’s performance. Some learning algorithms are designed to adapt on the fly; others are retrained offline every now and then. But they all use the multitude of signals extracted from your actions to adapt their behaviour. In this way, they constantly learn and track our preferences. It is no wonder that we sometimes end up buying a different item from the one we thought we wanted.
Intelligent agents can even propose items just to see how you respond. Extracting information in this way can be as valuable as completing a sale. Online retailers act in many ways as autonomous learning agents, constantly walking a fine line between the exploration and exploitation of their customers. Learning something they did not know about you can be as important as selling something. To put it simply, they are curious.
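A minimal sketch of the “customers who bought X also bought Y” logic, with invented data – an illustration of the statistical principle, not any retailer’s actual algorithm. It even reproduces the Harry Potter example from earlier in the text:

purchases = {
    "ann":   {"harry potter 1", "harry potter 2"},
    "bob":   {"harry potter 1", "harry potter 2", "harry potter 3"},
    "carol": {"harry potter 2", "harry potter 3"},
    "dave":  {"cookbook"},
}

def recommend(user):
    mine = purchases[user]
    scores = {}
    for other, theirs in purchases.items():
        if other == user:
            continue
        overlap = len(mine & theirs)       # how alike are our histories?
        for item in theirs - mine:
            scores[item] = scores.get(item, 0) + overlap
    return max(scores, key=scores.get) if scores else None

print(recommend("ann"))  # 'harry potter 3'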
Consider now that these nuts and bolts of >



A TIMELINE OF AI
While the idea of AI has been around for the best part of 70 years, in the first
few decades progress was halting and slow. It was only in the late 1980s with
the discovery of a new, bottom-up paradigm of machine learning, based on
algorithms capable of drawing statistical inference from large amounts of data,
that AI began to find its way. Just a few years later, the advent of the world
wide web began to provide a ready, ever-growing source of that data.

1950
Alan Turing publishes the seminal paper “Computing machinery and intelligence”. Its opening sentence is “I propose to consider the question, ‘Can machines think?’ ”

1956
The term “artificial intelligence” is coined at a workshop at Dartmouth College

1959
Computer scientists at Carnegie Mellon University create the General Problem Solver, a program that can solve logic puzzles

1973
The first “AI winter” sets in as progress stalls, and funding and interest dry up

1975
A system called MYCIN diagnoses bacterial infections and recommends antibiotics using deduction based on a series of yes/no questions. It was never used in practice

1987
Second AI winter begins

1988
IBM researchers publish a paper entitled “A statistical approach to language translation”, setting out a new way to use data, rather than rules, to determine programming outcomes

1989
NASA’s AutoClass program uses this probabilistic approach to discover several previously unknown classes of stars in telescope data



1991
The world wide web is launched into the public domain

1996
First version of Google’s PageRank web search algorithm launched

1997
IBM’s Deep Blue beats world champion Garry Kasparov at chess

1998
NASA’s Remote Agent is first fully autonomous program to control a spacecraft in flight

2002
Amazon replaces human product recommendation editors with an automated system

2005
Five autonomous vehicles complete the DARPA Grand Challenge, a 200-km off-road race in the Mojave desert

2006
Facebook opened to the general public

2006
Google launches Translate, a statistical machine translation service

2009
Google researchers publish an influential paper called “The unreasonable effectiveness of data”. It declares that “simple models and a lot of data trump more elaborate models based on less data”

2011
Apple integrates Siri, a voice-operated personal assistant that can answer questions, make recommendations and carry out instructions such as “call home”, into its operating system

2011
IBM’s supercomputer Watson beats two human champions at TV quiz game Jeopardy!

2012
Google’s driverless cars navigate autonomously through traffic

2016
Google’s AlphaGo defeats Lee Sedol, one of the world’s leading Go players

2018
Google’s Waymo spin-off launches a limited commercial autonomous taxi service in Phoenix, Arizona



machine learning can be applied to many parts of the same system at the same time: a search engine might use them to learn how to complete your queries, best rank the answers for you, translate a document among the search results and select which ads to display. And this is just on the surface.
Unknown to users, the system will probably also be running tests to compare the performance of different methods by using them on different random subsets of users. This is known as A/B testing. Every time you use an online service, you are giving it a lot of information about the quality of the methods being tested behind the scenes. All this is on top of the revenue you generate for them by clicking on ads or buying products.
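A toy simulation of an A/B test makes the mechanism plain – here the two variants’ underlying click-through rates are invented, and the code simply checks that random assignment recovers the better one:

import random

random.seed(42)
true_rate = {"A": 0.30, "B": 0.35}  # each variant's real click-through rate
clicks = {"A": 0, "B": 0}
views = {"A": 0, "B": 0}

for user in range(10_000):
    variant = random.choice(["A", "B"])  # assign users to random subsets
    views[variant] += 1
    if random.random() < true_rate[variant]:
        clicks[variant] += 1

for v in ("A", "B"):
    print(v, round(clicks[v] / views[v], 3))  # B should come out ahead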
→ Chapter 5 has a thorough discussion of the implications of data-driven AI for society

While each of these mechanisms is simple enough, their simultaneous and constant application on a vast scale results in a highly adaptive behaviour that looks intelligent to us. Using the same or similar statistical techniques, in multiple parts of a system and at various scales, computers can now learn to recognise faces, transcribe speech, translate text from one language to another and answer questions. According to some online dating companies, they can even find us potential love matches. Integrated into larger systems, those can power products and services ranging from Siri and Amazon Echo to autonomous cars.
Nonetheless, every time we understand one of the mechanisms behind AI, we cannot help feeling a little cheated. AlphaGo, for example, learned its winning strategies by studying millions of past matches and then playing against various versions of itself for millions of further matches. An impressive feat. But such AI systems generate adaptive and purposeful behaviour without needing the kind of self-awareness that we like to consider the mark of “real” intelligence.
Would Lovelace dismiss their suggestions as unoriginal? Possibly, but while the philosophers debate, the field keeps moving forward. ❚

WHAT MAKES MACHINE LEARNING DIFFERENT?

With traditional computer programs, the machine gets line-by-line instructions. With machine learning, however, the computer must work out how best to solve the problem. The result is a machine that essentially programs itself.
Imagine instructing a robot to make soup. The conventional approach would be to write out a precise recipe for SoupBot to follow: first peel the onion, then cut the onion. But a SoupBot based on machine learning would instead work out what to do on its own, perhaps by watching thousands of videos of people making soup and trying to come up with its own soup-like recipe, or by attempting to make soup again and again and learning from feedback on the results of each attempt.
In the case of SoupBot, the conventional approach would be most efficient. But simple recipes don’t exist in many scenarios. There isn’t one for recognising words in a sound recording, say, or for verifying a face to unlock a phone. And this is where machine learning comes into its own. By working out how to quickly spot patterns in vast amounts of data, an AI can master exceedingly complex tasks.



Many of the AI systems we interact with day-to-day have a feature in common –
they “understand” written or spoken human language. It’s a clever feat, to be sure,
and it’s an example of how AI feigns intelligence it doesn’t really have. At its heart
lies probability, statistics and...

DATA, DATA, DATA


BACK in 2008, over 90 per cent of all
emails sent and received were spam.
Over the following decade, that
proportion dropped by half, standing
at just over 45 per cent in 2018. It just
wasn’t so worthwhile sending
unsolicited emails advertising dubious
products and services any more –
thanks to the steady improvement in
AI-driven spam filters preventing
them from ever reaching users’ inboxes.
Spam filters, like many other AI technologies, got
their first fillip from user-generated data. Individual
email users provided the gold standard by labelling
messages in their inboxes as “spam” or “not spam”,
laboriously dragging junk emails into dedicated
folders. Spam filters used this information to break
down each message into features that identify an email
as spam: individual words or phrases, the time of day
the message was sent, or the computer that sent it.
Over time this data builds up into a global database
of what constitutes spam. That’s when the secret sauce
of probabilistic reasoning kicks in. Suppose an
incoming email contains the phrases “lowest prices”
and “discreet packaging”. The AI will refer to global
statistics that tell it that these phrases appear in 8 per


cent and 3 per cent of spam, respectively, but only in
0.1 per cent and 0.3 per cent of legitimate messages.
After making some assumptions about the
independence of features – and applying a formula >



“Probabilistic, data-driven
AI is a powerful paradigm”
called Bayes’ rule, which assesses the probability of one event happening based on observations of associated events – it concludes that the message is 99.9 per cent likely to be spam, and it deposits it straight into the junk-mail folder.
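As a minimal sketch, that calculation can be reproduced in a few lines of Python, using the frequencies quoted above and assuming – purely for illustration – that the features are independent and that spam and legitimate mail are equally likely beforehand:

p_given_spam = 0.08 * 0.03   # "lowest prices" and "discreet packaging" in spam
p_given_ham = 0.001 * 0.003  # the same phrases in legitimate mail
p_spam = p_given_spam / (p_given_spam + p_given_ham)
print(f"{p_spam:.1%}")       # 99.9%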
But the most important thing spam filters do is update their models over time based on experience. Every time a user corrects a mistake, perhaps by rescuing a legitimate message from the junk-mail folder, the system updates itself to reflect the new situation: it “learns”.
Programmers do not need to specify step-by-step instructions for identifying a spam message. Nor is any deep understanding required on the part of the algorithm – just the ability to count the frequencies of words. The software architects need only build a general learning system and expose it to examples of spam and genuine emails. Probabilistic reasoning does the rest.
A further powerful example of this statistical, data-driven AI paradigm is machine translation. On any given day, Google now translates more text than all the professional human translators in the world decipher in a year. Google’s tool can translate with reasonable competence between over 100 languages, from Afrikaans to Zulu.
Again, probabilistic reasoning lies at its core. In the early days, linguists built translation systems based on bilingual dictionaries and codified grammar rules. But the inflexibility of such rules, compared with the flexibility with which humans generally use language, proved a major stumbling block. For example, adjectives come after the noun in French and before the noun in English – except when they don’t, as in the phrase “the light fantastic”.
In the last decade or so, translation has shifted from rules laid down by humans to guidelines automatically learned from real examples. The text to be translated is first broken into words or phrases, each of which is then compared with a table of available translations and their probabilities of being accurate. There will be multiple ways to divide up the source sentence, and multiple ways to translate each phrase. The problem is how to select from among these possible translations and assemble them into a sentence in the target language that is both grammatical and an accurate translation.
This is computationally hard, but modern computers can do it. All that is required for such calculations is stored in two tables containing phrases and probabilities – the first one providing a statistical model of the target language, and the second a list of all phrases in the source language and their possible translations. That is where all the knowledge of the system lies. Change some entry, and the system will behave differently. Improve the estimation of probabilities, and it will boost its performance. No understanding of the text is needed, just statistical patterns.
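The shape of that calculation can be sketched in a few lines – this is emphatically not Google’s system, just a toy with an invented two-entry phrase table and a stand-in language model that prefers English adjective–noun order:

from itertools import product

# Invented phrase table: source phrase -> (translation, probability)
phrase_table = {
    "chat": [("cat", 0.9), ("chat", 0.1)],
    "noir": [("black", 0.8), ("dark", 0.2)],
}

def lm_score(words):
    # Stand-in for a statistical model of the target language:
    # strongly prefers the adjective-before-noun English word order
    return 2.0 if words == ["black", "cat"] else 1.0

source = ["chat", "noir"]
best, best_score = None, 0.0
for choice in product(*(phrase_table[w] for w in source)):
    words = [w for w, _ in choice]
    for order in (words, words[::-1]):  # consider reordering the phrases
        score = lm_score(order)
        for _, p in choice:
            score *= p  # weight by the translation probabilities
        if score > best_score:
            best, best_score = order, score
print(best)  # ['black', 'cat']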
A key aspect of machine translation is the computer-human partnership. Modern machine translation systems start by gathering millions of documents from across the internet that have already been translated by humans. These include vast data sets such as the proceedings of the European Parliament, which are translated into 24 languages.
But the vast majority of the data exploited by such human-interfacing AIs is user-generated. How does Google know how you might want to complete the search query you are typing into its search bar, or how does it automatically correct your spelling to search for a mistyped query term? Simply because it knows, by statistically aggregating billions of user interactions before you, that this is what you are likely to want.
Challenging? Yes. Still prone to error? Yes. Teaching us something about how human intelligence understands or uses language, or extracts meaning from the world? Hardly. ❚



THE POWER OF DEEP LEARNING

Statistical analysis of past interactions to anticipate present behaviour is one thing. Dealing with novel situations is another. Who can you recommend a brand new book to, say, or what should an autonomous car do when it detects an unexpected movement on the road in front of it?

The key is pattern recognition, a fundamental part of intelligent behaviour, for machines as well as humans. That leads to the paradigm at the heart of most modern AI: deep learning, a machine-learning technique that, implemented on architectures called neural networks, attempts to mimic how the human brain works.

THE need to generalise and find patterns in data poses a whole host of new challenges for those developing machine-learning systems. For a start, how do you choose the right features that your system should light upon to make decisions?
This is one of the most critical problems in machine learning. Take the example of a book-recommendation algorithm. What does it mean to say that two items are “similar”, or both relevant to a given user? We could describe a book by the number of pages, the language it is written in, the topic, the price, the date of publication, the author, even some index of its readability. For a customer, useful descriptors might include age, gender or location. But how do we match the germane features?
This problem becomes even more critical when we handle complex items such as images. If you compare two passport photos of yourself taken one minute apart, they will not be identical at the level of raw pixels. A computer would interpret them as very different images, despite their being nearly identical to a human eye.
Now consider two photos of yourself taken in very different situations, with different lighting, positioning and background, say, or taken years apart. How can a machine learn to identify the features that remain invariant despite differences in the raw data, and so >

recognise two images as being of the same thing? Or, given a set of photos, how would you identify all the ones of a football match? A programmer could write an algorithm to look for typical features like goalposts, but it’s a lot of work.
In general, this sort of complex pattern recognition can’t be programmed directly. In response, software engineers have above all resorted to a specific machine learning technique known as deep learning. While this sounds exotic, it is actually another form of the data-driven approach, using big data to adjust millions of parameters. It relies on a programming architecture called neural networks, and it has more than a whiff of the original conception of AI mimicking the human brain put forward by Alan Turing and others. By training the network on many instances of relevant data, it tweaks its own parameters in response to “right” answers, and so eventually manages to spot patterns itself (see “What is a neural network?”, below).

← Turn back to page 10 for more on Alan Turing’s contributions to AI

This is an incredibly powerful technique that can be applied in all sorts of situations. A robot vacuum cleaner, for example, can be trained by being shown thousands of examples of humans vacuuming rooms along with the relevant sensor inputs. By strengthening the relevant connections, feedback loops are created by which a neural network vacuum can then eventually learn which patterns of inputs correspond to which actions, so that it can clean the room by itself. The challenge of making truly autonomous vehicles is just a scaled-up version of the same problem, with far more unpredictable inputs – and the stakes potentially much higher if things go wrong.

→ Turn to page 58 for more on the challenges of programming driverless cars

Neural networks have been around since the 1940s and 1950s, but only recently have they started to have much success. The change of fortunes is due to the huge rise in both the amount of data we produce and the amount of computer power available.
These AIs require anywhere from thousands to millions of examples to learn how to do something. But now millions of videos, audio clips, articles, photos and more are uploaded to the internet every minute, making it much easier to get hold of suitable data sets – especially if you are a researcher at one of the large technology companies that hold information about their customers. Processing these data sets and training AIs with them is a power-hungry task, but processing power has roughly doubled every two years since the 1970s, meaning modern supercomputers are up to the task.
AlphaGo, the AI created by Google-owned DeepMind to play the ancient Chinese board game Go, and which beat the human champion Lee Sedol in 2016, is perhaps the most famous deep-learning AI. It had no strategies directly programmed into it, not even the rules of the game. But after viewing thousands of hours of human play, and then refining its technique by playing against itself, AlphaGo became the best Go player in the world.

→ You’ll find the full story of AlphaGo on page 34



With modern hardware and giant data sets, neural networks deliver the best performance in certain perceptual tasks involving pattern recognition, most notably in vision and speech. They are involved in many of your interactions with your smartphone or any large internet company. “The first one we had was in Android phones in 2012 when they put in speech recognition,” says Yoshua Bengio of the University of Montréal in Quebec, Canada. “Now all the major speech recognition software uses them.”
In a few short years neural networks have overtaken established technologies to become the best way to automatically perform face recognition, read and understand text and interpret what’s happening in photographs and videos. Google uses the cutting-edge technologies originally developed by DeepMind in many of its products. Other internet companies like Facebook also have troves of data ripe for a neural network to analyse: billions of photos of faces, if tagged accurately, can be used to train powerful face recognition systems. Wearables from Fitbits to the Apple Watch all feed data into AI models that can recognise healthy behaviour such as regular exercise, or gauge walking speed.
The hallmark of these successes is that the actions of ordinary users train the networks for free. Whenever we use the internet or a smartphone, we are almost certainly contributing data to a deep learning system, one probably relying on neural networks that our data helped train in the first place.
Just like our brains, however, deep learning is deeply mysterious. Once the network is up and running, not even its creators can know what it is doing – a largely unforeseen problem that, as AI assumes ever-more decision-making powers within computer systems, researchers are increasingly having to grapple with. ❚

→ Page 85 has more on efforts to ensure “algorithmic accountability”

WHAT IS A NEURAL NETWORK?

Although originally based on a loose analogy with the network of connections between neurons in the human cortex, a neural network is not a physical network in any sense. Rather, it is a complex mathematical object, a suite of algorithms constructed in such a way that, rather like the human brain, it can recognise patterns in data and so “learn” from it.
The basic units – the nodes or “neurons” of the network – are algorithm routines specialised in recognising some aspect of the data the network is designed to process. In an image recognition network, for example, lower-level neurons detect simple features – straight lines, say – and feed that information to higher-level neurons so that increasingly complex properties can be detected. In that way, the network can begin to find features such as the edges of objects, then move on to recognising objects themselves, and even activities – a ball, a field and players indicating a football match, for instance. In a training phase, the network’s answers are compared with a human one, and the gap between the two fed back, allowing the network to tweak the weightings between its connections until it regularly gets the right answer.
The process of adaption and improvement with training is crucial. With every example of input data the system sees – and sometimes there are billions – the network tweaks the pattern and strength of its connections to reflect the new information, in a similar way to how neurons in the brain reinforce connections when learning something new.
A programmer need adjust only the number of nodes and layers to optimise how it captures relevant features in the data. However, since it’s impossible to tell exactly how a neural network does what it does, this tweaking is a matter of trial and error.
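The training loop the box describes – compare the network’s answers with the right ones, feed the gap back, tweak the weightings – can be written down very compactly. This sketch (a standard toy example, not any production system) trains a tiny two-layer network to reproduce the XOR function:

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR: the "human" answers

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # input -> hidden weights
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # hidden -> output weights
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(10_000):
    h = sigmoid(X @ W1 + b1)    # hidden-layer activations
    out = sigmoid(h @ W2 + b2)  # the network's current answers
    gap = out - y               # compare with the right answers
    # feed the gap back and tweak the weightings between connections
    d_out = gap * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out; b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_h;   b1 -= d_h.sum(axis=0)

print(out.round(2).ravel())  # should approach [0, 1, 1, 0]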

CHAPTER 3



The story of AI so far can be told through a series of episodes where machines have beaten us at our own game – literally.

Games such as chess and Go are excellent test beds for machine
learning, offering challenges that are familiar to humans and
mimic conditions of everyday life. Generally, there’s been only
one winner. Just ask Garry Kasparov.

Deep Blue 2 vs Garry Kasparov, 1997

This match remains seared on many minds as the first time a silicon mind beat the best human competitor at an iconic intellectual pursuit. Ultimately, however, this was a victory more for computing brawn than brain.

“I’M NOT afraid to say that I’m afraid,” said Garry Kasparov, the world chess champion, after the fifth game of his six-game 1997 series against Deep Blue 2. The match between man and machine was all square at 2½ points apiece as Kasparov spoke to the chess aficionados and world press assembled at the Equitable Center on New York’s Seventh Avenue.
The statement raised to an extraordinary pitch the tension surrounding the final game the next day. Kasparov was fiery, flamboyant and a merciless chess competitor, and people loved him for it. When he had first taken on Deep Blue, his current opponent’s first iteration, in a six-game match the year before, he had an historically unprecedented rating. After a wobble losing the first game, to few people’s surprise Kasparov won the series 4-2.
But in the second game of the 1997 rematch, catastrophe had overtaken the best chess mind of his era. Flustered by his opponent’s relentless play, Kasparov had needlessly resigned when he could have forced a draw, leaving the series poised on a knife edge. You could forgive him a little fear.
And then it came. On his seventh move of the sixth game, Kasparov played the wrong pawn in a well-known opening, and Deep Blue 2 was quick to spring a trap from which there was little escape. Kasparov’s face on the big screen in the auditorium stared in horror. Then he buried his head in his hands. After Deep Blue 2 had played its 19th move, Kasparov resigned the game, and the match. It was the shortest losing game of his career.
This had been a battle between two different ways of playing chess: search and evaluation. Search is about following different possible lines of play: what your opponent might do if you do X, and what your responses to that might be. This method produces a “tree” of possibilities that gains exponentially in complexity the further you attempt to look into the future. Evaluation, on the other hand, is about recognising patterns and weighing up the comparative merits of different ones. A pattern is any feature, simple or complex, that can be spotted with a glance at the chessboard. This might be the material value of a position – a rook, for example, is worth about one bishop plus one pawn – or geometrical patterns indicating future threats and opportunities.
Equitable Center on New York’s Seventh Avenue. Human minds are limited when it comes to search:
The statement raised to an extraordinary pitch the even leading chess players can only compute a few
tension surrounding the final game the next day. branches in the tree of future possibilities, and only as
Kasparov was fiery, flamboyant and a merciless chess far as a few moves ahead. But a human mind honed by
competitor, and people loved him for it. When he had years of practice is highly skilled at evaluation. Studies
first taken on Deep Blue, his current opponent’s first have shown that chess grandmasters have a databank
iteration, in a six-game match the year before, he had of around 100,000 patterns in their heads that they can
an historically unprecedented rating. After a wobble recognise and interpret as they look at the board and
losing the first game, to few people’s surprise Kasparov use to drive their play forward, much as a writer can
won the series 4-2. assemble learned words, phrases or whole sentences to
But in the second game of the 1997 rematch, carry a story to its denouement.
catastrophe had overtaken the best chess mind of his For Kasparov’s machine opponent, things were
era. Flustered by his opponent’s relentless play, exactly the other way round. Deep Blue had little
Kasparov had needlessly resigned when he could have capacity for evaluation. It was simply programmed
',6-77)%60)

forced a draw, leaving the series poised on a knife edge. with the rules of chess, plus a few hundred simple
You could forgive him a little fear. patterns, and routines that included ways to recognise
And then it came. On his seventh move of the sixth the safety of the king and to check how much room >
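The split between search and evaluation maps directly onto the textbook minimax algorithm. The Python sketch below is an illustration of that idea only, not IBM's code: legal_moves, play and evaluate are hypothetical game-specific functions you would have to supply, and a real engine adds pruning and dedicated hardware on top.

def minimax(position, depth, maximising):
    # Search: follow each possible line of play down to a fixed depth
    moves = legal_moves(position)              # placeholder, game-specific
    if depth == 0 or not moves:
        # Evaluation: fall back on pattern-based scoring at the leaves,
        # e.g. material counts where a rook ~ a bishop plus a pawn
        return evaluate(position)              # placeholder, game-specific
    if maximising:
        return max(minimax(play(position, m), depth - 1, False) for m in moves)
    return min(minimax(play(position, m), depth - 1, True) for m in moves)

def best_move(position, depth=6):
    # Deep Blue-style choice: pick the move whose subtree evaluates best
    return max(legal_moves(position),
               key=lambda m: minimax(play(position, m), depth - 1, False))

The exponential blow-up the article describes is visible here: each extra level of depth multiplies the work by the number of legal moves, which is why raw processor speed mattered so much to Deep Blue.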

30 | New Scientist Essential Guide | Artificial Intelligence


Chapter 3 | Humans vs AI | 31
IBM Watson vs Jeopardy!, 2011

Unlike the rules of chess, the rules of human language are complex – or non-existent. Understanding human communication is an impossible ask for a computer that follows hard-programmed rules alone. But exploit statistics, probability and a lot of data, and you don't need rules. That was the secret of AI's next big leap forward, a bot that was a bit of a whizz at a quiz.

WHEN, in 2007, IBM first suggested training a machine to play the US quiz show Jeopardy!, many artificial intelligence researchers were doubtful the company would succeed. Jeopardy! is unusual as a quiz show in that contestants are given answers and required to supply the right question. For example, if the host says that "This cigar-smoking prime minister led Britain during the second world war", a contestant might reply "Who is Winston Churchill?".

Understanding Jeopardy! requires an ability to parse human language, and culture, in all its bewildering complexity. Clues can involve puns and clues-within-clues, with topics ranging from pop culture to technology. Competing demands an encyclopaedic knowledge and an ability to understand complex

32 | New Scientist Essential Guide | Artificial Intelligence


“Watson once named
Wonder Woman as the
first woman in space”
verbal cues that, to cap it all, don't necessarily contain all the information needed to produce the right answer.

Watson, named after IBM's founder Thomas Watson, approached this challenge armed with just a few rules and a huge amount of memory and processing power. Its programmers loaded 200 million pages of text from encyclopaedias, newspapers and other sources into its memory, and added custom-built stores of data on geographic facts and other types of information. They then divided the controlling program into a committee of about 100 subprograms, each in charge of its own specialised area.

Watson started by using basic grammatical rules to work out the likely subject of a clue. Knowing, for example, that the word "this" often precedes the subject of a sentence, and faced with a clue beginning "This 19th-century novelist…", it could infer it needed to access its memory bank pertaining to writers.

From that point on, it was just probabilistic reasoning. Further facts given in the clue, such as dates, could be compared with data in its sources to narrow down the search and weight potential answers with probabilities. Watson was also programmed with rules to interpret word-play, such as puns. Faced with a question about an "arresting landscape painter", Watson looked up meanings of "arresting" and checked for connections with the names of famous landscape painters. Linked answers – in this case, "Constable" – got a higher confidence score. Once it had parsed all the available evidence, and one possibility stood out as having the highest weighting, Watson buzzed in to answer the question. Naturally this all happened in a fraction of a second.

Watson was of a piece with the probabilistic machine-learning technologies that were by then just beginning to make waves in other areas, for example in filtering out spam or translating automatically between languages. By combining abilities for parsing language, generating hypotheses and refining them on the basis of further evidence, Watson leapt ahead of rival AI question-and-answer systems then being developed – and human knowledge recall capabilities in the process.

←
Turn back to page 23 for more applications of probabilistic processing to real-world problems

In its first iteration, Watson remained quite a blunt instrument, however. It could still find many simple questions baffling. It relied heavily on finding text that looked like the right answer to a question in its sources, and so missed out on information that was too obvious to have been written down. It was also caught out by thinking fictional characters were real, once naming the first woman in space as "Wonder Woman".

Algorithms that sift through large amounts of text (or, more recently, imagery) looking for particular cues and connections hum on in the background in many everyday applications today – they form the basis of Google's search engine or Facebook's newsfeed curation, for example. Watson's first commercial application came in 2013, sifting through patient records and the medical literature to support clinical decisions in lung cancer treatment at the Memorial Sloan Kettering Cancer Center in New York. That was an early example of one of the most hopeful areas of AI research today – using statistical inference and pattern-spotting to improve medical diagnosis and treatment. >

→
A fuller discussion of AI applications in medicine begins on page 48
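That weighting step can be caricatured in a few lines of Python. The sketch below is a loose illustration of the general recipe (generate candidates, score them against independent pieces of evidence, buzz only when one stands out), not IBM's actual pipeline; all the candidates and numbers are invented.

# Each candidate answer collects confidence scores from independent
# evidence checks: dates match, word-play links, source-text overlap...
candidates = {
    "Constable": [0.9, 0.7, 0.8],  # "arresting" pun and a landscape painter
    "Turner":    [0.1, 0.7, 0.6],  # landscape painter, but no pun link
    "Sheriff":   [0.9, 0.1, 0.2],  # pun link, but not a painter
}

def combined_confidence(scores):
    # Treat each score as an independent chance the answer is right;
    # combine via the probability that not every check is wrong
    p_all_wrong = 1.0
    for s in scores:
        p_all_wrong *= (1.0 - s)
    return 1.0 - p_all_wrong

ranked = sorted(candidates, key=lambda c: combined_confidence(candidates[c]),
                reverse=True)
best = ranked[0]
if combined_confidence(candidates[best]) > 0.95:  # buzz only when confident
    print("Who is", best + "?")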

Chapter 3 | Humans vs AI | 33
AlphaGo vs Lee Sedol, 2016

The triumphs of Deep Blue 2 and Watson in their respective games both pushed the boundaries of machine learning at the time. But it was still clear how their programming and their algorithms allowed them to achieve their victories. Not so with AlphaGo. None of its programmers could have replicated its feat, or even understood how it did what it did. This announced the arrival of an entirely new paradigm of AI: deep learning.

CREATED over 2500 years ago, Go is a challenge for the nimblest of minds. Its 19 by 19 board allows for 10¹⁷¹ possible layouts, dwarfing the roughly 10⁵⁰ possible configurations on a standard 8 by 8 chess board. "Go is probably the most complex game ever devised by man," says Demis Hassabis, founder of DeepMind, the Google-owned company that created AlphaGo.

AlphaGo's five-game match with human Go champion Lee Sedol was held in the swanky Four Seasons hotel in the heart of downtown Seoul in March 2016. Lee was a national hero in his native South Korea, known for his brashness and unconventional and creative play. Hundreds of reporters from around the world were in attendance, and the match was televised live across Korea, China and Japan.

Lee flashed his swagger at a press conference preceding the match, predicting he would win in a "landslide". He rowed back a little later after watching DeepMind's representatives explain the principles of AlphaGo's neural network and deep-learning algorithm, and how it employed a special sort of search method, known as Monte Carlo Tree Search, to compute possible scenarios and anticipate its opponent's moves. Then there were the hundreds of training hours it had put in, observing games to learn how others played Go. Lee admitted to being "quite nervous", and backed off from his prediction of a 5-0 victory.

←
Turn back to page 25 for the nuts and bolts of how deep learning works

It didn't start well: Lee lost the first game. Many commentators felt this was because he had tried to play unconventionally to disrupt AlphaGo's dependence on learning from previous games. In the second game, however, it was AlphaGo that tore up the rule book. Having made the 36th move, Lee had retired for a quick cigarette break. Not requiring the same stimulation, AlphaGo thought a while and then asked its human representative to place a black stone on the line five steps in from the edge of the board. Conventional wisdom says that during the early part of a Go game you play stones only on the outer four lines, and so prepare the ground for an assault on the central part of the board later.

This now-infamous "move 37" turned on its head all that humans thought they knew about how to play Go. Commentators at the time thought it was a mistake – but it won the game. In the event, Lee won just one game of the match, with AlphaGo wrapping up the other four. "It was so hard to watch," said Andrew Jackson of the American Go Association, who commentated on the games. "He just got steamrolled."

But though its programmers could explain the outline of how they had told AlphaGo to learn to play, no one could explain how it had come to the game-changing conclusions it had. This was AI as a black box.

Continued on page 37 >

→
Do AIs such as AlphaGo demonstrate true creativity? See page 42

→
Turn to page 85 to learn more about the problems of accountability that black-box AI brings
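Monte Carlo Tree Search itself can be sketched generically. The Python below is a bare-bones illustration under stated assumptions: it presumes a hypothetical game object offering legal_moves, play, is_over and winner methods, it glosses over flipping the result's perspective between the two players, and AlphaGo's real version couples the search to its neural networks rather than to the random playouts used here.

import math, random

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.visits, self.wins = [], 0, 0.0

def ucb(node):
    # Balance exploiting strong moves against exploring untried ones
    return (node.wins / node.visits +
            math.sqrt(2 * math.log(node.parent.visits) / node.visits))

def mcts(root_state, game, iterations=1000):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend towards a promising leaf using UCB
        while node.children and all(c.visits for c in node.children):
            node = max(node.children, key=ucb)
        # 2. Expansion: add a child node for each legal move
        if not node.children and not game.is_over(node.state):
            node.children = [Node(game.play(node.state, m), node)
                             for m in game.legal_moves(node.state)]
        if node.children:
            fresh = [c for c in node.children if c.visits == 0]
            node = random.choice(fresh or node.children)
        # 3. Simulation: play out randomly to the end of the game
        state = node.state
        while not game.is_over(state):
            state = game.play(state, random.choice(game.legal_moves(state)))
        result = game.winner(state)
        # 4. Backpropagation: credit every node on the path with the result
        while node:
            node.visits += 1
            node.wins += result
            node = node.parent
    return max(root.children, key=lambda c: c.visits)  # most-explored move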

34 | New Scientist Essential Guide | Artificial Intelligence


INTERVIEW: GARRY KASPAROV

“WE DON’T
NEED TO
LOSE OUT TO
MACHINES”
Garry Kasparov took a long time to get over
his 1997 defeat by Deep Blue – but he’s since
learned to embrace artificial intelligence

PROFILE
GARRY KASPAROV

Born in Baku in present-day Azerbaijan in 1963, Garry Kasparov is widely considered the greatest chess player of all time. He became the world's youngest chess champion in 1985 at the age of just 22, and was the world's number one for 255 months, more than twice as long as his nearest competitor.

In 2017, you wrote a book, Deep Thinking: Where machine intelligence ends and human creativity begins, exploring your defeat by Deep Blue. Was writing it a cathartic experience?
Yes, absolutely. There were many questions about my rematch with Deep Blue that I had avoided asking, and the answers weren't always pleasant. I'd never analysed those six games in depth using modern chess computers. I discovered that Deep Blue didn't play very well either – at least not as well as we all believed at the time – and this made me feel even worse about how my terrible psychological state during the match led to my loss. Of course, it was only a matter of time before Deep Blue or another machine defeated me, but I played embarrassingly below my level.

If you could go back in time and take the draw in the second game of the rematch, would you?
Once a move is made it cannot be retracted, and if >

Chapter 3 | Humans vs AI | 35

I had a time machine, I'm sure I could think of better uses for it. That match was such an anomaly, it has taken years for me to even attempt to draw lessons from it.

What are the broader lessons from your defeat?
As the proverbial man in man-versus-machine I feel obligated to defend humanity's honour, but I'm also a realist. History has spoken: for nearly any discrete task, including playing chess, machines will inevitably outstrip even human-plus-machine. AI is hitting us in a huge wave, so it is time to embrace it and to stop trying to hold on to a dying status quo.

What did you make of the 2016 match when AlphaGo beat the human Go champion Lee Sedol?
AlphaGo played some genuinely unusual moves, strong moves that a top human would never consider. It doesn't surprise me that there is room for this in a game as long and subtle as Go, where an individual move is worth less than in chess. It's even possible that entirely new ways of playing Go will be discovered as the machine gets stronger. It's also likely that humans won't be able to imitate these new strategies, since they depend on the machine's unique capabilities.

You called Deep Blue the end and AlphaGo the beginning. What did you mean?
Chess was considered a perfect test bed for cognition research but it turned out the world chess champion could be beaten while barely scratching the surface of artificial intelligence. I'm sure some things were learned about parallel processing and the other technologies Deep Blue used, but the real science was known by the time of the 1997 rematch. AlphaGo was an entirely different thing. Deep Blue's chess algorithms were good for playing chess very well. The machine-learning methods AlphaGo uses are applicable to practically anything.

What does that mean for the wider world?
AI is clearly booming as a technology, but there's no way to know what part of the curve we are in. Periods of rapid change are turbulent and confusing, and we are seeing the social apprehension that comes with a wave of automation, even more so because robots and algorithms are moving in on jobs that require college degrees. Of course, real dangers and human anguish come with the AI wave, and we can't be callous toward those caught in the turbulence. But it's easy to focus on the negative things because we see their impact much more clearly, while the new jobs and industries of the future can't be imagined so easily. I think we'll be surprised, as we have been throughout history, by the bright future that all these amazing tools will help us build, and how many new positive trends appear.

Computer scientist Larry Tesler once said "intelligence is whatever machines haven't done yet". Where are the next targets?
Where aren't they? The biggest public impact might be felt in medical diagnosis. This is an area that doesn't require 100 per cent or even 99.99 per cent accuracy to be an improvement on human results. You wouldn't trust a self-driving car if it was only 99 per cent accurate. But human doctors are only 60 or 70 per cent accurate in diagnosing many things, so machine or human-plus-machine hitting 90 or 99 per cent will be a huge improvement. As soon as this is standard and successful, people will say it's just a fancy tool, not AI at all, as Tesler predicted.

→
See chapter 4 for more on AI applications today

What happens if AI, high-tech surveillance and communications are sewn up by the ruling class?
Ruling class? Sounds like Soviet propaganda! New tech is always expensive and employed by the wealthy and powerful even as it provides benefits and trickles down into every part of society. But it seems fanciful – or dystopian – to think there will be a harmful monopoly. AI isn't a nuclear weapon that can or should be under lock and key; it's a million different things that will be an important part of both new and existing technology. Like the internet, created by the US military, AI won't be kept in a box. It's already out.

Will handing off ever more decisions to AI result in intellectual stagnation?
Technology doesn't cause intellectual stagnation, but it enables new forms of it if we are complacent. Technology empowers intellectual enrichment and our ability to act on our curiosity. With a smartphone, for example, you have the sum total of human knowledge in your pocket and can reach practically any person on the planet. What will you do with that incredible power? Entertain yourself or change the world? ❚

36 | New Scientist Essential Guide | Artificial Intelligence


< Continued from page 34

Libratus vs Texas hold'em poker, 2017

AlphaGo's 2016 victory showed how deep-learning techniques could conquer complex games such as Go. Poker was an altogether different prospect. Not least, it required a machine with a crucial skill for dealing with the real world: knowing what it didn't know.

MIDWAY through a 2015 poker competition at the Rivers casino in Pittsburgh, Pennsylvania, one player seemed to lose the plot. Opponents watched baffled as Claudico risked large amounts of money with weak cards, or raised the stakes aggressively to win a handful of chips – and then suddenly appeared passive, dithering over decisions and avoiding big wagers.

Yet despite never having set foot in a casino before, Claudico had played more poker games than all of the other players put together. Created by Tuomas Sandholm and his team at the nearby Carnegie Mellon University, it had learned poker by playing billions of hands against itself.

Human ingenuity still won the Brains vs Artificial Intelligence contest back in 2015 – by a whisker. It didn't take long for the machines to get the edge. In 2017 Claudico's successor, called Libratus, took on four of the world's best poker players separately head to head at the same casino. After 120,000 hands over 20 days, Libratus won with a lead of over $1.7 million in chips.

A poker-proficient AI was a remarkable breakthrough. As in other games, Libratus had to deal with a huge number of potential futures: even in a two-player version of poker with limited bet sizes, known as heads-up limit poker, there are 3.16 × 10¹⁷, or 316 million billion, potential situations that could come up. But in poker, unlike in chess or Go, players don't know what cards their opponents have. Brute-force computation alone is never going to win the day. A poker-playing AI has to take into account how its opponent is playing and rework its approach so it doesn't give away when it has a good hand or is bluffing. "It's a really important milestone for artificial intelligence," says Georgios Yannakakis at the University of Malta. "The real world is a game of imperfect information, so by solving poker we become one step closer to general artificial intelligence."

At the heart of the bot's success was a technique devised by researchers at the University of Alberta in Edmonton, Canada, called counterfactual regret minimisation. "Regret" here refers to the difference between the expected pay-off of any action and the potential pay-off had it acted differently. The technique involves the AI tweaking its strategy over billions of hands, lowering its overall regret until it is as small as possible. In doing so, it homed in on an optimal "equilibrium" strategy that is in essence unbeatable. Regardless of who it faced and how it played, it would not lose money in the long run.

As with AlphaGo's surprise moves, it turned out the tactics the AIs developed often diverged from those that humans would naturally consider winning approaches. They use a broader range of stakes than human players generally employ, from tiny bets to huge raises. "Betting $19,000 to win a $700 pot just isn't something that a person would do," was how one of Claudico's human opponents at the Brains vs AI contest put it. They rarely raise the stakes to the limit, even for the best possible hand, and they play a broader range of hands than a human might, choosing occasionally to play weak cards rather than fold.

One explanation is that human minds have to make simplifications when dealing with the complexity of a game like poker. Rather than considering all possible moves, we mentally bunch similar situations together. We do the same in daily life: we round numbers up or down to the nearest ten or hundred, or use stereotypes to categorise people. This process of abstraction makes the world easier to handle, but means we can lose out to people who are using a better approximation of the world than we are. Machine minds don't need it.

That approach could benefit us elsewhere, thinks Sandholm. "There are applications in cybersecurity, negotiations, military settings, auctions and more," he says. His lab has also been looking at how the machines can bolster the fight against infectious disease, by viewing treatment plans as game strategies to be optimised, even if your information about the infection's next move is imperfect. "You can learn to battle diseases better even if you have no extra medicines at your disposal – you just use them smarter," he says.

But Libratus also illustrates the limits of AI as it currently stands. It is the world's best heads-up no-limit Texas hold 'em poker player, but ask it "why Texas?" – or where or what Texas – and it wouldn't even be able to process the question. An algorithm trained to do one thing is next to useless at doing something else. That ambition to create an artificial general intelligence that can perform any task the human brain can remains just that – an ambition.
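The core update inside counterfactual regret minimisation, known as regret matching, is compact enough to show in full. The Python sketch below applies it to rock-paper-scissors rather than poker, as an illustration under that simplifying assumption: two copies of the algorithm play each other, accumulate regret for the actions they didn't take, and their average strategies converge on the unexploitable equilibrium of a third each.

import random

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors
N = 100000

def payoff(a, b):
    return 0 if a == b else (1 if (a - b) % 3 == 1 else -1)

def strategy_from(regrets):
    # Play each action in proportion to its positive accumulated regret
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1.0 / ACTIONS] * ACTIONS

regrets = [[0.0] * ACTIONS, [0.0] * ACTIONS]
avg = [[0.0] * ACTIONS, [0.0] * ACTIONS]

for _ in range(N):
    strat = [strategy_from(regrets[0]), strategy_from(regrets[1])]
    moves = [random.choices(range(ACTIONS), weights=s)[0] for s in strat]
    for p in range(2):
        me, them = moves[p], moves[1 - p]
        got = payoff(me, them)
        for alt in range(ACTIONS):
            # Regret: how much better would the alternative have done?
            regrets[p][alt] += payoff(alt, them) - got
        for a in range(ACTIONS):
            avg[p][a] += strat[p][a]

print([round(a / N, 3) for a in avg[0]])  # approaches [0.333, 0.333, 0.333]

Poker's vastly larger tree of hidden-information states means the real algorithm tracks regret at every decision point rather than for one simultaneous choice, but the principle of shrinking regret towards zero is the same.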

Chapter 3 | Humans vs AI | 37
AlphaGo Zero vs AlphaGo, 2017

When you've beaten the humans at their own game(s), there's only one logical next step: beat yourself at your own game. Barely a year after AlphaGo had left Lee Sedol in the dust, that's just what the DeepMind team did, showing that you didn't even need data to build a world-beating AI.

IF YOU want to beat 'em, you must first join 'em. That was the principle the first AI gamers followed: they were trained, at least initially, through observing thousands of instances of how humans played the games.

Not so AlphaGo Zero. The zero stood for "zero data": AlphaGo Zero was programmed simply with the rules of Go, and then started playing at random against itself, learning what worked. Three days and 4.9 million such games later, it took on AlphaGo. In a 100-game grudge match, it won 100-0, becoming the world's best game-playing AI.

"Humankind has accumulated Go knowledge from millions of games played over thousands of years," the DeepMind programmers wrote in their paper announcing the breakthrough. "In the space of a few days… AlphaGo Zero was able to rediscover much of this Go knowledge, as well as novel strategies that provide new insights into the oldest of games."

For example, the AI learned many different josekis – sequences of moves that result in no net loss for either side. Initially AlphaGo Zero used many familiar ones written down during the thousands of years Go has been played, but as its self-training continued, it started to favour previously unknown sequences.

Self-training is a huge step forward. "With this approach you no longer have to rely on getting expert quality human data," says David Churchill at Memorial University, Canada. That opens up whole new fields to AI where, unlike Go, good-quality, easily comparable data are hard to come by. Climate science, drug discovery, protein folding and quantum chemistry are just some of the fields DeepMind is investigating. "In 10 years, I hope that these kinds of algorithms will be routinely advancing the frontiers of scientific research," says Demis Hassabis, the firm's CEO.

→
See page 46 for more on future scientific applications of AI

Yet there are drawbacks too. For an AI to learn by itself, it needs to be programmed with the rules of the world it inhabits. That works for worlds with clear and simple rules, but would quickly become impossible for more complicated tasks such as autonomous driving. There, the future may still lie in AI augmenting human skills.
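The zero-data recipe, rules plus self-play plus reinforcement, can be shown in miniature. The Python sketch below is a toy analogue only, nothing like AlphaGo Zero's neural-network-and-search machinery: it learns a value for every noughts-and-crosses position purely by playing against itself and nudging the positions on the winning side upwards.

import random

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

values = {}  # learned estimate of how promising each position is for X

def value(board):
    return values.get(board, 0.5)  # unknown positions start out neutral

def self_play(epsilon=0.1, rate=0.2):
    board, player = "." * 9, "X"
    history = [board]
    while winner(board) is None and "." in board:
        moves = [i for i, s in enumerate(board) if s == "."]
        after = lambda m: board[:m] + player + board[m + 1:]
        if random.random() < epsilon:
            m = random.choice(moves)  # occasionally explore something new
        elif player == "X":
            m = max(moves, key=lambda m: value(after(m)))  # X seeks high value
        else:
            m = min(moves, key=lambda m: value(after(m)))  # O seeks low value
        board, player = after(m), ("O" if player == "X" else "X")
        history.append(board)
    w = winner(board)
    outcome = 1.0 if w == "X" else 0.0 if w == "O" else 0.5
    # Reinforce: nudge every visited position towards the final result
    for b in history:
        values[b] = value(b) + rate * (outcome - value(b))

for _ in range(20000):
    self_play()
print(len(values), "positions valued purely through self-play")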

38 | New Scientist Essential Guide | Artificial Intelligence


AI + human vs human, now and the future

For all the startling, innovative potential that game-playing AIs have shown us, the emphasis on "beating" humans creates a misleading impression. Winning just provides a benchmark that the machine is really good at performing a particular task. As we apply the lessons learned to areas outside gaming, it's becoming clear that the real benefits could come when AI supports and extends human expertise.

LIKE other human champions facing a machine opponent, Grzegorz "MaNa" Komincz rated his chances. "A realistic goal would be 4-1 in my favour," he told an interviewer before the match. One of the world's best players of the video game StarCraft II, Komincz was at the height of a successful esports career. Artificial intelligence company DeepMind invited him to face its latest AI, a StarCraft II-playing bot called AlphaStar, in December 2018.

Komincz was expected to be a tough opponent. He wasn't. After being thrashed 5-0, he was less cocky. "I wasn't expecting the AI to be that good," he said. "I felt like I was learning something."

Many others at the receiving end of an AI thrashing say the same – and used the experience to get even better. Working together, humans and AIs can bounce ideas back and forth, each guiding the other to better solutions than would be possible alone. "It will be an amazing extension of thought," says Anders Sandberg from the Future of Humanity Institute at the University of Oxford, UK.

Games that AI has conquered provide a wealth of examples. All professional players now practise with chess computers, for example. These tend to play defensive games, so the style of top players has become more defensive too. After losing to AlphaGo, European Go champion Fan Hui trained against the AI and boosted his global ranking from 600 to 300 in just a few months. As they practise against flawless AIs, the top poker players are increasingly adopting the equilibrium-like strategies they favour – and profiting as a result.

The surprises keep on coming, too. In a series of games in 2017 and 2018, a further development of DeepMind's game-playing AIs, AlphaZero, beat Stockfish, one of the best chess computers in the world. Unlike Stockfish, AlphaZero plays an aggressive game, often sacrificing pieces early on if this helps it achieve its goals. "AlphaZero just goes for the attack straight away," says Natasha Regan, who has represented the UK at both Go and chess. The AI is more like a maverick human player than a typical chess computer, she thinks, which makes it a more fascinating tutor.

The result is a new kind of software that displays what looks very much like creativity and – whisper it – intuition. Go players who compete with AlphaGo often remark on that. "They expected it to play in a way that was perhaps dull but efficient and instead there was real beauty and inventiveness to the games," says David Silver at DeepMind.

→
The next chapter has more on AI creativity

It's a product largely of the deep "reinforcement" learning that now underpins all these AIs. In a process of trial and error, successes, such as winning a game of Go, are rewarded, and the appropriate connections within the AI's neural network become reinforced, which in turn reinforces a specific behaviour. In particular, where the AI learns a game purely by playing itself, as with AlphaZero – the "zero" stands for zero input – it is untainted by human bias. It simply picks up its own ways of doing things. "AlphaZero discovers thousands of concepts that lead to winning more games," says Silver. "To begin with, these steps are quite elementary, but eventually this same process can discover knowledge that is surprising even to top human players." The great hope of AI is that this could become true in other areas, too, and so supplement and extend human expertise across the board. ❚

Chapter 3 | Humans vs AI | 39
CHAPTER 4

40 | New Scientist Essential Guide | Artificial Intelligence


From the sprawling algorithms used by tech giants such as Google, Facebook
and Amazon to myriad smaller examples, AI is all around us. In areas from
cracking scientific puzzles to assisting medical decisions, from driving
autonomous vehicles to waging warfare, its possibilities seem boundless.

That raises profound questions not just about what the limits of machine
minds are, but what we want them to be. Marcus du Sautoy kicks off our
exploration by asking the question, can AI ever be truly creative?

Chapter 4 | Frontiers of AI | 41
IN OCTOBER 2018, a portrait of Edmond Belamy
sold at Christie’s in New York for $432,500, nearly
45 times its maximum estimated price. Nothing
that out of the ordinary, perhaps. Except Belamy
didn’t exist. He was the fictitious product of the
artist’s mind – and the mind that created him
wasn’t even human.
Ever since the 1840s, when Ada Lovelace
became obsessed with the possibility that
Charles Babbage’s Analytical Engine, a proposed
mechanical computer, could do more than simple
computations, we have been contemplating the idea
that it isn’t just biological life that may be creative.
Recognising that music is an art form similar to
mathematics in its manipulation of pattern, Lovelace
speculated “the engine might compose elaborate and
scientific pieces of music of any degree of complexity or extent".

It is fairly easy to discount or at least qualify many claims of AI creativity today. Much of what they are doing involves little more than data science and statistical number-crunching, and requires a lot of human intervention. That isn't to say that there aren't some striking examples of AI potentially demonstrating true creativity. Take the unexpected move 37 of the second game in the titanic battle of Go between the human champion Lee Sedol and the DeepMind algorithm AlphaGo in March 2016. Human competitors have since aped AlphaGo's tactic to establish a competitive advantage. The AI's discovery taught the world a new way to play an ancient game. >

←
Turn back to page 34 for more on the context of "move 37"

PROFILE
MARCUS DU SAUTOY

Marcus du Sautoy is a mathematician and the Simonyi professor for the public understanding of science at the University of Oxford, UK. He is the author of numerous books, including The Creativity Code: How AI is learning to write, paint and think.

42 | New Scientist Essential Guide | Artificial Intelligence


Chapter 4 | Frontiers of AI | 43
For me, this clears some of the key hurdles AI must leap if it is to be deemed truly creative. A basic definition of a creative act might be one that is new, surprising and has value. A computer can easily be programmed to produce novel outputs, but those second two criteria are more challenging. Who is being surprised, and how does one decide value?

Move 37 certainly surprised the Go experts, and ultimately it had value: it won the game. It is easier to determine value in a game situation than in other creative spheres, however. An AI's worth is generally judged by its ability to solve problems, but creating art isn't a problem-solving activity. The monetary value of the Edmond Belamy portrait came about partly because it was created by an AI, not through some independent assessment of its artistic value.

AI art: Portrait of Edmond Belamy, sold for $432,500 in 2018

The AlphaGo story demonstrates another way AI can help to create a sort of value. It comes not so much in making machines that act like creative humans, but in stopping creative humans behaving like machines. We can get terribly stuck in our ways of thinking. An AI's unprejudiced exploration of the terrain, meanwhile, can sometimes reveal new pinnacles of achievement. You may be at the top of Snowdon, thinking you have reached the ultimate height, but that is because you don't know Mount Everest exists.

Many of the examples of music created by AI are still stuck in the foothills, comprising poor pastiches of Mozart or Beethoven's works. But there are examples where code has helped us cross the valley to more interesting peaks. The Continuator, a jazz improvisor designed by François Pachet, director of the Spotify Creator Technology Research Lab, provides another example of how AI can help us escape the straitjacket of creative conventions.

When musicians improvised with the Continuator, they were amazed. The algorithm was passing a kind of musical Turing test, responding in a way indistinguishable from a human improvisor. And its responses weren't simply a mash-up of what had gone before: the musicians could hear the Continuator playing things that were recognisably connected with their style of performance, but which they never thought possible.

Although novelty, surprise and value are three key components to measure if AI is being creative, I think a fourth element must also be introduced if we are going to herald real creativity in AI: originality of a truly independent nature. Lovelace herself raised it all those years ago as she wrote about Babbage's machine: "It is desirable to guard against the possibility of exaggerated ideas that might arise as to the powers of the Analytical Engine. The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform." Ultimately, she believed, you couldn't get more out of the machine than you had put in.

This raises a crucial question: how much of any AI's "creativity" is the creativity of the human coder rather than the code? Cameras heralded a new age of human creativity that can be seen displayed today in art museums across the world. But no one assigns the camera any part in the act of creativity.

But that is an imperfect analogy. Humans are machines running the instructions of a code, our DNA. Our parents are responsible for that code, yet we don't regard ourselves as a mere vessel for the creativity of our parents. Part of how a child differentiates itself from its parents comes from its unique interaction with its environment. This interaction also shapes our creative abilities.

And that is exactly what is happening with AI. Machine learning allows code to change, mutate and update itself based on its interaction with new data: inputs from the environment. In creative terms, the potential result is shown by a work from the artist Ian Cheng displayed at the Serpentine Gallery in London in

44 | New Scientist Essential Guide | Artificial Intelligence


March 2018. He started off with six artificial life forms, all called BOB, and all written with the same code. But the parameters of each BOB's code mutated depending on interactions with gallery visitors. At the end of the show, after months of different interactions, the six BOBs were very different beasts.

This shift from top-down to bottom-up coding gives code the chance to assert an independence from its architect. You could argue that Cheng was still the ultimate creative author, because he was the one who gave the code the possibility to evolve. But as the decisions made by code based on its interactions with the environment become harder and harder for its programmer or others to rationalise and explain, this standpoint becomes more questionable.

This captures a quality of creativity that perhaps has got lost in modern definitions of the term: that it is about capturing our attempts to understand being in the world. Machine learning taps into this earlier take on creativity, in that its output is an original expression of a machine's interactions with an emerging digital world.

But something fundamental is still missing: intentionality. What is driving the AI to blurt out a creative product? A human. Someone presses the print button, a person often chooses which algorithmic outputs to put in front of another human being. In that sense, AI doesn't display the same intentionality in creativity as humans do. Can it ever?

To answer that question, we must first ask what drives our own urge for artistic creation. And for me, that is bound up with the hard problem of consciousness: the difficulty of explaining the true nature of felt experience in ourselves and other sentient beings. I can't prove it, but I wonder whether true creativity and consciousness emerged at the same time in the human species. Perhaps only when we had consciousness did we start to wonder what was going on in the minds of others and want to share our own internal worlds – and begin to express ourselves creatively. If so, I think that true creativity in machines will only happen when they have a conscious world they want to convey to us.

I suspect that moment will be reached, but probably only in the distant future. When it does arrive, however, machine consciousness is likely to be very different from our own. And it will be AIs' acts of artistic creativity that will be the best vehicle for accessing the strange world of what it is to be a conscious machine. ❚

WHO OWNS AI ART?

AI artwork renders questions of ownership and copyright in a fuzzy hue. Who owns the output: the person who built the algorithm, the person who fed it data so it could recognise and mimic patterns, or the person who selected the specific output?

The consensus among artists and lawyers seems to be that an AI is something akin to a word processing program: if nobody types into one, there is no essay. By extension, anyone who feeds data into an AI owns the resulting work, and the AI's creators have no claim to the outputs. "It can seem like the machine is doing everything, but without an artist to actually collect the data, the algorithm has no agency," says AI artist Janelle Shane.

It doesn't stop there. The data sets used to train AI generally consist of images, songs or pieces of writing that may be copyrighted themselves. Should the owners of these works have a claim on the output? Some argue that the training process is akin to inspiration. "When musicians create music, they listen to music," says Andres Guadamuz, a lecturer in intellectual property law at the University of Sussex, UK. "In some ways, feeding a machine learning algorithm music or images is almost the same." Then again, if an AI is fed all of Dr Seuss's children's books, say, and produces outputs that are very similar to the originals, what then? For AI, it is a thin line between inspiration and plagiarism.

More advanced AIs could raise a thornier question: can an algorithm claim copyright on its own creations? Current laws suggest not. For example, the US Copyright Office only recognises "the fruits of intellectual labor" that "are founded in the creative powers of the mind".

Yet even allowing any human to claim computer-generated artwork as their own intellectual property, as copyright law in the UK, India, New Zealand, South Africa and elsewhere currently allows, is not entirely unproblematic. It could end up being a serious problem if powerful AIs drive the cost of creating art essentially down to zero. "You could get an AI to produce a million different songs, copyright them all and put them on some shelf, and just wait until somebody else writes the song and sue them," says artist Mario Klingemann. "I don't know if that's the future that we want to have." The lawyers will be rubbing their hands.

Chapter 4 | Frontiers of AI | 45

46 | New Scientist Essential Guide | Artificial Intelligence


APPLIANCE TO SCIENCE

Using AI smarts to create art is one thing, but what about science? The scientific enterprise is a unique mixture of creative thinking and hard graft. Often, the route to penetrating new insights is blocked by an impenetrable thicket of data. Might AI help bulldoze it to one side?

ALPHAFOLD is an AI from Google-owned DeepMind, the same stable as game-playing AIs such as AlphaGo. It has a far loftier aim, however: to apply the same deep-learning smarts to working out the intricate structures of proteins.

A better understanding of how proteins work would help us control everything from disease to food production. But a protein's function is determined by its unique, complex tangled structure, formed by the intricate folding together of its constituent amino acids. To untangle things, researchers currently rely on laborious, expensive structure-determination methods that work for just a few proteins.

Starting from genetic information about the constituent amino acids, AlphaFold learns to make predictions about what parts of the different amino acids are in contact with one another. And it works. In a head to head contest with other software in 2018, it accurately predicted the form of 24 out of 43 proteins whose structures had been previously determined. Its nearest competitor got just 14.

←
Turn back to page 25 for more on how deep learning works

Other computationally difficult areas of science could also profit from such an approach. The Large Hadron Collider particle smasher at CERN near Geneva, Switzerland, collides bunches of protons up to tens of millions of times a second. Already, machine-learning algorithms and their pattern-recognition abilities are being used to zoom in more closely on collisions of particular interest.

Or take questions surrounding the quantum behaviour of large numbers of electrons. Crack this problem and we could identify and construct new forms of matter, from ultra-efficient batteries to new superconductors to wonder-drug medical treatments. Doing the maths for just tens of electrons is like searching for a needle in a near-infinite haystack: there just isn't enough time to grind out an exact solution, no matter how large your processor. Just as an AI like AlphaGo "solved" the fiendish complexities of Go, so too might an AI identify patterns in the data that allow it to short-circuit brute-force calculations.

Quantum physicist Roger Melko at the University of Waterloo in Canada is one physicist who has seized the opportunity. "We basically started adopting these industry standard machine-learning algorithms, and just throwing our problems at them," he says. His neural network has already learned how to tell a metal from an insulator just by reading the raw mathematics of the quantum system.

Even mathematics itself could benefit. Yang-Hui He at City, University of London, and his colleagues are using machine learning to better understand the Birch and Swinnerton-Dyer conjecture, one of the seven fiendishly difficult Millennium Prize Problems. It describes solutions to equations known as elliptic curves that are used in cryptography. In 2019, the AI spotted statistical patterns in the distribution of 2.5 million curves that no human mathematician had spotted.

If neural networks reach a point where they can bypass such previously intractable scientific problems, would anything be left for human physicists and mathematicians? Plenty, as it turns out. An AI might unravel a protein structure for us, or tell us the existence of a new particle, a new state of matter or a new theorem in mathematics. But it can only, as yet, help us move around territory we've already staked out. When it comes to entering new realms of discovery, human ingenuity still leads the way. ❚
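Strip away the deep learning and the prediction task AlphaFold tackles has a simple shape: for each pair of amino acids, estimate the probability that they touch in the folded protein. The Python sketch below is a drastically simplified stand-in for illustration only, with invented features and synthetic data; AlphaFold's real inputs and network are far richer.

import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Invented toy features for one residue pair: chain separation and a
# fake "co-evolution" signal. Real inputs are far more elaborate.
def features(sep, signal):
    return [1.0, sep, signal]  # the leading 1.0 is a bias term

# Synthetic data for illustration: a strong signal and a short
# separation along the chain make contact more likely
data = []
for _ in range(2000):
    sep, signal = random.random(), random.random()
    label = 1 if signal - 0.5 * sep + random.gauss(0, 0.1) > 0.3 else 0
    data.append((sep, signal, label))

w = [0.0, 0.0, 0.0]
for _ in range(50):  # simple logistic-regression training loop
    for sep, signal, label in data:
        x = features(sep, signal)
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
        for i in range(3):
            w[i] += 0.05 * (label - p) * x[i]  # nudge towards the truth

test = features(0.1, 0.9)  # a nearby pair with a strong signal
print("P(contact):", round(sigmoid(sum(wi * xi for wi, xi in zip(w, test))), 2))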

Chapter 4 | Frontiers of AI | 47
THE PROMISE
OF AI MEDICINE
In hospitals and doctors’ surgeries, the reliable, unflappable power of machine
minds has great potential in enabling more accurate diagnosis and more
appropriate and efficient care. But while AI medical apps are already available,
there are still great hurdles to overcome before this can become a fully fledged
revolution: issues of trust, accountability and data privacy.

STIFF neck, headache, tingling in your fingers. You list your symptoms, answer a few questions about how long they've lasted and whether they seem to be getting worse. Then, without ever leaving home or queueing at the clinic, you get the diagnosis: a strained neck. Or, at least, eight out of 10 people with those symptoms have one.

This isn't some future scenario. The app Ada, which offered up this diagnosis, boasts it has completed 15 million medical assessments worldwide since it was launched in 2016. It took six years for 100 data scientists to train the artificial intelligence behind Ada, using real medical records. With each case it sees, it gets a bit smarter.

Health insurers and healthcare providers see a bright future for AI-based medical apps. "A machine cannot be negligent," says Ali Parsa, founder of the company behind another app, Babylon. "It doesn't get stressed. It doesn't get hungry. It does exactly what it is meant to every single time."

And it's kicking at an open door. According to the US Institute of Medicine, something like one in 10 medical diagnoses is wrong. Such errors contribute to as many as 80,000 unnecessary deaths each year in the US alone.

The tech underlying the apps knits together several strands of AI: the ability to process natural language, including people describing their symptoms; the ability to trawl vast databases of the world's medical knowledge in an instant; and deep-learning software trained to spot correlations between millions of different complaints and conditions.

Deep-learning networks have outperformed doctors at diagnoses as varied as melanomas and diabetic retinopathy, a complication of diabetes that damages blood vessels in the eye. Other AI tools can identify cancers from CT scans or MRIs, or even predict from data about general health which people may have a heart attack. The US Food and Drug Administration has already approved over a dozen AI algorithms.

For all that, there are plenty of reasons to proceed carefully. "I'm bullish about the ability of AI to do >

48 | New Scientist Essential Guide | Artificial Intelligence


Chapter 4 | Frontiers of AI | 49
good," says Amol Navathe at the University of Pennsylvania. "But it's harder than people think."

For a start, easier testing does not necessarily mean better outcomes. "We need to be careful about overdiagnosis," says Constance Lehman at Harvard Medical School, whose team has created a deep-learning system that outperforms doctors at analysing mammograms. A 2013 study of breast-cancer screening, for instance, concluded that for every life saved, 10 women had unnecessary treatment and 200 suffered years of needless stress. Then there's the potential for bias in the AI's training data. If a medical imaging AI is trained on scans of only people of European descent, say, they are unlikely to work for other people – and such biases are a common problem in medical data sets.

It doesn't help that deep learning AIs are black boxes, making it difficult to tell why they reach the conclusions they do. An early, large-scale academic project in the US provides a case study. It used AI to sift through vast quantities of past data and so predict how likely individuals with symptoms of pneumonia were to die of it. Despite producing generally better results, the system routinely miscategorised those who also had asthma as being in less danger, advising they shouldn't be sent to hospital. Statistically this is true, but the logic was the reverse: the better survival statistics came from the greater medical attention and more intensive treatment those patients received from human doctors recognising the severity of their condition.

Such instances are one reason why app developers say they take pains to involve doctors in refining their software. "But we're finding when the doctors and machine disagree, the doctors are wrong as often as the machine," says Parsa. Part of the point, too, is that AIs should work independently of human biases and assumed knowledge.

→
Turn to page 85 for more on making AI accountable for its decisions

Done right, deep-learning AIs can also potentially do far more than just diagnose conditions, says Eric Horvitz, a medical AI researcher at Microsoft. "The hard part is managing diseases, figuring out therapies over time and tracking progress," he says. New, more detailed algorithms should help doctors better understand the progression of chronic conditions such as diabetes, arthritis, hypertension and asthma that are often costly to treat. In mental healthcare, AI apps are already providing cognitive behavioural therapy online, for example for NHS patients in the UK.

There are still hurdles to overcome, however. How to provide medical AIs with the huge data sets they need while protecting patient confidentiality is one huge unresolved issue. The advent of electronic medical records has also ushered in stringent regulations, such as the HITECH Act in the US and the Data Protection Act in the UK. In 2016, however, New Scientist discovered that the NHS had shared patient data with Google DeepMind, a deal the UK Information Commissioner's Office subsequently found "failed to comply with data protection law".

For medical AI to truly take off, even more rapid and

50 | New Scientist Essential Guide | Artificial Intelligence


“For medical AI to truly
take off, even wider data
sharing may be needed”

wider sharing of data may be necessary. That might require new legislation. "Current laws don't really cover the kind of sharing scenarios we need to make these systems work," says bioinformatics researcher Joshua Denny of Vanderbilt University in Tennessee. Any policies will of course require informed consent. Some of those involved even argue that sharing your health data should be seen as a civic duty, and only those who opt in should reap any benefits.

Navathe is also concerned that regulators are applying lower standards to algorithms than to drugs or devices as to whether their use actually benefits people. "Because they are not invasive, they seem lower risk." But, he says, if doctors are basing treatments on AIs, the same standards need to be applied.

→
Turn to page 68 for more on AI data-privacy concerns

Liability is another issue. Malpractice laws are complex and vary from place to place, so it's unclear how they might need to change to accommodate AI. But doctors already use machines to make some diagnoses – software that helps them identify tumours in MRI scans or abnormalities in echocardiograms, for example. "If a doctor is in the loop, the legal and ethical stuff is not going to be that challenging," says Isaac Kohane, head of biomedical informatics at Harvard Medical School in the US; ultimately, it's the doctor's responsibility. If AI and doctor disagree, a supervising physician or committee could break the tie.

For all AI might help provide better, more reliable care, it's unlikely doctors will be cut out of the picture entirely, says Vimla Patel, a cognitive psychologist and specialist in biomedical informatics at the New York Academy of Medicine. AI can augment clinicians' abilities, but can't do all the heavy lifting: for example, a human doctor's (ideally) empathic relationship with the patient is an essential part of good care. "When things get complex, and medicine often is complex, you need human reasoning to make decisions," she says. "Computers, no matter how sophisticated, cannot replace that."

Nevertheless, AI can change how healthcare is delivered for the better, says Kohane. At present, doctors have to manage mounds of paperwork and digital form-filling while trying to stay on top of the emerging research to keep their knowledge current. If AI could ease some of this burden, doctors would be freer to listen to patients and take detailed histories. Meanwhile, instead of going to see a doctor only when we are sick, our health will be constantly assessed by AIs, based on data from devices such as smart watches along with our genome sequences.

At least this is the vision of companies like Babylon Health, maker of an app for connecting people to doctors via an AI triage system. "The system will become part and parcel of your life, constantly monitoring your health," says Saurabh Johri, the firm's chief science officer. With millions of people using such systems, disease outbreaks could also be detected and tracked in real time, he says. With a situation such as the covid-19 pandemic, such apps could enable more rapid reaction and contact tracing – and perhaps save thousands of lives. ❚
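The probabilistic triage such apps perform can be illustrated with a textbook naive Bayes calculation. In the Python sketch below, every number, condition and symptom is invented for illustration; a real system would learn these statistics from millions of medical records.

# Invented illustrative statistics: P(condition) and P(symptom | condition)
priors = {"strained neck": 0.05, "migraine": 0.03, "flu": 0.10}
likelihoods = {
    "strained neck": {"stiff neck": 0.8, "headache": 0.5, "tingling": 0.3},
    "migraine":      {"stiff neck": 0.2, "headache": 0.9, "tingling": 0.1},
    "flu":           {"stiff neck": 0.3, "headache": 0.6, "tingling": 0.05},
}

def diagnose(symptoms):
    scores = {}
    for condition, prior in priors.items():
        p = prior
        for s in symptoms:  # the "naive" assumption: symptoms independent
            p *= likelihoods[condition].get(s, 0.01)
        scores[condition] = p
    total = sum(scores.values())  # normalise into a probability ranking
    return sorted(((c, p / total) for c, p in scores.items()),
                  key=lambda pair: -pair[1])

for condition, prob in diagnose(["stiff neck", "headache", "tingling"]):
    print(f"{condition}: {prob:.0%}")

Run on the symptoms from the opening of this article, this toy model puts the strained neck at roughly eight chances in 10, with the alternatives trailing far behind.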

Chapter 4 | Frontiers of AI | 51
INTERVIEW: REGINA BARZILAY

“THE REAL POWER COMES


WHEN YOU PUT HUMAN
AND AI TOGETHER”
When you are diagnosed with cancer, you are faced with a lot of uncertainty and forced to make dramatic decisions often based on very little data. The same goes for the doctors diagnosing cancer, who must base their prognoses on clinical data drawn from just a sliver of the population. For Regina Barzilay, improving that situation has been a very personal quest.

How did you end up applying your work with machine learning to oncology?
There is a technical answer and a pragmatic one. Virtually every aspect of life today is regulated by machine learning, whether you know it or not. The only area that isn't is healthcare, which involves a lot of prediction tasks. When your doctor tries to find you a treatment, they look at different clues together and make a prediction. With personalisation, which we're all trying to achieve in medicine, the goal is matching you and your unique characteristics to the correct drug. I got into oncology because I was a breast cancer patient at Massachusetts General Hospital (MGH).

Why do we need machine learning for personalised medicine?
Even very experienced doctors have only seen a limited
number of patients. Maybe none of them were exactly
like you. That’s particularly true for cancer, where there
is so much trial and error. A human can look at a scan
and summarise what they see, but when they sum up
an image that has hundreds of thousands of pixels as a
single page of words, all the unique information is lost.
We’re in the 19th century here, in science terms. There
was so much opportunity and possibility that after I
finished my cancer treatment, I came back to MIT and I
started bringing what we do there to MGH and to
oncology more generally.

Was there a particular moment during your treatment that led you to want to work on this?

It was at every single step. For instance, after my initial diagnosis, I had a mammogram and they said this is a very small cancer. Then they did an MRI, which >

52 | New Scientist Essential Guide | Artificial Intelligence


PROFILE
REGINA
BARZILAY
Regina Barzilay leads a team
at the Massachusetts
Institute of Technology
using AI to recognise
patterns in medical images
and doctors’ electronic
notes. In 2017, she received
a MacArthur “genius”
fellowship for her work.

Chapter 4 | Frontiers of AI | 53
they do before surgery, and the cancer was all over the
place. So they needed to do a biopsy, which found that it
was a false positive. The cancer was not everywhere.
Why don't we train a machine to predict what's going
on, instead of doing these painful, expensive
procedures? Every single step of the way, the reason we
selected a certain treatment for me was that we weren't
sure. We were just going for the most aggressive thing.
Millions of women get breast cancer, but I found out
later that all the oncology decisions made in the US
today are based on the 3 per cent of those women who
participate in clinical trials. That's very problematic:
what is the chance that the women in that 3 per cent
will be like you or me? But if you look at the whole
population of women with breast cancer, the chances
are significantly higher.

What are the privacy concerns with gathering
people's health data in a massive set like this?
We think about that from the design stage onwards. To
get access to any data, we need approval from an
institutional review board at the hospital to make sure
it is handled according to protocol. The data lives at the
hospital – we don't bring it to MIT. We also anonymise
it. Any workable system needs to be totally integrated
with clinical care, observing the hospital's rules of
patient privacy and rights.

What about bad data? Can that bias your machine-
learning model?
Certainly. If the hospital's pathologists miss cancer
diagnoses or misdiagnose something else as cancer,
the system will be trained on noisy data. We need to
make sure that whatever human-generated
information we are using is as clean as possible.
Another type of bias we have seen is one where a
certain population is under-represented in the general
patient population. That leads to higher error rates. In
the hospitals we are generally working with, there has
been a smaller proportion of African-American
women. We can address this algorithmically, but we are
also collecting data from hospitals that have a
reasonable representation of diverse groups.

If we apply machine learning to diagnosis or
treatment, is there still a role for the doctor?
Absolutely. In some cases, the machine rivals the
human. Sometimes it performs below the human. But
the real power comes when you put the two together.
Still, the doctor doesn't have to agree with the machine.
Ultimately, the doctor makes the decisions and
approves treatment.

How are you applying machine learning to the
problem of cancer overdiagnosis?
We are reducing the uncertainty. There's a condition
found in breasts called high-risk lesions. In the US, all
patients with these lesions get surgery. But 87 per cent
of them are in fact benign: the patients didn't need the
operation. With machine learning, we can now identify
30 per cent of the women who don't need the surgery.
That's not all of them, but it is a big step.

If your system had been in use at the hospital when
you were a patient, would you have felt reassured?
Oh, absolutely. I was diagnosed in 2014. But if you look
again at my mammogram from 2013, you already see
cancer there. When you look at the one from 2012, you
already see some small spots. Maybe we didn't know
what it was. How would my life have been different if I
had been diagnosed in 2012? Maybe I didn't need to lose
my hair and all the other things that came with my
treatment. As a patient, not a scientist, I think we have
to do everything to make sure all this uncertainty is
resolved and we are applying the best technologies
available to what we care about the most – our health. ❚

54 | New Scientist Essential Guide | Artificial Intelligence


MARCH
OF THE
INTELLIGENT
ROBOTS

Footage of robots performing impressive
feats of dexterity is internet gold, but the
machines aren't quite as capable as they seem.
For all their physical prowess, the intelligence
of these mechanical creatures is sorely limited,
but efforts to build learning brains into these
machines are spurring a new robotics
revolution. That could in turn inspire entirely
new paradigms of AI.

PADDING through a deserted
office in Waltham, Massachusetts,
a quadrupedal creature stops in front
of a pair of heavy doors. Resting on its
haunches for a moment, as though
contemplating the obstacle, it turns
and seems to summon a friend. The
two could almost be Dalmatians. Until,
that is, one of them turns what appears
to be its head into an articulated arm
that grabs the handle, twists it and pulls open the door.
It makes for a weirdly enthralling scene. So
enthralling, in fact, that millions of people have
watched the YouTube video of these SpotMini robots
performing tricks such as opening inward-swinging
doors, getting back up after slipping on a banana skin,
doing the Running Man dance and negotiating narrow
staircases. For their humanoid companion Atlas, the
highlights reel includes running through the woods,
doing backflips and picking up parcels.
For creatures as brilliant at it as us, it is easy to forget
that moving is hard. When you walk, or even just stand
still, your brain is constantly telling your muscles to
make thousands of tiny adjustments. For the most part,
attempts to build machines that can do even some of
the things animals routinely do have been hopeless.
A quick glance at the footage from the world's
foremost robotics competitions proves that. Basically,
most of the time, even the most advanced robots fall
over. SpotMini and Atlas, built by the company Boston
Dynamics, stand out, capable of feats of locomotion
and dexterity that took evolution millions of years to
perfect. "You can't fail to be impressed by what they do,"
says Joanna Bryson, an AI researcher at the University
of Bath, UK.
They are far from autonomous, however: there is
almost always a person controlling them via a laptop
and an Xbox controller, sending instructions such as >

Chapter 4 | Frontiers of AI | 55
Four-legged friends: SpaceBok (top) is the
creation of Swiss students; ANYbotics' ANYmal
(middle) and Boston Dynamics' BigDog (below)

“go forward”, “turn around” or “open the door”.


When the SpotMini does open the door, it is following
a script: painstakingly hand-coded instructions that
tell it how to reach for and turn that type of handle.
Even if we could endow an Atlas, say, with hardware
for the fine manipulation involved in simple chores,
like picking up an apple and putting it in a bowl,
programmers would have to write reams of code to

make it possible. And if something unexpected
happened, the robot wouldn’t have a clue how to
react. They lack the ability to learn things for
themselves. As Mark Raibert, the founder of Boston
Dynamics, has himself put it, the world’s most
advanced robots “can’t do almost everything”.
This is the new frontier of robotics. “If you want
a robot to do anything useful, you need it to be able to
adapt on the fly to new situations,” says Chelsea Finn
at Stanford University in California. “That’s what
we’re trying to do.”
One of the big challenges is to get the software to
recognise when it is succeeding or failing. Researchers
have essentially two strategies. The first is imitation
learning: showing the robot how to do something by
either teleoperating it or having it watch videos of a

human doing something.


The other is reinforcement learning. In some
cases, this requires no supervision whatsoever. In
2018, for instance, Finn demonstrated a robotic

56 | New Scientist Essential Guide | Artificial Intelligence


“Embodiment is a critical
part of how humans and
many animals learn”

arm teaching itself to do various tasks, from picking
and placing apples to folding shorts and – somewhat
idiosyncratically – wrapping forks in towels, based
entirely on data collected from its camera as it fumbled
around with these objects.
"Basically, the robot learned to predict what happens
if it performs an action," says Finn. "Then we give it
some goal and it figures out what actions it should take
to achieve that."
It is much like the way children learn by exploring
the world, throwing toys around or spilling water.
"Traditionally with machine learning you train one
model to do one thing – translate French into English,
say – and you always start from scratch," says Finn, "but
here you can build a single model and use it for many
different tasks. That is very powerful."
For some experts, this convergence of robotics and
artificial intelligence could mark a turning point in the
quest for machines with something approaching
human-level general intelligence.
"Embodiment is a critical part of how humans
and many animals learn, because it allows you to build
and test hypotheses," says Finn. For Abhinav Gupta at
Carnegie Mellon University in Pittsburgh,
Pennsylvania, and part of Facebook's AI team, this is
the only way AIs can achieve the kind of common-
sense knowledge about how the physical world works
that we take for granted.
"Current AI is built on solving specific tasks with
large amounts of data and supervision," says Gupta. "To
build AI that can solve general-purpose tasks, that can
reason in domains with very little supervision, we need
AIs to learn predictive models and causal, common-
sense knowledge. Embodiment allows us to do that."
Not everybody is convinced. "Many animals have
very sophisticated ways of interacting with the
environment and have nothing like human
intelligence," says evolutionary psychologist David
Geary at the University of Missouri. For robots to reach
our intellectual heights, he believes, some process akin
to evolution would be required, specifically designed to
weed out robots with poorer conceptual abilities.
Robots currently learning for themselves are nothing
like approaching that level, or even that of Atlas or
SpotMini: they are stationary, cumbersome and
confined to the lab. And while some may get the chills
thinking of autonomous robots stalking confidently
across the terrain, for the time being at least, our
primary concern about autonomous robots should be
that they are too stupid rather than too smart.
Imagine a clumsy robot attempting to rescue a
family from a burning building, or an inflexible AI
controlling heavy machinery in unfamiliar
circumstances. When opportunity knocks for the new
world of robots, we want to see that they can open the
door without pulling it off its hinges. ❚
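In outline, the predict-then-plan loop Finn describes can be sketched in a few lines of code. This toy version invents everything – a number line stands in for the robot's world, and a hand-written predict function stands in for a learned video-prediction model – but it shows the shape of the idea: learn (or here, be given) a model of what actions do, then search for the actions that reach a goal.

import random

def predict(state, action):
    # Stand-in for a learned dynamics model: the imagined
    # outcome of taking an action in a given state.
    return state + action

def plan(state, goal, candidates=500, horizon=5):
    # Random-shooting planner: sample action sequences, roll the
    # model forward, keep whichever sequence lands nearest the goal.
    best, best_err = None, float("inf")
    for _ in range(candidates):
        actions = [random.uniform(-1, 1) for _ in range(horizon)]
        s = state
        for a in actions:
            s = predict(s, a)
        err = abs(goal - s)
        if err < best_err:
            best, best_err = actions, err
    return best

state, goal = 0.0, 3.2
for a in plan(state, goal):
    state = predict(state, a)   # in reality: act, then observe
print(round(state, 2))          # lands close to 3.2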

Chapter 4 | Frontiers of AI | 57
THE
DRIVERLESS
CAR
CHALLENGE


58 | New Scientist Essential Guide | Artificial Intelligence


Many cars now feature technology that gives them some degree of autonomy, and companies
such as Google, Tesla and Uber are experimenting with prototypes of fully driverless cars.
That raises huge questions of transparency and accountability. Who should carry the can
when things go wrong – and what level of risk should we deem acceptable?

ON THE night of 3 September 2010,
33-year-old Brian Wood was driving
along a highway in Washington state in
the northwestern US. Asleep in the car's
passenger seat was his wife Erin, seven
months pregnant with their first child.
The couple were on their way from
Vancouver, Canada, to spend time at
her parents' vacation home by the
picturesque Puget Sound.
Out of nowhere, a Chevy Blazer came hurtling
towards them. By the time Wood saw it, it was too late.
He braked hard and swerved right to take the brunt of
the impact. He died instantly, but his wife and their
unborn daughter survived.
We hope it never happens to us, but any driver might
find themselves making such a split-second, life-and-
death decision. They are part rational, part reflex, and
draw on a delicate balance of altruism and self-interest
programmed into all of us. To his wife, Wood was a hero
– but indisputably he was a human.
As our cars edge towards making these decisions for
us, cases like this raise profound ethical questions. To
drive safely in a human world, autonomous vehicles
must learn to think like us – or at least understand how
humans think. But how will they learn, and which
humans should they try to emulate?
The ethical challenges posed by driverless cars are
often illustrated by the infamous trolley problem
beloved of moral philosophers. Imagine a trolley car
out of control, and five oblivious people on the track
ahead. They will die if you do nothing – or you could
flip a switch and divert the car to a different track where
it will kill only one person. What should you do?
In a similar spirit, should an autonomous vehicle
avoid a jaywalker who suddenly steps off the curb, even
if it means swinging abruptly into the next lane? If a car
that has stopped at an intersection for schoolchildren
to cross senses a lorry approaching too fast from
behind, should it move out of the way to protect the
passengers, or take a hit and save the children?
"Many or all of those decisions will have to be
programmed into the car," says philosopher Jason
Millar of the University of Ottawa, Canada.
Answering such "what do we do if…" questions is a
two-step process. First, the vehicle needs to be able to
accurately detect a hazard; second, it must decide on its
response. The first step mainly depends on the efficient
collection and processing of data on the whereabouts
and speed of surrounding vehicles, pedestrians or
other objects.
These might include video cameras to read traffic
lights and road signs, or systems that emit laser or
radar pulses and analyse what bounces back. Data from
these various sensors must be combined into a stream,
rather as we integrate what our various senses are
telling us. "In robotics, it's called sensor fusion," says
Raúl Rojas of the Free University of Berlin, who heads
AutoNOMOS labs, an autonomous vehicles research
effort funded by the German government.
But the hazards those sensors are reporting back on
aren't always obvious. Consider the first death linked to
driverless technology, in May 2016. It happened
because Tesla's Autopilot system failed to detect that
the whiteness ahead wasn't part of a bright spring sky,
but the side of a trailer, resulting in a crash that killed
the passenger in the car.
A human might have made that mistake too, but
sometimes driverless vehicles make a hash of things we
master intuitively. "One of the things AutoNOMOS cars
have struggled with is someone walking behind a
parked bus," says Rojas. A human mind would expect
them to reappear, and supply a pretty accurate estimate
of when and where, but for a driverless car, it seems >

Chapter 4 | Frontiers of AI | 59
that’s an extrapolation too far.
Even if a sensor system allows an autonomous car
to assess its environment perfectly, the second step to
driving in a morally informed way – taking the
information gathered, assessing relative risks and
acting accordingly – remains an obstacle course. “At a
basic level, it’s about setting up rules and priorities. For
example, avoid all contact with human beings, then
animals, then property,” says ethicist Patrick Lin of
California Polytechnic State University in San Luis
Obispo. “But what happens if the car is faced with
running over your foot or swerving into an Apple store
and causing millions of dollars in damage?”
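To see what that means in practice, here is a deliberately crude sketch of a rules-and-priorities scheme rendered as code. Everything in it – the categories, the weights, the severities – is invented for illustration; no real vehicle works from a table like this:

HARM_COST = {
    "human": 1_000_000,    # avoid all contact with human beings...
    "animal": 10_000,      # ...then animals...
    "property": 1,         # ...then property (scaled by damage in $)
}

def manoeuvre_cost(outcomes):
    # outcomes: (category, severity) pairs predicted for one
    # candidate manoeuvre; lower total cost = preferred action.
    return sum(HARM_COST[cat] * severity for cat, severity in outcomes)

# Lin's edge case: run over a foot, or total an Apple store?
run_over_foot = [("human", 0.01)]              # minor human injury
swerve_into_store = [("property", 5_000_000)]  # millions in damage

print(manoeuvre_cost(run_over_foot))      # 10000.0
print(manoeuvre_cost(swerve_into_store))  # 5000000

With these weights, the car runs over your foot; shuffle them slightly and it wrecks the store instead. The point of the sketch is that the ranking quietly encodes a moral judgement someone had to make in advance.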
Part of the problem with such a rules-based approach

is that often there are no rules – at least, no single set
that a sensor system based purely on obvious physical
cues could hope to implement.
For one thing, even the most sophisticated sensor
system can't compute societal cues we all rely on when
driving. Imagine you're at an intersection and it's not
clear who should go first, says David Danks, who
teaches psychology and philosophy at Carnegie Mellon
University in Pittsburgh, Pennsylvania. "You creep out
a bit. Then someone waves and you go all the way
through." You understand that, having signalled, the
other driver will wait until you have gone past before
moving off.
Programming this "theory of mind" into cars has
proved challenging. "A 4-year-old has more theory of
mind than a driverless car," says Danks. Reliably
recognising what mental states are encoded in facial
expressions or bodily movements is way beyond even
cutting-edge tech. If a car assesses a given situation
very differently from the way a human would, it may
take an unexpected action – with potentially disastrous
consequences, especially where a mix of autonomous
and human-driven vehicles are on the road.
Another problem with a rules-based approach is that
the information a video camera or radar echo can
supply is limited. Sensors might be able to identify a
moving vehicle as a bus; but would they be able to
identify it as a bus full of schoolchildren?
Technologically, there are probably fixes. A human
intervention might program in details of the number
and age of the passengers to be broadcast to
surrounding vehicles, or sensors inside the bus might
autonomously track its weight, including whether a
person is sitting in a particular seat, says Lin. But who
decides a hierarchy of what lives are worth, and how do
we eliminate discrimination and bias in how the cars
are programmed?
There's one way to avoid such thorny moral
questions, says Lin: simply ignore them. After all,
a human driver is likely to know nothing about those
in the vehicles around them. "We could avoid some
ethical dilemmas by being deliberately blind to certain
facts," says Lin. This "veil of ignorance" approach
amounts to developing responses to simple versions
of likely situations, either by preprogramming them
or letting the car learn on the job.
The first approach suffers from the problem that
it is pretty much impossible to anticipate all possible
scenarios – for example, an encounter with a woman
in an electric wheelchair chasing a duck into the road
with a broom, as recorded by a Google car in 2014
(pictured above).
The second approach seems more promising.
A car might learn as it goes along, for example, that
jaywalkers are more likely on city streets than country
roads, but that swerving to avoid one on a country road
brings less likelihood of hitting something else. Or it
might learn that it's OK to break the speed limit

60 | New Scientist Essential Guide | Artificial Intelligence


occasionally to make way for an ambulance.

But basic rules still need to be programmed, and
whole new ethical issues also arise, says Millar: a
programmer will not be able to predict what exactly
a car will do in a given situation. We don't want
autonomous vehicles to act unpredictably. Just as it's
important for cars to predict the actions of human road
users, so it matters for people to be able to anticipate a
car's behaviour. Hence the question of what an
autonomous car will do when it encounters that
trolley-problem-like dilemma.
But fixating on such an extreme case probably
doesn't help, says Rojas. "Who has ever experienced
that situation? It's one possible situation in a million
days. We first need to solve 99.9 per cent of the more
pressing problems" – things like how to avoid
pedestrians, stay within a lane, operate safely in bad
weather, or push software updates to cars while
safeguarding them from hackers. Millar agrees, but
says that's not what the thought experiment is about.
"It's just used to illustrate the point that engineers
don't have the moral authority to make all the
decisions in their cars," he says.
More openness and agreement about common
standards on the part of those developing autonomous
vehicles would be a start. But probably no solution will
be perfect, warns Nick Bostrom, a philosopher at the
University of Oxford and director of its Future of
Humanity Institute. "We should accept that some
people will be killed by these cars." That's where we also
need to put things in context, he says. In 2013, car
crashes killed nearly 1.3 million people around the
world, injuring up to 50 million more. Nine in every 10
accidents result from human factors: a moment's
distraction caused by reading a text message or yelling
for the kids to behave, or falling asleep because of
monotonous motorways.
In Brian Wood's case, the 21-year-old driver of the
oncoming Chevy had been distracted by taking off her
sweater. Had her car been fully autonomous, it seems
likely Wood would be alive today. The challenge isn't to
build the perfect system for regulating traffic on the
road in an age of autonomous vehicles; it is to agree
what a system looks like that is better than the human-
governed system we have now. ❚

INTO FIFTH GEAR

A fully autonomous car that carries
out moral reasoning (see main story)
would rate as level 5 on a scale developed
by the Society of Automotive Engineers.
Here is how the scale breaks down.

LEVEL 0
No autonomous features; may
have automatic gearshift. Most cars
currently on the road

LEVEL 1
Some autonomous features, such
as automatic braking, cruise control.
Many newer car models

LEVEL 2
Automated steering, braking and
acceleration, but requires human
oversight. E.g. Tesla Model S,
Mercedes-Benz E-Class, Volvo S90

LEVEL 3
Car can monitor its environment
and drive autonomously, but may request
human intervention at any time. E.g. Audi,
Nissan ProPILOT, Kia DRIVE WISE

LEVEL 4
Car can drive independently, but
may request human intervention in
unusual conditions such as extreme
weather. E.g. Volvo, Tesla, Ford,
BMW iNext

LEVEL 5
Car can drive independently in
all conditions

Chapter 4 | Frontiers of AI | 61
THE BOTS OF WAR
“ONLY the dead have seen the end of war,” the philosopher George Santayana once bleakly
observed. Humanity’s martial instincts are deep-rooted. Over millennia, we have fought wars
according to the same strategic principles based in our understanding of each other’s minds.
But AI introduces another sort of military mind – one that even though we program how it thinks,
may not end up thinking as we do. Forget killer robots: we are only just beginning to work through
the profound and troubling effects these new combatants might have, says Kenneth Payne.

PROFILE
KENNETH
PAYNE

Kenneth Payne researches
psychology, military
strategy and international
relations at King's College
London, and is the author of
Strategy, Evolution, and
War: From apes to artificial
intelligence

SOCIAL intelligence gives humans a
powerful advantage in conflict. In war,
size matters. Victory generally goes to
the big battalions, a logic described
in a formula derived by the British
engineer Frederick Lanchester from
studies of aerial combat in the first
world war. He found that wherever a
battle devolves to a melee of all against
all, with ranged weapons as well as close
combat, a group's fighting power increases as the
square of its size.
That creates a huge incentive to form ever-larger
groups in violent times. Humans are good at this,
because we are good at understanding others. We forge
social bonds with unrelated humans, including with
strangers, based on ideas, not kinship. Trust is aided by
shared language and culture. We have an acute radar
for deception, and a willingness to punish non-
cooperating free-riders. All these traits have allowed us
to assemble, organise and equip large and increasingly
to assemble, organise and equip large and increasingly
potent forces to successfully wage war.

62 | New Scientist Essential Guide | Artificial Intelligence



Underlying this is theory of mind – the human ability
to gauge what others are thinking and how they will
react to a given situation, friend or foe. Theory of mind
is essential to answer strategy's big questions. How
much force is enough? What does the enemy want, and
how hard will they fight for it?
Strategic decision-making is often instinctive and
unconscious, but also can be shaped by deliberate
reflection and an attempt at empathy. This has survived
even into the nuclear era. Some strategic thinkers held
that nuclear weapons changed everything because
their destructive power threatened punishment
against any attack. Rather than denying aggressors
their goals, they deterred them from ever attacking.
That certainly did require new thinking, such as the
need to hide nuclear weapons, for example on
submarines, to ensure that no "first strike" could
destroy all possibility for retaliation. Possessing
nuclear weapons certainly strengthens the position of
militarily weaker states; hence the desire of countries
from Iran to North Korea to acquire them.
But even in the nuclear era, strategy remains human.
It involves chance and can be emotional. There is
scope for misperception and miscommunication, and
a grasp of human psychology can be vital for success.
Artificial intelligence changes all this. First, it
swings the logic of strategy decisively towards attack.
AI's pattern recognition makes it easier to spot
defensive vulnerabilities, and allows more precise
targeting. Its distributed swarms of robots are hard to
kill, but can concentrate rapidly on critical weaknesses
before dispersing again. And it allows fewer soldiers to
be risked than in warfare today.
This all creates a powerful spur for moving first in
any crisis. Combined with more accurate nuclear
weapons in development, this undermines the basis
of cold-war nuclear deterrence, because a well-planned,
well-coordinated first strike could defeat all a
defender's retaliatory forces. Superior AI capabilities
would increase the temptation to strike quickly and
decisively at North Korea's small nuclear arsenal,
for example.
By making many forces such as crewed aircraft and
tanks practically redundant, AI also increases >

Chapter 4 | Frontiers of AI | 63
uncertainty about the balance of power between states.
States dare not risk having second-rate military AI,
because a marginal advantage in AI decision-making
accuracy and speed could be decisive in any conflict.
AI espionage is already under way, and the scope for a
new arms race is clear. It is difficult to tell who is
winning, so safer to go all out for the best AI weapons.
Were that all, it would be tempting to say AI
represents just another shift in strategic balance, as
nuclear weapons did in their time. But the most
unsettling, unexplored change is that AI will make
decisions about the application of force very differently
to humans.
AI doesn't naturally experience emotion, or
empathy, of the sort that guides human strategists.
We might attempt to encode rules of engagement into
an AI ahead of any conflict – a reward function that
tells it what outcome it should strive towards and how.
At the tactical level, say with air-to-air combat between
two swarms of rival autonomous aircraft, matching our
goals to the reward function that we set our AI might be
doable: win the combat, survive, minimise civilian
casualties. Such goals translate into code, even if there
may be tensions between them.
But as single actions knit together into military
campaigns, things become much more complex.
Human preferences are fuzzy, sometimes
contradictory and apt to change in the heat of battle.
If we don't know exactly what we want, and how badly,
ahead of time, machine fleets have little chance of
delivering those goals. There is plenty of scope for our
wishes and an AI's reward function to part company.
Recalibrating the reward function takes time, and you
can't just switch AI off mid-battle – hesitate for a
moment, and you might lose.
That is before we try to understand how the
adversary may respond. Strategy is a two-player game,
at least. If AI is to be competitive, it must anticipate
what the enemy will do.
The most straightforward approach, which plays to
AI's tremendous abilities in pattern recognition and
recall, is to study an adversary's previous behaviour and
look for regularities that might be probabilistically
modelled. This method was used by AlphaGo, the
DeepMind AI that beat the human champion Lee
Sedol at the board game Go in 2016. With enough past
behaviour to go on, this works even in a game such as
poker where, unlike Go, not all information is freely
available and a healthy dose of chance is involved.
This approach could work well at the tactical level –
anticipating how an enemy pilot might respond to a
manoeuvre, for example. But it falls down as we
introduce high-level strategic decisions. There is too
much unique about any military crisis for previous
data to model it.
An alternative method is for an AI to attempt to
model the internal deliberations of an adversary. But
this only works where the thing being modelled is less
sophisticated, as when an iPhone runs functional
replicas of classic 1980s arcade games. Our strategic
AI might be able to intuit the goals of an equally
sophisticated AI, but not how the AI will seek to achieve
them. The interior machinations of an AI that learns
as it goes are something of a black box, even to those
who have designed it.
Where the enemy is human, the problem becomes
more complex still. AI could perhaps incorporate
themes of human thinking, such as the way we
systematically inflate low-risk outcomes. But that is
AI looking for patterns again. It doesn't understand
what things mean to us; it lacks the evolutionary logic
that drives our social intelligence. When it comes to
understanding what others intend – "I know that
you know that she knows" – machines still have a
long way to go.
Does that matter? Humans aren't infallible mind-
readers, and in the history of international crises
misperception abounds. In his sobering account of
nuclear strategy, The Doomsday Machine, Daniel Ellsberg
describes a time when the original US early warning
system signalled an incoming Soviet strike. In fact, the
system's powerful radar beams were echoing back from
the surface of the moon. Would a machine have paused
for thought to ascertain that error before launching a
counterstrike, as the humans involved did?
An AI's own moves are often unexpected. AlphaGo's
game-winning "move 37" was down to probabilistic
reasoning and a flawless memory of how hundreds of

64 | New Scientist Essential Guide | Artificial Intelligence


thousands of earlier games had played out. The last
thing we need is a blindingly fast, offensively brilliant
AI that makes startling and unanticipated moves in
confrontation with other machines.
There won't necessarily be time for human
judgement to intercede in a battle of automatons
before things get out of hand. At the tactical level,
keeping a human in the loop would ensure defeat by
faster all-machine combatants. Despite the stated
intentions of liberal Western governments, there will
be ever-less scope for human oversight of blurringly
fast tactical warfare.
The same may be true at more elevated strategic
levels. Herman Kahn, a nuclear strategist on whom the
character Dr Strangelove was partly based, conceived of
carefully calibrated "ladders" of escalation. A conflict is
won by dominating an adversary on one rung, and
making it clear that you can suddenly escalate several
more rungs of intensity, with incalculable risk to the
enemy – what Kahn called "escalation dominance".
In the real world, the rungs of the ladder are rather
imprecise. Imagine two competing AI systems, made of
drones, sensors and hypersonic missiles, locked in an
escalatory game of chicken. If your machine backs off
first, or even pauses to defer to your decision, it loses.
The intensity and speed of action pushes automation
ever higher. But how does the machine decide what it
will take to achieve escalation dominance over its rival?
There is no enemy mind about which to theorise; no
scope for compassion or empathy; no person to
intimidate and coerce. Just cold, inhuman probabilities,
decided in an instant.
That was move 37 of AlphaGo's second game against
the world champion. Perhaps it is also early December
2041, and a vast swarm of drones skimming over the
ocean at blistering speed, approaching the
headquarters of the US Pacific Fleet. We can't bury our
heads and say it won't happen, because the technology
already exists to make it happen. We won't be able to
agree a blanket ban, because the strategic advantage to
anyone who develops it on the sly would be too great.
The solution to stop it happening is dispiritingly
familiar to scholars of strategic studies – to make sure
you win the coming AI arms race. ❚

THE MILITARY-AI COMPLEX

The military has always funded much
AI research. Siri, for instance, is a
by-product of an effort to provide an
assistant for soldiers. The "Grand
Challenge" races sponsored by the
US Defense Advanced Research
Projects Agency (DARPA) stimulated
development of the autonomous
vehicles that others now hope to
make ubiquitous.
When automation becomes
autonomy, and autonomy becomes AI,
is a matter of debate, and in the
military arena we are probably two
decades away from fully autonomous
intelligent weapon systems.
Meanwhile, weapons are making
increasing use of autonomy software
that allows them to identify enemy
targets and fire without intervention.
Some governments, such as the UK's,
have committed to always keeping a
"human in the loop", with firing
decisions authorised by a human.
Other systems, notably South
Korean guns along the border with
North Korea, are classed as "human
on the loop": someone can intervene
and stop firing once it has started.
The Israeli Iron Dome missile defence
system is fully automated. If it detects
an incoming missile or artillery shell,
it will fire a missile to intercept.
No human is required.
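Earlier, the article notes that tactical goals "translate into code" as a reward function. A minimal, entirely hypothetical sketch of what that means – and of why the weights are the contested part:

def reward(outcome, w_win=1.0, w_survive=0.5, w_civilian=10.0):
    # Hypothetical scoring of one simulated engagement; nothing
    # here reflects any real system. The weights encode value
    # judgements: how much is survival worth against a win?
    return (w_win * outcome["combat_won"]            # 0 or 1
            + w_survive * outcome["aircraft_surviving"]
            - w_civilian * outcome["civilian_casualties"])

aggressive = {"combat_won": 1, "aircraft_surviving": 7,
              "civilian_casualties": 2}
cautious = {"combat_won": 0, "aircraft_surviving": 8,
            "civilian_casualties": 0}

print(reward(aggressive), reward(cautious))   # -15.5 vs 4.0
print(reward(aggressive, w_civilian=0.0))     # 4.5: ranking flips

Set the civilian weight near zero and the machine's preferred plan flips from cautious to aggressive – and, as the article says, recalibrating mid-battle takes time you may not have.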

Chapter 4 | Frontiers of AI | 65
CHAPTER 5

66 | New Scientist Essential Guide | Artificial Intelligence


The AI revolution is happening fast, but it is here to stay. How
do we best regulate machine minds and minimise their potential
deleterious effects – whether on jobs, on privacy, on fairness,
or on openness and equality?

As with everything in the world of modern AI, the questions start


with data. Should we really be giving so much of it away?

Chapter 5 | AI and society | 67


WHEN Joanna Bryson,
an AI researcher at the
University of Bath, UK, is in
a house where she knows
there is an Amazon Echo,
a Google Home or any
similar voice-activated
digital assistant, she says
she holds a more guarded
conversation. She's all too
conscious of the possibility of her words being
observed, recorded, dissected.
Most of us haven't thought that far, seduced by the
usefulness of such devices. Data-driven AI both feeds
on and fuels a huge data-gathering infrastructure.
Since the advent of the world wide web, engineers and
entrepreneurs have invented myriad ways to elicit and
collect information about us and our habits: asking
users to accept a cookie, tag friends in images, rate a
product, or "like" or share a social-media post. Every
time we call on a voice-activated assistant, access the
internet to read the news, do a search, buy something,
play a game, or check our email, bank balance or social
media feed, we interact with this infrastructure.
The infrastructure is not like any medium invented
before. Unlike the copper cables that used to connect
people in the telegraph or telephone age, it takes a keen
interest in our actions. It looks back at us, anticipating
our moves and guessing our intents. It gives a whole
new meaning to the claim, made by the 1970s
communications theorist Marshall McLuhan, that a
medium can never be neutral.
Because the resulting products know so much about
us, they are intuitive, responsive and just darn useful.
This is what makes modern AI a brilliant and powerful
technology, but also a fundamentally disruptive one.
The big companies behind most AI – Amazon, Apple,
Google, Facebook and the rest – would argue they want
the huge amount of data they glean about our wants,
desires and habits only to improve their products for
our benefit: to understand what we meant when we
mistyped that query in the search bar, for example, or
to determine better which friends' posts we want to see.
But that data also sells ads and products, and hones
the revenue-generating AI algorithms themselves. It's
not just the big-name big-tech firms either. Google,
Amazon, Microsoft and others have all made some of
their AI algorithms open source, meaning outside
developers can use them for their own applications.
Emails to UK online grocer Ocado, for instance, are
routinely read, prioritised and forwarded by an AI
based on Google's TensorFlow algorithm. An AI might
have answered the last time you phoned a call centre,
asked you what your enquiry was about, and routed it
based on your response. AIs are now approving our
mortgages (or not), setting insurance premiums and
detecting credit card fraud through unusual
transaction patterns, all based on data of a more or less
personal nature that we gave them in the first place.
That has consequences that we are only now
beginning to grapple with. For a start, the simple fact
that our smartphones and the apps that run on them
need to collect data such as our location, browsing
history and social networks, to work as we expect them
to, potentially turns them into surveillance devices.
Debates about "backdoors" in various apps potentially
allowing access by security services indicate this is a
very real threat. Where the boundaries of privacy can
and should be drawn in an age when AI is watching
our every move is one very live debate. ❚

↓
See page 74 later in this chapter for more on
AI "Big Brother" technologies

68 | New Scientist Essential Guide | Artificial Intelligence


Chapter 5 | AI and society | 69
FAKE NEWS!
The data infrastructure that feeds AI and
big tech's advertising-led business model
exists to steer our preferences. But the more
the machines know about us, the better the
job they can do not just of nudging us, but
also of spreading misinformation that exploits
and reinforces our preferences and prejudices.
How do we stop that undermining things we
hold dear?

IN SUMMER 2007, on a whim having just
finished his psychology degree, David Stillwell
made a Facebook app called myPersonality. It
let people take a test that describes personality
types according to the "Big Five" traits, which
include degrees of agreeableness,
conscientiousness and extroversion.
Months later, some researchers asked
Stillwell if they could use his data. But he hadn't
collected any. He had only set the test up
because "I thought it would be cool," he says. Then he
wised up and started gathering data. It would prove a
career-making move. When Michal Kosinski, then at
the University of Cambridge, UK, approached him a
year later, Stillwell had a data gold mine of more than a
million Facebook profiles paired to personality types
(that number now tops 6 million).
In 2013, Stillwell, Kosinski and colleague Thore
Graepel dropped the bombshell that machine-learning
techniques made it possible to predict someone’s
personality type simply from their Facebook likes. And
accurately too. Just nine likes is enough to predict your
personality traits as well as a colleague could. With 65
likes, as well as a friend; with 125 likes, a family member.
Most people have around 225 likes, so organisations
that possess this sort of data can predict your
personality just as well as a spouse could. Not only that,
it only takes a few Facebook likes to predict your age,
gender, intelligence, sexuality, political and religious

views, relationship status and a host of other things.
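At heart, such predictions are ordinary supervised learning on a huge user-by-likes table. A toy sketch of the idea on synthetic data with a planted signal (the real studies used dimensionality reduction and regression over millions of genuine profiles):

import numpy as np
from sklearn.linear_model import LogisticRegression

# Rows are users, columns are pages (1 = liked). Label: is the
# user an extrovert? All data here is synthetic, with an obvious
# pattern planted so the model has something to find.
rng = np.random.default_rng(0)
n_users, n_pages = 1000, 50
likes = rng.integers(0, 2, size=(n_users, n_pages))
extrovert = (likes[:, :5].sum(axis=1)            # "extroverts" tend
             + rng.normal(0, 1, n_users)) > 2.5  # to like pages 0-4

model = LogisticRegression(max_iter=1000)
model.fit(likes[:800], extrovert[:800])
print("accuracy:", model.score(likes[800:], extrovert[800:]))

Scale the same recipe up to millions of profiles and hundreds of likes per person, and traits, age and politics all become guessable.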


We shouldn’t be surprised: after all, the power of the
algorithmically aggregated data that we willingly give
to the big tech companies to say who we are is what
makes these companies such a gold mine for
advertisers. Early on, most of us were either unaware
this was going on, or accepted it as a reasonable price to
pay for products such as Google search, Gmail or
Facebook. Then came revelations such as the company
Cambridge Analytica, which worked on Donald
Trump's 2016 presidential election campaign,
admitting it had harvested personal data from

The preference algorithms of Facebook and
other social media platforms have come under
fire for encouraging the spread of fake news

70 | New Scientist Essential Guide | Artificial Intelligence


Facebook profiles and used it to feed individual
voters political advertising targeted at their personality
types. That gave many people pause for thought.
Facebook has since tightened up its policies on third-
party data use.
But the bald truth is that our web use, and the data-
harvesting infrastructure companies have developed
on top of it, lay our psychologies bare – with eye-
opening consequences. "We can be hacked," says
Stillwell, who is now at The Psychometrics Centre at
the University of Cambridge. His latest experiment
with Kosinski and others found that those targeted
with online advertising based solely on a single
Facebook like were 40 per cent more likely to click on
an online advert and 50 per cent more likely to follow
through with a purchase than those seeing untailored
advertising. When such messaging can be scaled to
target many millions of people at the press of a button,
and with no regulatory oversight, that for some is an
alarming degree of influence. "The power balance right
now is weighted towards those who hold the data, and
we really don't know how it's being used," says Stillwell.
Concerns don't stop at targeted manipulation. The
efficiency of the AI algorithms that underlie social
media at knowing what we like makes them echo
chambers. We are more likely to see, and share, stuff
that conforms with our pre-existing prejudices, making
these platforms breeding grounds for fake news.
Stung by accusations they are unwittingly
undermining democracy, social-media sites such as
Facebook are trying to tighten up their rules for who
they allow to advertise and what they allow them to say.
Messaging apps such as the Facebook-owned
WhatsApp have set limits on how many recipients
messages can be forwarded on to, in the hope of
introducing friction against the spread of fake news.
For many, that isn't enough. ❚

↓
For more on how we might tighten our control
of AI, see page 82 later in this chapter

THE POWER OF DEEPFAKES

One harmless early craze was to paste
the actor Nicolas Cage into films in which
he never appeared. But then there were
the faces of female celebrities pasted
over those of people in pornographic
videos, and the researchers who used AI
to generate convincing footage of world
leaders saying words that never actually
left their mouths.
Many people are worried about the
potential harm that such manipulated
pieces of video footage – "deepfakes" –
could cause. "I'm usually a tech optimist,
but this is one where the red flags went
up," says Kate Devlin at Goldsmiths,
University of London. "I think it could be
really insidious."
New laws are an option, but they
could backfire if they end up outlawing
beneficial applications: deeply immersive
films or video games, novel forms of
therapy and virtual humans to entertain
or keep us company. "There are a lot of
lonely people out there who want
somebody to talk to," says Stephen
Rosenbaum of San Francisco-based
company Vesalius Creations, a visual
effects artist who has won Oscars for his
work on Forrest Gump and Avatar.
Companies like Soul Machines in
New Zealand, founded by Mark Sagar,
who pioneered facial capture techniques
in films such as Avatar and King Kong,
are trying to create virtual humans to be
companions or even therapists. Yet even
with avatars invented from scratch there
are ethical concerns. It is easy to chat to a
customer service bot online without
realising it isn't human, for example. But
forgetting that we are interacting with
software means we let our guard down
and reveal things we might not have
wanted to, says Devlin.
It's another case where the technology
is here and will be used for good or ill –
and it is a question of how we balance the
risks and rewards.

Chapter 5 | AI and society | 71


WHEN AI
DISCRIMINATES
AI systems know a lot about us, but do they
know the right things? Machine learners can
only ever be as good as the data they are trained
on, and often that simply isn't good enough. AIs
can easily end up reflecting biases that exist on
the part of the people who program them or in
the data sets they are fed. This complication
risks exacerbating already existing problems of
discrimination and inequality in society.

IN 2013, Eric Loomis was convicted of fleeing
from the police and operating a vehicle without
its owner's consent in La Crosse, Wisconsin.
Sentencing him to six years in prison, the judge
cited the "high risk" Loomis posed to the local
community – a risk determined in part by his
score on the COMPAS assessment, a proprietary
algorithm which at the time used 137 "features"
about a defendant and the case against them to
predict the likelihood they would reoffend.
Loomis challenged the ruling on the grounds that
the judge, by considering the outcome of an algorithm
whose workings are not transparent, had violated due
process. In June 2016, the Wisconsin Supreme Court
rejected his appeal – a verdict handed down just a
month after the investigative news organisation
ProPublica discovered that the COMPAS system was
twice as likely to incorrectly predict that a black person
would reoffend than a white person. Equivant, the
company that makes the software, has always disputed
that claim.
But as a study in 2018 showed, COMPAS was no better
at predicting the likelihood of reoffending than a panel
of 400 people randomly selected on the internet. Both
got it right about two-thirds of the time, clearly
showing the limits of such predictive algorithms.
Such systems are affecting lives. Besides judges using
them to decide whether to grant bail or how long
someone should go to prison, police are using them to
predict where crimes are most likely to occur, banks to
determine mortgage applications, and employers to
sift through applications to decide who gets the job.
"We have a tendency to immediately trust them
because we consider them 'smart' or 'intelligent'," says
Sandra Wachter at the Oxford Internet Institute.
But all the evidence is that such software is riddled
with biases, mostly from the data that feeds it. For
example PredPol, an algorithm designed to predict
when and where crimes will take place in many US
states, was shown in 2016 to repeatedly send officers to
neighbourhoods in Oakland, California, with a high

72 | New Scientist Essential Guide | Artificial Intelligence



proportion of people from racial minorities, regardless
of the true crime rate in those areas. Because the
software learned from reports recorded by the police
rather than actual crime rates, it created a feedback
loop that might exacerbate racial biases.
Meanwhile some facial-recognition software has
been shown to have a higher rate of false identification
of women and people with darker skin, because the
data it was trained on was overwhelmingly male and
white. Similar effects have been shown to cause image
searches for "CEO" to overwhelmingly serve up images
of men, and software serving up job ads to show higher-
paying jobs to men.
Sometimes artificial intelligence feeds back to
heighten human bias. In October 2017, police in Israel
arrested a Palestinian worker who had posted a picture
of himself on Facebook posing by a bulldozer with the
caption "attack them" in Hebrew. Only he hadn't: the
Arabic for "good morning" is very similar, and
Facebook's automatic translation software chose the
wrong one. The man was questioned for several hours
before someone spotted the mistake.
Facebook was quick to apologise, but data biases
persist. The resulting technology then becomes a
mechanism for amplifying discrimination, says
Anupam Chander, a law professor at the University of
California, Davis. He argues that designers should
employ "algorithmic affirmative action", explicitly
paying attention to race and other potentially biasing
factors in the data fed to the algorithm and in the
results it spits out, and then correcting course as
needed. That might mean tweaking the inputs or
explicitly gathering more diverse data and test subjects.
The black-box nature of many machine learning
algorithms doesn't help. After all, it's the point that,
rather than following a simple procedure to make a
decision, they are trained to pick up on patterns within
thousands or millions of examples and apply these to
future assessments. The features in the data it picks up
on don't even need to be explicitly to do with, say,
race, says Sonja Starr of the University of Michigan Law
School in Ann Arbor. For example, sentencing
algorithms take into account where someone lives and
their employment history, which may just be correlated
with racial background.
Such factors mean there often aren't simple steps
that can be retraced to check whether an AI's logic was
sound. Even if there were, companies often want to
keep their algorithms under lock and key, afraid of
losing their competitive edge.
There are ways round this. We don't always know
how medicines work, but regulators require evidence
of their efficacy, through rigorous controlled trials,
before they are sold. We could do the same for
algorithms – trusted independent third parties could
assess and evaluate these systems to avoid trade secrets
becoming public. ❚

→
Turn to page 85 later in this chapter for
more on ensuring "algorithmic accountability"
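The kind of independent audit proposed above can start very simply: compare the algorithm's error rates across groups, which is in essence what ProPublica did with COMPAS. A minimal sketch with invented records (a real audit would load actual outcome data):

records = [
    # (group, flagged_high_risk, actually_reoffended)
    ("A", True, False), ("A", True, True), ("A", False, False),
    ("A", True, False), ("B", False, False), ("B", True, True),
    ("B", False, False), ("B", True, False),
]

def false_positive_rate(rows):
    # Of those who did NOT reoffend, what share were flagged?
    negatives = [r for r in rows if not r[2]]
    return sum(1 for r in negatives if r[1]) / len(negatives)

for group in ("A", "B"):
    rows = [r for r in records if r[0] == group]
    print(group, round(false_positive_rate(rows), 2))
# Prints 0.67 for A and 0.33 for B: the tool makes its mistakes
# twice as often for one group -- the pattern ProPublica reported.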

Chapter 5 | AI and society | 73


BIG BROTHER AI
IS WATCHING YOU
Data fuels AI systems, but can we decide who has access
to that data, what use can be made of it, or whether it gets
deleted for ever? If we can’t, we don’t have control. The recent
spread of face-recognition apps suggests we are losing it.

IN DECEMBER 2017, Ed Bridges was mingling


with the crowds of Christmas shoppers on the
streets of Cardiff, UK, when the police snapped
a picture of him.
He hadn’t been convicted of a crime, nor was
he suspected of committing one. He was simply
one of a vast number of people who have been
quietly added to face-recognition databases
without their consent, and most often, without
their knowledge.
Bridges took the police force, South Wales Police, to
court to get them to delete the images. Eventually, in
September 2019, a court ruled that the police’s actions
were lawful. The next month, however, the country’s
official in charge of data privacy, information
commissioner Elizabeth Denham, flagged serious
concerns about the lack of laws regulating the use of
face-recognition technology.
Face recognition can map faces in a crowd by
measuring the distance between facial features, then
compare results with a “watch list” of images, which can
include suspects, missing people and persons of interest.
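Under the hood, that comparison is usually a distance test between numerical descriptions of faces. A stripped-down sketch – the vectors, names and threshold are all invented, and a real system would compute the measurements with a deep neural network:

import math

watch_list = {
    "suspect_1": [0.21, 0.80, 0.43],
    "missing_person_7": [0.65, 0.12, 0.90],
}
THRESHOLD = 0.3  # tighter = fewer false alarms, more misses

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def check(sighting):
    # Compare a face seen in the crowd against everyone on the list.
    name, d = min(((n, distance(sighting, ref))
                   for n, ref in watch_list.items()),
                  key=lambda hit: hit[1])
    return name if d < THRESHOLD else None

print(check([0.20, 0.82, 0.45]))  # suspect_1
print(check([0.50, 0.50, 0.50]))  # None: nobody on the list

Everything rides on the threshold: loosen it and innocent faces start to match – the false positives described later in this article.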
For years, critics have warned that the technology is
an unparalleled invasion of privacy. Yet police forces
across the world have started to use the tech to scan
crowds at football matches, festivals, protests and on
busy streets in a bid to identify criminal suspects. Many
private spaces, including museums, restaurants and

74 | New Scientist Essential Guide | Artificial Intelligence


shops, are also introducing the technology.
But while most of us associate face recognition with
CCTV cameras, it is increasingly in your smartphone
too. Facebook, for example, runs face recognition
algorithms on users' photos to automatically identify
them in other images on the site, which for years
functioned on an opt-out basis. Snapchat uses it to
overlay fun animations onto your face. The latest
iPhone ditched fingerprint scanners in favour of using
face recognition to unlock devices. Amazon's
Rekognition image analysis software promises, among
other things, that it can spot faces from a library of
suspects for law enforcement.
You might feel reassured about having your face
stored in a database if it helps solve crime. However,
despite its popularity with law enforcement, face
recognition is often inaccurate, turning up huge
numbers of false positives.
An independent analysis of a face recognition trial by
the London Metropolitan Police Service found that 81
per cent of people the system flagged as potential
suspects were innocent. And it is even less accurate for
some ethnic minorities, which compounds the risk that
these systems will entrench or exacerbate racial biases.
That risk is certainly real. A study of three
commercially available face-recognition systems,
created by Microsoft, IBM and the Chinese company
Megvii, showed that the systems failed to identify the
gender of white men correctly only 1 per cent of the
time. But the error rate rose for people with darker skin,
reaching nearly 35 per cent for women.
Documents released by the UK Home Office in 2019
under a freedom of information request showed it
launched a face-recognition passport checking system
in 2016 while knowing it didn't work as well for people
with darker skin. In one example, Joshua Bada, a black
sports coach, was told by the system that his photo
didn't meet requirements after it mistook his lips for
an open mouth.
Meanwhile there are very real worries that without
appropriate regulation the technology could
contribute to a surveillance culture. China is using face-
recognition technology to monitor and discriminate
against the Muslim population in the north-west
province of Xinjiang. As well as scanning people's faces
before they enter markets or buy fuel, the system alerts
authorities if targeted individuals stray 300 metres
beyond their home or workplace, effectively building
virtual checkpoints to hem in Uighur Muslims.
After first being photographed shopping in Cardiff,
Bridges was snapped again a few months later, when he
joined a protest against an arms fair being held in the
city. "At lunchtime this face-recognition van suddenly
appeared across the road from the main group of
protesters," he says. "I felt it was done to intimidate us,
so we would not use our right to peacefully protest."
Megan Goulding, a lawyer at the human rights group
Liberty, assisted his case. "Part of the reason we're
challenging the use of face recognition at all is we think
it's pretty impossible to protect yourself," she says.
"Because of the indiscriminate nature of the technology,
it can happen without your knowledge or consent."
Politicians are also beginning to realise the dangers
of letting face recognition technology go too far. US
Democratic congresswoman Alexandria Ocasio-Cortez
is one of its most high-profile opponents. "I don't want
to see an authoritarian surveillance state, whether it's
run by a government or whether it's run by five
corporations," she says.
Jo Swinson, a former MP and leader of the Liberal
Democrat party in the UK, is another expressing
concern. "Orwell wrote 1984 as a warning, not an
instruction manual," she says. "We should be very wary
of technology without proper debate and scrutiny
about the implications for personal privacy."
An outright ban probably isn't the way forward, given
the legitimate uses the technology can have. "I think we
need trials that protect privacy but allow development,"
says Paul Wiles, the UK's biometrics commissioner.
"These should be peer reviewed, published, all the
usual things you expect from a medical trial."
Such trials would demonstrate the reliability and
limitations of the software, such as whether it is less
accurate when matching people from minority groups.
Only then can a sensible public debate about the
technology take place, says Wiles.
Wiles points to the approach of the Scottish
government. In July 2019 it published a document that
recommends general principles for the use and
retention of biometric data. A commissioner can then
draw up codes of practice for each application, so that
different rules can apply for matching the faces of
people who have been arrested to mugshots compared
with automatically scanning faces in crowds. "It's a
clever solution to what seems an insoluble problem,"
says Wiles.
However, this is a compromise too far for Goulding.
"In our view we don't think it's possible to balance
the existence of this technology. The impact on rights
is too grave." ❚

Chapter 5 | AI and society | 75


HOW TO
HACK AN AI
Deep-learning systems see pixels, not things, and that provides an easy way to
fool them into seeing things that aren’t there. With such systems taking over ever
more crucial roles in society, technology firms are scrambling for an answer.

A VEHICLE is driving slowly on the top floor of a multistorey car park. Footage from an on-board camera shows what the AI system controlling it can see: ranks of cars to the left, and is that a person off to the right? Straight ahead there is something else. To any human observer, it is obviously a stop sign. But the AI can't seem to make sense of it and keeps on driving.

This was only a stunt. Researchers had deliberately stuck pieces of black tape on the sign to study how it confused the machine mind. Yet this and several similar demonstrations are revealing something disturbing: AI can be hacked, and you don't even need to break passwords to launch an attack. As the technology begins to find more and more applications that affect our lives, this is a threat we need to take seriously.

At the heart of these hacks is the way deep-learning neural networks pull off their feats without understanding concepts. Such a network doesn't know what a face, a word or a stop sign is, but faced with a pattern of data it has already seen, it becomes able to recognise one and respond appropriately. "It's not forming a representation that's anything like what the brain does," says neuroscientist Bruno Olshausen at the University of California, Berkeley. "It's doing statistics on the pixels to extract features."

If you understand how a deep-learning system does this, you can work out what small changes are needed to make it think an image is something else entirely.
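To make that concrete, here is a minimal sketch of the best-known recipe for computing such changes, the "fast gradient sign" method. It isn't the exact attack used in the stop-sign stunt – the model and tensors here are stand-ins – but the principle is the same: follow the gradient of the network's error back to the pixels.

```python
# A minimal sketch of a gradient-based adversarial attack (FGSM).
# `model` is assumed to be any PyTorch image classifier; the nudge
# `epsilon` is kept small so the change is invisible to a human.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, true_label, epsilon=0.03):
    """Return an adversarial copy of `image` (a 1xCxHxW tensor)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)   # how wrong is the net?
    loss.backward()                                    # gradient w.r.t. pixels
    adversarial = image + epsilon * image.grad.sign()  # step that increases error
    return adversarial.clamp(0, 1).detach()            # keep pixel values valid
```

To a human the output looks identical to the input; to the network, the statistics of the pixels have shifted just enough to change the answer.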
One of the most discussed hacks came in late 2017, when Anish Athalye at the Massachusetts Institute of Technology and his colleagues managed to fool an AI into misclassifying multiple images of a model turtle taken at different angles just by altering the pattern on its shell. At one stage the AI was tricked into thinking the turtle was a rifle. The same principle can be applied to text, audio and video.

Athalye and his team have since shown that you can compose such adversarial attack images even when you aren't privy to an AI's inner workings. They showed Google's Cloud Vision image-recognising service an image of a dog and checked that the AI classified it correctly. This image was then gradually altered until it appeared to humans more and more like a photograph of two skiers standing in the snow. Even when the image had been undeniably transformed into this completely different scene, the algorithm's response still came back: "dog". In other words, you can hack an AI by trial and error. That sounds trivial – until you realise, for example, all the security-critical applications where face-recognition technology is already being employed.
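The trial and error can itself be automated. Here is a toy sketch of such a black-box attack, under one simplifying assumption: that the service reports a confidence score (the stand-in score_fn below) for the label the attacker wants to escape.

```python
# A toy black-box attack: no gradients, no access to the model's
# internals - just repeated queries and random nudges to the pixels.
import numpy as np

def blackbox_attack(score_fn, image, steps=10_000, eps=0.01):
    """Greedily lower score_fn(image), keeping only helpful nudges."""
    best = image.copy()
    best_score = score_fn(best)
    for _ in range(steps):
        candidate = np.clip(best + eps * np.random.randn(*best.shape), 0, 1)
        score = score_fn(candidate)
        if score < best_score:       # the nudge fooled the model a bit more
            best, best_score = candidate, score
    return best
```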
Tech giants are trying to fight back. In April 2018, IBM launched a toolbox to help AI researchers make neural networks resilient to attacks. It includes algorithms for creating adversarial images and ways of measuring how neural networks might respond to them.


AI hacking is a "pretty good illustration of how dumb deep learning really is", says psychologist Gary Marcus at New York University. The best way out of the bind would be to try to make AIs more like real brains. But this is a tall order, given we are nowhere near fathoming the grey matter between our ears. Perhaps we should instead aim for a kind of middle way, where AI may not know what it is looking at, but can apply common sense. Jim DiCarlo, a neuroscientist at MIT, says we need to find a way to help networks become more confident about their own interpretation. If most of the network is certain something is a dog, a few pixels that don't fit this classification shouldn't convince it otherwise.

Another option might be for AIs to focus not just on pixels, but to recognise larger features: does it see eyes, a nose, fur or a tail? Then the apparent disagreement of a few pixels could be discounted. How to do that is still unclear.

You may be unsurprised to hear that among those eager to solve the problem is the US military. Its research arm, DARPA, is running a competition to find artificial neural networks that can answer questions containing a tricky mix of real-world concepts. The idea is to develop AIs with common sense.

That is worth doing. "When people from government groups ask me about augmenting capabilities with machine learning, I always point out that a lot of it could be bypassed with adversarial examples," says Ian Goodfellow, a machine learning researcher at Apple.

It is one thing to hack an AI controlling a vehicle in a car park. It would be quite another to hack an AI controlling a weapon. ❚

DO YOU THINK LIKE A MACHINE?

Experiments with images that fool an AI suggest that human minds sometimes work like an artificial neural network.

Stare into these images of static and tell me what you see. That is, in effect, what cognitive scientists Chaz Firestone and Zhenglong Zhou at Johns Hopkins University in Maryland asked people to do in a recent study. The static images had been generated specifically to fool machine minds, even though there is no meaning to them as far as a human is concerned. Even so, when the people were prompted with images of a robin and asked to say which static image the AI would mistakenly classify as a robin, more of them got it right than would be expected by chance.

Answer: The AI thought the right-hand image was a robin.


WILL AI STEAL
ALL OUR JOBS...?
One thing about AI worries more people than any other: that it might take their
livelihood away. But experts are divided as to whether the technology will bring us
a life of leisure or a life of serfdom. As ever, the truth probably lies somewhere in
the middle. AI won’t take our jobs – but it will change them.

JOHN MAYNARD KEYNES always assumed that robots would take our jobs. According to the British economist, writing in 1930, it was all down to "our means of economising the use of labour outrunning the pace at which we can find new uses for labour". And that was no bad thing. Our working week would shrink to 15 hours by 2030, he reckoned, with the rest of our time spent trying to live "wisely, agreeably and well".

It hasn't happened like that – indeed, if anything many of us are working more than we used to. Advanced economies that have seen large numbers of manual workers displaced by automation have generally found employment for them elsewhere, for example in service jobs. The question is whether that can continue, now that artificial intelligence is turning its hand to all manner of tasks beyond the mundane and repetitive.

A survey in 2016 found that 82 per cent of people believe that AI will lead to job losses. Even if they don't usurp us, the fear is that AI could cut ordinary workers' salaries by reducing the value of human labour, allowing executives to hoover up the savings. Many economists suggest that increasing levels of automation are a significant factor behind a general rise in inequality in recent decades. So far automation has mainly affected blue-collar jobs. Now white-collar workers worry that AI will move on from being something that just curates Facebook feeds, and begin to displace accountants, surgeons, financial analysts, legal clerks and journalists.

In 2013 Carl Frey and Michael Osborne of the Oxford Martin Programme on the Impacts of Future Technology at the University of Oxford looked at 702 types of work and ranked them according to how easy it would be to automate them. They found that just under half of all jobs in the US could feasibly be done by machines within two decades.

The list included jobs such as telemarketers and library technicians. Not far behind were less obviously susceptible jobs, including models, cooks and construction workers, threatened respectively by digital avatars, robochefs and prefabricated buildings made in robot factories. The least vulnerable included mental health workers, teachers of young children, clergy and choreographers. In general, jobs that fared better required strong social interaction, original thinking and creative ability, or very specific fine motor skills of the sort demonstrated by dentists and surgeons.



Others find that list overblown. A working paper for the rich-world OECD club in 2016 suggested that AI would not be able to do all the tasks associated with all these jobs – particularly the parts that require human interaction – and only about 9 per cent of jobs are fully automatable. What's more, past experience shows that jobs tend to evolve around automation.

According to this more Keynesian view, technological progress will continue to improve our lives. The most successful innovations are those that complement rather than usurp us, says Ben Shneiderman, who founded the human-computer interaction lab at the University of Maryland. "Technologies are most effective when their designs amplify human abilities," says Shneiderman. They could help us solve problems, communicate widely, or create art, music and literature, he believes.

"Robots could be a liberating force by taking away routine work," says Tom Watson, a former MP and deputy leader of the UK's Labour Party. In 2016, he set up an expert commission on the effect of AI on employment that ended up concluding that AI could create as many jobs as it destroys. He is, however, concerned about the imbalance in power between those calling the robot shots and the rest of us. "We've got to be careful that big corporations and employers don't amass all the benefits while ordinary workers are left to lump the negatives," he says.

One arena where such questions are particularly pressing is the gig economy, in which AI systems serve up a platter of casual labour to a convenient app for consumers. Examples include the taxi firm Uber and outfits like TaskRabbit, which helps people find casual labourers to complete all sorts of chores. In such set-ups, workers are typically considered self-employed contractors, so the company has no obligation to keep supplying work or provide benefits like holiday pay or pensions. That has already led to strikes and legal disputes.

How can we adapt? The answer might simply be to update our social frameworks to reflect the new reality of work. Many countries are considering new regulatory frameworks for the gig economy. Another proposal is an AI tax on companies that are saving money by replacing workers with algorithms.

Others are thinking more radically about how to reconfigure our whole relationship with work. Proposals such as Universal Basic Income, giving everyone a certain minimum sum to cover housing, healthcare and living expenses, have been trialled in some countries, and aim among other things to smooth the inequalities that an AI-driven workplace might bring. That speaks to an important point: ultimately we, not AI, are in charge of our own destiny. Given the benefits of work for our health and well-being, maybe we'll opt not to abolish fulfilling, rewarding work. "There will be inequities and disruptions, but that's been going on for hundreds of years," says Shneiderman. "The question is: is the future human-centred? I say it is." ❚


... AND BREAK THE CLIMATE?

AI's data-hungry habits consume a lot of energy. How much that ends up contributing to global warming depends on the uptake of the technology and how efficient we can make it – but with computing already accounting for 5 per cent of the world's total electricity consumption, it's a real concern.

THERE'S at least one sense in which AlphaGo's iconic victory against Lee Sedol in 2016 wasn't a fair fight. The human Go champion's brain was consuming around 20 watts of power, with only a fraction of that being used for the game itself; meanwhile the AI was using some 5000 watts, and it wasn't doing anything else.

Increased energy consumption is one rarely discussed downside of AI. The technique behind most recent breakthroughs, deep learning on neural networks, involves performing ever more computations on ever more data. "The models are getting deeper and getting wider, and they use more and more energy to compute," says Max Welling of the University of Amsterdam, the Netherlands.

Take image recognition, for instance. The more you want a neural network to recognise – distinguishing not just dogs, but different breeds of dogs, say – the more complex your network must be. The most complex are now described by more than 100 billion parameters.
This is important for two reasons. First, energy is money: there's a ceiling to the amount any person or firm can spend on any technology. Second, our mobile devices can only use so much energy before they either overheat or their battery expires.

This is why voice assistants like Siri or Alexa need an internet connection for full functionality – your phone or smart speaker doesn't have the processing power needed to run the AI locally, or the space to store all the associated data. All this means that our use of AI can't keep expanding indefinitely unless we reduce its energy requirements. "What matters is how much intelligence we can squeeze out per joule," says Welling.

Driverless cars are a good example. According to a study from 2018, the extra sensors and processors required for a self-driving car will lead to them using 20 per cent more energy than conventional cars – and that's assuming the power for the processors can be limited to 200 watts, as opposed to about 10 times that currently. For taxi companies using AI to directly replace human drivers, the savings in wages would probably far outweigh the higher energy costs. But for ordinary car owners this would be a major issue – not to mention for the planet.

One workaround is to get rid of the excessive numerical precision in deep-learning algorithms that drives up processor requirements. "Even if you use fewer bits, you can train a neural network to get the same results," says Avishek Biswas of the Massachusetts Institute of Technology.
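As a sketch of what "fewer bits" means in practice, this is PyTorch's post-training dynamic quantization applied to a stand-in network, storing weights as 8-bit integers rather than 32-bit floating-point numbers:

```python
# A minimal sketch of reducing numerical precision to save energy and
# memory. The two-layer network here is a placeholder.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8  # 8-bit weights for linear layers
)
# The quantized model is roughly a quarter the size and cheaper to run,
# typically with only a small loss of accuracy.
```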

Many big tech companies are also developing specialised hardware for running AI. Nvidia, for instance, has produced a chip just for self-driving cars – although the latest version still uses 500 watts. Google, meanwhile, has created what it calls tensor processing units, or TPUs, designed to run its TensorFlow machine-learning framework, and is renting them out via the cloud.

Already such innovations have brought the energy overheads of AI down substantially – but that tends to be compensated by trying to get AI to do more. "Certainly energy use will keep on going up with more sophisticated AI," says Welling.

It's difficult to say what the overall effect of AI on global energy consumption will be. "Not much is known about it at this stage," says Anders Andrae of Huawei Technologies, who studies the energy consumption of information technology. "But it is very dangerous to say that this is nothing to care about."

"I think AI will generate even more data than we have seen," he says. "If more data means money, they will use artificial intelligence to make more data, and more data means more electricity." And until our electricity systems are 100 per cent renewable, more electricity means more global warming. ❚


TAKING
BACK
CONTROL
In his 1942 short story Runaround, Isaac Asimov introduced three famous laws of robot conduct: a robot should not harm human beings; it should obey human orders; and it should protect its own existence. That's science fiction, of course, but Asimov's idea of rigid rules that keep wayward robots in check has stuck around in the popular imagination. It has also been the starting point of many a real-world debate about how we better ensure AI serves everyone and doesn't harm anyone.

MADY DELVAUX, a member of the European Parliament for Luxembourg and author of an EU report on laws for robotics, compares the situation we now face with AI to when cars first started to appear on roads. "The first drivers had no rules," she says. "They each did what they thought sensible or prudent. But as technology spreads, society needs rules."

Whether AI can bend to one set of rules is another matter. Legislation that protects passengers and pedestrians from driverless cars going haywire, for example, will be powerless to stop data-scraping algorithms from influencing our vote. Medical robots programmed to diagnose and treat people will need different regulations from those sent onto the battlefield. "AI is not a single thing that can be regulated," says Steven Croft, the Bishop of Oxford, who sits on the UK House of Lords Select Committee on Artificial Intelligence. "There are different risks, different physical and emotional dangers."


FIVE COMMANDMENTS FOR AI

Any universal laws of AI conduct will probably include the following:

LAW #1: AI may not injure a human or allow a human to come to harm – unless it is being supervised by another human

LAW #2: AI must be able to explain itself

LAW #3: AI should resist the urge to pigeonhole

LAW #4: AI must not impersonate a human

LAW #5: AI should always have an off switch

Many of Isaac Asimov's stories explore the unintended consequences of robots trying to apply hard-and-fast laws in different scenarios. In Runaround, a robot gets stuck in a loop when it tries to satisfy the Second Law (obey the orders given to it by a human) and Third Law (protect its own existence) at the same time. "Asimov's laws are perfect for novels but not for putting into practice," says Delvaux.

Ultimately, these debates boil down to questions of ethics, rather than technology. What values do we as a society consider to be worth protecting? Croft stresses the need to avoid the "philosophy of extreme libertarianism" of companies in Silicon Valley or the totalitarian approach employed by countries such as China.

But are there any basic principles everyone can agree on? Not allowing AI systems to harm humans seems like a no-brainer, and a UN ban on autonomous weapons has widespread support. But some argue that letting robots kill might actually save lives, by reducing the number of lethal mistakes made by human soldiers. A similar argument is used to justify the occasional fatal accident with driverless cars and medical robots: we cannot cut the risk of harm to zero, but humans will be far safer with machines than we are with one another.

Many find such utilitarian reasoning unpalatable, however. "The taking of life in war can only be justified in the most extreme circumstances," says Croft. "Human judgement is therefore a very important part of the regrettable waging of war. Anything that moves away from that is ultimately dehumanising." Wherever robots have the capacity to inflict harm, therefore – in the operating theatre, on the battlefield or on the road – human oversight is essential.

← For more on AI on the road and at war, see pages 58 and 62

But it's not always immediately obvious where that capacity might lie – or how an artificially intelligent system might inflict harm just by doing what it's supposed to do. "There will be accidents," says Rebecca Crootof at Yale University. "The question is, will we let the harm fall where it lands, or will we figure out


how to hold appropriate entities accountable and ensure that victims are compensated?"

Delvaux believes that responsibility for AIs, and robotic technologies that exploit them, lies with the manufacturers. "If you put something on the market you should be liable for it," she says. She suggests that robots should be given an e-personality that would act as a handle for the group of people behind it, in much the same way as companies are treated as individuals under the law to ensure collective responsibility for their actions. "A corporation has humans behind it," she says. "It's the same for a robot."

The trouble is that liability law typically requires defendants to have foreseen the outcomes of their actions, but with AI software that learns, this may not be reasonable. To avoid such unpredictable behaviour from ever arising, one German commission suggested that there should be no self-learning components in driverless cars – which would prevent them ever being able to learn from mistakes.

What we can ask for instead is that an AI is able to explain its decisions. The AI Now Institute, a cross-disciplinary research centre at New York University, recommends that organisations involved in high-stakes domains like welfare, criminal justice, healthcare and education should no longer use black-box systems whose algorithms cannot be scrutinised. That may require new hybrid approaches to AI, where machine learning software is combined with techniques that are easier for humans to understand.

← Turn back to page 72 for specific instances of AI discrimination and bias

For Anne-Marie Imafidon, a trustee at the Institute for the Future of Work in Palo Alto, California, part of the key is to get machines to stop sharing our obsession with categorising things. When we train software, we typically ask it to label things – is this a dog or a cat? But once you lump something into a category – black male age 18-25, woman over 65 – a lot of assumptions come into play that may not be relevant to the task at hand. That's where biases creep in and things can start to go wrong, says Imafidon.

She cites Netflix as an example of categorisation done well. It tailors its recommendations of what to watch simply to individuals: people who liked this film also liked that one. Your gender and age have nothing to do with it. That still leaves open the big question of privacy, however: a system that knows our every individual preference can easily be used against us.
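A toy sketch of that kind of item-to-item recommendation, with invented ratings and no demographic categories anywhere:

```python
# "People who liked this film also liked that one": rank films by how
# similar their columns of user ratings are. The data is made up.
import numpy as np

ratings = np.array([   # rows = users, columns = films; 0 = unrated
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
], dtype=float)

def similar_films(film, ratings):
    """Return other films ordered by cosine similarity to `film`."""
    cols = ratings.T
    norms = np.linalg.norm(cols, axis=1)
    sims = cols @ cols[film] / (norms * norms[film] + 1e-9)
    sims[film] = -1.0                # exclude the film itself
    return np.argsort(-sims)

print(similar_films(0, ratings))     # films most like film 0
```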
purpose and function are all designed with some
And there are other big unresolved questions. As we become capable of creating AI, and robots incorporating AI, that are more and more human-like, we must consider the kind of interactions we are comfortable having with them. We have barely begun to consider what types of emotional relationships people might have with human-like companion robots, for example. Cultural differences may come into play. "In Japan, robots caring for the elderly is completely acceptable," says Ulrich Furbach of the University of Koblenz in Germany. "In Germany, it's considered inhuman."

But we can probably all agree that however lifelike a machine appears to be, we should always be able to tell whether we are in fact interacting with a machine or a human. "We can't prevent manufacturers from putting human-like robots on the market because consumers will buy them," says Delvaux. "But we should make people aware if they interact with a robot that it is not a human."

In all of this, however, there is a truism: in our increasingly automated world, it is easy to forget that machines are programmed, owned and operated by humans. Like any other invention, their decisions, purpose and function are all designed with some higher, all-too-human goal in mind: safety, comfort, efficiency, making a killing – literally or metaphorically. Designing laws for machines to follow is an entertaining thought experiment, but ultimately a distraction. The real robot laws need to be written to keep people in check, not machines.

"This is a debate for everyone in the world," says Croft. "As a human race, we need to access deep traditions of wisdom to address these fundamental questions. We cannot answer them through technology alone."

There is at least one technological fix we might all agree on, however. "A human should always be able to shut down a machine," says Delvaux. ❚


CAN ALGORITHMS
BE ACCOUNTABLE?
When AIs take decisions that affect our lives, we need to know why they do what they do, so we can hold them to account when they blunder, discriminate, or otherwise overstep the line. That means assessing the possible errors, motivations and biases that flow into them, just as we would with any human intelligence. That's easier said than done.

ALGORITHMS are not intrinsically mysterious things. They are simply sets of instructions that tell a computer how to perform a task. The trouble starts with the way the details of many in use today are proprietary – whether it's Google or Facebook's all-pervasive examples, or one determining your next mortgage or credit card application. The companies behind them want, of course, to protect their intellectual property and so continue to make money.

Governments around the world are increasingly on the tech companies' case. The European Union fired the first salvo with its General Data Protection Regulation, or GDPR, which came into force in 2018. Besides giving the public beefed-up rights of consent for how companies use data, it also introduces a right to be informed when automated decisions are made that affect your life, and to challenge the outcomes. If things go wrong, companies must give meaningful information about the decision. In many cases, they could plausibly be forced to give up their code to a government watchdog, which would go through it line by line to understand the decisions it makes.

Some call this a right to an explanation, but it is not clear how informative the explanations will be, given the black-box nature of deep-learning AIs, for example. "These things think in a very foreign way," says David Gunning at the US Defense Advanced Research Projects Agency (DARPA), which is interested in AI's potential to supercharge reconnaissance, among other things. "They use bizarre mathematical logic that is very alien to us."

← Turn back to chapter 2 for more on how AI works

For Regina Barzilay at the Massachusetts Institute of Technology, the answer lies in making AIs that can explain themselves. "Transparency helps build confidence," she says.

That is do-able. A team led by Trevor Darrell at the University of California, Berkeley, for instance, took a machine-learning system designed to identify bird species in photographs and bolted on another with the sole purpose of explaining how it arrives at its conclusions. For example, it correctly identified a picture of a white pelican because, it explained, "this bird has a white body, long neck, and large orange bill". Meanwhile Barzilay and her team have done something similar in a medical setting, working with an AI designed to predict the type of cancer a person has from their medical records. Here, the explanation doesn't come in the form of a line of text, but in a nod to the parts of the report that led the AI to its conclusion.

← Turn to page 52 for an interview with medical AI researcher Regina Barzilay

Training the system wasn't easy: the team had to manually annotate thousands of reports, which were then fed into the algorithm to teach the system to process documents itself. Prising open the black box in this way will always mean making trade-offs, says Gunning, who leads DARPA's multimillion-dollar Explainable AI project. "The highest-performing system will be the least explainable," he says. Machines can create far more complicated and intricate models of the world than most humans can comprehend. Ultimately, if this technology is going to be most useful when it goes beyond what humans can do, forcing it to explain itself could in many cases hold it back.

But perhaps AIs don't actually have to explain themselves. "You don't have to crack open the black box to demonstrate fairness," says Chris Russell at the Alan Turing Institute in London. Instead of explaining why something happens, Russell and his colleagues use a "counterfactual" approach: they tweak the inputs to demonstrate what would have to change to alter an AI's decision. Say you were denied a loan, for example: you might find that if your salary were £30,000 rather than £25,000, the loan would have been approved.

"What people want is to understand the decision, so that they can either challenge it or have an indication of what would need to change to alter it," says Sandra Wachter at the Oxford Internet Institute in the UK, who worked with Russell to develop the technique.
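The approach is easy to sketch. Here is a toy version with a stand-in loan model: nudge one input until the decision flips, then report the value that did it.

```python
# Counterfactual explanation by brute force: raise the salary until the
# (hypothetical) model approves the loan, or give up at a cap.
def counterfactual_salary(model, applicant, step=500, cap=100_000):
    probe = dict(applicant)              # don't mutate the original
    while probe["salary"] <= cap:
        if model(probe) == "approved":
            return probe["salary"]       # the salary that flips the decision
        probe["salary"] += step
    return None                          # salary alone doesn't explain it

# e.g. counterfactual_salary(loan_model, {"salary": 25_000, "age": 40})
# might return 30_000: "had you earned £30,000, you'd have been approved".
```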
Anupam Datta at Carnegie Mellon University in Pittsburgh, Pennsylvania, is using a similar approach to root out biased and discriminatory AIs. He and his colleagues propose testing them by tweaking inputs such as gender or ethnicity, and seeing whether the outcome changes. For example, if two people who differed only in ethnic origin weren't given the same likelihood of committing a crime in the future, that would indicate that the system may be biased. The technique could form part of a certification system that every algorithm must go through before it is released, says Datta. "It can also be used on systems already in use," he adds, so biased AI can be exposed and challenged under relevant laws.
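In code, the probe is as simple as it sounds. A toy version, with a stand-in scoring model:

```python
# Flip a single protected attribute and see whether the model's output
# moves; everything else about the person stays identical.
def disparity(model, person, attribute, value_a, value_b):
    a = dict(person, **{attribute: value_a})
    b = dict(person, **{attribute: value_b})
    return model(a) - model(b)   # a non-zero gap suggests possible bias

# e.g. disparity(risk_model, person, "ethnicity", "A", "B") should be
# (close to) zero for a model that genuinely ignores ethnicity.
```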
The trouble with the counterfactual approach is that it works best when reasonably simple bits of information are used to make a decision – a few personal details, say. It is a lot trickier when there is an almost continuous stream of data to analyse, as in the case of an AI behind the wheel of a self-driving car.

But some argue that even in life-or-death scenarios, we may not always need AI to show its workings. In 2017, Kilian Weinberger of Cornell University asked an audience at the Neural Information Processing Systems conference in Long Beach, California, to imagine they had a heart disease that required surgery. There is a 10 per cent fatality rate if a human performs the procedure, but only a 1 per cent fatality rate if a robot does it. If the surgeon makes a mistake, they can explain it: sorry, I cut the wrong artery. But the robot can't, because it uses machine-learning software. "Which one would you pick?" asked Weinberger.

Assuming the error rates are accurate, you would trust the robot, he argued. We take these leaps of faith all the time. We have been using aspirin for thousands of years, initially in the form of willow bark, but didn't understand how it worked until the 1970s. "You don't have to understand why a drug works to get it approved by the regulators," said Weinberger. "You just have to show that it does."

That said, it is not only a trust issue – it is also about legal responsibility. Elaine Herzberg was the first pedestrian victim of a driverless car, killed by an Uber vehicle as she pushed her bicycle across a highway in Tempe, Arizona, at night. Her death brought into sharp focus the question of how an AI can be held to account in the same way a human would be. This stuff is no longer hypothetical, and the stakes are high for all of us. "Society needs to understand what's happening, so that we can ask about what kind of world we want," says Adrian Weller at the University of Cambridge. ❚


INTERVIEW IYAD RAHWAN

"AI IS A NEW KIND OF AGENT IN THE WORLD"
How do we learn to live with AI? That's not a trivial question, says Iyad Rahwan. He investigates the challenges it throws up as the director of the Center for Humans & Machines at the Max Planck Institute for Human Development in Berlin, Germany, and at the MIT Media Lab. We wouldn't buy a self-driving car that would kill us to save pedestrians, he says – and that's why we must fundamentally rethink how we make AI behave.

Why do we need to keep tabs on algorithms?
We have a new kind of agent in the world. It's not an animal or a human, it's a machine. Today, only computer scientists explore the behaviours of algorithms. My team is studying machine behaviour as we would a human, animal or corporation – in the wild.

Are you saying algorithms behave like animals?
Animals are much more complicated, yet we have a long history of studying their behaviour. It's the same for human psychology: we are far from understanding the brain, but we know a lot about human behaviour. Even if we could see behind corporate walls and access algorithmic source codes, we wouldn't understand their impact on society. The crucial thing is that algorithms act in the world, so we should look at their behaviour in the world.

Algorithms already direct many aspects of our lives. Aren't we a little late to the party?
I'm not as pessimistic as that, but we do need to catch up because we are frequently taken by surprise by the consequences of algorithmic technologies. A good example is Facebook's newsfeed. It's just a ranking algorithm that serves news and posts from your friends. People thought it would usher in an era of democratic sophistication, connecting people with views outside their own, but what happened is exactly the opposite. There will be more unpleasant surprises, and my role is to try to shed light on these.



You once created a psychopathic AI. Why?
The AI we created, dubbed Norman, sees violence and horror in every image it looks at. It is far from what a human psychopath might be like, but we launched it to popularise an important notion in machine learning: that if an algorithm exhibits undesirable bias, the culprit is often the data used to train it. Norman was an evocative way to explain that point to a layperson.

What is appropriate behaviour for an AI system?
Part of my work on machine ethics explores what people think is appropriate. What do humans expect a machine to do, and how do humans evaluate mistakes and moral violations by machines?

Do people perceive machine mistakes differently to human ones?
They do. We take machines to be systematic, so when an algorithm makes a mistake, we distrust it more than a similar mistake by a human. But, crucially, when there is shared control between a human and a machine and a mishap occurs, people throw more blame on the human. I have found this by looking at autonomous vehicle accidents.

Such as the case of the self-driving Uber car that hit and killed Elaine Herzberg?
Yes. With the Uber accident, people were very quick to blame the human in the driver seat.

Is there such a thing as optimal ethical behaviour for a self-driving car?
Consider Isaac Asimov's laws of robotics, designed to prevent robots harming humans. Thinking about AI has been dominated by the idea that once we find perfect rules for ethical conduct, a machine can derive the correct behaviour in any given situation. That's a misguided approach. My Moral Machine project has quizzed more than 4 million people on a wide range of road-accident scenarios, and that work suggests there can be no universal rules for automated vehicles.

So what do you propose?
I'm pushing for a negotiated social-contract approach. As a society we want to get along well, but to do it we need property rights, free speech, protection from violence and so on. We need to think about machine ethics in the same way. People are happy, in the abstract, to endorse a car that might sacrifice its driver to save multiple pedestrians, but they certainly don't want to buy that car. If you leave it up to consumers, they will buy cars that prioritise them at all cost. And that's what an "every AI for itself" approach could bring about: AI that caters only for the preferences of consumers who can afford them, rather than to the public good.

Are you confident that AI won't outpace society's ability to control it?
It's very tempting to think of AI as this monolithic entity that will somehow, suddenly, get too powerful and become unstoppable, like Skynet in the Terminator movies. That's a very unlikely scenario. More likely is what we have today: different groups who struggle for financial success and power, and complex social processes of competition and regulation. I think they will evolve to the next level and be AI-augmented, but they won't fundamentally change.
We want AI to invent new drugs and power social media, but we also want to protect the weak and make sure AI, and those who control it, don't amass too much power. That's a political process and we need to acknowledge that.

What worries you most about where this technology is heading?
Technology can be a force for good, but also a force for evil. This goes all the way back to the invention of fire. I worry about its politicisation. Cultural wars that are happening today about how we run society, whether we should have more progressive values or more conservative values, and how you negotiate those things… I worry that these fights will slip to algorithms.

What are the risks of an algorithmic culture war?
A politician eventually moves on, retires or you can embarrass them into changing tack. But algorithms that, say, allocate resources and influence hiring decisions are more entrenched and hidden. That becomes dangerous because government systems may get locked into a particular set of values and become unchangeable as society evolves.

So when it comes to societies, one size should not fit all, so to speak?
That's right. I was born in Syria, but I lived for many years in the United Arab Emirates, Australia and the US, and I've seen various ways of running society. These experiences have always coloured my reaction to this tendency of analytically minded people to pursue universal goals of ethics for AI. I'm very suspicious of that sort of thinking.

But still, which way is best?
(Laughs) It's impossible to say. ❚


CHAPTER 6
WILL AI TAKE OVER?



Even the limited instances of artificial intelligence we have developed
so far have huge capability both to improve our lives and to disrupt them.
It is natural to wonder what further developments will bring – both for
machine intelligences, and for us.

Could superintelligences of our creation in some way turn


against us or supersede us on Earth? That idea is a staple both of science
fiction and discussions surrounding the future of AI, and it makes a natural
finishing point for our discussions. But while machines may well in the
long run outsmart us, there are any number of reasons to believe they’ll
never usurp us, argues Toby Walsh




HOWEVER you look at it, the future
appears bleak. The world is under
immense stress environmentally,
economically and politically. It’s hard
to know what to fear the most. Even
our own existence is no longer certain.
Threats loom from many possible
directions: a giant asteroid strike,
global warming, a new plague, or
nanomachines going rogue and
turning everything into grey goo.
Another threat is artificial intelligence. In December
2014, Stephen Hawking told the BBC that “the
development of full artificial intelligence could spell
the end of the human race… It would take off on its
own, and redesign itself at an ever increasing rate.
Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded." In 2016, he followed that up by saying that AI is likely "either the best or worst thing ever to happen to humanity". Other prominent people, including Elon Musk, Bill Gates and Steve Wozniak, have made similar predictions about the risk AI poses to humanity.

PROFILE
TOBY WALSH
Toby Walsh is professor of artificial intelligence at the University of New South Wales in Sydney, Australia, and author of books including Machines that Think: The Future of Artificial Intelligence and 2062: The World that AI Made.

Hawking's fears revolve around the idea of the technological "singularity". This is the point in time at which machine intelligence starts to take off, and a new, more intelligent species starts to inhabit Earth. We can trace the idea of the technological singularity back to a number of different thinkers including John von Neumann, one of the founders of computing, and the science fiction author Vernor Vinge.

The idea is roughly the same age as research into AI itself. In 1958, mathematician Stanisław Ulam wrote a tribute to the recently deceased von Neumann, in
which he recalled: “One conversation centered on the
ever accelerating progress of technology and changes
in the mode of human life, which gives the appearance
of approaching some essential singularity… beyond
which human affairs, as we know them, could not
continue”.
More recently, the idea of a technological singularity
has been popularised by Ray Kurzweil, who predicts it
will happen around 2045, and Nick Bostrom, who has
written a bestseller on the consequences.
There are several reasons to be fearful of machines
overtaking us in intelligence. Humans have become
the dominant species on the planet largely because we
are so intelligent. Many animals are bigger, faster or
stronger than us. But we used our intelligence to invent
tools, agriculture and amazing technologies like steam
engines, electric motors and smartphones. These have
transformed our lives and allowed us to dominate the
planet.

← See chapter 1 for more on the nature of human and artificial intelligence
It is therefore not surprising that machines that think –
and might even think better than us – threaten to usurp
us. Just as elephants, dolphins and pandas depend on
our goodwill for their continued existence, our fate in
turn may depend on the decisions of these superior
thinking machines.
The idea of an intelligence explosion, when
machines recursively improve their intelligence and
thus quickly exceed human intelligence, is not a
particularly wild idea. The field of computing has
profited considerably from many similar exponential



trends. Moore's law predicted that the number of transistors on an integrated circuit would double every two years, and it has pretty much done so for decades. So it is not unreasonable to suppose AI will also experience exponential growth.

Like many of my colleagues working in AI, I predict we are just 30 or 40 years away from AI achieving superhuman intelligence. But there are several strong reasons why a technological singularity is improbable.

The "fast-thinking dog" argument
Silicon has a significant speed advantage over our brain's wetware, and this advantage doubles every two years or so according to Moore's law. But speed alone does not bring increased intelligence. Even if I can make my dog think faster, it is still unlikely to play chess. It doesn't have the necessary mental constructs, the language and the abstractions. Steven Pinker put this argument eloquently: "Sheer processing power is not a pixie dust that magically solves all your problems."

Intelligence is much more than thinking faster or longer about a problem than someone else. Of course, Moore's law has helped AI. We now learn faster, and off bigger data sets. Speedier computers will certainly help us to build artificial intelligence. But, at least for humans, intelligence depends on many other things including years of experience and training. It is not at all clear that we can short-circuit this in silicon simply by increasing the clock speed or adding more memory.

The anthropocentric argument
The singularity supposes human intelligence is some special point to pass, some sort of tipping point. Bostrom writes: "Human-level artificial intelligence leads quickly to greater-than-human-level artificial intelligence… The interval during which the machines and humans are roughly matched will likely be brief. Shortly thereafter, humans will be unable to compete intellectually with artificial minds."

If there is one thing that we should have learned from the history of science, it is that we are not as special as we would like to believe. Copernicus taught us that the universe does not revolve around Earth. Darwin showed us that we are not so different from other apes. Watson, Crick and Franklin revealed that the same DNA code of life powers us and the simplest amoeba. And artificial intelligence will no doubt teach us that human intelligence is itself nothing special. There is no reason to suppose that human intelligence is a tipping point that, once passed, allows for rapid increases in intelligence.

Of course, human intelligence is a special point because we are, as far as we know, unique in being able to build artefacts that amplify our intellectual abilities. We are the only creatures on the planet with sufficient intelligence to design new intelligence, and this new intelligence will not be limited by the slow process of human reproduction and evolution. But that does not bring us to the tipping point, the point of recursive self-improvement. We have no reason to suppose that human intelligence is enough to design an artificial intelligence that is sufficiently intelligent to be the starting point for a technological singularity.

Even if we have enough intelligence to design super-human artificial intelligence, the result may not be adequate to precipitate a technological singularity. Improving intelligence is far harder than just being intelligent.

The "diminishing returns" argument
The idea of a technological singularity supposes that improvements to intelligence will be by a relative


constant multiplier, each generation getting some fraction better than the last. However, the performance of most of our AI systems has so far been that of diminishing returns. There are often lots of low-hanging fruit at the start, but we then run into difficulties when looking for improvements. This helps explain the overly optimistic claims made by many of the early AI researchers. An AI system may be able to improve itself an infinite number of times, but the extent to which its intelligence changes overall could be bounded. For instance, if each generation only improves by half the last change, then the system will never get beyond doubling its overall intelligence.
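A one-line sum makes that bound explicit. If the first self-improvement adds one unit of intelligence and each later generation adds half the previous gain, the total is a geometric series:

$$1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \dots = \sum_{k=0}^{\infty} \frac{1}{2^k} = 2$$

However many generations of self-improvement you allow, the machine never gains more than twice the first improvement.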
The "limits of intelligence" argument
There are many fundamental limits within the universe. Some are physical: you cannot accelerate past the speed of light, know both position and momentum with complete accuracy, or know when a radioactive atom will decay. Any thinking machine that we build will be limited by these physical laws. Of course, if that machine is electronic or even quantum in nature, these limits are likely to be beyond the biological and chemical limits of our human brains. Nevertheless, AI may well run into some fundamental limits. Some of these may be due to the inherent uncertainty of nature. No matter how hard we think about a problem, there may be limits to the quality of our decision-making. Even a super-human intelligence is not going to be any better than you at predicting the result of the next EuroMillions lottery.

The "computational complexity" argument
Finally, computer science already has a well-developed theory of how difficult it is to solve different problems. There are many computational problems for which even exponential improvements are not enough to help us solve them practically. A computer cannot analyse some code and know for sure whether it will ever stop – the "halting problem". Alan Turing, the father of both computing and AI, famously proved that such a problem is not computable in general, no matter how fast or smart we make the computer analysing the code. Switching to other types of device like quantum computers will help. But these will only offer exponential improvements over classical computers, which is not enough to solve problems like Turing's halting problem. There are hypothetical hypercomputers that might break through such computational barriers. However, whether such devices could exist remains controversial.
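Turing's proof can be sketched in a few lines. Suppose, hypothetically, that a function halts(program, input) existed that always answered correctly; you could then write:

```python
# A sketch of Turing's diagonal argument. halts() is the hypothetical
# oracle - it cannot actually be written, which is the point.
def paradox(program):
    if halts(program, program):  # oracle says "this halts"...
        while True:              # ...so do the opposite and loop forever
            pass
    # oracle says "this loops forever", so halt immediately

# Does paradox(paradox) halt? If halts() says yes, paradox loops forever;
# if it says no, paradox halts at once. Either way the oracle is wrong,
# so no general halts() function can exist.
```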
TWO FUTURES

So there are many reasons why we might never witness a technological singularity. But even without an intelligence explosion, we could end up with machines that exhibit super-human intelligence. We might just have to program much of this painfully ourselves. If this is the case, the impact of AI on our economy, and on our society, may happen less quickly than people like Hawking fear. Nevertheless, we should start planning for that impact.

Even without a technological singularity, AI is likely to have a large impact on the nature of work. Many jobs, like taxi and truck driver, are likely to disappear in the next decade or two. This will further increase the inequalities we see in society today. And even quite limited AI is likely to have a large influence on the nature of war. Robots will industrialise warfare, lowering the barriers to war and destabilising the current world order. They will be used by terrorists and rogue nations against us. If we don't want to end up with Terminator, we had better ban robots on the battlefield soon. If we get it right, AI will help make us all healthier, wealthier and happier. If we get it wrong, AI may well be one of the worst mistakes we ever get to make. ❚

