New Scientist Essential Guide No. 2, 2020

REGINA BARZILAY
GARRY KASPAROV

NEW SCIENTIST ESSENTIAL GUIDE №2
ARTIFICIAL INTELLIGENCE
THE PAST, PRESENT AND FUTURE OF MACHINES THAT THINK
EDITED BY RICHARD WEBB
THE second in our series of Essential Guides examines perhaps the most revolutionary technological development in a generation, and one that is happening with bewildering …
COVER: CHRIS SEARLE; ABOVE: SHUOSHU/ISTOCKPHOTO
WHAT AI IS – AND ISN’T
Like any transformative new technology, artificial intelligence brings both risks and opportunities. To truly understand what they are, we must first cut through the hype and get to grips with some very basic questions about what learning machines can and can’t do. Peter Norvig sets the scene.
PLUS
p. 10 Turing’s legacy
p. 12 What is intelligence, anyway?

HOW MACHINES LEARN
What does it mean to say that a machine learns? The core idea of artificial intelligence sounds almost magical, but the reality as implemented today is quite mechanical, as Nello Cristianini explains – it’s all down to probability, statistics and a heck of a lot of data.
PLUS
p. 20 A timeline of AI
p. 22 What makes machine learning different?
p. 23 Data, data, data
p. 25 The power of deep learning
p. 27 What is a neural network?

AI VS HUMANS
The story of artificial intelligence so far can be told through a series of episodes where machines have beaten us at our own game – literally. Games from chess to Go have proved excellent test beds for AI, offering rules-based challenges that are familiar to humans and mimic conditions of everyday life.
p. 30 Deep Blue 2 vs Garry Kasparov, 1997
p. 32 IBM Watson vs Jeopardy!, 2011
p. 34 AlphaGo vs Lee Sedol, 2016
p. 37 Libratus vs Texas hold’em poker, 2017
p. 38 AlphaGo Zero vs AlphaGo, 2017
p. 39 AI + human vs human, now and the future
PLUS
p. 35 INTERVIEW: Garry Kasparov “We don’t need to lose out to machines”
Where do the limits of machine minds lie? Mathematician Marcus du Sautoy kicks off our survey of the cutting edge by discussing the potential for AI creativity, and strategy researcher Kenneth Payne closes it by looking at how AI will affect the conduct of war.
PLUS
p. 45 Who owns AI art?
p. 46 Appliance to science
p. 48 The promise of AI medicine
p. 52 INTERVIEW: Regina Barzilay “The real power comes when you put human and AI together”
p. 55 March of the intelligent robots
p. 58 The driverless car challenge
p. 60 Into fifth gear
p. 65 The military-AI complex

Artificial intelligence stands accused of all sorts of deleterious effects on society. As is often the case with AI, the questions start with data – should we be feeding it so much for free?
PLUS
p. 70 Fake news!
p. 71 The power of deepfakes
p. 72 When AI discriminates
p. 76 Big brother AI is watching you
p. 77 Do you think like a machine?
p. 78 Will AI steal our jobs...?
p. 80 ...and break the climate?
p. 82 Take back control
p. 83 Five commandments for AI
p. 85 Can algorithms be accountable?
p. 87 INTERVIEW: Iyad Rahwan “AI is a new type of agent in the world”

Could superintelligences of our creation somehow turn against us or supersede us on Earth? That idea is a staple both of science fiction and of discussions surrounding the future of AI. But while machines may well in the long run outsmart us, there are any number of reasons to believe they’ll never usurp us, argues Toby Walsh.
A modern approach
→ Chapter 2 has more on the nuts and bolts of AI
→ Turn to chapter 5 to read about the challenges AI poses to society today
→ Will AI ever overtake human intelligence? Turn to chapter 6 for more
matching – skills that the machine minds powering search engines, face-recognition technologies and the like are already getting rather good at.
for society known as the AI winters. It was not until the late 1990s that many of the advances predicted in 1956 started to happen. Before this wave of success, the field had >
augmented along the way with data from your own usage. The software can literally learn your style. The same basic algorithm can handle different languages, adapt to different users and incorporate words and phrases it has never seen before, such as your name or street. The quality of its suggestions will depend mostly on the quantity and quality of data on which it is trained. So long as the data set is sufficiently large and close in topic to what you are writing, the suggestions should be helpful. The more you use it, the more it learns the kinds of words and expressions you use. It improves its behaviour on the basis of experience, which is the definition of learning.

Note that a system of this type will probably need to be exposed to hundreds of millions of phrases, which means being trained on several million documents. That would be difficult for a human, but is no challenge at all for modern hardware.

The next step up in complexity is a product recommendation agent. Consider your favourite online shop. Using your previous purchases, or even just your browsing history, the agent will try to find the items in its catalogue that have the highest probability of being of interest to you. These will be computed from the analysis of a database containing millions of transactions, searches and items. Here, too, the number of parameters that need to be extracted from the training set can be staggering: Amazon, the world’s largest online retailer, has hundreds of millions of customers and tens of millions of product lines. Matching users to products on the basis of previous transactions requires statistical analysis on a massive scale. As with autocomplete, no traditional understanding is required – it does not need psychological models of customers or literary criticism of novels. Each of the basic mechanisms is simple enough that we might call it a statistical hack, but when we deploy many of them simultaneously in complex software, and feed them with millions of examples, the result might look like highly adaptive behaviour that feels intelligent to us. Yet, remarkably, the agent has no internal representation of why it does what it does.

→ Chapter 4 discusses cutting-edge AI technologies

The underlying algorithms can get more complicated. Online retailers keep track not just of purchases, but also of any user behaviour during a visit to the site. They might track information such as which items you have added to your basket but later removed, which you have rated and what you have added to your wish list. Yet more data can be extracted from a single purchase: time of day, address, method of payment, even the time it took to complete the transaction. And this, of course, is done for millions of users.

As customer behaviour tends to be rather uniform, this mass of information can be used to constantly refine the agent’s performance. Some learning algorithms are designed to adapt on the fly; others are retrained offline every now and then. But they all use the multitude of signals extracted from your actions to adapt their behaviour. In this way, they constantly learn and track our preferences. It is no wonder that we sometimes end up buying a different item from the one we thought we wanted.

Intelligent agents can even propose items just to see how you respond. Extracting information in this way can be as valuable as completing a sale. Online retailers act in many ways as autonomous learning agents, constantly walking a fine line between the exploration and exploitation of their customers. Learning something they did not know about you can be as important as selling something. To put it simply, they are curious.

Consider now that these nuts and bolts of >
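The statistical machinery the passage describes needs nothing more exotic than counting. Here is a toy sketch, not the code of any real product, of a next-word suggester: it counts which word follows which in a training corpus (the corpus below is invented for the example) and proposes the most frequent followers. This is the kind of simple "statistical hack" whose suggestions improve as the data grows.

```python
from collections import Counter, defaultdict

def train(corpus: str) -> dict:
    """Count, for each word, how often each other word follows it."""
    follows = defaultdict(Counter)
    words = corpus.lower().split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def suggest(follows: dict, word: str, k: int = 3) -> list:
    """Suggest the k words most often seen after `word`."""
    return [w for w, _ in follows[word.lower()].most_common(k)]

corpus = (
    "the cat sat on the mat "
    "the cat ate the fish "
    "the dog sat on the rug"
)
model = train(corpus)
print(suggest(model, "sat"))   # -> ['on']
print(suggest(model, "the"))   # 'cat' comes first: it follows 'the' most often
```

Feed it more text and the suggestions simply get better; improving with experience is, as the passage says, the definition of learning.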
1950: Alan Turing publishes the seminal paper “Computing machinery and intelligence”. Its opening sentence is “I propose to consider the question, ‘Can machines think?’ ”

1956: The term “artificial intelligence” is coined at a workshop at Dartmouth College.

1959: Computer scientists at Carnegie Mellon University create the General Problem Solver, a program that can solve logic puzzles.

1973: The first “AI winter” sets in as progress stalls, and funding and interest dry up.

1975: A system called MYCIN diagnoses bacterial infections and recommends antibiotics using deduction based on a series of yes/no questions. It was never used in practice.

1987: The second AI winter begins.

1988: IBM researchers publish a paper entitled “A statistical approach to language translation”, setting out a new way to use data, rather than rules, to determine programming outcomes.

1989: NASA’s AutoClass program uses this probabilistic approach to discover several previously unknown classes of stars in telescope data.
WHAT MAKES MACHINE LEARNING DIFFERENT?

With traditional computer programs, the machine gets line-by-line instructions. With machine learning, however, the computer must work out how best to solve the problem. The result is a machine that essentially programs itself.

Imagine instructing a robot to make soup. The conventional approach would be to write out a precise recipe for SoupBot to follow: first peel the onion, then cut the onion. But a SoupBot based on machine learning would instead work out what to do on its own, perhaps by watching thousands of videos of people making soup and trying to come up with its own soup-like recipe, or by attempting to make soup again and again and learning from feedback on the results of each attempt.

In the case of SoupBot, the conventional approach would be most efficient. But simple recipes don’t exist in many scenarios. There isn’t one for recognising words in a sound recording, say, or for verifying a face to unlock a phone. And this is where machine learning comes into its own. By working out how to quickly spot patterns in vast amounts of data, an AI can master exceedingly complex tasks.

rank the answers for you, translate a document among the search results and select which ads to display. And this is just on the surface.

Unknown to users, the system will probably also be running tests to compare the performance of different methods by using them on different random subsets of users. This is known as A/B testing. Every time you use an online service, you are giving it a lot of information about the quality of the methods being tested behind the scenes. All this is on top of the revenue you generate for them by clicking on ads or buying products.

→ Chapter 5 has a thorough discussion of the implications of data-driven AI for society

While each of these mechanisms is simple enough, their simultaneous and constant application on a vast scale results in a highly adaptive behaviour that looks intelligent to us. Using the same or similar statistical techniques, in multiple parts of a system and at various scales, computers can now learn to recognise faces, transcribe speech, translate text from one language to another and answer questions. According to some online dating companies, they can even find us potential love matches. Integrated into larger systems, those can power products and services ranging from Siri and Amazon Echo to autonomous cars.

Nonetheless, every time we understand one of the mechanisms behind AI, we cannot help feeling a little cheated. AlphaGo, for example, learned its winning strategies by studying millions of past matches and then playing against various versions of itself for millions of further matches. An impressive feat. But such AI systems generate adaptive and purposeful behaviour without needing the kind of self-awareness that we like to consider the mark of “real” intelligence. Would Lovelace dismiss their suggestions as unoriginal? Possibly, but while the philosophers debate, the field keeps moving forward. ❚
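The A/B testing described above can be sketched in a few lines. Everything here is invented for illustration: users are hashed deterministically into bucket A or B, each bucket is served a different method, and the service compares click-through rates (simulated below with assumed probabilities, not real data).

```python
import hashlib
import random

def bucket(user_id):
    """Deterministically assign a user to variant A or B."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Simulate serving two ranking methods whose true click-through
# rates (invented for this simulation) differ slightly.
random.seed(42)
CLICK_PROB = {"A": 0.10, "B": 0.12}
shown = {"A": 0, "B": 0}
clicked = {"A": 0, "B": 0}

for i in range(20_000):
    variant = bucket(f"user-{i}")   # the same user always sees the same variant
    shown[variant] += 1
    if random.random() < CLICK_PROB[variant]:
        clicked[variant] += 1

for v in ("A", "B"):
    print(f"variant {v}: {clicked[v] / shown[v]:.3f} click-through")
```

Hashing rather than coin-flipping matters: a user must see the same variant on every visit, or the comparison is contaminated.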
Chapter 3 | Humans vs AI | 25
recognise two images as being of the same thing? Or, given a set of photos, how would you identify all the ones of a football match? A programmer could write an algorithm to look for typical features like goalposts, but it’s a lot of work.

In general, this sort of complex pattern recognition can’t be programmed directly. In response, software engineers have above all resorted to a specific machine learning technique known as deep learning. While this sounds exotic, it is actually another form of the data-driven approach, using big data to adjust millions of parameters. It relies on a programming architecture called neural networks, and it has more than a whiff of the original conception of AI mimicking the human brain put forward by Alan Turing and others. By training the network on many instances of relevant data, it tweaks its own parameters in response to “right” answers, and so eventually manages to spot patterns itself (see “What is a neural network?”).

← Turn back to page 10 for more on Alan Turing’s contributions to AI

This is an incredibly powerful technique that can be applied in all sorts of situations. A robot vacuum cleaner, for example, can be trained by being shown thousands of examples of humans vacuuming rooms along with the relevant sensor inputs. By strengthening the relevant connections, feedback loops are created by which a neural network vacuum can then eventually learn which patterns of inputs correspond to which actions, so that it can clean the room by itself. The challenge of making truly autonomous vehicles is just a scaled up version of the same problem, with far more unpredictable inputs – and the stakes potentially much higher if things go wrong.

→ Turn to page 58 for more on the challenges of programming driverless cars

Neural networks have been around since the 1940s and 1950s, but only recently have they started to have much success. The change of fortunes is due to the huge rise in both the amount of data we produce and the amount of computer power available.

These AIs require anywhere from thousands to millions of examples to learn how to do something. But now millions of videos, audio clips, articles, photos and more are uploaded to the internet every minute, making it much easier to get hold of suitable data sets – especially if you are a researcher at one of the large technology companies that hold information about their customers. Processing these data sets and training AIs with them is a power-hungry task, but processing power has roughly doubled every two years since the 1970s, meaning modern supercomputers are up to the task.

AlphaGo, the AI created by Google-owned DeepMind to play the ancient Chinese board game Go, and which beat the human champion Lee Sedol in 2016, is perhaps the most famous deep-learning AI. It had no strategies directly programmed into it, not even the rules of the game. But after viewing thousands of hours of human play, and then refining its technique by playing against itself, AlphaGo became the best Go player in the world.

→ You’ll find the full story of AlphaGo on page 34

With modern hardware and giant data sets, neural networks deliver the best performance in certain perceptual tasks involving pattern recognition, most
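As a cartoon of the training loop just described, here is a single artificial neuron, vastly smaller than any real deep network, nudging its weights whenever its guess disagrees with the “right” answer. The data (the logical OR function) and the learning rate are invented for the example; real networks do the same kind of parameter-tweaking across millions of weights.

```python
# A single neuron learning the OR function from labelled examples.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
weights = [0.0, 0.0]
bias = 0.0
rate = 0.1

def predict(x):
    total = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1 if total > 0 else 0

for epoch in range(10):                 # repeated passes over the data
    for x, target in examples:
        error = target - predict(x)     # zero when the guess is right
        weights = [w + rate * error * xi for w, xi in zip(weights, x)]
        bias += rate * error

print([predict(x) for x, _ in examples])   # -> [0, 1, 1, 1]
```

Nothing here "understands" OR; the weights simply settle wherever the feedback pushes them, which is the point the passage makes about pattern-spotting.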
CHAPTER 3
Games such as chess and Go are excellent test beds for machine
learning, offering challenges that are familiar to humans and
mimic conditions of everyday life. Generally, there’s been only
one winner. Just ask Garry Kasparov.
Deep Blue 2 vs Garry Kasparov, 1997

This match remains seared on many minds as the first time a silicon mind beat the best human competitor at an iconic intellectual pursuit. Ultimately, however, this was a victory more for computing brawn than brain.

“I’M NOT afraid to say that I’m afraid,” said Garry Kasparov, the world chess champion, after the fifth game of his six-game 1997 series against Deep Blue 2. The match between man and machine was all square at 2½ points apiece as Kasparov spoke to the chess aficionados and world press assembled at the Equitable Center on New York’s Seventh Avenue.

The statement raised to an extraordinary pitch the tension surrounding the final game the next day. Kasparov was fiery, flamboyant and a merciless chess competitor, and people loved him for it. When he had first taken on Deep Blue, his current opponent’s first iteration, in a six-game match the year before, he had an historically unprecedented rating. After a wobble losing the first game, to few people’s surprise Kasparov won the series 4-2.

But in the second game of the 1997 rematch, catastrophe had overtaken the best chess mind of his era. Flustered by his opponent’s relentless play, Kasparov had needlessly resigned when he could have forced a draw, leaving the series poised on a knife edge. You could forgive him a little fear.

And then it came. On his seventh move of the sixth game, Kasparov played the wrong pawn in a well-known opening, and Deep Blue 2 was quick to spring a trap from which there was little escape. Kasparov’s face on the big screen in the auditorium stared in horror. Then he buried his head in his hands. After Deep Blue 2 had played its 19th move, Kasparov resigned the game, and the match. It was the shortest losing game of his career.

This had been a battle between two different ways of playing chess: search and evaluation. Search is about following different possible lines of play: what your opponent might do if you do X, and what your responses to that might be. This method produces a “tree” of possibilities that gains exponentially in complexity the further you attempt to look into the future. Evaluation, on the other hand, is about recognising patterns and weighing up the comparative merits of different ones. A pattern is any feature, simple or complex, that can be spotted with a glance at the chessboard. This might be the material value of a position – a rook, for example, is worth about one bishop plus one pawn – or geometrical patterns indicating future threats and opportunities.

Human minds are limited when it comes to search: even leading chess players can only compute a few branches in the tree of future possibilities, and only as far as a few moves ahead. But a human mind honed by years of practice is highly skilled at evaluation. Studies have shown that chess grandmasters have a databank of around 100,000 patterns in their heads that they can recognise and interpret as they look at the board and use to drive their play forward, much as a writer can assemble learned words, phrases or whole sentences to carry a story to its denouement.

For Kasparov’s machine opponent, things were exactly the other way round. Deep Blue had little capacity for evaluation. It was simply programmed with the rules of chess, plus a few hundred simple patterns, and routines that included ways to recognise the safety of the king and to check how much room >
possibilities probabilistically against its pre-programmed patterns to decide what was its best next move. With Kasparov taking up to 15 minutes to elaborate a deeply thought-through plan for his next move, Deep Blue 2 had ample opportunity to calculate a branching tree of several hundred billion possibilities while he did so. Around half of the time, according to its programmers, it had anticipated Kasparov’s likely move, so could reply instantly. The effect on Kasparov when a succession of his deeply pondered moves were answered without delay was immense – and it ultimately wore him down.

Yet for all its skilled play, Deep Blue had very little intelligence. It didn’t learn from experience. It couldn’t actually tell chess sense from nonsense, and it was blind to what a chess position or chess game is really about. It could offer no useful analysis of why it made apparently deeply calculated moves.

Forget artificial intelligence, or at this stage much in the way of machine learning. Deep Blue was a product above all of Moore’s law, the continual increase of processor power over time. But its victory showed that computational capacity was now at a point where the ability to crunch vast amounts of data could produce outputs superior to those of the best human minds. As such, it contained the kernel of all that was to follow.

WHEN, in 2007, IBM first suggested training a machine to play the US quiz show Jeopardy!, many artificial intelligence researchers were doubtful the company would succeed. Jeopardy! is unusual as a quiz show in that contestants are given answers and required to supply the right question. For example, if the host says that “This cigar-smoking prime minister led Britain during the second world war”, a contestant might reply “Who is Winston Churchill?”.

Understanding Jeopardy! requires an ability to parse human language, and culture, in all its bewildering complexity. Clues can involve puns and clues-within-clues, with topics ranging from pop culture to technology. Competing demands an encyclopaedic knowledge and an ability to understand complex
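The split between search and evaluation maps neatly onto the classic minimax algorithm. The sketch below is a generic textbook illustration, not Deep Blue's code: the recursion does the searching by following every line of play, and the numbers at the leaves stand in for an evaluation function's verdict on the positions reached (in chess, something like the material count described above).

```python
def minimax(node, maximising):
    """Search: follow every line of play down the tree.
    Leaves are numbers, the evaluation of the position reached."""
    if isinstance(node, (int, float)):
        return node                       # evaluation takes over at the leaf
    scores = [minimax(child, not maximising) for child in node]
    return max(scores) if maximising else min(scores)

# Three candidate moves for us, each met by three possible replies.
# The opponent steers every line towards its minimum, so the three
# moves guarantee 3, 2 and 2 respectively; we pick the first.
tree = [
    [3, 12, 8],
    [2, 4, 6],
    [14, 5, 2],
]
print(minimax(tree, maximising=True))   # -> 3, the best guaranteed outcome
```

Deep Blue's edge was doing this over hundreds of billions of positions; the human edge, as the article notes, is a far better evaluation of far fewer positions.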
AlphaGo vs Lee Sedol, 2016

The triumphs of Deep Blue 2 and Watson in their respective games both pushed the …

CREATED over 2500 years ago, Go is a challenge for the nimblest of minds. Its 19 by 19 board allows for 10^171 possible layouts, dwarfing the roughly 10^50 possible configurations on a standard 8 by 8 chess board. “Go is probably the most complex game ever devised by man,” says Demis Hassabis, founder of DeepMind, the Google-owned company that created AlphaGo.

AlphaGo’s five-game match with human Go champion Lee Sedol was held in the swanky Four Seasons hotel in the heart of downtown Seoul in March 2016. Lee was a national hero in his native South Korea, known for his brashness and unconventional and creative play. Hundreds of reporters from around the world were in attendance, and the match was televised live across Korea, China and Japan.

Lee flashed his swagger at a press conference preceding the match, predicting he would win in a “landslide”. He rowed back a little later after watching DeepMind’s representatives explain the principles of AlphaGo’s neural network and deep-learning algorithm, and how it employed a special sort of search method, known as Monte Carlo Tree Search, to compute possible scenarios and anticipate its opponent’s moves. Then there were the hundreds of training hours it had put in, observing games to learn how others played Go. Lee admitted to being “quite nervous”, and backed off from his prediction of a 5-0 victory.

… of a Go game you play stones only on the outer four lines, and so prepare the ground for an assault on the central part of the board later. This now-infamous “move 37” turned on its head all that humans thought they knew about how to play Go. Commentators at the time thought it was a mistake – but it won the game. In the event, Lee won just one game of the match, with AlphaGo wrapping up the other four. “It was so hard to watch,” said Andrew Jackson of the American Go Association, who commentated on the games. “He just got steamrolled.” But though its programmers could explain the outline of how they had told AlphaGo to learn to play, no one could explain how it had come to the game-changing conclusions it had. This was AI as a black box.

Continued on page 37 >

→ Do AIs such as AlphaGo demonstrate true creativity? See page 42
→ Turn to page 85 to learn more about the problems of accountability that black-box AI brings
“WE DON’T NEED TO LOSE OUT TO MACHINES”
Garry Kasparov took a long time to get over his 1997 defeat by Deep Blue – but he’s since learned to embrace artificial intelligence
INTERVIEW: GARRY KASPAROV
If I had a time machine, I’m sure I could think of better uses for it. That match was such an anomaly, it has taken years for me to even attempt to draw lessons from it.

What are the broader lessons from your defeat?
As the proverbial man in man-versus-machine I feel obligated to defend humanity’s honour, but I’m also a realist. History has spoken: for nearly any discrete task, including playing chess, machines will inevitably outstrip even human-plus-machine. AI is hitting us in a huge wave, so it is time to embrace it and to stop trying to hold on to a dying status quo.

What did you make of the 2016 match when AlphaGo beat the human Go champion Lee Sedol?
AlphaGo played some genuinely unusual moves, strong moves that a top human would never consider. It doesn’t surprise me that there is room for this in a game as long and subtle as Go, where an individual move is worth less than in chess. It’s even possible that entirely new ways of playing Go will be discovered as the machine gets stronger. It’s also likely that humans won’t be able to imitate these new strategies, since they depend on the machine’s unique capabilities.

You called Deep Blue the end and AlphaGo the beginning. What did you mean?
Chess was considered a perfect test bed for cognition research but it turned out the world chess champion could be beaten while barely scratching the surface of artificial intelligence. I’m sure some things were learned about parallel processing and the other technologies Deep Blue used, but the real science was known by the time of the 1997 rematch. AlphaGo was an entirely different thing. Deep Blue’s chess algorithms were good for playing chess very well. The machine-learning methods AlphaGo uses are applicable to practically anything.

What does that mean for the wider world?
AI is clearly booming as a technology, but there’s no way to know what part of the curve we are in. Periods of rapid change are turbulent and confusing, and we are seeing the social apprehension that comes with a wave of automation, even more so because robots and algorithms are moving in on jobs that require college degrees. Of course, real dangers and human anguish come with the AI wave, and we can’t be callous toward those caught in the turbulence. But it’s easy to focus on the negative things because we see their impact much more clearly, while the new jobs and industries of the future can’t be imagined so easily. I think we’ll be surprised, as we have been throughout history, by the bright future that all these amazing tools will help us build, and how many new positive trends appear.

Computer scientist Larry Tesler once said “intelligence is whatever machines haven’t done yet”. Where are the next targets?
Where aren’t they? The biggest public impact might be felt in medical diagnosis. This is an area that doesn’t require 100 per cent or even 99.99 per cent accuracy to be an improvement on human results. You wouldn’t trust a self-driving car if it was only 99 per cent accurate. But human doctors are only 60 or 70 per cent accurate in diagnosing many things, so machine or human-plus-machine hitting 90 or 99 per cent will be a huge improvement. As soon as this is standard and successful, people will say it’s just a fancy tool, not AI at all, as Tesler predicted.

→ See chapter 4 for more on AI applications today

What happens if AI, high-tech surveillance and communications are sewn up by the ruling class?
Ruling class? Sounds like Soviet propaganda! New tech is always expensive and employed by the wealthy and powerful even as it provides benefits and trickles down into every part of society. But it seems fanciful – or dystopian – to think there will be a harmful monopoly. AI isn’t a nuclear weapon that can or should be under lock and key; it’s a million different things that will be an important part of both new and existing technology. Like the internet, created by the US military, AI won’t be kept in a box. It’s already out.

Will handing off ever more decisions to AI result in intellectual stagnation?
Technology doesn’t cause intellectual stagnation, but it enables new forms of it if we are complacent. Technology empowers intellectual enrichment and our ability to act on our curiosity. With a smartphone, for example, you have the sum total of human knowledge in your pocket and can reach practically any person on the planet. What will you do with that incredible power? Entertain yourself or change the world? ❚
M
IDWAY through a 2015 poker competition not lose money in the long run.
at the Rivers casino in Pittsburgh, As with AlphaGo’s surprise moves, it turned out the
Pennsylvania, one player seemed to lose the tactics the AIs developed often diverged from those
plot. Opponents watched baffled as Claudico that humans would naturally consider winning
risked large amounts of money with weak approaches. They use a broader range of stakes than
cards, or raised the stakes aggressively to win a handful human players generally employ, from tiny bets to
of chips – and then suddenly appeared passive, huge raises. “Betting $19,000 to win a $700 pot just isn’t
dithering over decisions and avoiding big wagers. something that a person would do,” was how one of
Yet despite never having set foot in a casino before, Claudico’s human opponents at the Brains vs AI contest
Claudico had played more poker games than all of the put it. They rarely raise the stakes to the limit, even for
other players put together. Created by Tuomas the best possible hand, and they play a broader range of
Sandholm and his team at the nearby Carnegie Mellon hands than a human might, choosing occasionally to
University, it had learned poker by playing billions of play weak cards rather than fold.
hands against itself. One explanation is that human minds have to
Human ingenuity still won the Brains vs Artificial make simplifications when dealing with the
Intelligence contest back in 2015 – by a whisker. It didn’t complexity of a game like poker. Rather than
take long for the machines to get the edge. In 2017 considering all possible moves, we mentally bunch
Claudico’s successor, called Libratus, took on four of the similar situations together. We do the same in daily life:
world’s best poker players separately head to head at we round numbers up or down to the nearest ten or
the same casino. After 120,000 hands over 20 days, hundred, or use stereotypes to categorise people. This
Libratus won with a lead of over $1.7 million in chips. process of abstraction makes the world easier to
A poker-proficient AI was a remarkable handle, but means we can lose out to people who are
breakthrough. As in other games, Libratus had to deal with a huge number of potential futures: even in a two-player version of poker with limited bet sizes, known as heads-up limit poker, there are 3.16 × 10¹⁷, or 316 million billion, potential situations that could come up. But in poker, unlike in chess or Go, players don’t know what cards their opponents have. Brute-force computation alone is never going to win the day.

A poker-playing AI has to take into account how its opponent is playing and rework its approach so it doesn’t give away when it has a good hand or is bluffing.

…using a better approximation of the world than we are. Machine minds don’t need it. That approach could benefit us elsewhere, thinks Sandholm. “There are applications in cybersecurity, negotiations, military settings, auctions and more,” he says. His lab has also been looking at how the machines can bolster the fight against infectious disease, by viewing treatment plans as game strategies to be optimised, even if your information about the infection’s next move is imperfect. “You can learn to battle diseases better even if you have no extra medicines at your disposal – you just use them smarter,” he says.

But Libratus also illustrates the limits of AI as it currently stands. It is the world’s best heads-up no-limit Texas hold ’em poker player, but ask it “why Texas?” – or where or what Texas – and it wouldn’t even be able to process the question. An algorithm trained to do one thing is next to useless at doing something else. The ambition to create an artificial general intelligence that can perform any task the human brain can remains just that – an ambition.

Chapter 3 | Humans vs AI | 37

AlphaGo Zero vs AlphaGo, 2017

IF YOU want to beat ’em, you must first join ’em. That was the principle the first AI gamers followed: they were trained, at least initially, through observing thousands of instances of how humans played the games.

Not so AlphaGo Zero. The zero stood for “zero data”: AlphaGo Zero was programmed simply with the rules of Go, and then started playing at random against itself, learning what worked. Three days and 4.9 million such games later, it took on AlphaGo. In a 100-game grudge match, it won 100-0, becoming the world’s best game-playing AI.

“Humankind has accumulated Go knowledge from millions of games played over thousands of years,” the DeepMind programmers wrote in their paper announcing the breakthrough. “In the space of a few days… AlphaGo Zero was able to rediscover much of this Go knowledge, as well as novel strategies that provide new insights into the oldest of games.”

For example, the AI learned many different josekis – sequences of moves that result in no net loss for either side. Initially AlphaGo Zero used many familiar ones written down during the thousands of years Go has been played, but as its self-training continued, it started to favour previously unknown sequences.

Self-training is a huge step forward. “With this approach you no longer have to rely on getting expert-quality human data,” says David Churchill at Memorial University, Canada. That opens up whole new fields to AI where, unlike Go, good-quality, easily comparable data are hard to come by. Climate science, drug discovery, protein folding and quantum chemistry are just some of the fields DeepMind is investigating. “In 10 years, I hope that these kinds of algorithms will be routinely advancing the frontiers of scientific research,” says Demis Hassabis, the firm’s CEO.
AI + human vs human, now and the future

LIKE other human champions facing a machine opponent, Grzegorz “MaNa” Komincz rated his chances. “A realistic goal would be 4-1 in my favour,” he told an interviewer before the match. One of the world’s best players of the video game StarCraft II, Komincz was at the height of a successful esports career. Artificial intelligence company DeepMind invited him to face its latest AI, a StarCraft II-playing bot called AlphaStar, in December 2018.

Komincz was expected to be a tough opponent. He wasn’t. After being thrashed 5-0, he was less cocky. “I wasn’t expecting the AI to be that good,” he said. “I felt like I was learning something.”

Many others at the receiving end of an AI thrashing say the same – and use the experience to get even better. Working together, humans and AIs can bounce ideas back and forth, each guiding the other to better solutions than would be possible alone. “It will be an amazing extension of thought,” says Anders Sandberg from the Future of Humanity Institute at the University of Oxford, UK.

Games that AI has conquered provide a wealth of examples. All professional players now practise with chess computers, for example. These tend to play defensive games, so the style of top players has become more defensive too. After losing to AlphaGo, European Go champion Fan Hui trained against the AI.

Go players who compete with AlphaGo often remark on its seeming intuition: “They expected it to play in a way that was perhaps dull but efficient and instead there was real beauty and inventiveness to the games,” says David Silver at DeepMind.

It’s a product largely of the deep “reinforcement” learning that now underpins all these AIs. In a process of trial and error, successes, such as winning a game of Go, are rewarded, and the appropriate connections within the AI’s neural network become reinforced, which in turn reinforces a specific behaviour.

In particular, where the AI learns a game purely by playing itself, as with AlphaZero – the “zero” stands for zero input – it is untainted by human bias. It simply picks up its own ways of doing things. “AlphaZero discovers thousands of concepts that lead to winning more games,” says Silver. “To begin with, these steps are quite elementary, but eventually this same process can discover knowledge that is surprising even to top human players.” The great hope of AI is that this could become true in other areas, too, and so supplement and extend human expertise across the board. ❚

→ The next chapter has more on AI creativity
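The trial-and-error loop described here – play, win or lose, reinforce – can be made concrete in a few lines. The sketch below is an illustrative toy, not DeepMind’s method: a tabular learner plays the counting game Nim against itself with no input beyond the rules, and every move on the winning side of a game is nudged towards a value of 1 while losing moves drift towards -1.

```python
import random

# Toy self-play reinforcement, in the spirit (only!) of the trial-and-error
# loop described above. The game: Nim with a pile of counters, take 1-3 per
# turn, whoever takes the last counter wins. Moves on the winning side of
# each game get their value nudged up; losing moves get nudged down.

values = {}  # (counters_left, move) -> learned value, starts at 0.0

def choose(counters, explore=0.1):
    moves = [m for m in (1, 2, 3) if m <= counters]
    if random.random() < explore:
        return random.choice(moves)          # occasional exploration
    return max(moves, key=lambda m: values.get((counters, m), 0.0))

def play_one_game(step=0.05):
    counters, history = 10, {0: [], 1: []}   # both "players" share one brain
    player = 0
    while counters > 0:
        move = choose(counters)
        history[player].append((counters, move))
        counters -= move
        player = 1 - player
    winner = 1 - player                      # whoever moved last took the pile
    for key in history[winner]:              # reinforce winning moves...
        values[key] = values.get(key, 0.0) + step * (1.0 - values.get(key, 0.0))
    for key in history[1 - winner]:          # ...and punish losing ones
        values[key] = values.get(key, 0.0) - step * (1.0 + values.get(key, 0.0))

random.seed(0)
for _ in range(20000):
    play_one_game()

# With 10 counters the winning opening is to take 2, leaving the opponent
# a multiple of 4 -- knowledge the program was never given.
best_opening = max((1, 2, 3), key=lambda m: values.get((10, m), 0.0))
print(best_opening)
```

After enough self-play the table should come to prefer taking 2 first, rediscovering the known winning strategy from nothing but the rules – a miniature echo of AlphaGo Zero’s rediscovered josekis.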
CHAPTER 4
That raises profound questions not just about what the limits of machine
minds are, but what we want them to be. Marcus du Sautoy kicks off our
exploration by asking the question, can AI ever be truly creative?
Chapter 4 | Frontiers of AI | 41
IN OCTOBER 2018, a portrait of Edmond Belamy sold at Christie’s in New York for $432,500, nearly 45 times its maximum estimated price. Nothing that out of the ordinary, perhaps. Except Belamy didn’t exist. He was the fictitious product of the artist’s mind – and the mind that created him wasn’t even human.

Ever since the 1840s, when Ada Lovelace became obsessed with the possibility that Charles Babbage’s Analytical Engine, a proposed mechanical computer, could do more than simple computations, we have been contemplating the idea that it isn’t just biological life that may be creative. Recognising that music is an art form similar to mathematics in its manipulation of pattern, Lovelace speculated “the engine might compose elaborate and scientific pieces of music of any degree of complexity or extent”.

It is fairly easy to discount or at least qualify many claims of AI creativity today. Much of what AIs are doing involves little more than data science and statistical number-crunching, and requires a lot of human intervention. That isn’t to say that there aren’t some striking examples of AI potentially demonstrating true creativity. Take the unexpected move 37 of the second game in the titanic battle of Go between the human champion Lee Sedol and the DeepMind algorithm AlphaGo in March 2016. Human competitors have since aped AlphaGo’s tactic to establish a competitive advantage. The AI’s discovery taught the world a new way to play an ancient game. >

PROFILE
MARCUS DU SAUTOY
Marcus du Sautoy is a mathematician and the Simonyi professor for the public understanding of science at the University of Oxford, UK. He is the author of numerous books, including The Creativity Code: How AI is learning to write, paint and think.
judged by its ability to solve problems, but creating art isn’t a problem-solving activity. The monetary value of the Edmond Belamy portrait came about partly because it was created by an AI, not through some independent assessment of its artistic value.

AI art: the Portrait of Edmond Belamy, sold for $432,500 in 2018

The AlphaGo story demonstrates another way AI can help to create a sort of value. It comes not so much in making machines that act like creative humans, but in stopping creative humans behaving like machines. We can get terribly stuck in our ways of thinking. An AI’s unprejudiced exploration of the terrain, meanwhile, can sometimes reveal new pinnacles of achievement. You may be at the top of Snowdon, thinking you have reached the ultimate height, but that is because you don’t know Mount Everest exists.

Many of the examples of music created by AI are still stuck in the foothills, comprising poor pastiches of Mozart or Beethoven’s works. But there are examples where code has helped us cross the valley to more interesting peaks. The Continuator, a jazz improviser designed by François Pachet, director of the Spotify Creator Technology Research Lab, provides another example of how AI can help us escape the straitjacket of creative conventions.

When musicians improvised with the Continuator, they were amazed. The algorithm was passing a kind of musical Turing test, responding in a way indistinguishable from a human improviser. And its responses weren’t simply a mash-up of what had gone before: the musicians could hear the Continuator playing things that were recognisably connected with their style of performance, but which they never thought possible.

Although novelty, surprise and value are three key components to measure if AI is being creative, I think a fourth element must also be introduced if we are going to herald real creativity in AI: originality of a truly independent nature. Lovelace herself raised it all those years ago as she wrote about Babbage’s machine: “It is desirable to guard against the possibility of exaggerated ideas that might arise as to the powers of the Analytical Engine. The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform.” Ultimately, she believed, you couldn’t get more out of the machine than you had put in.

This raises a crucial question: how much of any AI’s “creativity” is the creativity of the human coder rather than the code? Cameras heralded a new age of human creativity that can be seen displayed today in art museums across the world. But no one assigns the camera any part in the act of creativity.

But that is an imperfect analogy. Humans are machines running the instructions of a code, our DNA. Our parents are responsible for that code, yet we don’t regard ourselves as a mere vessel for the creativity of our parents. Part of how a child differentiates itself from its parents comes from its unique interaction with its environment. This interaction also shapes our creative abilities.

And that is exactly what is happening with AI. Machine learning allows code to change, mutate and update itself based on its interaction with new data: inputs from the environment. In creative terms, the potential result is shown by a work from the artist Ian Cheng displayed at the Serpentine Gallery in London in 2018.
THE PROMISE
OF AI MEDICINE
In hospitals and doctors’ surgeries, the reliable, unflappable power of machine
minds has great potential in enabling more accurate diagnosis and more
appropriate and efficient care. But while AI medical apps are already available,
there are still great hurdles to overcome before this can become a fully fledged
revolution: issues of trust, accountability and data privacy.
STIFF neck, headache, tingling in your fingers. You list your symptoms, answer a few questions about how long they’ve lasted and whether they seem to be getting worse. Then, without ever leaving home or queueing at the clinic, you get the diagnosis: a strained neck. Or, at least, eight out of 10 people with those symptoms have one.

This isn’t some future scenario. The app Ada, which offered up this diagnosis, boasts it has completed 15 million medical assessments worldwide since it was launched in 2016. It took six years for 100 data scientists to train the artificial intelligence behind Ada, using real medical records. With each case it sees, it gets a bit smarter.

Health insurers and healthcare providers see a bright future for AI-based medical apps. “A machine cannot be negligent,” says Ali Parsa, founder of the company behind another app, Babylon. “It doesn’t get stressed. It doesn’t get hungry. It does exactly what it is meant to every single time.”

And it’s kicking at an open door. According to the US Institute of Medicine, something like one in 10 medical diagnoses is wrong. Such errors contribute to as many as 80,000 unnecessary deaths each year in the US alone.

The tech underlying the apps knits together several strands of AI: the ability to process natural language, including people describing their symptoms; the ability to trawl vast databases of the world’s medical knowledge in an instant; and deep-learning software trained to spot correlations between millions of different complaints and conditions.

Deep-learning networks have outperformed doctors at diagnoses as varied as melanomas and diabetic retinopathy, a complication of diabetes that damages blood vessels in the eye. Other AI tools can identify cancers from CT scans or MRIs, or even predict from data about general health which people may have a heart attack. The US Food and Drug Administration has already approved over a dozen AI algorithms.

For all that, there are plenty of reasons to proceed carefully. “I’m bullish about the ability of AI to do >
wider sharing of data may be necessary. That might require new legislation. “Current laws don’t really cover the kind of sharing scenarios we need to make these systems work,” says bioinformatics researcher Joshua Denny of Vanderbilt University in Tennessee. Any policies will of course require informed consent. Some of those involved even argue that sharing your health data should be seen as a civic duty, and only those who opt in should reap any benefits.

Navathe is also concerned that regulators are applying lower standards to algorithms than to drugs or devices as to whether their use actually benefits people. “Because they are not invasive, they seem lower risk.” But, he says, if doctors are basing treatments on AIs, the same standards need to be applied.

→ Turn to page 68 for more on AI data-privacy concerns

Liability is another issue. Malpractice laws are complex and vary from place to place, so it’s unclear how they might need to change to accommodate AI. But doctors already use machines to make some diagnoses – software that helps them identify tumours in MRI scans or abnormalities in echocardiograms, for example. “If a doctor is in the loop, the legal and ethical stuff is not going to be that challenging,” says Isaac Kohane, head of biomedical informatics at Harvard Medical School in the US; ultimately, it’s the doctor’s responsibility. If AI and doctor disagree, a supervising physician or committee could break the tie.

For all AI might help provide better, more reliable care, it’s unlikely doctors will be cut out of the picture entirely, says Vimla Patel, a cognitive psychologist and specialist in biomedical informatics at the New York Academy of Medicine. AI can augment clinicians’ abilities, but can’t do all the heavy lifting: for example, a human doctor’s (ideally) empathic relationship with the patient is an essential part of good care. “When things get complex, and medicine often is complex, you need human reasoning to make decisions,” she says. “Computers, no matter how sophisticated, cannot replace that.”

Nevertheless, AI can change how healthcare is delivered for the better, says Kohane. At present, doctors have to manage mounds of paperwork and digital form-filling while trying to stay on top of the emerging research to keep their knowledge current. If AI could ease some of this burden, doctors would be freer to listen to patients and take detailed histories. Meanwhile, instead of going to see a doctor only when we are sick, our health will be constantly assessed by AIs, based on data from devices such as smart watches along with our genome sequences.

At least this is the vision of companies like Babylon Health, maker of an app for connecting people to doctors via an AI triage system. “The system will become part and parcel of your life, constantly monitoring your health,” says Saurabh Johri, the firm’s chief science officer. With millions of people using such systems, disease outbreaks could also be detected and tracked in real time, he says. With a situation such as the covid-19 pandemic, such apps could enable more rapid reaction and contact tracing – and perhaps save thousands of lives. ❚
INTERVIEW REGINA BARZILAY
they do before surgery, and the cancer was all over the place. So they needed to do a biopsy, which found that it was a false positive. The cancer was not everywhere. Why don’t we train a machine to predict what’s going on, instead of doing these painful, expensive procedures? Every single step of the way, the reason we selected a certain treatment for me was that we weren’t sure. We were just going for the most aggressive thing.

Millions of women get breast cancer, but I found out later that all the oncology decisions made in the US today are based on the 3 per cent of those women who participate in clinical trials. That’s very problematic: what is the chance that the women in that 3 per cent will be like you or me? But if you look at the whole population of women with breast cancer, the chances are significantly higher.

What are the privacy concerns with gathering people’s health data in a massive set like this?
We think about that from the design stage onwards. To get access to any data, we need approval from an institutional review board at the hospital to make sure it is handled according to protocol. The data lives at the hospital – we don’t bring it to MIT. We also anonymise it. Any workable system needs to be totally integrated with clinical care, observing the hospital’s rules of patient privacy and rights.

What about bad data? Can that bias your machine-learning model?
Certainly. If the hospital’s pathologists miss cancer diagnoses or misdiagnose something else as cancer, the system will be trained on noisy data. We need to make sure that whatever human-generated information we are using is as clean as possible. Another type of bias we have seen is one where a certain population is under-represented in the general patient population. That leads to higher error rates. In the hospitals we are generally working with, there has been a smaller proportion of African-American women. We can address this algorithmically, but we are also collecting data from hospitals that have a reasonable representation of diverse groups.

If we apply machine learning to diagnosis or treatment, is there still a role for the doctor?
Absolutely. In some cases, the machine rivals the human. Sometimes it performs below the human. But the real power comes when you put the two together. Still, the doctor doesn’t have to agree with the machine. Ultimately, the doctor makes the decisions and approves treatment.

How are you applying machine learning to the problem of cancer overdiagnosis?
We are reducing the uncertainty. There’s a condition found in breasts called high-risk lesions. In the US, all patients with these lesions get surgery. But 87 per cent of them are in fact benign: the patients didn’t need the operation. With machine learning, we can now identify 30 per cent of the women who don’t need the surgery. That’s not all of them, but it is a big step.

If your system had been in use at the hospital when you were a patient, would you have felt reassured?
Oh, absolutely. I was diagnosed in 2014. But if you look again at my mammogram from 2013, you already see cancer there. When you look at the one from 2012, you already see some small spots. Maybe we didn’t know what it was. How would my life have been different if I had been diagnosed in 2012? Maybe I didn’t need to lose my hair and all the other things that came with my treatment. As a patient, not a scientist, I think we have to do everything to make sure all this uncertainty is resolved and we are applying the best technologies available to what we care about the most – our health. ❚
INTELLIGENT ROBOTS

Footage of robots performing impressive feats of dexterity is internet gold, but they aren’t quite as capable as they seem. For all their physical prowess, the intelligence of these mechanical creatures is sorely limited. But efforts to build learning brains into these machines are spurring a new robotics revolution – one that could in turn inspire entirely new paradigms of AI.

So enthralling, in fact, that millions of people have watched the YouTube video of these SpotMini robots performing tricks such as opening inward-swinging doors, getting back up after slipping on a banana skin, doing the Running Man dance and negotiating narrow staircases. For their humanoid companion Atlas, the highlights reel includes running through the woods, doing backflips and picking up parcels.

For creatures as brilliant at it as us, it is easy to forget that moving is hard. When you walk, or even just stand still, your brain is constantly telling your muscles to make thousands of tiny adjustments. For the most part, attempts to build machines that can do even some of the things animals routinely do have been hopeless.

A quick glance at the footage from the world’s foremost robotics competitions proves that. Basically, most of the time, even the most advanced robots fall over. SpotMini and Atlas, built by the company Boston Dynamics, stand out, capable of feats of locomotion and dexterity that took evolution millions of years to perfect. “You can’t fail to be impressed by what they do,” says Joanna Bryson, an AI researcher at the University of Bath, UK.

They are far from autonomous, however: there is almost always a person controlling them via a laptop and an Xbox controller, sending instructions such as >
Four-legged friends: SpaceBok (top) is the creation of Swiss students; ANYbotics’ ANYmal (middle); and Boston Dynamics’ BigDog (below)
make it possible. And if something unexpected
happened, the robot wouldn’t have a clue how to
react. They lack the ability to learn things for
themselves. As Mark Raibert, the founder of Boston
Dynamics, has himself put it, the world’s most
advanced robots “can’t do almost everything”.
This is the new frontier of robotics. “If you want
a robot to do anything useful, you need it to be able to
adapt on the fly to new situations,” says Chelsea Finn
at Stanford University in California. “That’s what
we’re trying to do.”
One of the big challenges is to get the software to
recognise when it is succeeding or failing. Researchers
have essentially two strategies. The first is imitation
learning: showing the robot how to do something by
either teleoperating it or having it watch videos of a
arm teaching itself to do various tasks, from picking and placing apples to folding shorts and – somewhat idiosyncratically – wrapping forks in towels, based entirely on data collected from its camera as it fumbled around with these objects.

“Basically, the robot learned to predict what happens if it performs an action,” says Finn. “Then we give it some goal and it figures out what actions it should take to achieve that.”

It is much like the way children learn by exploring the world, throwing toys around or spilling water. “Traditionally with machine learning you train one model to do one thing – translate French into English, say – and you always start from scratch,” says Finn, “but here you can build a single model and use it for many different tasks. That is very powerful.”

For some experts, this convergence of robotics and artificial intelligence could mark a turning point in the quest for machines with something approaching human-level general intelligence.

“Embodiment is a critical part of how humans and many animals learn, because it allows you to build and test hypotheses,” says Finn. For Abhinav Gupta at Carnegie Mellon University in Pittsburgh, Pennsylvania, and part of Facebook’s AI team, this is the only way AIs can achieve the kind of common-sense knowledge about how the physical world works that we take for granted.

“Current AI is built on solving specific tasks with large amounts of data and supervision,” says Gupta. “To build AI that can solve general-purpose tasks, that can reason in domains with very little supervision, we need AIs to learn predictive models and causal, common-sense knowledge. Embodiment allows us to do that.”

Not everybody is convinced. “Many animals have very sophisticated ways of interacting with the environment and have nothing like human intelligence,” says evolutionary psychologist David Geary at the University of Missouri. For robots to reach our intellectual heights, he believes, some process akin to evolution would be required, specifically designed to weed out robots with poorer conceptual abilities.

Robots currently learning for themselves are nothing like approaching that level, or even that of Atlas or SpotMini: they are stationary, cumbersome and confined to the lab. And while some may get the chills thinking of autonomous robots stalking confidently across the terrain, for the time being at least, our primary concern about autonomous robots should be that they are too stupid rather than too smart.

Imagine a clumsy robot attempting to rescue a family from a burning building, or an inflexible AI controlling heavy machinery in unfamiliar circumstances. When opportunity knocks for the new world of robots, we want to see that they can open the door without pulling it off its hinges. ❚
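Finn’s description – learn to predict what an action does, then search for actions that reach a goal – can be sketched in miniature. The code below is a hypothetical stand-in, not any lab’s system: the “robot” is a point on a ten-cell track, its learned forward model is just a table of observed transitions, and planning is a breadth-first search through that table.

```python
from itertools import product

# A minimal sketch of "predict what happens, then plan towards a goal",
# as described above -- a toy stand-in for learned robot models, not the
# actual research system. The "robot" fumbles around a 1-D track, records
# what each action did, then uses that learned model to reach a goal.

# 1. Fumbling phase: log (position, action) -> next position transitions.
def true_dynamics(pos, action):           # hidden from the planner
    return max(0, min(9, pos + {"left": -1, "right": +1, "stay": 0}[action]))

model = {}
for pos, action in product(range(10), ("left", "right", "stay")):
    model[(pos, action)] = true_dynamics(pos, action)   # "observed" outcome

# 2. Planning phase: breadth-first search through the learned model only.
def plan(start, goal):
    frontier, seen = [(start, [])], {start}
    while frontier:
        pos, actions = frontier.pop(0)
        if pos == goal:
            return actions
        for action in ("left", "right", "stay"):
            nxt = model[(pos, action)]
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, actions + [action]))
    return None

route = plan(start=2, goal=5)
print(route)  # -> ['right', 'right', 'right']
```

The same planner works for any goal without retraining, which is the point Finn makes: one predictive model, many tasks.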
THE
DRIVERLESS
CAR
CHALLENGE
ON THE night of 3 September 2010, 33-year-old Brian Wood was driving along a highway in Washington state in the northwestern US. Asleep in the passenger seat was his wife Erin, seven months pregnant with their first child. The couple were on their way from Vancouver, Canada, to spend time at her parents’ vacation home by the picturesque Puget Sound.

Out of nowhere, a Chevy Blazer came hurtling towards them. By the time Wood saw it, it was too late. He braked hard and swerved right to take the brunt of the impact. He died instantly, but his wife and their unborn daughter survived.

We hope it never happens to us, but any driver might find themselves making such a split-second, life-and-death decision. They are part rational, part reflex, and draw on a delicate balance of altruism and self-interest programmed into all of us. To his wife, Wood was a hero – but indisputably he was a human.

As our cars edge towards making these decisions for us, cases like this raise profound ethical questions. To drive safely in a human world, autonomous vehicles must learn to think like us – or at least understand how humans think. But how will they learn, and which humans should they try to emulate?

The ethical challenges posed by driverless cars are often illustrated by the infamous trolley problem beloved of moral philosophers. Imagine a trolley car out of control, and five oblivious people on the track ahead. They will die if you do nothing – or you could flip a switch and divert the car to a different track where it will kill only one person. What should you do?

In a similar spirit, should an autonomous vehicle avoid a jaywalker who suddenly steps off the curb, even if it means swinging abruptly into the next lane? If a car that has stopped at an intersection for schoolchildren to cross senses a lorry approaching too fast from behind, should it move out of the way to protect the car’s passengers, or take a hit and save the children? “Many or all of those decisions will have to be programmed into the car,” says philosopher Jason Millar of the University of Ottawa, Canada.

Answering such “what do we do if…” questions is a two-step process. First, the vehicle needs to be able to accurately detect a hazard; second, it must decide on its response. The first step mainly depends on the efficient collection and processing of data on the whereabouts and speed of surrounding vehicles, pedestrians or other objects.

These might include video cameras to read traffic lights and road signs, or systems that emit laser or radar pulses and analyse what bounces back. Data from these various sensors must be combined into a stream, rather as we integrate what our various senses are telling us. “In robotics, it’s called sensor fusion,” says Raúl Rojas of the Free University of Berlin, who heads AutoNOMOS labs, an autonomous vehicles research effort funded by the German government.

But the hazards those sensors are reporting back on aren’t always obvious. Consider the first death linked to driverless technology, in May 2016. It happened because Tesla’s Autopilot system failed to detect that the whiteness ahead wasn’t part of a bright spring sky, but the side of a trailer, resulting in a crash that killed the passenger in the car.

A human might have made that mistake too, but sometimes driverless vehicles make a hash of things we master intuitively. “One of the things AutoNOMOS cars have struggled with is someone walking behind a parked bus,” says Rojas. A human mind would expect them to reappear, and supply a pretty accurate estimate of when and where, but for a driverless car, it seems >
that’s an extrapolation too far.
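The sensor-fusion step Rojas describes has a classic textbook form: weight each sensor’s reading by how much you trust it. Below is a minimal hand-rolled sketch of that idea, with invented numbers rather than real automotive data.

```python
# Sensor fusion in miniature. Camera and radar each estimate the distance
# to the same obstacle, with different confidence (variance). The standard
# fused estimate weights each reading by the inverse of its variance --
# an illustration of the principle, not production driving code.

def fuse(readings):
    """readings: list of (estimate_metres, variance) pairs."""
    weights = [1.0 / var for _, var in readings]
    fused = sum(w * est for (est, _), w in zip(readings, weights)) / sum(weights)
    fused_var = 1.0 / sum(weights)          # fused estimate is more certain
    return fused, fused_var

camera = (42.0, 4.0)   # rich detail, but struggles with white-on-white
radar = (40.0, 1.0)    # coarser picture, but reliable range

estimate, variance = fuse([camera, radar])
print(round(estimate, 2), round(variance, 2))  # -> 40.4 0.8
```

Note that the fused variance, 0.8, is lower than either sensor’s alone: combining streams does not just average them, it makes the car more certain than any single sense could be.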
Even if a sensor system allows an autonomous car
to assess its environment perfectly, the second step to
driving in a morally informed way – taking the
information gathered, assessing relative risks and
acting accordingly – remains an obstacle course. “At a
basic level, it’s about setting up rules and priorities. For
example, avoid all contact with human beings, then
animals, then property,” says ethicist Patrick Lin of
California Polytechnic State University in San Luis
Obispo. “But what happens if the car is faced with
running over your foot or swerving into an Apple store
and causing millions of dollars in damage?”
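Lin’s “rules and priorities” idea maps naturally onto lexicographic comparison, under which no amount of property damage can ever outweigh harm to a person. A toy sketch, with invented scenario values:

```python
# Lin's "rules and priorities" idea, sketched as code: candidate manoeuvres
# are compared lexicographically -- first by harm to humans, then animals,
# then property -- so a lower-priority cost can never outvote a higher one.
# The scenario and its numbers are invented for illustration.

PRIORITY = ("humans_harmed", "animals_harmed", "property_damage")

def cost(option):
    # Tuples compare element by element, which enforces the strict ordering.
    return tuple(option[key] for key in PRIORITY)

def choose_manoeuvre(options):
    return min(options, key=cost)

options = [
    {"name": "brake hard",  "humans_harmed": 0, "animals_harmed": 1, "property_damage": 0},
    {"name": "swerve left", "humans_harmed": 0, "animals_harmed": 0, "property_damage": 9},
    {"name": "carry on",    "humans_harmed": 1, "animals_harmed": 0, "property_damage": 0},
]

best = choose_manoeuvre(options)
print(best["name"])  # -> swerve left: heavy property damage still beats any harm to animals
```

This also makes Lin’s Apple store dilemma concrete: under a strict hierarchy the car will always take the millions of dollars in damage over your foot, however lopsided the amounts.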
Part of the problem with such a rules-based approach is that often there are no rules – at least, no single set that a sensor system based purely on obvious physical cues could hope to implement.
For one thing, even the most sophisticated sensor system can’t compute societal cues we all rely on when driving. Imagine you’re at an intersection and it’s not clear who should go first, says David Danks, who teaches psychology and philosophy at Carnegie Mellon University in Pittsburgh, Pennsylvania. “You creep out a bit. Then someone waves and you go all the way through.” You understand that, having signalled, the other driver will wait until you have gone past before moving off.

Programming this “theory of mind” into cars has proved challenging. “A 4-year-old has more theory of mind than a driverless car,” says Danks. Reliably recognising what mental states are encoded in facial expressions or bodily movements is way beyond even cutting-edge tech. If a car assesses a given situation very differently from the way a human would, it may take an unexpected action – with potentially disastrous consequences, especially where a mix of autonomous and human-driven vehicles are on the road.

Another problem with a rules-based approach is that the information a video camera or radar echo can supply is limited. Sensors might be able to identify a moving vehicle as a bus; but would they be able to identify it as a bus full of schoolchildren?

Technologically, there are probably fixes. A human intervention might program in details of the number and age of the passengers to be broadcast to surrounding vehicles, or sensors inside the bus might autonomously track its weight, including whether a person is sitting in a particular seat, says Lin. But who decides a hierarchy of what lives are worth, and how do we eliminate discrimination and bias in how the cars are programmed?

There’s one way to avoid such thorny moral questions, says Lin: simply ignore them. After all, a human driver is likely to know nothing about those in the vehicles around them. “We could avoid some ethical dilemmas by being deliberately blind to certain facts,” says Lin. This “veil of ignorance” approach amounts to developing responses to simple versions of likely situations, either by preprogramming them or letting the car learn on the job.

The first approach suffers from the problem that it is pretty much impossible to anticipate all possible scenarios – for example, an encounter with a woman in an electric wheelchair chasing a duck into the road with a broom, as recorded by a Google car in 2014 (pictured above).

The second approach seems more promising. A car might learn as it goes along, for example, that jaywalkers are more likely on city streets than country roads, but that swerving to avoid one on a country road brings less likelihood of hitting something else. Or it might learn that it’s OK to break the speed limit
But basic rules still need to be programmed, and whole new ethical issues also arise, says Millar: a programmer will not be able to predict what exactly a car will do in a given situation. We don’t want autonomous vehicles to act unpredictably. Just as it’s important for cars to predict the actions of human road users, so it matters for people to be able to anticipate a car’s behaviour. Hence the question of what an autonomous car will do when it encounters that trolley-problem-like dilemma.

But fixating on such an extreme case probably doesn’t help, says Rojas. “Who has ever experienced that situation? It’s one possible situation in a million days. We first need to solve 99.9 per cent of the more pressing problems” – things like how to avoid pedestrians, stay within a lane, operate safely in bad weather, or push software updates to cars while safeguarding them from hackers. Millar agrees, but says that’s not what the thought experiment is about. “It’s just used to illustrate the point that engineers don’t have the moral authority to make all the decisions in their cars,” he says.

More openness and agreement about common standards on the part of those developing autonomous vehicles would be a start. But probably no solution will be perfect, warns Nick Bostrom, a philosopher at the University of Oxford and director of its Future of Humanity Institute. “We should accept that some people will be killed by these cars.” That’s where we also need to put things in context, he says. In 2013, car crashes killed nearly 1.3 million people around the world, injuring up to 50 million more. Nine in every 10 accidents result from human factors: a moment’s distraction caused by reading a text message or yelling for the kids to behave, or falling asleep because of monotonous motorways.

In Brian Wood’s case, the 21-year-old driver of the oncoming Chevy had been distracted by taking off her sweater. Had her car been fully autonomous, it seems likely Wood would be alive today. The challenge isn’t to build the perfect system for regulating traffic on the road in an age of autonomous vehicles; it is to agree what a system looks like that is better than the human-governed system we have now. ❚

INTO FIFTH GEAR

A fully autonomous car that carries out moral reasoning (see main story) would rate as level 5 on a scale developed by the Society of Automotive Engineers. Here is how the scale breaks down.

LEVEL 0
No autonomous features (may have automatic gearshift). Most cars currently on the road.

LEVEL 1
Some autonomous features, such as automatic braking, cruise control. Many newer car models.

LEVEL 2
Automated steering, braking and acceleration, but requires human oversight. E.g. Tesla Model S, Mercedes-Benz E-Class, Volvo S90.

LEVEL 3
Car can monitor its environment and drive autonomously, but may request human intervention at any time. E.g. Audi A8, Nissan ProPILOT, Kia DRIVE WISE.

LEVEL 4
Car can drive independently, but may request human intervention in unusual conditions, such as extreme weather. E.g. Volvo, Tesla, Ford, BMW iNEXT.

LEVEL 5
Car can drive independently in all conditions.
Chapter 4 | Frontiers of AI | 61
THE BOTS OF WAR
“ONLY the dead have seen the end of war,” the philosopher George Santayana once bleakly
observed. Humanity’s martial instincts are deep-rooted. Over millennia, we have fought wars
according to the same strategic principles based in our understanding of each other’s minds.
But AI introduces another sort of military mind – one that, even though we program how it thinks,
may not end up thinking as we do. Forget killer robots: we are only just beginning to work through
the profound and troubling effects these new combatants might have, says Kenneth Payne.
Underlying this is theory of mind – the human ability to gauge what others are thinking and how they will react to a given situation, friend or foe. Theory of mind is essential to answer strategy’s big questions. How much force is enough? What does the enemy want, and how hard will they fight for it?

Strategic decision-making is often instinctive and unconscious, but also can be shaped by deliberate reflection and an attempt at empathy. This has survived even into the nuclear era. Some strategic thinkers held that nuclear weapons changed everything because their destructive power threatened punishment against any attack. Rather than denying aggressors their goals, they deterred them from ever attacking. That certainly did require new thinking, such as the need to hide nuclear weapons, for example on submarines, to ensure that no “first strike” could destroy all possibility for retaliation. Possessing nuclear weapons certainly strengthens the position of militarily weaker states; hence the desire of countries from Iran to North Korea to acquire them.

But even in the nuclear era, strategy remains human. It involves chance and can be emotional. There is scope for misperception and miscommunication, and a grasp of human psychology can be vital for success.

Artificial intelligence changes all this. First, it swings the logic of strategy decisively towards attack. AI’s pattern recognition makes it easier to spot defensive vulnerabilities, and allows more precise targeting. Its distributed swarms of robots are hard to kill, but can concentrate rapidly on critical weaknesses before dispersing again. And it allows fewer soldiers to be risked than in warfare today.

This all creates a powerful spur for moving first in any crisis. Combined with more accurate nuclear weapons in development, this undermines the basis of cold-war nuclear deterrence, because a well-planned, well-coordinated first strike could defeat all a defender’s retaliatory forces. Superior AI capabilities would increase the temptation to strike quickly and decisively at North Korea’s small nuclear arsenal, for example.

By making many forces such as crewed aircraft and tanks practically redundant, AI also increases >
uncertainty about the balance of power between states. States dare not risk having second-rate military AI, because a marginal advantage in AI decision-making accuracy and speed could be decisive in any conflict. AI espionage is already under way, and the scope for a new arms race is clear. It is difficult to tell who is winning, so safer to go all out for the best AI weapons.

Were that all, it would be tempting to say AI represents just another shift in strategic balance, as nuclear weapons did in their time. But the most unsettling, unexplored change is that AI will make decisions about the application of force very differently to humans.

AI doesn’t naturally experience emotion, or empathy, of the sort that guides human strategists. We might attempt to encode rules of engagement into an AI ahead of any conflict – a reward function that tells it what outcome it should strive towards and how. At the tactical level, say with air-to-air combat between two swarms of rival autonomous aircraft, matching our goals to the reward function that we set our AI might be doable: win the combat, survive, minimise civilian casualties. Such goals translate into code, even if there may be tensions between them.

But as single actions knit together into military campaigns, things become much more complex. Human preferences are fuzzy, sometimes contradictory and apt to change in the heat of battle. If we don’t know exactly what we want, and how badly, ahead of time, machine fleets have little chance of delivering those goals. There is plenty of scope for our wishes and an AI’s reward function to part company. Recalibrating the reward function takes time, and you can’t just switch AI off mid-battle – hesitate for a moment, and you might lose.

That is before we try to understand how the adversary may respond. Strategy is a two-player game, at least. If AI is to be competitive, it must anticipate what the enemy will do.

The most straightforward approach, which plays to AI’s tremendous abilities in pattern recognition and recall, is to study an adversary’s previous behaviour and look for regularities that might be probabilistically modelled. This method was used by AlphaGo, the DeepMind AI that beat the human champion Lee Sedol at the board game Go in 2016. With enough past behaviour to go on, this works even in a game such as poker where, unlike Go, not all information is freely available and a healthy dose of chance is involved.

This approach could work well at the tactical level – anticipating how an enemy pilot might respond to a manoeuvre, for example. But it falls down as we introduce high-level strategic decisions. There is too much that is unique about any military crisis for previous data to model it.

An alternative method is for an AI to attempt to model the internal deliberations of an adversary. But this only works where the thing being modelled is less sophisticated, as when an iPhone runs functional replicas of classic 1980s arcade games. Our strategic AI might be able to intuit the goals of an equally sophisticated AI, but not how the AI will seek to achieve them. The interior machinations of an AI that learns as it goes are something of a black box, even to those who have designed it.

Where the enemy is human, the problem becomes more complex still. AI could perhaps incorporate themes of human thinking, such as the way we systematically inflate low-risk outcomes. But that is looking for patterns again. It doesn’t understand what things mean to us; it lacks the evolutionary logic that drives our social intelligence. When it comes to understanding what others intend – “I know that you know that she knows” – machines still have a long way to go.

Does that matter? Humans aren’t infallible mind-readers, and in the history of international crises misperception abounds. In his sobering account of nuclear strategy, The Doomsday Machine, Daniel Ellsberg describes a time when the original US early warning system signalled an incoming Soviet strike. In fact, the system’s powerful radar beams were echoing back from the surface of the moon. Would a machine have paused for thought to ascertain that error before launching a counterstrike, as the humans involved did?

An AI’s own moves are often unexpected. AlphaGo’s game-winning “move 37” was down to probabilistic reasoning and a flawless memory of how hundreds of
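The reward function discussed above – win the combat, survive, minimise civilian casualties – can be made concrete with a short sketch. This is a purely illustrative toy: the outcome fields and weights are invented for this example, not drawn from any real system, and the point is how quickly weighting such goals becomes a moral choice.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    combat_won: bool
    own_losses: int          # autonomous aircraft lost
    civilian_casualties: int

def reward(o: Outcome, w_win=100.0, w_loss=5.0, w_civ=50.0) -> float:
    """Higher is better; the weights encode the tension between goals."""
    return (w_win * o.combat_won
            - w_loss * o.own_losses
            - w_civ * o.civilian_casualties)

# The tension made explicit: under these (invented) weights, a victory
# that costs two civilian lives scores the same as a bloodless defeat.
print(reward(Outcome(True, 0, 2)))   # 100 - 100 = 0.0
print(reward(Outcome(False, 0, 0)))  # 0.0
```

Whoever picks `w_win`, `w_loss` and `w_civ` has already answered the strategic questions the article says humans answer instinctively – and, unlike a human, the machine will not revise them in the heat of battle.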
CHAPTER 5
of the true crime rate in those areas. Because the software learned from reports recorded by the police rather than actual crime rates, it created a feedback loop that might exacerbate racial biases.

Meanwhile some facial-recognition software has been shown to have a higher rate of false identification of women and people with darker skin, because the data it was trained on was overwhelmingly male and white. Similar effects have been shown to cause image searches for “CEO” to overwhelmingly serve up images of men, and software serving up job ads to show higher-paying jobs to men.

Sometimes artificial intelligence feeds back to heighten human bias. In October 2017, police in Israel arrested a Palestinian worker who had posted a picture of himself on Facebook posing by a bulldozer with the caption “attack them” in Hebrew. Only he hadn’t: the Arabic for “good morning” is very similar, and Facebook’s automatic translation software chose the wrong one. The man was questioned for several hours before someone spotted the mistake.

Facebook was quick to apologise, but data biases persist. The resulting technology then becomes a mechanism for amplifying discrimination, says Anupam Chander, a law professor at the University of California, Davis. He argues that designers should employ “algorithmic affirmative action”, explicitly paying attention to race and other potentially biasing factors in the data fed to the algorithm and in the results it spits out, and then correcting course as needed. That might mean tweaking the inputs or explicitly gathering more diverse data and test subjects.

The black-box nature of many machine learning algorithms doesn’t help. After all, the point is that rather than following a simple procedure to make a decision, they are trained to pick up on patterns within thousands or millions of examples and apply these to future assessments. The features in the data an algorithm picks up on don’t even need to relate explicitly to, say, race, says Sonja Starr of the University of Michigan Law School in Ann Arbor. For example, sentencing algorithms take into account where someone lives and their employment history – factors that may just be correlated with racial background.

Such factors mean there often aren’t simple steps that can be retraced to check whether an AI’s logic was sound. Even if there were, companies often want to keep their algorithms under lock and key, afraid of losing their competitive edge.

There are ways round this. We don’t always know how medicines work, but regulators require evidence of their efficacy, through rigorous controlled trials, before they are sold. We could do the same for algorithms – trusted independent third parties could assess and evaluate these systems to avoid trade secrets becoming public. ❚

→ Turn to page 85 later in this chapter for more on ensuring “algorithmic accountability”
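The feedback loop described at the start of this piece can be made concrete with a toy simulation. Every number here is invented for illustration: two districts commit identical amounts of crime, but one starts out over-reported, patrols follow past reports, and (the key assumption) the share of true crime that gets recorded grows faster than linearly with patrol presence.

```python
# Toy model of a predictive-policing feedback loop (all numbers invented).
true_crime = {"A": 100, "B": 100}   # identical actual offending
reports = {"A": 40.0, "B": 60.0}    # district B starts over-reported

for _ in range(10):
    total = reports["A"] + reports["B"]
    # Patrols are allocated in proportion to past recorded reports
    patrol_share = {d: reports[d] / total for d in reports}
    # Assumption: detection is convex in patrol presence, capped at 90%
    reports = {d: true_crime[d] * min(0.9, patrol_share[d] ** 2)
               for d in reports}

# Despite identical true crime, recorded "crime" has diverged sharply
print(reports)
```

After a few iterations district B's recorded crime saturates near the cap while district A's dwindles towards zero – the software has manufactured a stark difference that was never there in the underlying behaviour.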
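The independent-audit idea suggested above can also be sketched. The check below is hypothetical – the function name, the test data and the 1.25 disparity threshold are all invented – but it shows the principle: an auditor can compare an algorithm's decision rates across groups from its outputs alone, without ever seeing its source code.

```python
# Hypothetical outcome-based fairness audit (names and threshold invented).
def audit_parity(decisions, groups, max_ratio=1.25):
    """decisions: parallel list of 0/1 outputs; groups: group labels.
    Returns per-group positive rates and whether the largest rate stays
    within max_ratio times the smallest."""
    rates = {}
    for g in set(groups):
        members = [d for d, gg in zip(decisions, groups) if gg == g]
        rates[g] = sum(members) / len(members)
    lo, hi = min(rates.values()), max(rates.values())
    return rates, hi <= max_ratio * lo

# Invented example: group "x" is approved far more often than group "y"
decisions = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups    = ["x", "x", "x", "x", "x", "y", "y", "y", "y", "y"]
rates, passed = audit_parity(decisions, groups)
print(rates, passed)
```

This is the algorithmic analogue of a drug trial: it measures effects rather than mechanisms, so trade secrets can stay secret while the behaviour is still held to account.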
authorities if targeted individuals stray 300 metres beyond their home or workplace, effectively building virtual checkpoints to hem in Uighur Muslims. After first being photographed shopping in Cardiff,

However, this is a compromise too far for Goulding. “In our view we don’t think it’s possible to balance the existence of this technology. The impact on rights is too grave.” ❚
A VEHICLE is driving slowly on the top floor of a multistorey car park. Footage from an on-board camera shows what the AI system controlling it can see: ranks of cars to the left, and is that a person off to the right? Straight ahead there is something else. To any human observer, it is obviously a stop sign. But the AI can’t seem to make sense of it and keeps on driving.

This was only a stunt. Researchers had deliberately stuck pieces of black tape on the sign to study how it confused the machine mind. Yet this and several similar demonstrations are revealing something disturbing: AI can be hacked, and you don’t even need to break passwords to launch an attack. As the technology begins to find more and more applications that affect our lives, this is a threat we need to take seriously.

At the heart of these hacks is the way deep-learning neural networks pull off their feats without understanding concepts. Such a network doesn’t know what a face, a word or a stop sign is, but faced with a pattern of data it has already seen, it becomes able to recognise one and respond appropriately. “It’s not forming a representation that’s anything like what the brain does,” says neuroscientist Bruno Olshausen at the University of California, Berkeley. “It’s doing statistics on the pixels to extract features.”

If you understand how a deep-learning system does this, you can work out what small changes are needed to make it think an image is something else entirely.

One of the most discussed hacks came in late 2017, when Anish Athalye at the Massachusetts Institute of Technology and his colleagues managed to fool an AI into misclassifying multiple images of a model turtle taken at different angles just by altering the pattern on its shell. At one stage the AI was tricked into thinking the turtle was a rifle. The same principle can be applied to text, audio and video.

Athalye and his team have since shown that you can compose such adversarial attack images even when you aren’t privy to an AI’s inner workings. They showed Google’s Cloud Vision image-recognising service an image of a dog and checked that the AI classified it correctly. This image was then gradually altered until it appeared to humans more and more like a photograph of two skiers standing in the snow. Even when the image had been undeniably transformed into this completely different scene, the algorithm’s response still came back: “dog”. In other words, you can hack an AI by trial and error. That sounds trivial – until you realise, for example, all the security-critical applications where face-recognition technology is already being employed.

Tech giants are trying to fight back. In April 2018, IBM launched a toolbox to help AI researchers make neural networks resilient to attacks. It includes algorithms for creating adversarial images and ways of measuring how neural networks might respond to them.

AI hacking is a “pretty good illustration of how dumb deep learning really is”, says psychologist Gary
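The trial-and-error attack described above can be sketched with a toy stand-in for a service like Cloud Vision. Everything here is invented for illustration – the "service" is just a hidden linear model – but the access pattern is the point: the attacker only ever submits inputs and reads back labels.

```python
import numpy as np

# Toy stand-in for a remote image-recognition service: we can QUERY it,
# but we cannot see its weights. Model and all numbers are invented.
rng = np.random.default_rng(1)
w = rng.normal(size=32)              # hidden inside the "service"

def classify(x):
    """The only access an attacker has: submit an image, get a label."""
    return "dog" if w @ x > 0 else "skier"

x = np.zeros(32)
x[0] = 1.0
if classify(x) != "dog":
    x = -x                           # start from something labelled "dog"
start = x.copy()

# Trial and error: propose random nudges and keep any that preserve the
# label. The image drifts arbitrarily far while the answer stays "dog".
for _ in range(2000):
    candidate = x + rng.normal(scale=0.1, size=32)
    if classify(candidate) == "dog":
        x = candidate

print(classify(x), float(np.linalg.norm(x - start)))
```

By the end the input is nothing like the one we started from, yet the oracle's answer never changed – a miniature version of the dog that "became" a photograph of skiers.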
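And when the attacker can see the model's internals, the "small changes" can be computed directly from its gradients – the idea behind the well-known fast-gradient-sign family of attacks. Below is a minimal sketch on a toy linear classifier; the model, class names and numbers are all invented for illustration.

```python
import numpy as np

# Toy linear classifier standing in for a deep network whose weights
# (and hence gradients) the attacker can inspect.
rng = np.random.default_rng(0)
w = rng.normal(size=64)     # known model weights
x = rng.normal(size=64)     # an input the model currently classifies

def score(x):
    return float(w @ x)     # positive score = "stop sign", say

# For a linear model the gradient of the score w.r.t. the input is w.
# Move every pixel by the same small amount eps, in whichever direction
# pushes the score across the decision boundary; eps is chosen just
# large enough to cross it.
eps = 1.1 * abs(score(x)) / np.sum(np.abs(w))
x_adv = x - np.sign(score(x)) * eps * np.sign(w)

print(score(x) > 0, score(x_adv) > 0)  # the classification flips
```

Because the per-pixel change is bounded by `eps`, the perturbed input can remain visually almost indistinguishable from the original even though the label has flipped – the stop-sign failure in miniature.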
near fathoming the grey matter between our ears. Perhaps we should instead aim for a kind of middle way, where AI may not know what it is looking at, but can apply common sense. Jim DiCarlo, a neuroscientist at MIT, says we need to find a way to help networks become more confident about their own interpretation. If most of the network is certain something is a dog, a few pixels that don’t fit this classification shouldn’t convince it otherwise.

Another option might be for AIs to focus not just on pixels, but to recognise larger features: does it see eyes, a nose, fur or a tail? Then the apparent disagreement of a few pixels could be discounted. How to do that is still unclear.

You may be unsurprised to hear that among those eager to solve the problem is the US military. Its research arm, DARPA, is running a competition to find artificial neural networks that can answer questions containing a tricky mix of real-world concepts. The idea is to develop AIs with common sense.

That is worth doing. “When people from government groups ask me about augmenting capabilities with machine learning, I always point out that a lot of it could be bypassed with adversarial examples,” says Ian Goodfellow, a machine learning researcher at Apple.

It is one thing to hack an AI controlling a vehicle in a car park. It would be quite another to hack an AI controlling a weapon. ❚

LIKE A MACHINE?

Experiments with images that fool an AI suggest that human minds sometimes work like an artificial neural network.

Stare into these images of static and tell me what you see. That is, in effect, what cognitive scientists Chaz Firestone and Zhenglong Zhou at Johns Hopkins University in Maryland asked people to do in a recent study. The static images had been generated specifically to fool machine minds, even though there is no meaning to them as far as a human is concerned. Even so, when the people were prompted with images of a robin and asked to say which static image the AI would mistakenly classify as a robin, most of them got it right – a higher proportion than would be expected by chance.

Answer: The AI thought the right-hand image was a robin.
JOHN MAYNARD KEYNES always assumed that robots would take our jobs. According to the British economist, writing in 1930, it was all down to “our means of economising the use of labour outrunning the pace at which we can find new uses for labour”. And that was no bad thing. Our working week would shrink to 15 hours by 2030, he reckoned, with the rest of our time spent trying to live “wisely, agreeably and well”.

It hasn’t happened like that – indeed, if anything many of us are working more than we used to. Advanced economies that have seen large numbers of manual workers displaced by automation have generally found employment for them elsewhere, for example in service jobs. The question is whether that can continue, now that artificial intelligence is turning its hand to all manner of tasks beyond the mundane and repetitive.

A survey in 2016 found that 82 per cent of people believe that AI will lead to job losses. Even if they don’t usurp us, the fear is that AI could cut ordinary workers’ salaries by reducing the value of human labour, allowing executives to hoover up the savings. Many economists suggest that increasing levels of automation are a significant factor behind a general rise in inequality in recent decades. So far automation has mainly affected blue-collar jobs. Now white-collar workers worry that AI will move on from being something that just curates Facebook feeds, and begin to displace accountants, surgeons, financial analysts, legal clerks and journalists.

In 2013 Carl Frey and Michael Osborne of the Oxford Martin Programme on the Impacts of Future Technology at the University of Oxford looked at 702 types of work and ranked them according to how easy it would be to automate them. They found that just under half of all jobs in the US could feasibly be done by machines within two decades.

The list included jobs such as telemarketers and library technicians. Not far behind were less obviously susceptible jobs, including models, cooks and construction workers, threatened respectively by digital avatars, robochefs and prefabricated buildings made in robot factories. The least vulnerable included mental health workers, teachers of young children, clergy and choreographers. In general, jobs that fared better required strong social interaction, original thinking and creative ability, or very specific fine motor skills of the sort demonstrated by dentists and surgeons.

Others find that list overblown. A working paper for the rich-world OECD club in 2016 suggested that AI would not be able to do all the tasks associated with all these jobs – particularly the parts that require human interaction – and only about 9 per cent of jobs are fully automatable. What’s more, past experience shows that jobs tend to evolve around automation.

According to this more Keynesian view, technological progress will continue to improve our lives. The most successful innovations are those that complement rather than usurp us, says Ben Shneiderman, who founded the human-computer interaction lab at the University of Maryland. “Technologies are most effective when their designs amplify human abilities,” says Shneiderman. They could help us solve problems, communicate widely, or create art, music and literature, he believes.

“Robots could be a liberating force by taking away routine work,” says Tom Watson, a former MP and deputy leader of the UK’s Labour Party. In 2016, he set up an expert commission on the effect of AI on employment that ended up concluding that AI could create as many jobs as it destroys. He is, however, concerned about the imbalance in power between those calling the robot shots and the rest of us. “We’ve got to be careful that big corporations and employers don’t amass all the benefits while ordinary workers are left to lump the negatives,” he says.

One arena where such questions are particularly pressing is the gig economy, in which AI systems serve up a platter of casual labour to a convenient app for consumers. Examples include the taxi firm Uber and outfits like TaskRabbit, which helps people find casual labourers to complete all sorts of chores. In such set-ups, workers are typically considered self-employed contractors, so the company has no obligation to keep supplying work or provide benefits like holiday pay or pensions. That has already led to strikes and legal disputes.

How can we adapt? The answer might simply be to update our social frameworks to reflect the new reality of work. Many countries are considering new regulatory frameworks for the gig economy. Another proposal is an AI tax on companies that are saving money by replacing workers with algorithms.

Others are thinking more radically about how to reconfigure our whole relationship with work. Proposals such as Universal Basic Income, giving everyone a certain minimum sum to cover housing, healthcare and living expenses, have been trialled in some countries, and aim among other things to smooth the inequalities that an AI-driven workplace might bring. That speaks to an important point: ultimately we, not AI, are in charge of our own destiny. Given the benefits of work for our health and well-being, maybe we’ll opt not to abolish fulfilling, rewarding work. “There will be inequities and disruptions, but that’s been going on for hundreds of years,” says Shneiderman. “The question is: is the future human-centred? I say it is.” ❚
CLIMATE?

AI’s data-hungry habits consume a lot of energy. How much that ends up contributing to global warming depends on the uptake of the technology and how efficient we can make it – but with computing already accounting for 5 per cent of the world’s total electricity consumption, it’s a real concern.

Increased energy consumption is one rarely discussed downside of AI. The technique behind most recent breakthroughs, deep learning on neural networks, involves performing ever more computations on ever more data. “The models are getting deeper and getting wider, and they use more and more energy to compute,” says Max Welling of the University of Amsterdam, the Netherlands.

Take image recognition, for instance. The more you want a neural network to recognise – distinguishing not just dogs, but different breeds of dogs, say – the more complex your network must be. The most complex are now described by more than 100 billion parameters.

This is important for two reasons. First, energy is money: there’s a ceiling to the amount any person or firm can spend on any technology. Second, our mobile devices can only use so much energy before they
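A back-of-envelope calculation makes the scale concrete. The figures below are assumptions chosen for illustration, not measurements: roughly 2 floating-point operations per parameter per forward pass is a common rule of thumb for dense networks, while the hardware efficiency and query volume are simply plausible round numbers.

```python
# Order-of-magnitude sketch of inference energy (all inputs assumed).
params = 100e9            # "more than 100 billion parameters" (see text)
flops_per_pass = 2 * params       # ~2 FLOPs per parameter (rule of thumb)
flops_per_joule = 1e11            # assumed accelerator efficiency

energy_per_query_j = flops_per_pass / flops_per_joule
queries_per_day = 1e9             # assumed usage of a popular service

daily_energy_kwh = energy_per_query_j * queries_per_day / 3.6e6
print(energy_per_query_j, "J per query;", daily_energy_kwh, "kWh per day")
```

Under these assumptions a single query costs a couple of joules – trivial on its own, but at a billion queries a day it adds up to hundreds of kilowatt-hours daily for one model, before any training is counted.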
LAW #1:
AI may not injure a human or allow a human to come to harm – unless it is being supervised by another human

LAW #2:
AI must be able to explain itself

LAW #3:
AI should resist the urge to pigeonhole

LAW #4:
AI must not impersonate a human

LAW #5:
AI should always have an off switch
Many of Isaac Asimov’s stories explore the unintended consequences of robots trying to apply hard-and-fast laws in different scenarios. In Runaround, a robot gets stuck in a loop when it tries to satisfy the Second Law (obey the orders given to it by a human) and Third Law (protect its own existence) at the same time. “Asimov’s laws are perfect for novels but not for putting into practice,” says Delvaux.

Ultimately, these debates boil down to questions of ethics, rather than technology. What values do we as a society consider to be worth protecting? Croft stresses the need to avoid the “philosophy of extreme libertarianism” of companies in Silicon Valley or the totalitarian approach employed by countries such as China.

But are there any basic principles everyone can agree on? Not allowing AI systems to harm humans seems like a no-brainer, and a UN ban on autonomous weapons has widespread support. But some argue that letting robots kill might actually save lives, by reducing the number of lethal mistakes made by human soldiers. A similar argument is used to justify the occasional fatal accident with driverless cars and medical robots: we cannot cut the risk of harm to zero, but humans will be far safer with machines than we are with one another.

Many find such utilitarian reasoning unpalatable, however. “The taking of life in war can only be justified in the most extreme circumstances,” says Croft. “Human judgement is therefore a very important part of the regrettable waging of war. Anything that moves away from that is ultimately dehumanising.” Wherever robots have the capacity to inflict harm, therefore – in the operating theatre, on the battlefield or on the road – human oversight is essential.

← For more on AI on the road and at war, see pages 58 and 62

But it’s not always immediately obvious where that capacity might lie – or how an artificially intelligent system might inflict harm just by doing what it’s supposed to do. “There will be accidents,” says Rebecca Crootof at Yale University. “The question is, will we let the harm fall where it lands, or will we figure out >
engines, electric motors and smartphones. These have
transformed our lives and allowed us to dominate the
planet.
← See chapter 1 for more on the nature of human and artificial intelligence
It is therefore not surprising that machines that think –
and might even think better than us – threaten to usurp
us. Just as elephants, dolphins and pandas depend on
our goodwill for their continued existence, our fate in
turn may depend on the decisions of these superior
thinking machines.
The idea of an intelligence explosion, when machines recursively improve their intelligence and thus quickly exceed human intelligence, is not particularly wild. The field of computing has
profited considerably from many similar exponential
HUMAN
HEALTH
WHAT YOU NEED TO KNOW
TO LIVE LONG AND PROSPER
ON SALE 19 AUGUST
ESSENTIAL
GUIDE№2
ARTIFICIAL
INTELLIGENCE
WHAT IS ARTIFICIAL INTELLIGENCE? WHAT ARE
ITS BENEFITS AND RISKS? ARE MACHINES GOING
TO TAKE MY JOB? OR TAKE OVER THE WORLD?
WITH BEWILDERING SPEED, AI HAS ADVANCED TO UNDERPIN
MANY ASPECTS OF OUR EVERYDAY LIVES – AND THAT'S JUST THE
BEGINNING. THIS SECOND ESSENTIAL GUIDE FROM THE MAKERS OF
NEW SCIENTIST MAGAZINE TELLS YOU ALL YOU NEED TO KNOW
ABOUT A TECHNOLOGY NO ONE CAN AFFORD TO IGNORE, INCLUDING:
❶ What AI is – and isn't
❷ How neural networks and deep learning work
❸ AI applications from medical bots to driverless cars
❹ The social and economic challenges of AI
❺ What the future holds for intelligent machines
£9.99