Genius Makers (2021)
This
expansive report covers the sprawling history of AI, from its early development to today's
controversies.
INTRODUCTION
Cade Metz. Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World.
Way back in 1968, the film 2001: A Space Odyssey introduced the world to HAL, a
malevolent supercomputer with a mind and an agenda all of its own. Of course, such advanced
technology is just the stuff of science fiction. Or is it? Today, government researchers, college
professors, and starry-eyed entrepreneurs around the world are in a heated race to develop true
artificial intelligence.
And many technologies verging towards AI are already part of our lives. These Blinks
explore just how we got to this point and where contemporary trends might take us in the future.
Covering companies like Google, Microsoft, and OpenAI, this survey of the AI landscape shows that science fiction may be closer to reality than we think.
CHAPTER 1 OF 8
July 7, 1958. Men huddle around a massive, refrigerator-sized computer deep within the
offices of the United States Weather Bureau in Washington, D.C. They watch intently as Frank
Rosenblatt, a young professor from Cornell University, shows the computer a series of cards. Each card bears a small mark on either its left or right side.
The machine is supposed to identify which have the mark on their left side and which have
it on the right. At first, it can't tell the difference. But as Rosenblatt continues flashing cards, the
computer's accuracy improves. After 50 tries, it identifies each card's orientation nearly perfectly.
Rosenblatt calls the machine the Perceptron. While these days it seems rudimentary, it was actually a milestone in the history of computing. At the time, though, it was dismissed as a novelty. The key message here is, early research
into artificial intelligence was met with skepticism. Today, we recognize Rosenblatt's Perceptron
and its successor, the Mark I, as very early versions of a neural network. Neural networks,
systems that learn through a process called machine learning, underlie much of what we
currently call artificial intelligence. At the most basic level, they work by analyzing massive
amounts of data and searching for patterns. As a network identifies more patterns, it refines its calculations and makes increasingly accurate predictions.
Back in 1960, this process was slow and involved a lot of trial and error. To train the Mark
I, scientists fed the computer pieces of paper with a letter, such as an A, B, or C, printed on each.
Using a series of calculations, the computer would guess which letter it saw. Then a human would
mark the guess as correct or incorrect. The Mark I would then update its calculations so it could
guess more accurately the next time. Scientists like Rosenblatt compared this process to the workings of
the human brain, arguing that each calculation was like a neuron.
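To make that trial-and-error loop concrete, here is a minimal sketch of a Perceptron-style update rule in Python. It is purely illustrative: the tiny four-pixel "cards" and every number are invented for the example, not taken from Rosenblatt's actual experiments.

```python
# A minimal sketch of the Perceptron-style learning loop described above:
# guess, check against the human's "correct / incorrect" mark, then nudge
# the weights. The data and numbers are made up for illustration.
import random

random.seed(1)

def make_card():
    """Return a 4-pixel 'card' with a mark on the left (label -1) or right (+1)."""
    side = random.choice([-1, 1])
    card = [0.0, 0.0, 0.0, 0.0]
    card[0 if side == -1 else 3] = 1.0
    return card, side

weights = [0.0, 0.0, 0.0, 0.0]
bias = 0.0

for trial in range(50):
    card, label = make_card()
    activation = sum(w * x for w, x in zip(weights, card)) + bias
    guess = 1 if activation >= 0 else -1
    if guess != label:  # the operator marks the guess as incorrect
        # Update rule: shift the weights toward the correct answer.
        weights = [w + label * x for w, x in zip(weights, card)]
        bias += label

# After a few dozen trials, the weights separate left-marked from right-marked cards.
print(weights, bias)
```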
By connecting many calculations that update and adapt over time, a computer could learn
as humans do. Rosenblatt called this connectionism. Yet there were detractors, like MIT computer
scientist Marvin Minsky. In a 1969 book, Minsky criticized the concept of connectionism. He
argued that machine learning could never scale up to solve more complex problems. Minsky's critique cast a long shadow over the field.
Throughout the 1970s and early 80s, interest in researching neural networks declined.
During this so-called AI winter, few institutions funded neural network research and progress on
machine learning stalled. But it didn't stop completely. A few scientists continued toying with neural networks, convinced the approach still held promise.
CHAPTER 2 OF 8
From the very start, Geoff Hinton was something of an outsider. In the 1970s,
he earned a PhD from the University of Edinburgh. That was the height of the AI winter, yet
Hinton still favored a connectionist approach to AI. Unsurprisingly, he struggled to find a job after
graduation.
For the next decade or so, Hinton drifted through academia. He took jobs at the University
of California, San Diego, Carnegie Mellon University, and finally the University of Toronto. All
the while, he continued to refine his theories of machine learning. He believed that adding
additional layers of computation, a process he called deep learning, could unlock the potential of
neural networks. Over the years, he won over a few doubters and made some progress, albeit slowly.
The key message here is, deep learning made neural networks tech's new favorite toy. Li
Deng and Geoff Hinton first got to talking at NIPS, an AI conference in Whistler, British Columbia.
At the time, Deng was developing speech recognition software for Microsoft. Hinton, recognizing
an opportunity, suggested that deep learning neural networks could outperform any conventional
approach. Deng was skeptical, but intrigued, and the two decided to work together. Deng and
Hinton spent most of 2009 working at Microsoft's research lab in Redmond, Washington.
Together, they wrote a program that used machine learning models to analyze hundreds of
hours of recorded speech. The program ran on special GPU processing chips normally used for
computer games. After weeks of processing, the results were astounding. The program could
analyze audio files and pick out individual words with remarkable accuracy. Soon, other tech
companies were experimenting with similar programs. Navdeep Jaitly, a scientist at Google, used the same deep learning approach to build his own speech recognition system.
His program had an error rate of only 18%. These early successes made a strong argument
for the potential power of neural networks. Moreover, researchers realized that the same basic
concepts could be applied to analyzing many different problems, from image search to the
navigation of self-driving cars. Within a few years, deep learning was the hottest technology in
Silicon Valley.
Google, always an ambitious company, led the charge by buying up Hinton's research firm,
DNNresearch, as well as other AI startups such as the London-based DeepMind. Yet this was
just the beginning. In the coming years, competition would only become more fierce.
CHAPTER 3 OF 8
Back in November 2013, Clément Farabet was having a quiet night at home when, suddenly,
his phone rang. He answered, expecting to hear a friend or a colleague. Instead, he heard Mark Zuckerberg, the head of Facebook.
Farabet was a researcher at NYU's deep learning lab. For weeks, various Facebook
employees had attempted to recruit him to join the social media company. Farabet had been hesitant,
but a personal appeal from the CEO himself piqued his interest. Of course, Farabet wasn't alone.
Many of his colleagues received similar offers. The tech giants of Silicon Valley were locked in a
recruitment arms race, each company determined to be the industry leader in the emerging field of
AI.
The key message here is, Silicon Valley's biggest companies poured money into AI
research. In the early 2010s, deep learning and neural networks were still relatively new
technologies. However, the ambitious tech entrepreneurs at places like Facebook, Apple, and
Google were all convinced that artificial intelligence was the future. While no one knew exactly
how AI would be used to generate profits, each company wanted to be the first to find out. Google
got a head start by buying DeepMind, but Facebook and Microsoft were close behind, each
spending millions hiring AI researchers. What did a social media company like Facebook see in
AI?
Well, a cutting-edge neural network could optimize the business by making sense of the
massive amounts of data on its servers. It could learn to identify faces, translate languages, or
anticipate buying habits to serve targeted ads. Down the line, it could even operate sophisticated
bots that could carry out tasks like messaging friends or placing orders. In short, it could make the
site come alive. Google had similarly grandiose plans for its research. Its AI specialists, people
like Anelia Angelova and Alex Krizhevsky, envisioned using Google Street View data to train self-driving cars.
Another researcher, Demis Hassabis, was designing a neural network to improve the energy
efficiency of the millions of servers the company relied on to operate. In the press, these projects
were hyped as forward-thinking and potentially world-changing. But not everyone was so optimistic. One prominent critic warned that the technology
could easily go awry. He posited that super-intelligent machines could be unpredictable and make
decisions that put humanity at risk. Such warnings didn't slow the wild ramp-up of investment,
though.
CHAPTER 4 OF 8
At first glance, the ancient board game Go looks simple. Two players take turns placing black and white stones on a grid, each trying to encircle the other. Yet in reality,
Go is incredibly complex.
The contest contains a vast number of potential paths and is so unpredictable that no
computer could beat the best human player. Not until 2015, that is. In October of that year,
Google's AI program, AlphaGo, took on Fan Hui, a top-ranked player. AlphaGo, a neural net
system trained by analyzing millions of games, was unstoppable. It won five matches in a row. A
few months later, it defeated Lee Sedol, the reigning human champion.
Clearly, AI had turned a corner. And the more scientists studied neural networks, the more
powerful these networks became. The key message here is, neural networks have the potential to
outdo humans in many fields. In the decades after Rosenblatt first began experimenting with
the Perceptron, the capabilities of neural networks grew by leaps and bounds. This incredible uptick
was fueled by two trends. One, computer processors kept getting faster and cheaper.
Any modern chip could compute vastly more calculations than earlier models. And two,
data had become an abundant resource, so networks could be trained on a huge variety of examples.
Together, these trends allowed researchers to apply machine learning principles to any number of issues. Take, for example, the problem of diagnosing diabetic
retinopathy. This common condition causes blindness if untreated. Yet, detecting it early usually
requires a skilled doctor to carefully examine a patient's eye for tiny lesions, hemorrhages, and
subtle discolorations.
In countries like India, where doctors can be scarce, there isn't always enough manpower
to examine everyone. Enter Google engineer Varun Gulshan and physician Lily Peng. Working
together, these two scientists devised a plan to efficiently diagnose diabetic retinopathy. Using a
pool of 130,000 digital eye scans from India's Aravind Eye Hospital, the pair trained a neural net
to spot the subtle warning signs of the disease. After crunching the data, their program could
automatically analyze any patient's eyes in seconds. Moreover, it was accurate 90% of the time.
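As a rough illustration of what training such a diagnostic model involves, here is a minimal sketch of a binary image classifier written in Python with PyTorch. Everything here is an assumption: the random "scans," the tiny network, and the settings merely stand in for the real 130,000-image dataset and Google's actual model.

```python
# A toy "disease / no disease" image classifier, loosely in the spirit of the
# retinopathy project described above. Random placeholder data, not real scans.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in dataset: 256 fake grayscale "eye scans" of 64x64 pixels,
# each with a random 0/1 label (1 = shows warning signs, 0 = healthy).
images = torch.randn(256, 1, 64, 64)
labels = torch.randint(0, 2, (256, 1)).float()

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 1),  # one logit: how likely the scan shows disease
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(5):
    for i in range(0, len(images), 32):
        batch, target = images[i:i + 32], labels[i:i + 32]
        optimizer.zero_grad()
        loss = loss_fn(model(batch), target)
        loss.backward()
        optimizer.step()

# A trained model like this can score a new scan in seconds; here the score is
# meaningless because the training data is random.
print(torch.sigmoid(model(images[:1])).item())
```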
Similar projects could revolutionize the future of healthcare. Neural networks could be
trained to analyze X-rays, CAT scans, MRIs, and any other type of medical data to efficiently spot
diseases and abnormalities. In time, they could even learn to identify patterns too subtle for humans
to see.
CHAPTER 5 OF 8
Imagine you're scrolling through your Twitter feed. You see the usual stuff, a short joke, a
targeted advertisement, a heated argument about pop culture. Then you come across something
that catches your eye. A video of Donald Trump, but it's not any Donald Trump you know.
This Trump is speaking fluent Mandarin. And it's not some bad overdub either. The
motions of his mouth are perfectly in sync with the syllables you hear. His body language matches
the rhythm of his speech. Yet as you play the clip again and again, you catch small glitches that give the game away: the video is a fake.
You weren't tricked this time, but as AI technology progresses, you might not be so
confident in the future. The key message here is, more sophisticated AI has the potential to distort
our view of reality. In the early 2010s, machine learning research focused on teaching computers
to spot patterns in information. AI programs trained on large image sets were adept at identifying
and sorting pictures based on their content. But in 2014, Ian Goodfellow, a researcher at Google, asked whether a computer could go further and generate convincing new images of its own.
To do this, Goodfellow designed the first Generative Adversarial Network, or GAN. These
work by having two neural nets train each other. The first network generates the images, and the
other judges whether each generated image looks real or fake. As the two networks swap information over
and over, the new images become more and more true to life. While fake images have always
existed, GANs made generating realistic renderings of any person or thing easier than ever. Very
quickly, early adopters used the technology to create convincing videos of politicians, pop stars, and other celebrities.
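The adversarial back-and-forth described above can be sketched in a few lines of Python. This toy GAN, written with PyTorch, learns to imitate a simple one-dimensional distribution rather than images; the architecture, data, and settings are illustrative assumptions, not details from Goodfellow's work.

```python
# A toy generative adversarial network (GAN): two networks trained against
# each other, as described above. Illustrative only.
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_batch(n=64):
    # "Real" data the generator must learn to imitate: a Gaussian around 2.0.
    return torch.randn(n, 1) * 0.5 + 2.0

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # 1) Train the discriminator to tell real samples from generated ones.
    real = real_batch()
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator.
    fake = generator(torch.randn(64, 8))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# The generated samples should now cluster near the "real" mean of 2.0.
print(generator(torch.randn(1000, 8)).mean().item())
```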
While some of these so-called deepfakes are harmless fun, others place people into
pornographic images, raising obvious red flags. Deepfakes aren't the only problem facing AI
research. Critics have also noted the field's issues with racial and gender bias. In 2018, computer
scientist Joy Buolamwini showed that facial recognition programs from major tech companies
faltered when identifying non-white, non-male faces. The networks were trained on data
that skewed toward white males, distorting their accuracy. Such findings have made people
rightfully concerned about AI's potential to be oppressive, a topic we'll explore in the next Blink.
CHAPTER 6 OF 8
In the fall of 2017, eight engineers at Clarifai, an NYC-based AI research and
development startup, were given an odd task. They were asked to build a neural network capable
of identifying people, vehicles, and buildings. Specifically, the program should excel at working
in a desert environment. The engineers hesitantly began work, but disturbing rumors soon
emerged.
The team wasn't developing tools for any regular client. They were working for the US
Department of Defense. Their new neural network would help military drones identify objects in surveillance footage. Unknowingly,
they were part of a government weapons program. After the news came out, the engineers quit the project in protest.
It seemed that AI was increasingly intertwined with the world of politics. The key message
here is, AI can easily be misused for political ends. Private companies weren't the only
organizations eager to explore the growing capabilities of AI. Governments were also increasingly
aware of the transformative potential of machine learning technologies. In China, the State Council
funded an ambitious plan to become the world leader in AI by 2030. Similarly, the United States
government was investing ever-increasing amounts of money into developing AI systems, often with military applications in mind.
In 2017, the Defense Department and Google entered discussions about a new multi-
million dollar partnership called Project Maven. The agreement would have Google's AI teams
develop neural networks to optimize the Pentagon's drone program. Unsurprisingly, the prospect
of aiding the military made many engineers uncomfortable. More than 3,000 employees signed a
petition to drop the contract. While the company ultimately relented, Google's executive board
hasn't ruled out more partnerships in the future. Meanwhile, Facebook was embroiled in a political scandal of its own.
During the 2016 election cycle, a British startup called Cambridge Analytica harvested
private data from 50 million Facebook profiles and used it to create misleading campaign ads for
Donald Trump. The scandal highlighted the platform's moderation problem. With so many users,
the site had become a hotbed for content, including radical propaganda and misinformation, often
dubbed fake news. In 2019, Zuckerberg testified to Congress that his company would use AI to detect and remove such harmful content.
However, the solution isn't perfect. Even the most advanced neural networks struggle to
parse the nuances of political speech. Not only that, AI in malicious hands can produce new
misinformation as fast as it can be moderated. In reality, anyone using AI for good will always be locked in an arms race with those using it for harm.
CHAPTER 7 OF 8
It's a bright spring day in May 2018, and the Shoreline Amphitheater in Mountain View,
California is packed. The crowd is going wild, but the man on stage isn't some rock star wielding
a guitar. No, it's Google's CEO Sundar Pichai, and he's holding a phone. This is I/O, the tech
company's yearly conference, and Pichai has just demonstrated Google's newest innovation, the
Google Assistant.
Using a neural net technology called WaveNet, the Assistant can make phone calls using a
realistic-sounding human voice. In the clip Pichai just played, the AI program successfully made
a restaurant reservation. The woman at the cafe didn't even realize she was speaking with a
computer. The AI's new capabilities wow this particular crowd, but not everyone is so impressed.
The key message here is, neural networks still don't think or learn like humans. When NYU
psychology professor Gary Marcus saw Pichai's demonstration, he merely rolled his eyes.
Like Marvin Minsky before him, he's pessimistic about the true potential of machine
learning. While Google presented its AI Assistant as having a nearly human-level understanding
of speech and conversation, Marcus thought that in actuality, the program only seemed impressive
because it was performing a very specific, predictable task. Marcus's argument draws on a school
of thought called nativism. Nativists believe that a huge portion of human intelligence is hardwired
into our brains by evolution. This makes human learning fundamentally different from neural net
deep learning. For instance, a baby's brain is so agile that it can learn to identify an animal after being shown only a handful of examples.
Meanwhile, to perform the same task, a neural net must be trained on millions of images.
For nativists, this difference in innate ability explains why neural net AI hasn't improved as fast as
anticipated, especially when it comes to nuanced tasks like understanding language. While Google
Assistant's AI can push its way through basic rote conversation, it's not exactly capable of engaging
with the more complex discourse that comes easily to an average person. An AI might be able to book a dinner reservation, but it can't banter, argue, or improvise the way a person can.
Of course, researchers are working to overcome this obstacle. Teams at Google and
OpenAI are currently experimenting with an approach called universal language modeling. These
systems train neural nets to understand language in a more nuanced, context-specific manner and
have shown some progress. Whether they'll make great conversation partners, however, is yet to
be seen.
CHAPTER 8 OF 8
The tech industry is responsible for inventing and operating many of the services that make the modern world run. It
also generates an enormous amount of wealth, tens of billions of dollars. So imagine if there were
a machine that could do all of that inventing and wealth-creating on its own. Could AI make this possible? Maybe, at least according to Ilya Sutskever, chief scientist
at OpenAI. If scientists could build artificial intelligence as capable and creative as the human
mind, it would be revolutionary. One super-intelligent computer could build another, better
version, and so on and so forth. Eventually, AI would take humanity beyond what we can even imagine.
But is such a future really possible? Even Silicon Valley's brightest minds aren't entirely sure. The key message here is,
researchers continue to push AI beyond its current limits. When Frank Rosenblatt first introduced
the Perceptron, there were skeptics, but there were also optimists. Around the world, scientists and
futurists made bold claims about computers soon matching or even surpassing humanity in
technical ability and intellectual prowess. Herbert Simon, a professor at Carnegie Mellon, wagered
it would happen in a mere two decades. Clearly, such predictions didn't entirely pan out.
Yet, despite the uneven speed of progress, many still believe that human-like or even
superhuman artificial general intelligence, known as AGI, is on the horizon. In fact, they're betting on it. In 2018, OpenAI updated its charter to include developing AGI as an
explicit goal of the company's research. And shortly after the announcement, Microsoft agreed to
invest more than $1 billion toward the research team's ambitious goal. Exactly how AGI can be
achieved is still unclear, but researchers are approaching the problem from several angles. Some
companies, such as Google, Nvidia, and Intel, are working on developing new processing chips built specifically for neural networks.
The idea is that this supercharged hardware will allow networks to process enough data to
overcome current barriers faced by machine learning programs. Meanwhile, Geoff Hinton, one of
machine learning's earliest boosters, is taking a different path. His research focuses on a technology
called capsule networks. This experimental model is said to more closely mirror the structure of the human brain.
Still, it'll be years before anything concrete comes to fruition. By then, a completely new approach may well have taken its place.
CONCLUSION
Final summary
The key message in these blinks is, recent advancements in artificial intelligence have
generated a lot of hype, anxiety, and controversy. Much of what we currently call AI is based on
neural network models, which use strings of calculations to analyze massive amounts of
data and identify patterns. Governments and private companies have honed this technology to do
everything from optimizing image searches and serving up internet ads to diagnosing diseases and
piloting autonomous aircraft. Where AI research will lead in the future is unclear, but some are betting it will change the world as we know it.
ABOUT THE AUTHOR
Cade Metz is a technology correspondent at the New York Times, where he covers artificial intelligence and other digital technology issues. Previously, he was a senior staff writer at Wired.