Artificial Intelligence
Jen Mallia | Reviewed by Margaret Rouse
Last updated: 9 June, 2023
What Does Artificial Intelligence (AI) Mean?
Artificial intelligence (AI), also known as machine intelligence, is a branch of
computer science that focuses on building and managing technology that can
learn to autonomously make decisions and carry out actions on behalf of a
human being.
At its heart, AI uses the same basic algorithmic functions that drive
traditional software, but applies them in a different way. Perhaps the most
revolutionary aspect of AI is that it allows software to rewrite itself as it
adapts to its environment.
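To make that contrast concrete, here is a minimal Python sketch, with invented example data and an invented learning rate, of a hard-coded rule versus a routine whose decision boundary is shaped by the data it is shown:

```python
# Minimal sketch: a hard-coded rule vs. a parameter that adapts to data.
# The example data and learning rate below are invented for illustration.

def fixed_rule(x):
    """Traditional software: the decision boundary is written by the programmer."""
    return 1 if x > 0.5 else 0

def learn_threshold(examples, steps=100, lr=0.05):
    """Toy 'learning' routine: the threshold is nudged whenever it misclassifies
    a labeled example, so the program's behavior is shaped by data rather than
    by an explicit, hand-written rule."""
    threshold = 0.0
    for _ in range(steps):
        for x, label in examples:
            prediction = 1 if x > threshold else 0
            threshold += lr * (prediction - label)  # move only on mistakes
    return threshold

examples = [(0.1, 0), (0.2, 0), (0.7, 1), (0.9, 1)]  # invented labeled data
print(fixed_rule(0.3))            # always uses the hand-written boundary
print(learn_threshold(examples))  # learned boundary separates the two groups
```

The point of the sketch is only the difference in where the decision comes from: the first function behaves the same forever, while the second changes its own behavior in response to what it observes.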
AI initiatives are also talked about in terms of their belonging to one of four
categories: reactive machines, limited memory, theory of mind, and self-aware AI.
In poker terms, a reactive AI player bases its decisions only on the cards
currently on the table, a limited memory player also recalls how earlier hands
were played, a theory of mind player factors in other players' behavioral cues
and finally, a self-aware professional AI player stops to consider if playing
poker to make a living is really the best use of its time and effort.
Early milestones in AI
The first AI programs
The earliest successful AI program was written in 1951 by Christopher Strachey, later
director of the Programming Research Group at the University of Oxford.
Strachey’s checkers (draughts) program ran on the Ferranti Mark I computer at
the University of Manchester, England. By the summer of 1952 this program could play
a complete game of checkers at a reasonable speed.
Information about the earliest successful demonstration of machine learning was
published in 1952. Shopper, written by Anthony Oettinger at the University of
Cambridge, ran on the EDSAC computer. Shopper’s simulated world was a mall of eight
shops. When instructed to purchase an item, Shopper would search for it, visiting shops
at random until the item was found. While searching, Shopper would memorize a few of
the items stocked in each shop visited (just as a human shopper might). The next time
Shopper was sent out for the same item, or for some other item that it had already
located, it would go to the right shop straight away. This simple form of learning is
called rote learning.
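The behavior described above amounts to search plus memory. The following Python is a hypothetical reconstruction of the idea only, with an invented shop layout and item names, not Oettinger's actual EDSAC program:

```python
import random

# Hypothetical reconstruction of Shopper-style rote learning: wander at random
# until the item is found, remembering where items were seen along the way.

SHOPS = {  # invented stock for an eight-shop "mall"
    "shop1": {"tea"}, "shop2": {"soap"}, "shop3": {"bread"}, "shop4": {"ink"},
    "shop5": {"nails"}, "shop6": {"milk"}, "shop7": {"candles"}, "shop8": {"string"},
}

memory = {}  # item -> shop where it was last seen

def shopper(item):
    # Rote learning: if the item has been seen before, go straight there.
    if item in memory:
        return memory[item]
    # Otherwise visit shops at random, memorizing what each one stocks.
    shops = list(SHOPS)
    random.shuffle(shops)
    for shop in shops:
        for stocked in SHOPS[shop]:
            memory[stocked] = shop
        if item in SHOPS[shop]:
            return shop
    raise LookupError(f"{item} not stocked anywhere")

print(shopper("milk"))  # random search the first time
print(shopper("milk"))  # direct visit the second time
```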
The first AI program to run in the United States also was a checkers program, written in
1952 by Arthur Samuel for the prototype of the IBM 701. Samuel took over the essentials
of Strachey’s checkers program and over a period of years considerably extended it. In
1955 he added features that enabled the program to learn from experience. Samuel
included mechanisms for both rote learning and generalization, enhancements that
eventually led to his program’s winning one game against a former
Connecticut checkers champion in 1962.
Evolutionary computing
Samuel’s checkers program was also notable for being one of the first efforts at
evolutionary computing. (His program “evolved” by pitting a modified copy against the
current best version of his program, with the winner becoming the new standard.)
Evolutionary computing typically involves the use of some automatic method of
generating and evaluating successive “generations” of a program, until a highly
proficient solution evolves.
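A minimal sketch of that champion-versus-modified-copy scheme, applied here to an invented toy problem (nudging a weight vector toward a fixed target) rather than to checkers, might look like this in Python:

```python
import random

# Sketch of the "modified copy vs. current champion" scheme described above,
# on a toy problem. The target weights and mutation size are invented, and the
# scoring function stands in for "playing a match" between the two versions.

TARGET = [0.8, -0.3, 0.5, 0.1]  # invented "ideal" weights

def score(weights):
    # Higher is better: negative squared distance to the target.
    return -sum((w - t) ** 2 for w, t in zip(weights, TARGET))

def mutate(weights):
    # The challenger is a slightly perturbed copy of the champion.
    return [w + random.gauss(0, 0.1) for w in weights]

champion = [0.0, 0.0, 0.0, 0.0]
for generation in range(1000):
    challenger = mutate(champion)
    if score(challenger) > score(champion):
        champion = challenger  # the winner becomes the new standard

print(champion)  # drifts toward the target over successive generations
```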
John Holland joined the faculty at the University of Michigan after graduation and over the next four decades
directed much of the research into methods of automating evolutionary computing, a
process now known by the term genetic algorithms. Systems implemented in Holland’s
laboratory included a chess program, models of single-cell biological organisms, and a
classifier system for controlling a simulated gas-pipeline network.
Genetic algorithms are no longer restricted to “academic” demonstrations, however; in
one important practical application, a genetic algorithm cooperates with a witness to a
crime in order to generate a portrait of the criminal.
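For illustration, a bare-bones generational genetic algorithm, with selection, crossover, and mutation over bit strings, can be sketched as follows; the target pattern, population size, and rates are invented for the example and are unrelated to Holland's classifier systems:

```python
import random

# Minimal generational genetic algorithm on a toy problem: evolving a bit
# string to match an invented target pattern.

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def crossover(a, b):
    cut = random.randrange(1, len(a))   # single-point crossover
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
for generation in range(100):
    # Selection: keep the fitter half as parents for the next generation.
    population.sort(key=fitness, reverse=True)
    parents = population[:15]
    offspring = [mutate(crossover(random.choice(parents), random.choice(parents)))
                 for _ in range(15)]
    population = parents + offspring

print(max(population, key=fitness))  # usually converges to the target
```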
Allen Newell, Herbert Simon, and Cliff Shaw, the team behind the Logic Theorist, went on to write a more powerful program, the General
Problem Solver, or GPS. The first version of GPS ran in 1957, and work continued on the
project for about a decade. GPS could solve an impressive variety of puzzles using a trial
and error approach. However, one criticism of GPS, and similar programs that lack
any learning capability, is that the program’s intelligence is entirely secondhand, coming
from whatever information the programmer explicitly includes.
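As a rough illustration of what trial-and-error puzzle solving looks like in code (a generic breadth-first state-space search, not GPS itself), here is a Python sketch that solves the classic two-jug measuring puzzle by systematically trying every sequence of moves:

```python
from collections import deque

# Trial-and-error puzzle solving as generic state-space search: measure
# exactly 2 litres using a 3-litre and a 4-litre jug.

def moves(state):
    a, b = state                       # litres in the 3L and 4L jugs
    yield (3, b); yield (a, 4)         # fill either jug
    yield (0, b); yield (a, 0)         # empty either jug
    pour = min(a, 4 - b); yield (a - pour, b + pour)   # pour 3L -> 4L
    pour = min(b, 3 - a); yield (a + pour, b - pour)   # pour 4L -> 3L

def solve(goal=2):
    frontier = deque([((0, 0), [])])   # (current state, path so far)
    seen = {(0, 0)}
    while frontier:
        state, path = frontier.popleft()
        if goal in state:
            return path + [state]
        for nxt in moves(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [state]))

print(solve())  # the sequence of jug contents leading to 2 litres
```

The search knows nothing about jugs beyond the legal moves it is given, which mirrors the criticism in the passage: all of the "intelligence" is supplied in advance by the programmer.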