cs8691-AI - 1st-Unit
UNIT I
INTRODUCTION
"Artificial Intelligence is a branch of computer science by which we can create intelligent machines that behave like humans, think like humans, and are able to make decisions."
Artificial Intelligence exists when a machine can exhibit human skills such as learning, reasoning, and problem solving. With Artificial Intelligence you do not need to preprogram a machine for every task; instead, you can create a machine with programmed algorithms that can work with its own intelligence, and that is the power of AI. It is believed that AI is not a new idea: according to Greek myth, there were mechanical men in ancient times which could work and behave like humans.
1.2 DEFINITIONS OF AI
AI definitions can be categorized into four groups, as follows:
Systems that think like humans
Systems that think rationally
Systems that act like humans
Systems that act rationally
1.2.1 Acting Humanly: Turing Test Approach
Proving a theorem
Playing chess
Planning a surgical operation
Driving a car in traffic
Creating a system which can exhibit intelligent behavior, learn new things by itself, demonstrate, explain, and advise its user.
To create AI, we should first know how intelligence is composed. Intelligence is an intangible property of our brain, a combination of reasoning, learning, problem solving, perception, language understanding, etc. To achieve these capabilities in a machine or software, Artificial Intelligence draws on the following disciplines:
1.4 HISTORY OF ARTIFICIAL INTELLIGENCE
Year 1943: The first work which is now recognized as AI was done by Warren McCulloch and Walter Pitts in 1943. They proposed a model of artificial neurons.
Year 1949: Donald Hebb demonstrated an updating rule for modifying the connection strength between neurons. His rule is now called Hebbian learning.
Year 1950: Alan Turing, an English mathematician, pioneered machine learning in 1950. Turing published "Computing Machinery and Intelligence", in which he proposed a test that checks a machine's ability to exhibit intelligent behavior equivalent to human intelligence, now called the Turing test.
Year 1955: Allen Newell and Herbert A. Simon created the "first artificial intelligence program", named the "Logic Theorist". This program proved 38 of 52 mathematics theorems and found new and more elegant proofs for some of them.
Year 1956: The term "Artificial Intelligence" was first adopted by the American computer scientist John McCarthy at the Dartmouth Conference, where AI was coined as an academic field for the first time. Around that time, high-level computer languages such as FORTRAN, LISP, and COBOL were invented, and enthusiasm for AI was very high.
Year 1966: Researchers emphasized developing algorithms which could solve mathematical problems. Joseph Weizenbaum created the first chatbot, named ELIZA, in 1966.
Year 1972: The first intelligent humanoid robot, named WABOT-1, was built in Japan.
The duration between 1974 and 1980 was the first AI winter. AI winter refers to a period in which computer scientists dealt with a severe shortage of government funding for AI research. During AI winters, public interest in artificial intelligence decreased.
1.4.5 A boom of AI (1980-1987)
Year 1980: After the AI winter, AI came back with "Expert Systems". Expert systems were programs that emulate the decision-making ability of a human expert.
In the Year 1980, the first national conference of the American Association of
Artificial Intelligence was held at Stanford University.
The duration between 1987 and 1993 was the second AI winter. Investors and governments again stopped funding AI research due to high costs without efficient results. Expert systems such as XCON proved very costly to maintain.
Year 1997: In 1997, IBM's Deep Blue beat world chess champion Garry Kasparov and became the first computer to beat a reigning world chess champion.
Year 2002: For the first time, AI entered the home in the form of Roomba, a robot vacuum cleaner.
Year 2006: AI entered the business world by 2006. Companies like Facebook, Twitter, and Netflix started using AI.
1.4.8 Deep learning, big data and artificial general intelligence (2011-present)
Year 2011: In 2011, IBM's Watson won Jeopardy!, a quiz show in which it had to solve complex questions as well as riddles. Watson proved that it could understand natural language and solve tricky questions quickly.
Year 2012: Google launched an Android app feature, "Google Now", which could provide information to the user as a prediction.
Year 2014: In 2014, the chatbot "Eugene Goostman" won a competition based on the famous "Turing test."
Year 2018: IBM's "Project Debater" debated complex topics with two master debaters and performed extremely well.
Google demonstrated an AI program, "Duplex", a virtual assistant that booked a hairdresser appointment over the phone; the lady on the other side did not notice that she was talking to a machine.
Now AI has developed to a remarkable level. The concepts of deep learning, big data, and data science are now booming. Companies like Google, Facebook, IBM, and Amazon are working with AI and creating amazing devices. The future of Artificial Intelligence is inspiring and will come with high intelligence.
Personal Assistants:
Virtual assistants already exist, and some of us have used them. As the technology grows, we can expect them to act as personal assistants and emote like humans. With artificial intelligence, deep learning, and neural networks, it is highly possible that we can make robots emote and make them assistants. They could be used for many different purposes, such as in the hospitality industry, day-care centers, elder care, and clerical jobs.
1.5 AGENT
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors. An agent runs in a cycle of perceiving, thinking, and acting.
Hence the world around us is full of agents: thermostats, cellphones, cameras, and even we ourselves are agents.
Sensor: Sensor is a device which detects the change in the environment and sends the
information to other electronic devices. An agent observes its environment through sensors.
Actuators: Actuators are the component of machines that converts energy into
motion. The actuators are only responsible for moving and controlling a system. An actuator
can be an electric motor, gears, rails, etc.
Effectors: Effectors are the devices which affect the environment. Effectors can be
legs, wheels, arms, fingers, wings, fins, and display screen.
An AI system can be defined as the study of the rational agent and its environment.
The agents sense the environment through sensors and act on their environment through
actuators. An AI agent can have mental properties such as knowledge, belief, intention, etc.
An agent can be:
Human-Agent: A human agent has eyes, ears, and other organs which work for sensors and
hand, legs, vocal tract work for actuators.
Robotic Agent: A robotic agent can have cameras and infrared range finders for sensors and various motors for actuators.
Software Agent: Software agent can have keystrokes, file contents as sensory input and
act on those inputs and display output on the screen.
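The perceive-think-act cycle described above can be sketched in Python. The Thermostat class below is an invented illustration (the text mentions a thermostat as an everyday agent): its "sensor" is a temperature reading and its "effector" switches a heater on or off.

```python
# A minimal sketch of the perceive -> think -> act cycle of an agent.
# Thermostat is a hypothetical example, not from the text.

class Thermostat:
    def __init__(self, target):
        self.target = target          # desired temperature
        self.heater_on = False        # effector state

    def perceive(self, temperature):
        return temperature            # sensor reading becomes the percept

    def think(self, percept):
        return "on" if percept < self.target else "off"

    def act(self, action):
        self.heater_on = (action == "on")
        return self.heater_on

    def step(self, temperature):
        # one full perceive -> think -> act cycle
        return self.act(self.think(self.perceive(temperature)))
```

Calling `step(15)` on a thermostat targeting 20 degrees turns the heater on; `step(25)` turns it off.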
1.6 INTELLIGENT AGENTS
A rational agent is an agent which has clear preferences, models uncertainty, and acts in a way that maximizes its performance measure over all possible actions. A rational agent is said to perform the right thing. AI is about creating rational agents that apply game theory and decision theory to various real-world scenarios. For an AI agent, rational action is most important because in reinforcement learning the agent gets a positive reward for each best possible action and a negative reward for each wrong action.
Rationality:
The rationality of an agent is measured by its performance measure. Rationality can be judged on the basis of the following points:
Performance measure which defines the success criterion.
Agent prior knowledge of its environment.
Best possible actions that an agent can perform.
The sequence of percepts.
The task of AI is to design an agent program which implements the agent function. The structure of an intelligent agent is a combination of architecture and agent program. It can be viewed as:
Agent = Architecture + Agent program
Following are the main three terms involved in the structure of an AI agent:
Architecture: the machinery, with sensors and actuators, on which the agent executes.
Agent function: a map from percept sequences to actions, F : P* → A
Agent program: an implementation of the agent function that runs on the architecture.
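The agent function F : P* → A maps a percept *sequence* to an action. A minimal sketch is a table-driven agent; the table entries below (dirty/clean percepts, suck/noop actions) are invented for illustration.

```python
# Hypothetical table-driven agent: the table keys are percept sequences
# (tuples), the values are actions. The agent accumulates its percept
# history P* and looks the whole sequence up in the table.

def make_table_agent(table, default="noop"):
    percepts = []                      # the growing percept history P*

    def agent(percept):
        percepts.append(percept)
        return table.get(tuple(percepts), default)

    return agent

# invented example table
table = {
    ("dirty",): "suck",
    ("clean",): "noop",
    ("clean", "dirty"): "suck",
}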
Agent | Performance measure | Environment | Actuators | Sensors
1. Medical Diagnose | Healthy patient, Minimized cost | Patient, Hospital, Staff | Tests, Treatments | Keyboard (entry of symptoms)
2. Vacuum Cleaner | Cleanness, Efficiency, Battery life, Security | Room, Table, Wood floor, Carpet, Various obstacles | Wheels, Brushes, Vacuum extractor | Camera, Dirt detection sensor, Cliff sensor, Bump sensor, Infrared wall sensor
3. Part-picking Robot | Percentage of parts in correct bins | Conveyor belt with parts, Bins | Jointed arms, Hand | Camera, Joint angle sensors
1. Fully observable vs Partially observable:
If an agent's sensors give it access to the complete state of the environment at each point in time, the environment is fully observable; otherwise it is only partially observable.
2. Deterministic vs Stochastic:
If an agent's current state and selected action can completely determine the next state
of the environment, then such environment is called a deterministic environment.
A stochastic environment is random in nature and cannot be determined completely
by an agent.
In a deterministic, fully observable environment, agent does not need to worry about
uncertainty.
3. Episodic vs Sequential:
In an episodic environment, there is a series of one-shot actions, and only the current
percept is required for the action.
However, in Sequential environment, an agent requires memory of past actions to
determine the next best actions.
4. Single-agent vs Multi-agent:
If only one agent is involved in an environment, and operating by itself then such an
environment is called single agent environment.
However, if multiple agents are operating in an environment, then such an
environment is called a multi-agent environment.
The agent design problems in the multi-agent environment are different from single
agent environment.
5. Static vs Dynamic:
If the environment can change itself while an agent is deliberating then such
environment is called a dynamic environment else it is called a static environment.
Static environments are easy to deal because an agent does not need to continue
looking at the world while deciding for an action.
For dynamic environment, agents need to keep looking at the world at each action.
Taxi driving is an example of a dynamic environment whereas Crossword puzzles are
an example of a static environment.
6. Discrete vs Continuous:
If in an environment there are a finite number of percepts and actions that can be
performed within it, then such an environment is called a discrete environment else it
is called continuous environment.
A chess game comes under discrete environment as there is a finite number of moves
that can be performed.
A self-driving car is an example of a continuous environment.
7. Known vs Unknown
Known and unknown are not actually features of an environment but of an agent's state of knowledge about how to perform an action.
In a known environment, results for all actions are known to the agent. In unknown
environment, agent needs to learn how it works in order to perform an action.
It is quite possible that a known environment to be partially observable and an
Unknown environment to be fully observable.
8. Accessible vs Inaccessible
If agent can obtain complete and accurate information about environment state, then
such an environment is called an Accessible environment else it is called inaccessible.
An empty room whose state can be defined by its temperature is an example of an
accessible environment.
Information about an event on earth is an example of Inaccessible environment.
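The deterministic vs stochastic distinction above can be illustrated with two transition functions. The states and actions here are invented for illustration: a state is just a number and an action adds to it.

```python
import random

# Deterministic: the next state is fully determined by (state, action).
def deterministic_step(state, action):
    return state + action

# Stochastic: the same (state, action) pair may yield different next
# states, so the agent cannot predict the outcome exactly.
def stochastic_step(state, action, rng=random):
    return state + action + rng.choice([-1, 0, 1])
```

Calling `deterministic_step(3, 2)` always returns 5, while `stochastic_step(3, 2)` may return 4, 5, or 6.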
1.7 TURING TEST IN AI
In 1950, Alan Turing introduced a test to check whether a machine can think like a human or not; this test is known as the Turing Test. In this test, Turing proposed that a computer can be said to be intelligent if it can mimic human responses under specific conditions. The Turing Test was introduced in Turing's 1950 paper, "Computing Machinery and Intelligence", which considered the question, "Can machines think?"
The Turing test is based on a party game, the "imitation game", with some modifications. This game involves three players: one player is a computer, another is a human responder, and the third is a human interrogator, who is isolated from the other two players and whose job is to find out which of the two is the machine.
The test result does not depend on each correct answer, but only on how closely the responses resemble human answers. The computer is permitted to do everything possible to force a wrong identification by the interrogator.
The questions and answers can be like:
Interrogator: Are you a computer?
PlayerA (Computer): No
Interrogator: Multiply two large numbers such as (256896489*456725896)
Player A: Long pause and give the wrong answer.
In this game, if the interrogator is not able to identify which is the machine and which is the human, then the computer passes the test successfully, and the machine is said to be intelligent and able to think like a human.
In 1991, the New York businessman Hugh Loebner announced a prize competition, offering a $100,000 prize for the first computer to pass the Turing test. However, to date, no AI program has come close to passing an undiluted Turing test.
ELIZA: ELIZA was a Natural language processing computer program created by
Joseph Weizenbaum. It was created to demonstrate the ability of communication between
machine and humans. It was one of the first chatterbots, which has attempted the Turing Test.
Parry: Parry was a chatterbot created by Kenneth Colby in 1972. Parry was designed
to simulate a person with Paranoid schizophrenia(most common chronic mental disorder).
Parry was described as "ELIZA with attitude." Parry was tested using a variation of the
Turing Test in the early 1970s.
Eugene Goostman: Eugene Goostman was a chatbot developed in Saint Petersburg in 2001. This bot competed in a number of Turing tests. In June 2012, at an event promoted as the largest-ever Turing test contest, Goostman won the competition by convincing 29% of the judges that it was human. Goostman was presented as a 13-year-old virtual boy.
Features required for a machine to pass the Turing test:
Natural language processing: NLP is required to communicate with Interrogator in
general human language like English.
Knowledge representation: To store and retrieve information during the test.
Automated reasoning: To use previously stored information to answer the questions.
Machine learning: To adapt to new changes and detect generalized patterns.
Vision (For total Turing test): To recognize the interrogator actions and other
objects during a test.
Motor Control (For total Turing test): To act upon objects if requested.
1.8 TYPES OF AI AGENTS
Agents can be grouped into five classes based on their degree of perceived intelligence
and capability. All these agents can improve their performance and generate better action
over the time. These are given below:
Simple Reflex Agent
Model-based reflex agent
Goal-based agents
Utility-based agent
Learning agent
1.8.1 Simple reflex agents
The simple reflex agents are the simplest agents. These agents take decisions on the basis of the current percept and ignore the rest of the percept history.
These agents only succeed in a fully observable environment.
The simple reflex agent does not consider any part of the percept history during its decision and action process.
The simple reflex agent works on the condition-action rule, which maps the current state to an action. For example, a room-cleaner agent works only if there is dirt in the room.
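The condition-action rule can be sketched for the room-cleaner example. The two-room world (locations A and B) is an assumption borrowed from the classic vacuum-world illustration; the agent sees only the current percept.

```python
# Simple reflex agent: acts on the current percept (location, dirty?)
# only, ignoring all percept history.

def reflex_vacuum_agent(percept):
    location, dirty = percept
    if dirty:
        return "suck"                 # condition: dirt present -> suck
    # otherwise move to the other room
    return "right" if location == "A" else "left"
```

For example, `reflex_vacuum_agent(("A", True))` returns `"suck"`, while `reflex_vacuum_agent(("A", False))` returns `"right"`.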
1.8.2 Model-based reflex agents
The model-based agent can work in a partially observable environment and track the situation.
Model: It is knowledge about "how things happen in the world," so it is called a Model-
based agent.
Internal State: It is a representation of the current state based on percept history.
These agents have the model, "which is knowledge of the world" and based on the
model they perform actions.
Updating the agent state requires information about:
How the world evolves
How the agent's action affects the world.
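A minimal sketch of the internal-state idea above, using the same invented vacuum world: the agent remembers the last known dirt status of each room, so it can act on rooms it cannot currently see.

```python
# Model-based reflex agent sketch. The internal state is updated from
# each percept using a simple model ("dirt stays until sucked"); the
# rooms and actions are invented for illustration.

class ModelBasedAgent:
    def __init__(self):
        self.state = {}               # internal state: room -> dirty?

    def update_state(self, percept):
        room, dirty = percept
        self.state[room] = dirty      # model: dirt stays until sucked

    def choose_action(self, percept):
        self.update_state(percept)
        # clean any room the model believes is dirty, even if unseen now
        for room, dirty in self.state.items():
            if dirty:
                return ("suck", room)
        return ("noop", None)
```

Notice that after seeing room A dirty, the agent still proposes cleaning A even while perceiving room B; that memory is exactly what the internal state provides.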
1.8.3. Goal-based agents
The knowledge of the current state environment is not always sufficient to decide for
an agent to what to do.
The agent needs to know its goal which describes desirable situations.
Goal-based agents expand the capabilities of the model-based agent by having the
"goal" information.
They choose an action, so that they can achieve the goal.
These agents may have to consider a long sequence of possible actions before
deciding whether the goal is achieved or not. Such considerations of different scenario
are called searching and planning, which makes an agent proactive.
1.8.4 Utility-based agents
These agents are similar to the goal-based agent but include an extra component of utility measurement, which provides a measure of success at a given state.
Utility-based agents act based not only on goals but also on the best way to achieve the goal.
The utility-based agent is useful when there are multiple possible alternatives and an agent has to choose the best action.
The utility function maps each state to a real number to check how efficiently each
action achieves the goals.
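The utility-function idea above can be sketched as follows. The taxi-route example, the states, and the weights in the utility function are all invented assumptions; the point is only that each resulting state maps to a real number and the agent picks the action with the highest utility.

```python
# Utility-based choice: evaluate the state resulting from each action
# with a utility function and pick the maximum.

def choose_by_utility(actions, result, utility):
    return max(actions, key=lambda a: utility(result(a)))

# hypothetical taxi example: two routes differing in time and risk
routes = {"highway": {"time": 20, "risk": 0.3},
          "backroad": {"time": 35, "risk": 0.1}}

def result(action):
    return routes[action]

def utility(state):
    # happier with shorter time and lower risk (weights are assumptions)
    return -state["time"] - 100 * state["risk"]
```

With these (assumed) weights the safer back road scores -45 against the highway's -50, so the agent chooses the back road even though both routes achieve the goal.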
1.8.5 Learning agents
Learning agents are able to learn, analyze their performance, and look for new ways to improve it.
1.9 PROBLEM SOLVING APPROACH TO TYPICAL AI PROBLEMS
Problems are the issues which come across any system. A solution is needed to solve
that particular problem.
Steps to Solve Problem using Artificial Intelligence
Defining the Problem:
The definition of the problem must be stated precisely. It should contain the possible initial as well as final situations which should result in an acceptable solution.
Analyzing the Problem:
The problem and its requirements must be analyzed, as a few features can have an immense impact on the resulting solution.
Identification of Solutions:
From all the identified solutions, the best solution is chosen based on the results produced by the respective solutions.
Implementation:
The chosen solution is implemented to solve the problem.
1.9.1 Representation of AI Problems
Problem formulation involves deciding what actions and states to consider, when the
description about the goal is provided. It is composed of:
Initial State - start state
Possible actions that can be taken
Transition model – describes what each action does
Goal test – checks whether current state is goal state
Path cost – cost function used to determine the cost of each path.
The initial state, actions, and the transition model constitute the state space of the problem: the set of all states reachable from the initial state by any sequence of actions. A path in the state space is a sequence of states connected by a sequence of actions. The solution to the given problem is defined as the sequence of actions from the initial state to a goal state. The quality of a solution is measured by the path cost function, and an optimal solution is the one with the lowest path cost among all solutions.
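The five components of a problem formulation can be sketched as a tiny problem class. The concrete problem here (moving along a number line from 0 to a goal) is an invented example, not from the text.

```python
# Sketch of a problem formulation: initial state, actions, transition
# model, goal test, and path cost. The number-line domain is hypothetical.

class Problem:
    def __init__(self, initial, goal):
        self.initial = initial                   # initial state
        self.goal = goal

    def actions(self, state):
        return ["+1", "-1"]                      # possible actions

    def transition(self, state, action):
        # transition model: what each action does
        return state + (1 if action == "+1" else -1)

    def goal_test(self, state):
        return state == self.goal                # goal test

    def path_cost(self, path):
        return len(path)                         # each action costs 1
```

A search algorithm would use these components to find, for `Problem(0, 3)`, the three-action solution `["+1", "+1", "+1"]` with path cost 3.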
Hence the starting state is {0,0,0,0,0,0,0,0,0}. The goal state or winning combination will be a board position having "O" or "X" separately in the combination of ({1,2,3}, {4,5,6}, {7,8,9}, {1,4,7}, {2,5,8}, {3,6,9}, {1,5,9}, {3,5,7}) element positions. Hence two goal states can be {2,0,1,1,2,0,0,0,2} and {2,2,2,0,1,0,1,0,0}. These values correspond to the goal states.
Any board position satisfying this condition would be declared a win for the corresponding player. The valid transitions of this problem are simply putting '1' or '2' in any element position containing 0. In practice, all the valid moves are defined and stored, and a move is selected from this store. In this game, the valid transition table will be a vector of 3^9 entries, having 9 elements each.
On reaching the 7th attempt, we reach a state which is our goal state. Therefore, at this state,
our problem is solved.
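The goal test for this representation can be sketched directly: the board is a list of 9 values (0 = empty, 1 and 2 = the two players) and the winning lines are the eight position triples listed above, converted here to 0-based indices.

```python
# Goal test for the tic-tac-toe representation: the eight winning lines
# from the text ({1,2,3}, {4,5,6}, ... {3,5,7}), converted to 0-based.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != 0 and board[a] == board[b] == board[c]:
            return board[a]
    return 0                                    # no winner yet
```

Both example goal states from the text, {2,0,1,1,2,0,0,0,2} and {2,2,2,0,1,0,1,0,0}, are recognized as wins for player 2.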
It is a normal chess game. In a chess game problem, the start state is the initial configuration of the chessboard. The final or goal state is any board configuration that is a winning position for either player (clearly, there may be multiple final positions, and each board configuration can be thought of as representing a state of the game). Whenever any player moves a piece, it leads to a different state of the game. It is estimated that the chess game has more than 10^120 possible states. Game playing means finding (or searching for) a sequence of valid moves which brings the board from the start state to any of the possible final states.
1.9.7 Missionaries and Cannibals Problem
All or some of these production rules will have to be used in a particular sequence to find the solution of the problem. The rules applied and their sequence are presented in the following Table.
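The production rules for this problem can be applied systematically by a search algorithm. The sketch below uses breadth-first search (an assumption; the text does not fix a search strategy). A state is (missionaries on the left bank, cannibals on the left bank, boat on left?), the boat carries one or two people, and cannibals may never outnumber missionaries on either bank.

```python
from collections import deque

# Breadth-first search over the missionaries-and-cannibals state space.

def valid(m, c):
    if not (0 <= m <= 3 and 0 <= c <= 3):
        return False
    if m > 0 and c > m:                          # left bank outnumbered
        return False
    if 3 - m > 0 and 3 - c > 3 - m:              # right bank outnumbered
        return False
    return True

def solve():
    start, goal = (3, 3, 1), (0, 0, 0)
    frontier, seen = deque([(start, [])]), {start}
    moves = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]   # boat loads
    while frontier:
        (m, c, b), path = frontier.popleft()
        if (m, c, b) == goal:
            return path
        for dm, dc in moves:
            sign = -1 if b == 1 else 1           # boat direction
            nm, nc, nb = m + sign * dm, c + sign * dc, 1 - b
            if valid(nm, nc) and (nm, nc, nb) not in seen:
                seen.add((nm, nc, nb))
                frontier.append(((nm, nc, nb), path + [(dm, dc)]))
    return None
```

Breadth-first search guarantees the shortest sequence of crossings; for this problem the minimal solution takes 11 boat trips.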
PART A (2 MARK QUESTIONS)
1. What is AI?
2. Define an agent.
An agent's behavior is described by the agent function that maps any given percept sequence to an action.
Task environments are essentially the "problems" to which rational agents are the
"solutions". A Task environment is specified using PEAS (Performance, Environment,
Actuators, and Sensors) description.
The simplest kind of agent is the simple reflex agent. These agents select actions on the basis of the current percept, ignoring the rest of the percept history.
Knowing about the current state of the environment is not always enough to decide
what to do. For example, at a road junction, the taxi can turn left, turn right, or go straight on.
The correct decision depends on where the taxi is trying to get to. In other words, as well as a
current state description, the agent needs some sort of goal information that describes
situations that are desirable-for example, being at the passenger's destination.
10. What are utility based agents?
Goals alone are not really enough to generate high-quality behavior in most
environments. For example, there are many action sequences that will get the taxi to its
destination (thereby achieving the goal) but some are quicker, safer, more reliable, or cheaper
than others. A utility function maps a state (or a sequence of states) onto a real number,
which describes the associated degree of happiness.
A learning agent can be divided into four conceptual components. The most important distinction is between the learning element, which is responsible for making improvements, and the performance element, which is responsible for selecting external actions. The performance element is what we have previously considered to be the entire agent: it takes in percepts and decides on actions. The learning element uses feedback from the critic on how the agent is doing and determines how the performance element should be modified to do better in the future.
Goal formulation
Problem formulation
Search
Search Algorithm
Execution phase
The process of looking for a sequence of actions from the current state to reach the goal state is called search. The search algorithm takes a problem as input and returns a solution in the form of an action sequence. Once a solution is found, the execution phase consists of carrying out the recommended actions.
15. What are the components of well-defined problems?