Unit 1
Agents can be grouped into five classes based on their degree of perceived intelligence and capability. All of these agents can improve their performance and generate better actions over time. The five classes are:
o Simple reflex agents
o Model-based reflex agents
o Goal-based agents
o Utility-based agents
o Learning agents
1. Simple reflex agents
o The simple reflex agents are the simplest agents. These agents take decisions on the basis of the current percept and ignore the rest of the percept history.
o The simple reflex agent does not consider any part of the percept history during its decision and action process.
o The simple reflex agent works on condition-action rules, which means it maps the current state directly to an action. For example, a room-cleaner agent cleans only if there is dirt in the room (a minimal sketch of such an agent follows below).
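A minimal sketch of a condition-action rule, assuming a hypothetical two-room vacuum world; the percept format and the action names ("Suck", "MoveRight", "MoveLeft") are assumptions made only for this illustration:

```python
def simple_reflex_vacuum_agent(percept):
    """Map the current percept directly to an action (condition-action rules).

    `percept` is assumed to be a (location, status) pair, e.g. ("A", "Dirty").
    No percept history is kept: only the current percept is used.
    """
    location, status = percept
    if status == "Dirty":
        return "Suck"          # rule: dirt present -> clean it
    elif location == "A":
        return "MoveRight"     # rule: room A is clean -> go to room B
    else:
        return "MoveLeft"      # rule: room B is clean -> go to room A


# The agent reacts to each percept independently, with no memory.
print(simple_reflex_vacuum_agent(("A", "Dirty")))   # Suck
print(simple_reflex_vacuum_agent(("B", "Clean")))   # MoveLeft
```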
2. Model-based reflex agents
o These agents maintain a model, which is "knowledge of the world", and based on this model they perform actions.
o The internal model lets the agent keep track of the part of the world it cannot currently perceive, so it can work in partially observable environments (a small sketch with internal state follows below).
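As an illustrative sketch only (not code from the source), the reflex vacuum agent above can be extended with an internal state that remembers which rooms it believes are already clean:

```python
class ModelBasedVacuumAgent:
    """Reflex agent with an internal model of which rooms are already clean."""

    def __init__(self):
        # Internal state: what the agent believes about the world.
        self.believed_clean = set()

    def act(self, percept):
        location, status = percept
        # Update the model with the latest percept.
        if status == "Clean":
            self.believed_clean.add(location)
        else:
            self.believed_clean.discard(location)

        # Choose an action using both the current percept and the model.
        if status == "Dirty":
            return "Suck"
        if self.believed_clean >= {"A", "B"}:
            return "NoOp"              # model says everything is clean
        return "MoveRight" if location == "A" else "MoveLeft"


agent = ModelBasedVacuumAgent()
print(agent.act(("A", "Clean")))   # MoveRight
print(agent.act(("B", "Dirty")))   # Suck
```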
3. Goal-based agents
o The knowledge of the current state of the environment is not always sufficient for an agent to decide what to do.
o The agent needs to know its goal, which describes desirable situations.
o Goal-based agents expand the capabilities of the model-based agent by adding "goal" information.
o These agents may have to consider a long sequence of possible actions before deciding whether the goal is achieved. Such consideration of different scenarios is called searching and planning, which makes an agent proactive (a small search-based sketch follows below).
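The sketch below illustrates the core idea of searching toward a goal; the toy corridor world, the goal position, and the action names are invented for this example and are not from the source:

```python
from collections import deque

def plan_to_goal(start, goal, neighbors):
    """Breadth-first search for a sequence of actions from `start` to `goal`.

    `neighbors(state)` returns a list of (action, next_state) pairs.
    Returns the list of actions reaching the goal, or None if unreachable.
    """
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan                      # goal test succeeded
        for action, nxt in neighbors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, plan + [action]))
    return None


# Toy 1-D corridor: the agent can step left or right between positions 0..4.
def corridor_moves(pos):
    moves = []
    if pos > 0:
        moves.append(("Left", pos - 1))
    if pos < 4:
        moves.append(("Right", pos + 1))
    return moves

print(plan_to_goal(start=1, goal=4, neighbors=corridor_moves))
# ['Right', 'Right', 'Right']
```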
4. Utility-based agents
o These agents are similar to goal-based agents but add an extra component of utility measurement, which distinguishes them by providing a measure of success in a given state.
o Utility-based agents act based not only on goals but also on the best way to achieve those goals.
o The utility-based agent is useful when there are multiple possible alternatives and the agent has to choose the best action among them.
o The utility function maps each state to a real number that indicates how well an action achieves the agent's goals (a short sketch follows this list).
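A minimal sketch of this mechanism, under invented assumptions (the route names, predicted outcomes, and utility weights below are hypothetical, not from the source): a utility function maps states to real numbers, and the agent chooses the action whose predicted next state has the highest utility.

```python
def choose_best_action(state, actions, transition, utility):
    """Pick the action whose resulting state has the highest utility.

    `transition(state, action)` returns the predicted next state;
    `utility(state)` maps a state to a real number.
    """
    return max(actions, key=lambda a: utility(transition(state, a)))


# Toy example: a taxi choosing a route; utility trades off time against tolls.
routes = ["highway", "side_streets"]
predicted = {"highway": {"minutes": 20, "toll": 5},
             "side_streets": {"minutes": 35, "toll": 0}}

transition = lambda state, action: predicted[action]
utility = lambda s: -s["minutes"] - 2 * s["toll"]   # higher is better

print(choose_best_action({}, routes, transition, utility))  # highway
```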
5. Learning Agents
o A learning agent in AI is the type of agent that can learn from its past experiences, i.e. it has learning capabilities.
o It starts to act with basic knowledge and is then able to act and adapt automatically through learning.
o A learning agent has mainly four conceptual components, listed below (a skeleton combining them is sketched after the list):
a. Learning element: It is responsible for making improvements by learning from the environment.
b. Critic: The learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.
c. Performance element: It is responsible for selecting external actions.
d. Problem generator: This component is responsible for suggesting actions that will lead to new and informative experiences.
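Purely as an illustrative skeleton (the class, method names, and numbers are assumptions, not from the source), the four components can be wired together like this: the critic scores outcomes against a performance standard, the learning element updates the knowledge used by the performance element, and the problem generator occasionally proposes exploratory actions.

```python
import random

class LearningAgent:
    """Skeleton of a learning agent with its four conceptual components."""

    def __init__(self, performance_standard=0.8, explore_prob=0.1):
        self.performance_standard = performance_standard
        self.explore_prob = explore_prob
        # Knowledge used by the performance element (toy action values).
        self.action_values = {"A": 0.0, "B": 0.0}

    def performance_element(self):
        # Selects external actions based on current knowledge.
        return max(self.action_values, key=self.action_values.get)

    def critic(self, reward):
        # Compares outcomes against the fixed performance standard.
        return reward - self.performance_standard

    def learning_element(self, action, feedback):
        # Improves the performance element using the critic's feedback.
        self.action_values[action] += 0.1 * feedback

    def problem_generator(self):
        # Suggests exploratory actions that yield new, informative experiences.
        return random.choice(list(self.action_values))

    def step(self, environment_reward, last_action):
        feedback = self.critic(environment_reward)
        self.learning_element(last_action, feedback)
        if random.random() < self.explore_prob:
            return self.problem_generator()
        return self.performance_element()


agent = LearningAgent()
print(agent.step(environment_reward=1.0, last_action="A"))
```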
Performance measures in artificial intelligence are the measures used to assess the progress of an AI system. They give a quantitative or qualitative way of evaluating how well the system performs the tasks assigned to it.
There are different kinds of performance measures, depending on the nature of the AI system and its specific tasks. Some common performance measures include accuracy, precision, recall, F1-score, error rate, and efficiency (a small sketch of how these are computed follows below).
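As a brief, generic illustration (the counts below are made up and not tied to any system described in the source), these common measures can be computed from a binary confusion matrix:

```python
def classification_metrics(tp, fp, tn, fn):
    """Compute common performance measures from true/false positive/negative counts."""
    total = tp + fp + tn + fn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1, "error_rate": 1 - accuracy}


print(classification_metrics(tp=40, fp=10, tn=45, fn=5))
```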
Performance measures play a critical role in the design and improvement of AI systems. They guide the optimization process, helping AI designers calibrate the system to achieve better results. Furthermore, performance measures enable the comparison of different AI models and algorithms, helping with the selection of the most suitable approach for a given problem.
Types of Environments
Artificial intelligence systems can operate in different kinds of environments, ranging from controlled and deterministic to dynamic and unpredictable. Some AI applications, such as robotics, work in physical environments, while others, such as natural language processing, deal with virtual or digital spaces.
o Static: In a static environment, the environment does not change while the agent is deliberating. It stays constant over time.
o Discrete: In a discrete environment, both the state space and the action space are finite and countable.
o Continuous: In a continuous environment, either the state space, the action space, or both are continuous, meaning they are represented by a range of values rather than by discrete values.
Actuators in artificial intelligence are the parts or mechanisms responsible for carrying out the actions or responses generated by the intelligent system. They are the means through which the AI system interacts with the environment. Actuators come in diverse shapes and configurations, depending on the specific use case.
Types of Actuators
In the field of artificial intelligence, actuators can be classified into different categories according to their functional characteristics. For example, in robotics, actuators can be motors or servos that control the movement of a robot's limbs. In virtual environments, actuators can be software components responsible for producing text, speech, or visual output.
Actuators are the bridge between the AI system's decision-making processes and its effect on the environment. They execute the actions or commands generated by the AI system, in light of its understanding of the environment and the desired performance measures.
Artificial intelligence sensors are instrumental components that gather data and information from the surroundings. They provide the AI system with crucial input, enabling it to perceive and understand its current situation.
Sensors are fundamental to the working of AI systems, as they supply the raw data that drives decision-making processes. The accuracy and reliability of sensors are critical factors, as any errors or inaccuracies in sensor data can lead to faulty actions by the AI system. Calibration and sensor fusion techniques are often used to enhance sensor accuracy.
It is the efficient integration of the PEAS components that enables AI systems to exhibit
intelligent behaviour. The AI system's decision-making is guided by performance metrics, and
its comprehension of the environment enables it to adjust to changing conditions.
Consider the example of a self-driving automobile to demonstrate the idea of PEAS integration.
In this case, the car's ability to get to its destination quickly and safely may be the performance
metric. The weather, traffic, and the road are all part of the environment.
AI PEAS Examples
Driverless Cars
o Performance Measure: To guarantee passenger safety and punctual arrivals, safe navigation
and effective route planning are the measures for driverless cars.
o Environment: The automobile must navigate through the environment, which consists of
roads, traffic patterns, pedestrians, and weather.
o Actuators: Actuators are the parts of the car's braking, steering, and accelerating systems
that carry out the actions prescribed by the AI.
o Sensors: Real-time data about the car's surroundings is gathered by sensors including cameras, LiDAR, GPS, and radar, which let the vehicle perceive and react to its environment (a compact PEAS summary is sketched below).
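As a compact, illustrative sketch (the class and field names are assumptions introduced only for this example), the PEAS description of a driverless car can be written down as a small data structure:

```python
from dataclasses import dataclass, field

@dataclass
class PEAS:
    """PEAS description: Performance measure, Environment, Actuators, Sensors."""
    performance_measure: list = field(default_factory=list)
    environment: list = field(default_factory=list)
    actuators: list = field(default_factory=list)
    sensors: list = field(default_factory=list)


driverless_car = PEAS(
    performance_measure=["safe navigation", "efficient route planning", "punctual arrival"],
    environment=["roads", "traffic", "pedestrians", "weather"],
    actuators=["steering", "brakes", "accelerator"],
    sensors=["cameras", "LiDAR", "GPS", "radar"],
)
print(driverless_car.performance_measure)
```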
Agent Environment in AI
An environment is everything in the world that surrounds the agent, but it is not a part of the agent itself. An environment can be described as a situation in which an agent is present.
The environment is where the agent lives and operates, and it provides the agent with something to sense and act upon. An environment is mostly said to be non-deterministic.
Features of Environment
As per Russell and Norvig, an environment can have various features from the point of view of
an agent:
1. Fully observable vs Partially observable
2. Static vs Dynamic
3. Discrete vs Continuous
4. Deterministic vs Stochastic
5. Single-agent vs Multi-agent
6. Episodic vs Sequential
7. Known vs Unknown
8. Accessible vs Inaccessible
1. Fully Observable vs Partially Observable:
o If an agent's sensors can sense or access the complete state of the environment at each point in
time, then it is a fully observable environment; otherwise, it is partially observable. For reference, imagine
a chess-playing agent. In this case, the agent can fully observe the state of the chessboard at
all times. Its sensors (in this case, vision or the ability to access the board's state) provide
complete information about the current position of all pieces. This is a fully observable
environment because the agent has perfect information about the state of the world.
o A fully observable environment is easy to deal with, as there is no need to maintain an internal state to
keep track of the history of the world. For reference, consider a self-driving car navigating a
busy city. While the car has sensors like cameras, lidar, and radar, it can't see everything at all
times. Buildings, other vehicles, and pedestrians can obstruct its sensors. In this scenario, the
car's environment is partially observable because it doesn't have complete and constant
access to all relevant information. It needs to maintain an internal state and history to make
informed decisions even when some information is temporarily unavailable.
o If an agent has no sensors in any environment, then such an environment is called unobservable.
For reference, think about an agent designed to predict earthquakes but placed in a sealed,
windowless room with no sensors or access to external data. In this situation, the environment
is unobservable because the agent has no way to gather information about the outside world.
It can't sense any aspect of its environment, making it completely unobservable.
2. Deterministic vs Stochastic:
o If an agent's current state and selected action can completely determine the next state of the
environment, then such an environment is called a deterministic environment. For reference,
Chess is a classic example of a deterministic environment. In chess, the rules are well-defined,
and each move made by a player has a clear and predictable outcome based on those rules. If
you move a pawn from one square to another, the resulting state of the chessboard is entirely
determined by that action, as is your opponent's response. There's no randomness or
uncertainty in the outcomes of chess moves because they follow strict rules. In a deterministic
environment like chess, knowing the current state and the actions taken allows you to
completely determine the next state.
o If the environment is random in nature and the next state cannot be completely determined by the agent's current state and action, then such an environment is called a stochastic environment; taxi driving is a common example, since traffic and pedestrian behaviour cannot be predicted exactly.
o In a deterministic, fully observable environment, an agent does not need to worry about uncertainty (see the sketch after this list for the contrast between deterministic and stochastic transitions).
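A minimal, invented sketch of the distinction (the positions, actions, and slip probability are assumptions for illustration only): a deterministic transition function always returns the same next state for a given state and action, while a stochastic one may return different next states.

```python
import random

def deterministic_step(position, action):
    """Next state is fully determined by the current state and action."""
    return position + (1 if action == "Right" else -1)

def stochastic_step(position, action):
    """With 20% probability the move 'slips' and the agent stays put."""
    if random.random() < 0.2:
        return position                      # unpredictable outcome
    return deterministic_step(position, action)

print(deterministic_step(3, "Right"))   # always 4
print(stochastic_step(3, "Right"))      # usually 4, sometimes 3
```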
3. Episodic vs Sequential:
o In an episodic environment, there is a series of one-shot actions, and only the current percept
is required for the action. For example, Tic-Tac-Toe is a classic example of an episodic
environment. In this game, two players take turns placing their symbols (X or O) on a 3x3 grid.
Each move by a player is independent of previous moves, and the goal is to form a line of three
symbols horizontally, vertically, or diagonally. The game consists of a series of one-shot actions
where the current state of the board is the only thing that matters for the next move. There's
no need for the players to remember past moves because they don't affect the current move.
The game is self-contained and episodic.
o In a sequential environment, by contrast, the current decision can affect all future decisions, so the agent needs to remember past actions; chess and taxi driving are common examples.
4. Single-agent vs Multi-agent
o If only one agent is involved in an environment and it operates by itself, then such an
environment is called a single-agent environment. For example, Solitaire is a classic example
of a single-agent environment. When you play Solitaire, you're the only agent involved. You
make all the decisions and actions to achieve a goal, which is to arrange a deck of cards in a
specific way. There are no other agents or players interacting with you. It's a solitary game
where the outcome depends solely on your decisions and moves. In this single-agent
environment, the agent doesn't need to consider the actions or decisions of other entities.
o If multiple agents operate in the environment, it is called a multi-agent environment; chess, with two competing players, is a typical example.
5. Static vs Dynamic:
o If the environment can change itself while an agent is deliberating, then such an environment
is called a dynamic environment; otherwise, it is called a static environment.
o Static environments are easy to deal with because an agent does not need to continue looking
at the world while deciding on an action. For reference, a crossword puzzle is an example of a
static environment. When you work on a crossword puzzle, the puzzle itself doesn't change
while you're thinking about your next move. The arrangement of clues and empty squares
remains constant throughout your problem-solving process. You can take your time to
deliberate and find the best word to fill in each blank, and the puzzle's state remains unaltered
during this process. It's a static environment because there are no changes in the puzzle based
on your deliberations.
o However, for a dynamic environment, agents need to keep looking at the world at each action.
For reference, taxi driving is an example of a dynamic environment. When you're driving a taxi,
the environment is constantly changing. The road conditions, traffic, pedestrians, and other
vehicles all contribute to the dynamic nature of this environment. As a taxi driver, you need to
keep a constant watch on the road and adapt your actions in real time based on the changing
circumstances. The environment can change rapidly, requiring your continuous attention and
decision-making. It's a dynamic environment because it evolves while you're deliberating and
taking action.
6. Discrete vs Continuous:
o If in an environment there are a finite number of percepts and actions that can be performed
within it, then such an environment is called a discrete environment; otherwise, it is called a continuous
environment.
o Chess is an example of a discrete environment. In chess, there are a finite number of distinct
chess pieces (e.g., pawns, rooks, knights) and a finite number of squares on the chessboard.
The rules of chess define clear, discrete moves that a player can make. Each piece can be in a
specific location on the board, and players take turns making individual, well-defined moves.
The state of the chessboard is discrete and can be described by the positions of the pieces on
the board.
7. Known vs Unknown
o Known and unknown are not actually features of the environment itself; rather, they describe the
agent's state of knowledge needed to perform an action.
o In a known environment, the results of all actions are known to the agent. While in an
unknown environment, an agent needs to learn how it works in order to perform an action.
o The opening theory in chess can be considered as a known environment for experienced chess
players. Chess has a vast body of knowledge regarding opening moves, strategies, and
responses. Experienced players are familiar with established openings, and they have studied
various sequences of moves and their outcomes. When they make their initial moves in a
game, they have a good understanding of the potential consequences based on their
knowledge of known openings.
o Imagine a scenario where a rover or drone is sent to explore an alien planet with no prior
knowledge or maps of the terrain. In this unknown environment, the agent (rover or drone)
has to explore and learn about the terrain as it goes along. It doesn't have prior knowledge of
the landscape, potential hazards, or valuable resources. The agent needs to use sensors and
data it collects during exploration to build a map and understand how the terrain works. It
operates in an unknown environment because the results and consequences of its actions are
not initially known, and it must learn from its experiences.
8. Accessible vs Inaccessible
o If an agent can obtain complete and accurate information about the state of the environment, then
such an environment is called an accessible environment; otherwise, it is called inaccessible.
o For example, imagine an empty room equipped with highly accurate temperature sensors.
These sensors can provide real-time temperature measurements at any point within the room.
An agent placed in this room can obtain complete and accurate information about the
temperature at different locations. It can access this information at any time, allowing it to
make decisions based on the precise temperature data. This environment is accessible
because the agent can acquire complete and accurate information about the state of the room,
specifically its temperature.
o For example, consider a scenario where a satellite in space is tasked with monitoring a specific
event taking place on Earth, such as a natural disaster or a remote area's condition. While the
satellite can capture images and data from space, it cannot access fine-grained information
about the event's details. For example, it may see a forest fire occurring but cannot determine
the exact temperature at specific locations within the fire or identify individual objects on the
ground. The satellite's observations provide valuable data, but the environment it is
monitoring (Earth) is vast and complex, making it impossible to access complete and detailed
information about all aspects of the event. In this case, the Earth's surface is an inaccessible
environment for obtaining fine-grained information about specific events.
Turing Test in AI
In 1950, Alan Turing introduced a test to check whether a machine can think like a human or
not; this test is known as the Turing Test. In this test, Turing proposed that a computer can
be said to be intelligent if it can mimic human responses under specific conditions.
The Turing Test was introduced by Turing in his 1950 paper, "Computing Machinery and
Intelligence," which considered the question, "Can machines think?"
The Turing Test is based on a party game, the "imitation game," with some modifications. This game
involves three players: one player is a computer, another is a human responder,
and the third is a human interrogator, who is isolated from the other two players and whose
job is to find out which of the two is the machine.
The conversation between all players is via keyboard and screen, so the result does not
depend on the machine's ability to render words as speech.
The test result does not depend on each answer being correct, but only on how closely the machine's
responses resemble human answers. The computer is permitted to do everything possible to force a wrong
identification by the interrogator.
In this game, if the interrogator is not able to identify which player is a machine and which is
human, then the computer passes the test successfully, and the machine is said to be
intelligent and able to think like a human.
"In 1991, the New York businessman Hugh Loebner announces the prize competition, offering
a $100,000 prize for the first computer to pass the Turing test. However, no AI program to till
date, come close to passing an undiluted Turing test".
A simple example of a Turing Test is to have two participants: a human and a machine (AI),
both communicating through a text interface. There is also a third participant, the
interrogator, who knows that one of the participants is human and the other is a machine.
The interrogator's job is to ask questions to both and determine which one is the machine,
solely based on their responses.
1. Setup: The interrogator does not know which is the human or the AI, and they communicate
via text.
2. Questioning: The interrogator asks both the human and the AI a series of questions, trying to
determine which is which.
3. Goal: If the interrogator cannot reliably distinguish between the human and the AI after a set
of questions, the AI is said to have "passed" the Turing Test.
Example Interaction
• Interrogator: "What is the capital of France?"
• Human: "Paris."
• AI: "Paris."
• Interrogator: "What is your favorite movie?"
• AI: "I don't have personal experiences or preferences, but I know 'Inception' is a popular
movie."
The goal of the machine is to give responses that are as human-like as possible, while the
interrogator tries to tell them apart.
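As a toy illustration of this setup (the respondent functions below are stand-ins invented for the sketch, not real participants or any library API), the blinded question-and-answer protocol can be outlined in a few lines:

```python
import random

def human_respondent(question):
    # Stand-in for a human participant typing answers.
    return "Paris." if "capital of France" in question else "Let me think..."

def machine_respondent(question):
    # Stand-in for the AI trying to produce human-like answers.
    return "Paris." if "capital of France" in question else "Hmm, good question."

def turing_test(questions):
    """Blinded protocol: the interrogator only sees the labels A and B."""
    respondents = [("human", human_respondent), ("machine", machine_respondent)]
    random.shuffle(respondents)          # the interrogator must not know which is which
    labels = dict(zip("AB", respondents))
    for q in questions:
        print(f"Interrogator: {q}")
        for label, (_, respond) in labels.items():
            print(f"{label}: {respond(q)}")
    # The interrogator then guesses which label is the machine; if the guess is
    # no better than chance over many sessions, the machine is said to pass.

turing_test(["What is the capital of France?"])
```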
The Turing Test, introduced by Alan Turing in 1950, is a crucial milestone in the history of
artificial intelligence (AI). It came to light in his paper titled 'Computing Machinery and
Intelligence.' Turing aimed to address a profound question: Can machines mimic human-like
intelligence?
This curiosity arose from Turing's fascination with the concept of creating thinking machines
that exhibit intelligent behavior. He proposed the Turing Test as a practical method to
determine if a machine can engage in natural language conversations convincingly, making a
human evaluator believe it's human.
Turing's work on this test laid the foundation for AI research and spurred discussions about
machine intelligence. It provided a framework for evaluating AI systems. Over time, the Turing
Test has evolved and remains a topic of debate and improvement. Its historical importance in
shaping AI is undeniable, continuously motivating AI researchers and serving as a benchmark
for gauging AI advancements.
1. Total Turing Test: This extended version of the Turing Test goes beyond text-based
conversations. It assesses the machine's capacity to comprehend and respond to not just
words but also visual and physical cues presented by the interrogator. This includes recognizing
objects shown to it and taking requested actions in response. Essentially, it examines if the AI
can interact with the world in a way that reflects a deeper level of understanding.
2. Reverse Turing Test: In a twist on the traditional Turing Test, the roles are reversed here. In
this variation, it's the machine that plays the role of the interrogator. Its task is to differentiate
between humans and other machines based on the responses it receives. This reversal
challenges the AI to evaluate the intelligence of others, highlighting its ability to detect artificial
intelligence.
3. Multimodal Turing Test: In a world where communication takes many forms, the Multimodal
Turing Test assesses AI's capability to understand and respond to various modes of
communication concurrently. It examines whether AI can seamlessly process and respond to
text, speech, images, and potentially other modes simultaneously. This variation
acknowledges the diverse ways we communicate and tests if AI can keep up with our
multifaceted interactions.
ELIZA: ELIZA was a natural language processing computer program created by Joseph
Weizenbaum. It was created to demonstrate the possibility of communication between machines
and humans. It was one of the first chatterbots to attempt the Turing Test.
Parry: Parry was a chatterbot created by Kenneth Colby in 1972. Parry was designed to
simulate a person with paranoid schizophrenia (a chronic mental disorder). Parry
was described as "ELIZA with attitude." Parry was tested using a variation of the Turing Test in
the early 1970s.
Eugene Goostman: Eugene Goostman was a chatbot developed in Saint Petersburg in 2001.
This bot competed in a number of Turing Test contests. In June 2012, at an event promoted as
the largest-ever Turing Test contest, Goostman won the competition, convincing 29% of the
judges that it was human. Goostman was portrayed as a 13-year-old virtual boy.
Many philosophers disagreed with the very concept of Artificial
Intelligence. The most famous argument among them is the "Chinese Room."
In 1980, John Searle presented the "Chinese Room" thought experiment in his paper
"Minds, Brains, and Programs," which argued against the validity of the Turing Test. According to his
argument, "Programming a computer may make it appear to understand a language, but it will not
produce a real understanding of language or consciousness in a computer."
He argued that machines such as ELIZA and Parry could easily pass the Turing Test by
manipulating keywords and symbols, but that they had no real understanding of language. So this
cannot be described as a machine having a human-like "thinking" capability.
To pass the Turing Test, a computer would need the following capabilities:
o Natural language processing: To communicate successfully with the interrogator.
o Knowledge representation: To store what it knows or hears.
o Automated reasoning: To use the previously stored information for answering questions.
o Machine learning: To adapt to new circumstances and detect generalized patterns.
o Vision (for the Total Turing Test): To recognize the interrogator's actions and other objects during the
test.
o Motor control (for the Total Turing Test): To act upon objects if requested.
o Not a True Measure of Intelligence: Passing the Turing Test doesn't guarantee genuine
machine intelligence or consciousness. Critics, such as John Searle with his "Chinese Room" argument,
contend that a computer can simulate human-like responses without understanding or
consciousness.
o Simplicity of Test Scenarios: The Turing Test primarily focuses on text-based interactions,
which might not fully assess a machine's capacity to comprehend and respond to the
complexities of the real world.