CH 1 Introduction
History of AI:
1. Pre-AI Era (Before 1950):
● Early ideas of AI appeared in mythology and in mechanical
automatons.
● Philosophers like Aristotle explored logic and reasoning.
2. Foundation Years (1950–1956):
● 1950: Alan Turing proposed the concept of a machine that could
simulate any human intelligence. He introduced the Turing Test.
● 1956: The term "Artificial Intelligence" was first used by John McCarthy
at the Dartmouth Conference, marking the birth of AI as a field.
3. Early Enthusiasm (1956–1974):
● AI programs like Logic Theorist and General Problem Solver (GPS)
were developed.
● Researchers believed AI would soon match human intelligence.
4. The First AI Winter (1974–1980):
● Funding decreased due to slow progress and unmet expectations.
● Limited computing power and poor results caused disappointment.
5. Expert Systems Era (1980–1987):
● Development of expert systems like MYCIN and XCON which used rule-
based logic to solve specific problems.
● Commercial interest in AI revived.
6. The Second AI Winter (1987–1993):
● Again, AI failed to deliver on high expectations.
● Decline in interest due to high costs and low adaptability.
7. Modern AI (1993–Present):
● Rapid growth in computing power and big data.
● AI applications in machine learning, deep learning, NLP, computer
vision, robotics, and more.
● Examples: Google Assistant, Siri, ChatGPT, self-driving cars, etc.
Applications of AI:
● Healthcare – Disease prediction, diagnosis.
● Finance – Fraud detection, algorithmic trading.
● Robotics – Autonomous robots, drones.
● Transportation – Self-driving cars.
● Education – Smart tutors, learning analytics.
● Entertainment – Personalized recommendations.
Conclusion:
AI has evolved from basic logic programs to powerful systems that can learn and
adapt. With continuous advancements, AI is becoming an essential part of various
industries and everyday life.
Simple Definition:
AI is the ability of a machine to mimic or simulate human intelligence and
behavior.
Definition by Experts:
● John McCarthy (1956):
"Artificial Intelligence is the science and engineering of making intelligent
machines."
Types of AI:
1. Artificial Narrow Intelligence (ANI):
○ Definition:
ANI, also known as Weak AI, refers to AI systems that are designed to
perform a specific task. It is the most common type of AI today.
○ Capabilities:
It can only handle one narrow task and cannot think beyond that
task.
○ Examples:
◆ Google Assistant – It can help with setting reminders or
answering questions.
◆ Face recognition – It can identify faces in photos.
○ Current Status:
ANI is widely used today in everyday applications.
2. Artificial General Intelligence (AGI):
○ Definition:
AGI, also known as Strong AI, refers to AI that can perform any
intellectual task a human can. It is designed to have human-like
intelligence.
○ Capabilities:
AGI can learn, understand, and reason in different domains, just like
a human.
○ Examples:
◆ A robot that can learn to cook, drive, solve math problems, and
more, just like a human.
○ Current Status:
AGI is still under development and does not exist yet. Researchers
are working to make it a reality.
3. Artificial Super Intelligence (ASI):
○ Definition:
ASI refers to AI that is more intelligent than the best human minds. It
would have advanced problem-solving abilities, creativity, and
decision-making skills beyond any human.
○ Current Status:
ASI is purely theoretical and does not exist yet.
Conclusion:
● ANI is the most common and works on specific tasks (like voice
assistants or image recognition).
● AGI is the next level, where AI can perform all human-like tasks, but it
doesn’t exist yet.
● ASI is a future concept where AI is more intelligent than humans and
could change the world, but it is still a theoretical idea.
2. Agents:
An agent is anything that can perceive its environment and act upon it to
achieve a goal.
Types of Agents:
1. Simple Reflex Agent – Reacts based on current input.
Example: Automatic door opens when motion is detected.
2. Model-Based Reflex Agent – Uses memory to handle partially
observable environments.
3. Goal-Based Agent – Takes actions to achieve a goal.
4. Utility-Based Agent – Tries to maximize performance or happiness
(utility).
5. Learning Agent – Can learn from experience and improve over time.
Agent = Sensor + Actuator + Decision Making
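The Agent = Sensor + Actuator + Decision Making formula can be sketched in code. Below is a minimal illustration using the automatic-door simple reflex agent from the list above; the percept name `motion_detected` and the action strings are made-up for the example.

```python
# Simple reflex agent: reacts only to the current percept, with no memory.
# The environment is modeled as a plain dict (an assumption for this sketch).

def sensor(environment):
    """Perceive the environment: read the current percept."""
    return environment["motion_detected"]

def decide(percept):
    """Decision making via a condition-action rule: IF motion THEN open."""
    return "open_door" if percept else "close_door"

def actuator(action):
    """Carry the chosen action back out into the environment."""
    return action

def simple_reflex_agent(environment):
    # Agent = Sensor + Decision Making + Actuator
    return actuator(decide(sensor(environment)))

print(simple_reflex_agent({"motion_detected": True}))   # open_door
print(simple_reflex_agent({"motion_detected": False}))  # close_door
```

Note that the agent has no state: two identical percepts always produce the same action, which is exactly what distinguishes it from a model-based reflex agent.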
3. Environment:
The environment is everything an agent interacts with. It provides inputs
(percepts) and receives outputs (actions) from the agent.
Types of Environments:
1. Fully Observable vs. Partially Observable
○ Full: Agent can see the whole environment.
○ Partial: Limited information available.
Summary:
Concept – Description
Production – Rule-based decision making (IF-THEN)
Agent – Entity that senses and acts
Environment – The world in which the agent operates
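The "Production" entry in the summary (rule-based IF-THEN decision making) can be sketched as a tiny rule engine. The rules and state keys below are illustrative, not from the notes.

```python
# Production system sketch: each rule pairs a condition (IF) with an
# action (THEN); the first rule whose condition holds is the one that fires.

rules = [
    (lambda s: s["motion"],           "open_door"),    # IF motion THEN open door
    (lambda s: s["temperature"] > 30, "turn_on_fan"),  # IF hot THEN turn on fan
]

def fire(state, rules, default="do_nothing"):
    """Return the action of the first rule whose condition matches the state."""
    for condition, action in rules:
        if condition(state):
            return action
    return default

print(fire({"motion": False, "temperature": 35}, rules))  # turn_on_fan
print(fire({"motion": True, "temperature": 20}, rules))   # open_door
```

Because rules are checked in order, rule ordering acts as a simple conflict-resolution strategy when several conditions are true at once.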
Characteristics of an Intelligent Agent:
1. Perception
○ Uses sensors to observe the environment.
2. Reactivity
○ Responds to changes in the environment.
3. Adaptability / Learning
○ Learns from experience to improve future performance.
4. Social Ability
○ Interacts and communicates with other agents or humans.
2. Concept of Rationality
Rationality refers to doing the right thing to maximize performance or achieve
goals, based on:
● Knowledge
● Percepts (current inputs)
● Possible actions
● Expected outcomes
Key Points:
● A rational agent always selects the best possible action.
● Rationality is different from perfection (limited information or time may
lead to suboptimal decisions).
● Depends on:
○ Performance measure
○ Agent’s prior knowledge
○ Available actions
○ Percepts received
Example: A self-driving car choosing the safest and fastest route based on
traffic data.
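The self-driving car example can be sketched as code: a rational agent selects the action that maximizes a performance measure over the available options. The routes, scores, and weights below are made-up illustrative data, not real traffic figures.

```python
# Rationality sketch: choose the action with the best value under a
# given performance measure (knowledge + percepts -> best action).

def rational_choice(actions, performance_measure):
    """A rational agent selects the action maximizing the measure."""
    return max(actions, key=performance_measure)

# Hypothetical routes: (name, safety score, speed score), each in [0, 1]
routes = [
    ("highway",     0.7, 0.9),
    ("back_roads",  0.9, 0.5),
    ("main_street", 0.8, 0.8),
]

def score(route):
    """Performance measure weighting safety above speed (assumed weights)."""
    _, safety, speed = route
    return 0.6 * safety + 0.4 * speed

best = rational_choice(routes, score)
print(best[0])  # main_street
```

Changing the weights changes which action is "rational" — which mirrors the key point above that rationality depends on the performance measure, not on some absolute notion of the perfect choice.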
3. Nature of Environments
An environment is the external world in which an agent operates. The nature of
the environment affects how the agent functions.
Properties of Environments:
Property – Description
Observable – Full or Partial (can the agent see the whole state?)
Deterministic – Is the outcome of actions predictable?
Episodic – Is each action independent of earlier ones, or do earlier
actions affect later ones (sequential)?
Static – Does the environment change while the agent is thinking?
Discrete – Are there a limited number of actions and states?
Single-agent – Is the agent acting alone or with others?
Examples:
Environment – Type
Chess – Fully observable, deterministic, discrete, single-agent
Self-driving car – Partially observable, stochastic, dynamic, multi-agent
Summary Table:
Topic – Key Idea
Intelligent Agent – Senses, reasons, acts, and learns
Rationality – Choosing the best possible action
Nature of Environment – Properties that define the agent's world