
ARTIFICIAL INTELLIGENCE

Artificial Intelligence in the Movies


Artificial Intelligence in Real Life
A young science (≈ 50 years old)
– Exciting and dynamic field, lots of uncharted territory left
– Impressive success stories
– “Intelligent” in specialized domains
– Many application areas

– Examples: face detection, formal verification


Why the interest in AI?

• Search engines
• Science
• Medicine/diagnosis
• Labor
• Appliances
• What else?
What is artificial intelligence?
• There is no clear consensus on the definition of AI
• John McCarthy coined the phrase AI in 1956
http://www.formal.stanford.edu/jmc/whatisai/whatisai.html
Q. What is artificial intelligence?
A. It is the science and engineering of making intelligent machines, especially
intelligent computer programs. It is related to the similar task of using
computers to understand human or other intelligence, but AI does not have to
confine itself to methods that are biologically observable.
Q. Yes, but what is intelligence?
A. Intelligence is the computational part of the ability to achieve goals in the world.
Varying kinds and degrees of intelligence occur in people, many animals and
some machines.
What is AI? (Cont’d)
Other possible AI definitions
• AI is a collection of hard problems that humans and other living
things can solve, but for which we don't yet have good algorithms.
– e.g., understanding spoken natural language, medical
diagnosis, circuit design, learning, self-adaptation, reasoning,
chess playing, proving math theorems, etc.
• Russell & Norvig: a program that
– Acts like a human (Turing test)
– Thinks like a human (human-like patterns of thinking steps)
– Acts or thinks rationally (logically, correctly)
• Some problems used to be considered AI but no longer are
– e.g., compiling Fortran in 1955, symbolic mathematics in 1965,
pattern recognition in 1970; what will move out of AI next?

What is the scientific hypothesis behind AI?


One Working Definition of AI

Artificial intelligence is the study of how to make
computers do things that people are better at, or would be
better at if:
• they could extend what they do to a World Wide
Web-sized amount of data, and
• they did not make mistakes.
AI Purposes

"AI can have two purposes. One is to use the power of


computers to augment human thinking, just as we use
motors to augment human or horse power. Robotics
and expert systems are major branches of that. The
other is to use a computer's artificial intelligence to
understand how humans think. In a humanoid way. If
you test your programs not merely by what they can
accomplish, but how they accomplish it, they you're
really doing cognitive science; you're using AI to
understand the human mind."
- Herb Simon
What’s easy and what’s hard?
• It’s been easier to mechanize many of the high level cognitive
tasks we usually associate with “intelligence” in people
– e.g., symbolic integration, proving theorems, playing chess,
some aspects of medical diagnosis, etc.
• It’s been very hard to mechanize tasks that animals can do easily
– walking around without running into things
– catching prey and avoiding predators
– interpreting complex sensory information (visual, aural, …)
– modeling the internal states of other animals from their
behavior
– working as a team (ants, bees)
• Is there a fundamental difference between the two categories?
• Why are some complex problems (e.g., solving differential
equations, database operations) not considered subjects of AI?
History of AI
• AI has roots in a number of scientific disciplines
– computer science and engineering (hardware and software)
– philosophy (rules of reasoning)
– mathematics (logic, algorithms, optimization)
– cognitive science and psychology (modeling high level
human/animal thinking)
– neural science (model low level human/animal brain activity)
– linguistics
• The birth of AI (1943 – 1956)
– McCulloch and Pitts (1943): simplified mathematical model of
neurons (resting/firing states) can realize all propositional logic
primitives (can compute all Turing computable functions)
– Alan Turing: Turing machine and Turing test (1950)
– Claude Shannon: information theory; possibility of chess playing
computers
– Boole, Aristotle, Euclid (logics, syllogisms)
• Early enthusiasm (1952 – 1969)
– 1956 Dartmouth conference
John McCarthy (Lisp);
Marvin Minsky (first neural network machine);
Allen Newell and Herbert Simon (GPS);
– Emphasis on intelligent general problem solving
GPS (means-ends analysis);
Lisp (AI programming language);
Resolution by John Robinson (basis for automatic
theorem proving);
heuristic search (A*, AO*, game tree search)
• Emphasis on knowledge (1966 – 1974)
– domain specific knowledge is the key to overcome existing
difficulties
– knowledge representation (KR) paradigms
– declarative vs. procedural representation
• Knowledge-based systems (1969 – 1999)
– DENDRAL: the first knowledge intensive system (determining 3D
structures of complex chemical compounds)
– MYCIN: first rule-based expert system (containing 450 rules for
diagnosing infectious blood diseases)
EMYCIN: an ES shell
– PROSPECTOR: first knowledge-based system that made
significant profit (geological ES for mineral deposits)
• AI became an industry (1980 – 1989)
– wide applications in various domains
– commercially available tools
– AI winter
• Current trends (1990 – present)
– more realistic goals
– more practical (application oriented)
– distributed AI and intelligent software agents
– resurgence of natural computation: neural networks and
genetic algorithms, with many applications
– dominance of machine learning (large-scale applications)
AI is Controversial
• AI Winter – too much promised
• 1966: the failure of machine translation,
• 1970: the abandonment of connectionism,
• 1971−75: DARPA's frustration with the Speech Understanding Research program at Carnegie Mellon
University
• 1973: the large decrease in AI research in the United Kingdom in response to the Lighthill report,
• 1973−74: DARPA's cutbacks to academic AI research in general,
• 1987: the collapse of the Lisp machine market,
• 1988: the cancellation of new spending on AI by the Strategic Computing Initiative
• 1993: expert systems gradually falling out of use,
• 1990s: the quiet disappearance of the fifth-generation computer project's original goals,

• Concerns that AI will cause
– social ills, unemployment
– the end of humanity
Thinking Humanly: Cognitive Science
• 1960s “Cognitive Revolution”: information-processing
psychology replaced behaviorism

• Cognitive science brings together theories and experimental
evidence to model internal activities of the brain
– What level of abstraction? “Knowledge” or “Circuits”?
– How to validate models?
• Predicting and testing behavior of human subjects (top-down)
• Direct identification from neurological data (bottom-up)
• Building computer/machine simulated models and reproduce results
(simulation)
Thinking Rationally: Laws of Thought
• Aristotle (384–322 B.C.) attempted to codify “right thinking”
What are correct arguments/thought processes?
• E.g., “Socrates is a man, all men are mortal; therefore Socrates is mortal”
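As an illustration (notation added here, not on the original slide), the syllogism can be written as a derivation in first-order logic:

∀x. Man(x) → Mortal(x)        (all men are mortal)
Man(Socrates)                 (Socrates is a man)
∴ Mortal(Socrates)            (therefore Socrates is mortal)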

• Several Greek schools developed various forms of logic:
notation plus rules of derivation for thoughts.

• Problems:
1) Uncertainty: Not all facts are certain (e.g., the flight might be delayed).
2) Resource limitations: There is a difference between solving a problem in principle and
solving it in practice under various resource limitations such as time, computation,
accuracy etc. (e.g., purchasing a car)
Artificial Intelligence

Intelligent Agents
Outline
• Agents and environments
• Rationality
• Intelligent agents
• PEAS (Performance measure, Environment,
Actuators, Sensors)
• Environment types
• Agent types
Agents
• An agent is anything that can be viewed as perceiving
its environment through sensors and acting upon
that environment through actuators

– Operates in an environment
– Perceives its environment through sensors
– Acts upon its environment through actuators/effectors
– Has goals
Agents and environments

• The agent function maps from percept histories to
actions:
f: P* → A
• The agent program runs on the physical architecture
to produce f
• agent = architecture + program
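A minimal sketch in Python of the "agent = architecture + program" idea (names are illustrative, not from the slides): the agent program maps the percept history P* to an action, here via a partial lookup table.

# Sketch of a table-driven agent program: maps the percept sequence seen so
# far (P*) to an action. Illustrative only; names are not from the slides.
def table_driven_agent_program(lookup_table):
    percepts = []                         # the percept history P*
    def program(percept):
        percepts.append(percept)
        # f: P* -> A, realized here as a (partial) lookup table
        return lookup_table.get(tuple(percepts), "NoOp")
    return program

# The "architecture" would repeatedly read a percept from the sensors,
# call program(percept), and pass the returned action to the actuators.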
Sensors and Effectors
• An agent perceives its environment through sensors
– The complete set of inputs at a given time is called a
percept
– The current percept, or sequence of percepts, can influence
the actions of an agent
• It can change the environment through actuators/effectors
– An operation involving an actuator is called an action
– Actions can be grouped into action sequences
Examples of Agents

            Humans        Programs                    Robots
Sensors     senses        keyboard, mouse, dataset    cameras, pads
Effectors   body parts    monitor, speakers, files    motors, limbs
Examples of Agents (cont'd)
Ex:
• Human agent: eyes, ears, and other organs for sensors;
– hands, legs, mouth, and other body parts for actuators
• Robotic agent: cameras and infrared range finders for
sensors;
– various motors, wheels, and speakers for actuators
• Software agent: functions that take input act as sensors;
– functions that produce output (e.g., screen display) act as actuators
Vacuum-Cleaner World

• Percepts: location and contents, e.g., [A, Dirty]
• Actions: Left, Right, Suck, NoOp
A vacuum-cleaner agent
• Agent function: a table mapping each percept sequence to an action (sketched below)
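The referenced table can be sketched as follows (Python; assuming the standard two-square world with locations A and B, consistent with the reflex vacuum agent shown later):

# Partial agent function for the vacuum-cleaner world: a lookup table from the
# current percept [location, status] to an action. (Sketch only; the full agent
# function would be indexed by entire percept sequences.)
vacuum_agent_table = {
    ("A", "Clean"): "Right",
    ("A", "Dirty"): "Suck",
    ("B", "Clean"): "Left",
    ("B", "Dirty"): "Suck",
}

def vacuum_agent(percept):
    location, status = percept
    return vacuum_agent_table.get((location, status), "NoOp")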
Rationality
What is rational at any given time depends on
four things:
– The performance measure that defines the
criterion of success.
– The agent's prior knowledge of the environment.
– The actions that the agent can perform.
– The agent's percept sequence to date.
Rational agents
• A rational agent is one that does the right thing.
• For each possible percept sequence, a rational agent
should select an action that is expected to maximize its
performance measure, given the evidence provided by
the percept sequence and whatever built-in knowledge
the agent has.
• An agent's percept sequence is the complete history of
everything the agent has ever perceived
Rational agents (cont'd)
• Rationality is distinct from omniscience (all-
knowing with infinite knowledge)
• Agents can perform actions in order to modify
future percepts so as to obtain useful
information (information gathering,
exploration)
Autonomy in Agents
The autonomy of an agent is the extent to which
its behaviour is determined by its own experience
(with the ability to learn and adapt)

• Extremes
– No autonomy – ignores environment/data
– Complete autonomy – must act randomly/no program
• Example: baby learning to crawl
• Ideal: design agents to have some autonomy
– Possibly good to become more autonomous in time
Intelligent Agent
• Must sense
• Must act
• Must be autonomous (to some extent)
• Must be rational
PEAS
• PEAS: Performance measure, Environment,
Actuators, Sensors
• Must first specify the setting for intelligent agent
design
• Consider, e.g., the task of designing an automated
taxi driver:

– Performance measure
– Environment
– Actuators
– Sensors
PEAS
• The task of designing an automated taxi driver:

– Performance measure: Safe, fast, legal,
comfortable trip, maximize profits
– Environment: Roads, other traffic,
pedestrians, customers
– Actuators: Steering wheel, accelerator,
brake, signal, horn
– Sensors: Cameras, sonar, speedometer,
GPS, odometer, engine sensors, keyboard
PEAS
Agent: Medical diagnosis system

• Performance measure: Healthy patient,
minimize costs, lawsuits
• Environment: Patient, hospital, staff
• Actuators: Screen display (questions, tests,
diagnoses, treatments, referrals)
• Sensors: Keyboard (entry of symptoms,
findings, patient's answers)
PEAS
Agent: Part-picking robot
• Performance measure: Percentage of parts in
correct bins
• Environment: Conveyor belt with parts,bins
• Actuators: Jointed arm and hand
• Sensors: Camera, joint angle sensors
PEAS
Agent: Interactive English tutor
• Performance measure: Maximize student's
score on test
• Environment: Set of students
• Actuators: Screen display (exercises,
suggestions, corrections)
• Sensors: Keyboard
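The PEAS examples above can be recorded as a simple data structure; a small sketch in Python (field names are illustrative, not from the slides), using the automated taxi:

from dataclasses import dataclass
from typing import List

@dataclass
class PEAS:
    # PEAS description of a task environment (illustrative structure)
    performance_measure: List[str]
    environment: List[str]
    actuators: List[str]
    sensors: List[str]

automated_taxi = PEAS(
    performance_measure=["safe", "fast", "legal", "comfortable trip",
                         "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering wheel", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "odometer",
             "engine sensors", "keyboard"],
)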
Types of the Environment
Environments –
Accessible vs. inaccessible
• An accessible environment is one in which the agent can
obtain complete, accurate, up-to-date information about
the environment’s state
• Most moderately complex environments (for example,
the everyday physical world and the Internet) are
inaccessible
• The more accessible an environment is, the simpler it is
to build agents to operate in it

Environments –
Deterministic vs. non-deterministic
• A deterministic environment is one in which the next
state of the environment is completely determined by
the current state and the action executed by the agent.

• The physical world can, to all intents and purposes, be
regarded as non-deterministic
• Non-deterministic environments present greater
problems for the agent designer

Environments –
Episodic vs. non-episodic
• In an episodic environment, the performance of an
agent is dependent on a number of discrete episodes,
with no link between the performance of an agent in
different scenarios
• Episodic environments are simpler from the agent
developer’s perspective because the agent can
decide what action to perform based only on the
current episode —it need not reason about the
interactions between this and future episodes

Environments –
Static vs. dynamic
• A static environment is unchanged while an agent is
reflecting.
• A dynamic environment is one that has other
processes operating on it, and which hence changes in
ways beyond the agent's control
• Other processes can interfere with the agent's actions
(as in concurrent systems theory)
• The physical world is a highly dynamic environment

Environments –
Discrete vs. continuous
• An environment is discrete if there are a fixed, finite
number of actions and percepts in it
– Ex: chess game
• Continuous environments have a certain level of
mismatch with computer systems
– Ex: taxi driving
• Discrete environments could in principle be handled
by a kind of “lookup table”
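These dimensions can be tabulated per task; a small sketch in Python (the classifications follow the slides' examples, e.g., chess as discrete and accessible, taxi driving as continuous, dynamic, and inaccessible; treat the exact labels as illustrative):

# Environment properties for two example tasks, along the dimensions above.
environment_types = {
    "chess": {
        "accessible": True,       # the full board state is observable
        "deterministic": True,
        "episodic": False,
        "static": True,           # ignoring the clock
        "discrete": True,         # fixed, finite moves and positions
    },
    "taxi driving": {
        "accessible": False,      # the everyday physical world is inaccessible
        "deterministic": False,
        "episodic": False,
        "static": False,          # highly dynamic
        "discrete": False,        # continuous percepts and actions
    },
}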

?
Agent types
• Four basic types in order of increasing
generality:
– Simple reflex agents
– Model-based reflex agents
– Goal-based agents
– Utility-based agents

All these can be turned into learning agents

Simple reflex agents

function Reflex-Vacuum-Agent([location, status]) returns an action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left

function SIMPLE-REFLEX-AGENT(percept) returns an action
  persistent: rules, a set of condition-action rules

  state ← INTERPRET-INPUT(percept)
  rule ← RULE-MATCH(state, rules)
  action ← rule.ACTION
  return action
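A minimal Python rendering of the simple reflex agent above (illustrative; condition-action rules are represented here as (predicate, action) pairs):

# Simple reflex agent: chooses an action from the current percept only,
# via a set of condition-action rules. Illustrative sketch.
def interpret_input(percept):
    # In the vacuum world the percept [location, status] already is the state.
    return percept

def make_simple_reflex_agent(rules):
    def agent(percept):
        state = interpret_input(percept)
        for condition, action in rules:
            if condition(state):          # first matching rule fires
                return action
        return "NoOp"
    return agent

# Condition-action rules for the reflex vacuum agent shown above.
vacuum_rules = [
    (lambda s: s[1] == "Dirty", "Suck"),
    (lambda s: s[0] == "A", "Right"),
    (lambda s: s[0] == "B", "Left"),
]

reflex_vacuum_agent = make_simple_reflex_agent(vacuum_rules)
# reflex_vacuum_agent(("A", "Dirty"))  ->  "Suck"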
Model-based reflex agents

It keeps track of the current state of the world
using an internal model

function MODEL-BASED-REFLEX-AGENT(percept) returns an action
  persistent: state, the agent's current conception of the world state
              model, a description of how the next state depends on the current state and action
              rules, a set of condition-action rules
              action, the most recent action, initially none

  state ← UPDATE-STATE(state, action, percept, model)
  rule ← RULE-MATCH(state, rules)
  action ← rule.ACTION
  return action
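A corresponding Python sketch (illustrative; the model and the update_state function are placeholders the designer must supply) showing how internal state is carried from one percept to the next:

# Model-based reflex agent: keeps an internal state, updated from the previous
# state, the last action, the new percept, and a model of how the world evolves.
def make_model_based_reflex_agent(rules, model, update_state, initial_state=None):
    memory = {"state": initial_state, "last_action": None}
    def agent(percept):
        memory["state"] = update_state(memory["state"], memory["last_action"],
                                       percept, model)
        action = "NoOp"
        for condition, rule_action in rules:
            if condition(memory["state"]):
                action = rule_action
                break
        memory["last_action"] = action
        return action
    return agent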
Goal-based agents

It keeps track of the current state of the world
and a set of goals to achieve.

Utility-based agents

Learning agents

Summary
• Agents interact with environments through actuators and sensors
• The agent function describes what the agent does in all
circumstances
• The performance measure evaluates the environment sequence
• A perfectly rational agent maximizes expected performance
• Agent programs implement (some) agent functions
• PEAS descriptions define task environments
• Environments are categorized along several dimensions:
– observable? deterministic? episodic? static? discrete? single-
agent?
• Several basic agent architectures exist:
– reflex, model-based, goal-based, utility-based
?
Discussion
Review Questions
• Define in your own words the following terms:
– agent, agent function, agent program, rationality, autonomy
• For each of the following agents, develop a PEAS
description of the task environment
– Part-picking robot, taxi driver
• Describe the following types of the environment
– Static vs. dynamic
– Deterministic vs. non-deterministic
• List 4 of the most complex environments in which constructing an
agent is very difficult.
• What is omniscience?
