
UNIT I

Introduction: What Is AI?, The Foundations of Artificial Intelligence, The History of Artificial Intelligence, The State of the Art, Agents and Environments, Good Behavior: The Concept of Rationality, The Nature of Environments, The Structure of Agents.
Artificial Intelligence: AI is the study of making computers do things intelligently.
Examples: 1. Chess-playing programs
2. Driverless cars
3. Robotics
Artificial: anything created by humans.
Intelligence: the capacity to understand, think, and learn.
Intelligent human behaviour is simulated in machines to make them intelligent.
AI programs range from simple programs to expert systems.
What is Artificial Intelligence?

 Definitions of AI vary.
 Artificial Intelligence is the study of systems that

   think like humans  |  think rationally
   act like humans    |  act rationally
WHAT IS AI?
 A system is rational if it does the "right thing," given what it knows.
1) Acting humanly: The Turing Test approach

 The Turing Test, proposed by Alan Turing (1950), was designed to provide a satisfactory operational definition of intelligence.

The computer would need to possess the following capabilities:

a) natural language processing to enable it to communicate successfully in English;

b) knowledge representation to store what it knows or hears;

c) automated reasoning to use the stored information to answer questions and to draw new conclusions;

d) machine learning to adapt to new circumstances and to detect and extrapolate patterns.

To pass the total Turing Test, the computer will also need

e) computer vision to perceive objects, and

f) robotics to manipulate objects and move about.

These six disciplines compose most of AI.


2) Thinking humanly: The cognitive modeling approach

 If we are going to say that a given program thinks like a human, we must have some way of determining how humans think.

The interdisciplinary field of cognitive science brings together computer models from AI and experimental techniques from psychology to construct precise and testable theories of the human mind.
3) Thinking rationally: The "laws of thought" approach

 Such reasoning yields correct conclusions when given correct premises;

for example, "Socrates is a man; all men are mortal; therefore, Socrates is mortal."
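The syllogism above can be sketched in code as a fact plus a universal rule. This is a minimal illustrative sketch; the encoding and function names are our own, not from any particular logic library.

```python
# "Laws of thought" in miniature: the Socrates syllogism encoded as one
# fact, man(Socrates), and one rule, "for all x, man(x) -> mortal(x)".

facts = {("man", "Socrates")}

def apply_rules(facts):
    """Derive new facts by applying the rule man(x) -> mortal(x)."""
    derived = set(facts)
    for (predicate, subject) in facts:
        if predicate == "man":
            derived.add(("mortal", subject))
    return derived

facts = apply_rules(facts)
print(("mortal", "Socrates") in facts)  # True
```

Given the correct premises, the rule mechanically yields the correct conclusion, which is exactly the promise of the "laws of thought" approach.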
4) Acting rationally: The rational agent approach

 An agent is just something that acts.

A rational agent is one that acts so as to achieve the best outcome or, when there is uncertainty, the best expected outcome.
THE FOUNDATIONS OF ARTIFICIAL INTELLIGENCE

1. Philosophy

2. Mathematics

3. Economics

4. Neuroscience

5. Psychology

6. Computer engineering

7. Control theory and cybernetics

8. Linguistics
1. Philosophy

• Can formal rules be used to draw valid conclusions?
• How does the mind arise from a physical brain?
• Where does knowledge come from?
• How does knowledge lead to action?

Materialism holds that the brain's operation according to the laws of physics constitutes the mind.

The confirmation theory of Carnap and Carl Hempel (1905–1997) attempted to analyze the acquisition of knowledge from experience.
2. Mathematics
• What are the formal rules to draw valid conclusions?
• What can be computed?
• How do we reason with uncertain information?

Besides logic and computation, the third great contribution of mathematics to AI is the theory of probability.

Thomas Bayes (1702–1761) proposed a rule for updating probabilities in the light of new evidence.

Bayes' rule underlies most modern approaches to uncertain reasoning in AI systems.
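Bayes' rule, P(H|E) = P(E|H)·P(H) / P(E), can be sketched numerically. The disease-test numbers below are invented purely for illustration.

```python
# A sketch of Bayes' rule for updating a probability in the light of
# new evidence: P(H|E) = P(E|H) * P(H) / P(E), where
# P(E) = P(E|H) * P(H) + P(E|~H) * P(~H).

def bayes(prior, likelihood, likelihood_given_not_h):
    """Posterior P(H|E) from prior P(H), P(E|H), and P(E|~H)."""
    evidence = likelihood * prior + likelihood_given_not_h * (1 - prior)
    return likelihood * prior / evidence

# Illustrative numbers: 1% of patients have a disease; a test detects
# 90% of true cases but also fires on 5% of healthy patients.
posterior = bayes(prior=0.01, likelihood=0.9, likelihood_given_not_h=0.05)
print(round(posterior, 3))  # 0.154
```

Even with a positive test, the posterior stays low because the prior is small: this kind of update is what "reasoning with uncertain information" means in practice.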

3. Economics
• How should we make decisions so as to maximize payoff?
• How should we do this when others may not go along?
• How should we do this when the payoff may be far in the future?
4. Neuroscience
How do brains process information?

 Neuroscience is the study of the nervous system, particularly the brain.

 The brain consists of nerve cells, or neurons.

 Brains and digital computers have somewhat different properties.

 Figure 1.3 shows that computers have a cycle time that is a million times faster than a brain.
5. Psychology
 How do humans and animals think and act?

6. Computer engineering

 For artificial intelligence to succeed, we need two things: intelligence and an artifact. The computer has been the artifact of choice.

7. Control theory and cybernetics
How can artifacts operate under their own control?

8. Linguistics

 Understanding language requires an understanding of the subject matter and context, not just an understanding of the structure of sentences.
The History Of Artificial Intelligence
1) The gestation of artificial intelligence (1943–1955)

2) The birth of artificial intelligence (1956)

3) Early enthusiasm, great expectations (1952–1969)

4) A dose of reality (1966–1973)

5) Knowledge-based systems: The key to power? (1969–1979)

6) AI becomes an industry (1980–present)

7) The return of neural networks (1986–present)

8) AI becomes a science (1987–present)

9) The emergence of intelligent agents (1995–present)
THE STATE OF THE ART

A few applications of AI:

1) Autonomous planning and scheduling

2. Game playing

3. Autonomous control

4. Diagnosis

5. Logistics planning

6. Robotics

7. Language understanding and problem solving
1) Autonomous planning and scheduling:

NASA's Remote Agent program became the first on-board autonomous planning program to control the scheduling of operations for a spacecraft.

Remote Agent generated plans from high-level goals specified from the ground, and it monitored the operation of the spacecraft as the plans were executed, detecting, diagnosing, and recovering from problems as they occurred.

2) Game playing:

 IBM's Deep Blue became the first computer program to defeat the world champion in a chess match when it bested Garry Kasparov by a score of 3.5 to 2.5 in an exhibition match.
3. Autonomous control:
The ALVINN computer vision system was trained to steer a car to keep it following a lane.
It was placed in CMU's NAVLAB computer-controlled minivan and used to navigate across the United States; for 2850 miles it was in control of steering the vehicle 98% of the time.

4. Diagnosis:

Medical diagnosis programs based on probabilistic analysis have been able to perform at the level of an expert physician in several areas of medicine.
5. Logistics planning:

During the Persian Gulf crisis of 1991, U.S. forces deployed a Dynamic Analysis and Replanning Tool, DART (Cross and Walker, 1994), to do automated logistics planning and scheduling for transportation.

6. Robotics: Many surgeons now use robot assistants in microsurgery.

7. Language understanding and problem solving:
PROVERB (Littman et al., 1999) is a computer program that solves crossword puzzles better than most humans, using constraints on possible word fillers, a large database of past puzzles, and a variety of information sources including dictionaries and online databases such as a list of movies and the actors that appear in them.
AGENTS AND ENVIRONMENTS
 An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.
 A human agent has eyes, ears, and other organs for sensors and hands, legs, mouth, and other body parts for actuators.

 A robotic agent might have cameras and infrared range finders for sensors and various motors for actuators.
 A software agent receives keystrokes, file contents, and network packets as sensory inputs and acts on the environment by displaying on the screen, writing files, and sending network packets.
 We use the term percept to refer to the agent's perceptual inputs at any given instant.
 An agent's percept sequence is the complete history of everything the agent has ever perceived.
An agent's behavior is described by the agent function that maps any given percept sequence to an action.
 Internally, the agent function for an artificial agent will be implemented by an agent program.
 To illustrate these ideas, we use a very simple example: the vacuum-cleaner world shown in Figure 2.2.
Two locations: squares A and B.
The vacuum agent perceives which square it is in and whether there is dirt in the square.

It can choose to move left, move right, suck up the dirt, or do nothing.

One very simple agent function is the following: if the current square is dirty, then suck; otherwise, move to the other square.

A partial tabulation of this agent function is shown in Figure 2.3, and an agent program that implements it appears in Figure 2.8.
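The simple agent function just described can be sketched directly in code. This is a minimal sketch in the spirit of the textbook's Figures 2.3 and 2.8; the function and variable names are our own.

```python
# The vacuum-cleaner world agent function: the percept is a
# (location, status) pair, e.g. ('A', 'Dirty'). If the current square
# is dirty, suck; otherwise move to the other square.

def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(reflex_vacuum_agent(("A", "Clean")))  # Right
print(reflex_vacuum_agent(("B", "Clean")))  # Left
```

Note that the decision depends only on the current percept, which is exactly what will later make this a simple reflex agent.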
Good Behavior: The Concept of Rationality

1. Rational agent

2. Performance measures

3. Rationality

4. Omniscience, learning, and autonomy
 A rational agent is one that does the right thing; conceptually speaking, every entry in the table for the agent function is filled out correctly.

 The right action is the one that will cause the agent to be most successful.

 Therefore, we will need some way to measure success.

A performance measure embodies the criterion for success of an agent's behaviour.

Rule for performance measures
 As a general rule, it is better to design performance measures according to what one actually wants in the environment, rather than according to how one thinks the agent should behave.
Rationality

What is rational at any given time depends on four things:

1. The performance measure that defines the criterion of success.
2. The agent's prior knowledge of the environment.
3. The actions that the agent can perform.
4. The agent's percept sequence to date.

This leads to a definition of a rational agent:

 For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
As an example, consider the vacuum-cleaner agent:

The performance measure awards one point for each clean square at each time step, over a "lifetime" of 1000 time steps.

The "geography" of the environment is known a priori (Figure 2.2), but the dirt distribution and the initial location of the agent are not.

The only available actions are Left, Right, Suck, and NoOp (do nothing).

The agent correctly perceives its location and whether that location contains dirt.
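This performance measure can be made concrete by simulating the two-square world for 1000 steps and awarding one point per clean square per step. The environment mechanics below are a simplified sketch and the names are our own.

```python
# Simulate the vacuum world under the performance measure described
# above: +1 per clean square per time step, over 1000 steps.

def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

def run(dirt, location="A", steps=1000):
    """dirt: dict like {'A': True, 'B': True}; returns the total score."""
    score = 0
    for _ in range(steps):
        status = "Dirty" if dirt[location] else "Clean"
        action = reflex_vacuum_agent((location, status))
        if action == "Suck":
            dirt[location] = False
        elif action == "Right":
            location = "B"
        elif action == "Left":
            location = "A"
        score += sum(1 for square in dirt if not dirt[square])
    return score

print(run({"A": True, "B": True}))  # 1998
```

Starting with both squares dirty, the agent loses only two points in the opening steps while it cleans, then earns two points per step for the rest of its lifetime.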
Omniscience, learning, and autonomy
Meaning:
Omniscience = having unlimited knowledge; the state of knowing everything.

 An omniscient agent knows the actual outcome of its actions and can act accordingly.
 Our definition requires a rational agent not only to gather information, but also to learn as much as possible from what it perceives.

 A rational agent should be autonomous: it should learn what it can to compensate for partial or incorrect prior knowledge.

 For example, a vacuum-cleaning agent that learns to foresee where and when additional dirt will appear will do better than one that does not.
The Nature of Environments
Task environments are essentially the "problems" to which rational agents are the "solutions."

1. Specifying the task environment

2. Properties of task environments
a) Fully observable vs. partially observable
b) Deterministic vs. stochastic

c) Episodic vs. sequential

d) Static vs. dynamic

e) Discrete vs. continuous

f) Single agent vs. multiagent
1. Specifying the task environment
 We group all of these together under the heading of the task environment and call this the PEAS (Performance, Environment, Actuators, Sensors) description.
PEAS elements can be listed for a number of additional agent types.
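As a concrete instance, a PEAS description for an automated taxi (along the lines of the textbook's Figure 2.4) can be recorded as plain data. The specific items below are an illustrative selection, not an exhaustive specification.

```python
# A PEAS description for an automated taxi driver, stored as a
# dictionary for illustration.

taxi_peas = {
    "Performance": ["safe", "fast", "legal", "comfortable trip",
                    "maximize profits"],
    "Environment": ["roads", "other traffic", "pedestrians", "customers"],
    "Actuators":   ["steering", "accelerator", "brake", "signal",
                    "horn", "display"],
    "Sensors":     ["cameras", "sonar", "speedometer", "GPS",
                    "odometer", "accelerometer", "engine sensors"],
}

for component, items in taxi_peas.items():
    print(f"{component}: {', '.join(items)}")
```

Writing the four PEAS components down first is the standard way to pin a task environment before asking which agent program suits it.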
2. Properties of task environments

a) Fully observable vs. partially observable

Fully observable
If an agent's sensors give it access to the complete state of the environment at each point in time, then we say that the task environment is fully observable.

A task environment is effectively fully observable if the sensors detect all aspects that are relevant to the choice of action.

Partially observable

An environment might be partially observable because of noisy and inaccurate sensors or because parts of the state are simply missing from the sensor data.
 For example, a vacuum agent with only a local dirt sensor cannot tell whether there is dirt in other squares, and an automated taxi cannot see what other drivers are thinking.
b)Deterministicvs. stochastic.
 If
thenextstateoftheenvironmentiscompletelydeterminedbythecurrentstateand
the actionexecutedbytheagent, then we say the
environmentisdeterministic;otherwise, itisstochastic.

 If theenvironment is partially observable, however, thenit could


appearto be stochastic.

 Taxidriving is clearlystochasticin this sense, because one


canneverpredict the behaviourof traffic exactly.
c) Episodic vs. sequential
 In an episodic task environment, the agent's experience is divided into atomic episodes. Each episode consists of the agent perceiving and then performing a single action. Crucially, the next episode does not depend on the actions taken in previous episodes.

In episodic environments, the choice of action in each episode depends only on the episode itself.

Sequential
 In sequential environments, on the other hand, the current decision could affect all future decisions.

Chess and taxi driving are sequential: in both cases, short-term actions can have long-term consequences.
 Episodic environments are much simpler than sequential environments because the agent does not need to think ahead.
d) Static vs. dynamic
 If the environment can change while an agent is deliberating, then we say the environment is dynamic for that agent; otherwise, it is static.

If the environment itself does not change with the passage of time but the agent's performance score does, then we say the environment is semidynamic.

Chess, when played with a clock, is semidynamic.

Taxi driving is clearly dynamic.

Chess (without a clock) and crossword puzzles are static.
e) Discrete vs. continuous
The discrete/continuous distinction can be applied to the state of the environment, to the way time is handled, and to the percepts and actions of the agent.

For example, a discrete-state environment such as a chess game has a finite number of distinct states.
Chess also has a discrete set of percepts and actions.

Taxi driving is a continuous-state and continuous-time problem.
f) Single agent vs. multiagent

 For example, an agent solving a crossword puzzle by itself is clearly in a single-agent environment, whereas an agent playing chess is in a two-agent environment.
The Structure of Agents

 The job of AI is to design the agent program that implements the agent function mapping percepts to actions.

 We assume this program will run on some sort of computing device with physical sensors and actuators; we call this the architecture:

agent = architecture + program
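The most literal agent program stores the agent function as an explicit table indexed by the percept sequence, in the spirit of the textbook's table-driven agent. The sketch below uses our own class and variable names, and a tiny hand-made table for the vacuum world.

```python
# A table-driven agent: keep the full percept history and look the
# action up in a table keyed by the percept sequence so far. This is
# conceptually clean but infeasible for any realistic environment,
# since the table grows with every possible percept sequence.

class TableDrivenAgent:
    def __init__(self, table):
        self.table = table      # maps percept sequences (tuples) to actions
        self.percepts = []      # percept history

    def __call__(self, percept):
        self.percepts.append(percept)
        return self.table.get(tuple(self.percepts))

# A tiny illustrative table for the two-square vacuum world:
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}

agent = TableDrivenAgent(table)
print(agent(("A", "Clean")))  # Right
print(agent(("B", "Dirty")))  # Suck
```

The agent types that follow can all be seen as compact ways of computing what this table would contain, without ever building it.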
Types of Agents:
1) Simple reflex agents;

2) Model-based reflex agents;

3) Goal-based agents; and

4) Utility-based agents.

We then explain in general terms how to convert all these into learning agents.
1) Simple reflex agents

 The simplest kind of agent is the simple reflex agent.

 These agents select actions on the basis of the current percept, ignoring the rest of the percept history.

For example, the vacuum agent whose agent function is tabulated in Figure 2.3 is a simple reflex agent, because its decision is based only on the current location and on whether that location contains dirt.
 Figure 2.9 gives the structure of this general program in schematic form, showing how the condition-action rules allow the agent to make the connection from percept to action.

 We use rectangles to denote the current internal state of the agent's decision process and ovals to represent the background information used in the process.
 Imagine yourself as the driver of the automated taxi.
If the car in front brakes, and its brake lights come on, then you should notice this and initiate braking.
In other words, some processing is done on the visual input to establish the condition we call "The car in front is braking."

Then, this triggers some established connection in the agent program to the action "initiate braking." We call such a connection a condition-action rule, written as

if car-in-front-is-braking then initiate-braking.
The agent in Figure 2.10 will work only if the correct decision can be made on the basis of only the current percept; that is, only if the environment is fully observable.
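The braking rule above can be sketched as a simple reflex agent driven by a small rule table. The rule names and the interpret-input step are illustrative simplifications (a real agent would derive the condition from visual input).

```python
# A simple reflex agent: map the current percept to a condition, then
# look the action up in a set of condition-action rules. No percept
# history is kept.

rules = {
    "car-in-front-is-braking": "initiate-braking",
    "road-is-clear": "maintain-speed",
}

def interpret_input(percept):
    """Reduce a raw percept to a condition; here the percept is already symbolic."""
    return percept

def simple_reflex_agent(percept):
    condition = interpret_input(percept)
    return rules.get(condition, "do-nothing")

print(simple_reflex_agent("car-in-front-is-braking"))  # initiate-braking
```

Because the agent consults only the current percept, it fails exactly where the text says it will: whenever the right action depends on state the current percept does not reveal.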
2) Model-based reflex agents
The most effective way to handle partial observability is for the agent to keep track of the part of the world it can't see now. That is, the agent should maintain some sort of internal state that depends on the percept history and thereby reflects at least some of the unobserved aspects of the current state.

• Key differences (with respect to simple reflex agents):

• Agents have internal state, which is used to keep track of past states of the world.

• Agents have the ability to represent change in the world.
Figure 2.11 gives the structure of the model-based reflex agent with internal state, showing how the current percept is combined with the old internal state to generate the updated description of the current state.

Figure 2.11: A model-based reflex agent.
 The function UPDATE-STATE is responsible for creating the new internal state description.
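A minimal sketch of this structure, assuming a toy driving scenario: the "model" simply remembers the last observed speed of the car in front, so the agent can still act sensibly when that reading is temporarily missing from the percept. All names here are our own.

```python
# A model-based reflex agent: an update_state step (in the role of
# UPDATE-STATE) merges the new percept into the internal state, and the
# condition-action rule is then applied to the state, not the raw percept.

class ModelBasedReflexAgent:
    def __init__(self):
        self.state = {"front_car_speed": None}  # internal state

    def update_state(self, percept):
        # Merge whatever is observable now into the internal state.
        if "front_car_speed" in percept:
            self.state["front_car_speed"] = percept["front_car_speed"]

    def __call__(self, percept):
        self.update_state(percept)
        speed = self.state["front_car_speed"]
        if speed is not None and speed < 20:
            return "initiate-braking"
        return "maintain-speed"

agent = ModelBasedReflexAgent()
print(agent({"front_car_speed": 50}))  # maintain-speed
print(agent({}))                       # sensor dropout: state still says 50
print(agent({"front_car_speed": 10}))  # initiate-braking
```

The second call is the point of the exercise: a simple reflex agent with an empty percept would have nothing to act on, while the model-based agent falls back on its internal state.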
3) Goal-based agents

 Knowing about the current state of the environment is not always enough to decide what to do.

 For example, at a road junction, the taxi can turn left, turn right, or go straight on.

 The correct decision depends on where the taxi is trying to get to. In other words, as well as a current state description, the agent needs some sort of goal information that describes situations that are desirable, for example, being at the passenger's destination.
The agent keeps track of the world state as well as the set of goals it is trying to achieve, and chooses actions that will (eventually) lead to the goal(s).
• Key differences with respect to model-based agents:

• In addition to state information, goal-based agents have goal information that describes desirable situations to be achieved.

• Agents of this kind take future events into consideration:
• "What sequence of actions can I take to achieve certain goals?"

• Choose actions so as to (eventually) achieve a (given or computed) goal.
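Choosing a sequence of actions that leads to a goal is a search problem. The sketch below runs a breadth-first search over a tiny invented road map; the junction names and actions are purely illustrative.

```python
# Goal-based action selection: search for an action sequence from the
# current junction to the goal junction.

from collections import deque

roads = {  # junction -> {action: next_junction}
    "A": {"left": "B", "straight": "C"},
    "B": {"right": "D"},
    "C": {"left": "D"},
    "D": {},
}

def plan(start, goal):
    """Breadth-first search returning a list of actions, or None."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        junction, actions = frontier.popleft()
        if junction == goal:
            return actions
        for action, nxt in roads[junction].items():
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, actions + [action]))
    return None

print(plan("A", "D"))  # ['left', 'right']
```

Unlike a reflex agent, this agent considers future events: each candidate action is judged by whether the states it leads to can eventually satisfy the goal.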
4) Utility-based agents
 Goals alone are not really enough to generate high-quality behaviour in most environments.

 For example, there are many action sequences that will get the taxi to its destination (thereby achieving the goal), but some are quicker, safer, more reliable, or cheaper than others.

 When there are multiple possible alternatives, how do we decide which one is best?

 Use decision-theoretic models: e.g., faster vs. safer.
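The faster-vs.-safer trade-off can be sketched with a scalar utility function over routes that all reach the goal. The weights and route data below are invented for illustration; a real decision-theoretic model would estimate them from the environment.

```python
# Utility-based selection: every route achieves the goal, so pick the
# one that maximizes a utility trading off travel time against risk.

routes = [
    {"name": "highway",    "time_min": 20, "risk": 0.30},
    {"name": "back roads", "time_min": 35, "risk": 0.05},
]

def utility(route, time_weight=1.0, risk_weight=100.0):
    # Higher is better: penalize both travel time and accident risk.
    return -time_weight * route["time_min"] - risk_weight * route["risk"]

best = max(routes, key=utility)
print(best["name"])  # back roads
```

A goal-based agent would regard both routes as equally acceptable; the utility function is what lets the agent prefer the safer one.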
Learning agents

Figure 2.15: A general model of learning agents.

A learning agent has four conceptual components:
 Learning element
   Responsible for making improvements.
 Performance element
   Responsible for selecting external actions.
 Critic
   Tells the learning element how well the agent is doing with respect to a fixed performance standard.
   (Feedback from a user or from examples: good or not?)
 Problem generator
   Suggests actions that will lead to new and informative experiences.
Learning agents adapt and improve over time:

they have the ability to improve performance through learning.

A learning agent can be divided into four conceptual components, as shown in Figure 2.15.

The most important distinction is between the learning element, which is responsible for making improvements, and the performance element, which is responsible for selecting external actions.

The performance element is what we have previously considered to be the entire agent: it takes in percepts and decides on actions.

The learning element uses feedback from the critic on how the agent is doing and determines how the performance element should be modified to do better in the future.
The critic tells the learning element how well the agent is doing with respect to a fixed performance standard.

The critic is necessary because the percepts themselves provide no indication of the agent's success.

The last component of the learning agent is the problem generator. It is responsible for suggesting actions that will lead to new and informative experiences.

The point is that if the performance element had its way, it would keep doing the actions that are best, given what it knows.
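The four components can be wired together in a toy vacuum-world sketch. The learning rule here (a running estimate of how often squares get dirty) is a stand-in for a real learning element, and every name and number below is our own invention.

```python
# A learning agent skeleton with the four components above: the
# performance element picks actions, the critic scores percepts against
# a fixed standard, the learning element updates a belief, and the
# problem generator occasionally proposes an exploratory action.

import random

class LearningVacuumAgent:
    def __init__(self):
        self.dirt_rate = 0.5  # learned belief: how often squares are dirty

    def performance_element(self, percept):
        location, status = percept
        if status == "Dirty":
            return "Suck"
        return "Right" if location == "A" else "Left"

    def critic(self, percept):
        # Fixed performance standard: clean squares are good.
        return 1 if percept[1] == "Clean" else 0

    def learning_element(self, percept):
        # Toy update: exponential moving average of observed dirtiness.
        observed_dirty = 1 if percept[1] == "Dirty" else 0
        self.dirt_rate += 0.1 * (observed_dirty - self.dirt_rate)

    def problem_generator(self):
        # Rarely suggest a do-nothing step just to observe the world.
        return "NoOp" if random.random() < 0.05 else None

    def step(self, percept):
        self.learning_element(percept)
        return self.problem_generator() or self.performance_element(percept)

agent = LearningVacuumAgent()
print(agent.step(("A", "Dirty")))
```

Without the problem generator, the performance element would always take the action its current knowledge rates best, and the agent would never gather the experience needed to improve that knowledge.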
