Lecture 1

AI Definitions
• The study of how to make programs/computers do things that people do better
• The study of how to make computers solve problems which require knowledge and intelligence
• The exciting new effort to make computers think … machines with minds
• The automation of activities that we associate with human thinking (e.g., decision-making, learning…)
• The art of creating machines that perform functions that require intelligence when performed by people
• The study of mental faculties through the use of computational models
• A field of study that seeks to explain and emulate intelligent behavior in terms of computational processes
• The branch of computer science that is concerned with the automation of intelligent behavior

These definitions cluster around three themes: thinking machines or machine intelligence, studying cognitive faculties, and problem solving and CS.
What is Intelligence?
• Is there a “holistic” definition for intelligence?
• Here are some definitions:
– the ability to comprehend; to understand and profit from
experience
– a general mental capability that involves the ability to reason,
plan, solve problems, think abstractly, comprehend ideas and
language, and learn
– is effectively perceiving, interpreting and responding to the
environment
• None of these tells us what intelligence is, but we can enumerate a list of abilities that an intelligent entity should exhibit
– perceive, reason, solve problems, learn/adapt, common sense, analogy, recall, intuition, emotional states, achieve self-awareness
• Which of these are necessary for intelligence? Which are
sufficient?
• Artificial Intelligence – should we define this in terms of
human intelligence?
Physical Symbol System Hypothesis
• A physical symbol system (PSS)
– consists of symbols (patterns) which are combinable into
expressions
– there are processes which operate on these symbols to create
new symbols and expressions
• English is a physical symbol system: symbols are the alphabet,
expressions are words and sentences
• The PSS Hypothesis states that a PSS has the necessary
and sufficient means for intelligent action
– a computer is a PSS
– if the PSS Hypothesis is true, then it should be
possible to program a computer to produce
intelligent actions
– this is the (or a) goal of AI
• if the PSS Hypothesis is not true, what are the consequences for AI?
The Turing Test
• 1950 – Alan Turing devised a test for intelligence called the Imitation Game
– Ask questions of two entities and receive answers from both
– If you can’t tell which of the entities is human and which is a computer program, then you are fooled, and we should conclude that the machine is intelligent
[Figure: the interrogator sends questions to, and receives answers from, two hidden entities, asking “Which is the person? Which is the computer?”]
Solving the Turing Test: Eliza
• Created by Joseph Weizenbaum
• This system would act like a Rogerian
psychoanalyst by holding a dialog with a person
– the dialog would consist of the doctor (Eliza) asking questions, the human responding, and the doctor using the response to ask another question
• Weizenbaum wrote the program so that it would
generate an English response/question based on
a group of patterns
– If the user sentence matched a pattern, this pattern
would be used to generate the next sentence/question
– A more recent, and much superior, version is called ALICE (demonstrated below)
Example Eliza Dialog
– H: Men are all alike.
– E: In what way?
– H: They’re always bugging us about something or another.
– E: Can you be more specific?
– H: Well, my boyfriend made me come here.
– E: Your boyfriend made you come here?
– H: He says I’m depressed much of the time.
– E: I am sorry to hear you are depressed.
– H: It’s true. I am unhappy.
– E: Do you think coming here will help you not to be unhappy?
– H: I need some help, that much seems certain.
– E: What would it mean to you if you got some help?
– H: Perhaps I could learn to get along with my mother.
– E: Tell me more about your family
Eliza Rules
(?* ?x) hello (?* ?y) =>
   (How do you do. Please state your problem.)
(?* ?x) I want (?* ?y) =>
   (What would it mean if you got ?y)
   (Why do you want ?y)
   (Suppose you got ?y soon)
(?* ?x) if (?* ?y) =>
   (Do you really think its likely that ?y)
   (Do you wish that ?y)
   (What do you think about ?y)
   (Really-- if ?y)
(?* ?x) no (?* ?y) =>
   (Why not?)
   (You are being a bit negative)
   (Are you saying "NO" just to be negative?)
(?* ?x) I was (?* ?y) =>
   (Were you really?)
   (Perhaps I already knew you were ?y)
   (Why do you tell me you were ?y now?)
(?* ?x) I feel (?* ?y) =>
   (Do you often feel ?y ?)
(?* ?x) I felt (?* ?y) =>
   (What other feelings do you have?)

Eliza also has a rule to swap ‘I’ and ‘my’ to ‘you’ and ‘your’.
Eliza Pattern Syntax
pat =>         var                    match any one expression to a variable
               constant               or to a constant (see below)
               segment-pat            match against a sequence
               single-pat             match against one expression
               (pat . pat)            match the first and the rest of a list
single-pat =>  (?is var predicate)    test predicate on one expression
               (?or pat1 pat2 …)      match on any of the patterns
               (?and pat1 pat2 …)     match on every one of the patterns
               (?not pat)             match if expression does not match
segment-pat => ((?* var) …)           match on zero or more expressions
               ((?+ var) …)           match on one or more expressions
               ((?? var) …)           match zero or one expression
               ((?if expr) …)         test if expression is true
var =>         ?chars                 variables of the form ?name
constant =>    atom                   symbols, numbers, chars
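To make the segment-matching idea concrete, here is a minimal sketch in Python of how a (?* var) pattern can be matched against a sentence. This is a simplification added for illustration (only segment variables and literal words, no ?is/?or/?and, and no consistency check for repeated variables), not Norvig's actual matcher:

def match(pattern, words, bindings=None):
    """Match a list of pattern tokens against a list of words."""
    bindings = dict(bindings or {})
    if not pattern:
        return bindings if not words else None
    head, rest = pattern[0], pattern[1:]
    if head.startswith("?*"):              # segment variable: try every split
        for i in range(len(words) + 1):
            result = match(rest, words[i:], {**bindings, head: words[:i]})
            if result is not None:
                return result
        return None
    if words and head == words[0]:         # literal word must match exactly
        return match(rest, words[1:], bindings)
    return None

pattern = "?*x I want ?*y".split()
b = match(pattern, "I think I want a vacation".split())
print("What would it mean if you got", " ".join(b["?*y"]) + "?")
# -> What would it mean if you got a vacation?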
ALICE Demo
Jerry: my name is jerry.
ALICE: Hey jerry.
Jerry: do you know Turing?
ALICE: A brilliant and tragic figure in the history of computer science.
Jerry: who else do you know?
ALICE: Hugh Loebner, Robby Garner, and Steven Spielberg
Jerry: which film do you like?
ALICE: My favorite movie is Teknolust. Have you seen it?
Jerry: no.
ALICE: I would recommend you check it out.
Jerry: what is it about?
ALICE: Science fiction.
Jerry: do you remember my name?
ALICE: Your name is jerry, seeker.
Notice that unlike Eliza, ALICE has a memory and can handle references (have you seen it? no. I would recommend it.)
ALICE vs. Eliza
ALICE talking to ELIZA
ELIZA: I am ELIZA.
ALICE: I am very pleased to meet you ELIZA.
ELIZA: How can I help you?
ALICE: Just keep talking to me as long as you can.
ELIZA: Do you wish to as long as me?
ALICE: OK, I will as long as you.
ELIZA: Do you wish to will as long as me?
ALICE: OK, I will will as long as you.
ELIZA: Do you wish to will will as long as me?
ALICE: OK, I will will will as long as you.
ELIZA: Do you wish to will will will as long as me?
ALICE: OK, I will will will will as long as you.
ELIZA: Do you wish to will will will will as long as me?
ALICE: OK, I will will will will will as long as you.
ELIZA: Do you wish to will will will will will as long as me?

Eliza gets stuck on the phrase “I will” and then ALICE gets stuck
on the same phrase
How Useful is the Turing Test?
• With Eliza or Alice like rules, we can eventually solve
the Turing Test – it just takes writing enough rules
• Does the system understand what it is responding to?
– No, neither Eliza nor Alice understands the text; it’s just that Alice has better, more in-depth and wider-ranging rules
• We could build a representation that models
some real-world domain and knowledge base
– The system can fill in information from the conversation
• Questions can be responded to by looking up the
stored data
– In this way, the system is responding, not based merely on
“canned” knowledge, but on knowledge that it has “learned”
• Does this imply that the system knows what it is
discussing?
– What does it mean to know something?
Table-Lookup vs. Reasoning
• Consider two approaches to programming a Tic-Tac-Toe
player
– Solution 1: pre-enumerated list of best moves given board
configuration
– Solution 2: rules to evaluate board configuration and generate
best move
• Solution 1 is similar to how Eliza works
– This is not practical for most types of problems
• Solution 2 will reason out the best move
– Such a player might even be able to “explain” why it chose the
move it did
• We can (potentially) build a program that can pass the
Turing Test using table-lookup even though it would be
a large undertaking
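To make the contrast concrete, here is a small illustrative sketch (the board encoding and the particular rules are assumptions added here, not from the lecture). Solution 1 is a table keyed by board position; Solution 2 derives a move from general rules, and the rule that fires doubles as an explanation of the move:

# Solution 1: table lookup -- a pre-enumerated best move per board
# (hypothetical entry; a full table needs every reachable board).
lookup = {
    "X.O......": 4,        # board as a 9-character string, move = square index
}

# Solution 2: reasoning -- general rules evaluate the board each turn.
def best_move(board, me="X", opp="O"):
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for player in (me, opp):               # complete my win, else block theirs
        for line in lines:
            vals = [board[i] for i in line]
            if vals.count(player) == 2 and vals.count(".") == 1:
                return line[vals.index(".")]
    return 4 if board[4] == "." else board.index(".")   # prefer the center

print(best_move("X.O.O...."))   # 6: blocks O's 2-4-6 diagonal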
The Chinese Room Problem
• From John Searle, Philosopher, in an attempt to
demonstrate that computers cannot be intelligent
– The room consists of you, a book, a storage area (optional),
and a mechanism for moving information to and from the
room to the outside
• a Chinese speaking individual provides a question for you in writing
• you are able to find a matching set of symbols in the book (and
storage) and write a response, also in Chinese

[Figure: a question in Chinese comes into the room; using the book of Chinese symbols (and the storage), you write out an answer, also in Chinese.]

Chinese Room: An Analogy for a Computer
[Figure: the room maps onto a computer – the questions and answers are the user’s input and output over the I/O pathway (bus), the book of symbols is the program/data, the storage is memory, and you are the CPU.]
Searle’s Question
• You were able to solve the problem of communicating with
the person/user and thus you/the room passes the Turing Test
• But did you understand the Chinese messages being
communicated?
– since you do not speak Chinese, you did not understand the symbols in
the question, the answer, or the storage
– can we say that you actually used any intelligence?
• By analogy, since you did not understand the symbols that
you interacted with, neither does the computer understand
the symbols that it interacts with (input, output, program
code, data)
• He defines two categories of AI:
– strong AI – the pursuit of machine intelligence
– weak AI – the pursuit of machines solving problems in an intelligent
way
Where is the Intelligence
Coming From?
• The System’s Response:
– the hardware by itself is not intelligent, but a
combination of the hardware, software and storage is
intelligent
– in a similar vein, we might say that a human brain that
has had no opportunity to learn anything cannot be
intelligent, it is just the hardware
• The Robot Response:
– a computer is void of senses and therefore symbols are
meaningless to it, but a robot with sensors can tie its
symbols to its senses and thus understand symbols
• The Brain Simulator Response:
– if we program a computer to mimic the brain (e.g., with
a neural network) then the computer will have the same
ability to understand as a human brain
Two AI Assumptions
• We can understand and model cognition without
understanding the underlying mechanism
– it is the model of cognition that is important not the physical
mechanism that implements it
– if true, we should be able to create cognition (mind) out of a
computer or a brain or other devices such as mechanical
devices
• this is the assumption made by symbolic AI researchers
• Cognition will emerge from the proper mechanism
– the right device, fed with the right inputs, can learn and
perform the problem solving that we, as observers, call
intelligence
– cognition will arise as the result (or side effect) of the
hardware
• this is the assumption made by connectionist AI researchers
• Notice that while the two assumptions differ, they are not mutually exclusive, and both support the idea that machines can be intelligent
A Brief History of AI: 1950s
• Computers were thought of as electronic brains
• Term “Artificial Intelligence” coined by John McCarthy
– John McCarthy also created Lisp in the late 1950s
• Alan Turing defines intelligence as passing the Imitation
Game (Turing Test)
• AI research largely revolves around toy domains
– Computers of the era didn’t have enough power or memory to
solve useful problems
– Problems being researched include
• games (e.g., checkers)
• primitive machine translation
• blocks world (planning and natural language understanding within the
toy domain)
• early neural networks researched: the perceptron
• automated theorem proving and mathematics problem solving
The 1960s
• AI attempts to move beyond toy domains
• Syntactic knowledge alone does not work, domain
knowledge required
– Early machine translation from English to Russian and back turned “the spirit is willing but the flesh is weak” into “the vodka is good but the meat is spoiled”
• Earliest expert system created: Dendral
• Perceptron research comes to a grinding halt when it is
proved that a perceptron cannot learn the XOR operator
• US sponsored research into AI targets specific areas –
not including machine translation
• Weizenbaum creates Eliza to demonstrate the futility of
AI
1970s
• AI researchers address real-world problems and solutions through
expert (knowledge-based) systems
– Medical diagnosis
– Speech recognition
– Planning
– Design
• Uncertainty handling implemented
– Fuzzy logic
– Certainty factors
– Bayesian probabilities
• AI begins to get noticed due to these successes
– AI research increased
– AI labs sprouting up everywhere
– AI shells (tools) created
– AI machines available for Lisp programming
• Criticism: AI systems are too brittle, AI systems take too much
time and effort to create, AI systems do not learn
1980s: AI Winter
• Funding dries up leading to the AI Winter
– Too many expectations were not met
– Expert systems took too long to develop, too much money to
invest, the results did not pay off
• Neural Networks to the rescue!
– Expert systems took programming, and took dozens of man-years of effort to develop, but if we could get the computer to learn how to solve the problem…
– Multi-layered back-propagation networks got around the
problems of perceptrons
– Neural network research heavily funded because it promised
to solve the problems that symbolic AI could not
• By 1990, funding for neural network research was
slowly disappearing as well
– Neural networks had their own problems and largely could
not solve a majority of the AI problems being investigated
– Panic! How can AI continue without funding?
1990s: ALife
• The dumbest smart thing you can do is staying alive
– We start over – let’s not create intelligence, let’s just create “life” and slowly build towards intelligence
• Alife is the lower bound of AI
– Alife includes
• evolutionary learning techniques (genetic algorithms)
• artificial neural networks for additional forms of learning
• perception, motor control and adaptive systems
• modeling the environment
– Problems: genetic algorithms are useful in solving some
optimization problems and some search-based problems,
but not very useful for expert problems
– Perceptual problems are among the most difficult being
solved, very slow progress
Today: The New (Old) AI
• AI researchers today are not doing “AI”, they are doing
– Intelligent agents, multi-agent systems/collaboration, ontologies
– Machine learning and data mining
– Adaptive and perceptual systems
– Robotics, path planning
– Search engines, filtering, recommendation systems
• Areas of current research interest
– NLU/Information Retrieval, Speech Recognition
– Planning/Design, Diagnosis/Interpretation
– Sensor Interpretation, Perception, Visual Understanding
– Robotics
• Approaches
– Knowledge-based
– Ontologies
– Probabilistic (HMM, Bayesian Nets)
– Neural Networks, Fuzzy Logic, Genetic Algorithms
So What Does AI Do?
• Most AI research has fallen into one of two categories
– Select a specific problem to solve
• study the problem (perhaps how humans solve it)
• come up with the proper representation for any knowledge needed to
solve the problem
• acquire and codify that knowledge
• build a problem solving system
– Select a category of problem or cognitive activity (e.g.,
learning, natural language understanding)
• theorize a way to solve the given problem
• build systems based on the model behind your theory as experiments
• modify as needed
• Both approaches require
– one or more representational forms for the knowledge
– some way to select proper knowledge, that is, search
Knowledge Representations
• One large distinction between an AI system and a
normal piece of software is that an AI system
must reason using worldly knowledge
– What types of knowledge?
• Facts
• Axioms
• Statements (which may or may not be true)
• Rules
• Cases
• Experiences
• Associations (which may not be truth preserving)
• Descriptions
• Probabilities and Statistics
Types of Representations
• Early systems used either
– semantic networks or predicate calculus to represent knowledge
– or used simple search spaces if the domain/problem had very limited
amounts of knowledge (e.g., simple planning as in blocks world)
• With the early expert systems in the 70s, a significant shift took
place to production systems, which combined representation and
process (chaining) and even uncertainty handling (certainty factors)
– later, frames (an early version of OOP) were introduced
• Problem-specific approaches were introduced such as scripts and
CDs for language representation
• In the 1980s, there was a shift from rules to model-based
approaches
• Since the 1990s, Bayesian networks and hidden Markov Models
have become popular
• First, we will take a brief look at some of the representations
Search Spaces
• Given a problem expressed as a state space (whether
explicitly or implicitly)
• Formally, we define a search space as [N, A, S, GD]
– N = set of nodes or states of a graph
– A = set of arcs (edges) between nodes that correspond to the
steps in the problem (the legal actions or operators)
– S = a nonempty subset of N that represents start states
– GD = a nonempty subset of N that represents goal states
• Our problem becomes one of traversing the graph from a
node in S to a node in GD
• Example:
– 3 missionaries and 3 cannibals are on one side of the river with
a boat that can take exactly 2 people across the river
• how can we move the 3 missionaries and 3 cannibals across the
river such that the cannibals never outnumber the missionaries on
either side of the river (lest the cannibals start eating the
missionaries!)
M/C Solution
• We can represent a state as a 6-item tuple:
(a, b, c, d, e, f)
– a/b = number of missionaries/cannibals on left shore
– c/d = number of missionaries/cannibals in boat
– e/f = number of missionaries/cannibals on right shore
– where a + b + c + d + e + f = 6
– a >= b (unless a = 0), c >= d (unless c = 0), and e >= f
(unless e = 0)
• Legal operations (moves) are
– 0, 1, 2 missionaries get into boat
– 0, 1, 2 missionaries get out of boat
– 0, 1, 2 cannibals get into boat
– 0, 1, 2 cannibals get out of boat
– boat sails from left shore to right shore
– boat sails from right shore to left shore
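A minimal sketch of searching this space in Python, assuming a simplified state (missionaries on the left shore, cannibals on the left shore, boat side) rather than the slide’s 6-tuple, with a boat trip of one or two people folded into a single move:

from collections import deque

def safe(m, c):
    # Missionaries are never outnumbered on a shore they occupy.
    return m == 0 or m >= c

def solve():
    start, goal = (3, 3, True), (0, 0, False)   # (M left, C left, boat on left?)
    trips = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]  # boat carries 1 or 2
    frontier, seen = deque([(start, [start])]), {start}
    while frontier:
        (m, c, boat), path = frontier.popleft()
        if (m, c, boat) == goal:
            return path
        for dm, dc in trips:
            # People travel from the boat's shore to the opposite shore.
            nm, nc = (m - dm, c - dc) if boat else (m + dm, c + dc)
            nxt = (nm, nc, not boat)
            if (0 <= nm <= 3 and 0 <= nc <= 3 and safe(nm, nc)
                    and safe(3 - nm, 3 - nc) and nxt not in seen):
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))

print(solve())   # shortest sequence of states from start to goal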
Search Spaces and Types of Search
• The search space consists of all possible states of the problem
as it is being solved
– A search space is often viewed as a tree and can very well consist of
an exponential number of nodes making the search process
intractable
– Search spaces might be pre-enumerated or generated during the
search process
– Some search algorithms may search the entire space until a solution
is found, others will only search parts of the space, possibly
selecting where to search through a heuristic
• Search spaces include
– Game trees like the tic-tac-toe game
– Decision trees (see next slides)
– Combinations of rules to select in a production system
– Networks of various forms (see next slides)
Search Algorithms and Representations
• Search algorithms
– Breadth-first
– Depth-first
– Best-first (Heuristic Search)
– A*
– Hill Climbing
– Limiting the number of Plies
– Minimax
– Alpha-Beta Pruning
– Adding Constraints
– Genetic Algorithms
– Forward vs Backward Chaining
• Representations
– Production systems (if-then rules, predicate calculus rules, operators)
– Semantic networks
– Frames
– Scripts
– Knowledge groups
– Models, cases
– Agents
– Ontologies
Relationships
• We often know stuff about objects (whether physical or
abstract)
– These objects have attributes (components, values) and/or
relationships with other things
• One way to represent knowledge is to enumerate the
objects and describe them through their attributes and
relationships
• Common forms of such relationship representations are
– semantic networks – a network consists of nodes which are
objects and values, and edges (links/arcs) which are annotated to
include how the nodes are related
– predicate calculus – predicates are often relationships and
arguments for the predicates are objects
– frames – in essence, objects (from object-oriented programming)
where attributes are the data members and the values are the
specific values stored in those members – in some cases, they are
pointers to other objects
Representations With Relationships

[Figure: the same information represented using two different representational techniques – a semantic network and predicates.]
Another Example: Blocks World

[Figure: a real-world situation of three blocks and a predicate calculus representation expressing this knowledge.]
We equip our system with rules to reason over and manipulate this blocks world, such as:
   ∀X (¬∃Y on(Y, X) → clear(X))
This rule says “if there does not exist a Y that is on X, then X is clear”.
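The same rule can be sketched in Python (the block names and the contents of the on-relation are assumptions for illustration):

on = {("b", "a"), ("c", "b")}    # facts: on(b, a), on(c, b)

def clear(x):
    """clear(X) holds when there is no Y with on(Y, X)."""
    return not any(below == x for (above, below) in on)

print(clear("c"))   # True  -- nothing is on c
print(clear("a"))   # False -- b is on a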
Semantic Networks
• Collins and Quillian were the first to use semantic
networks in AI by storing in the network the
objects and their relationships
– their intention was to represent English sentences
– edges would typically be annotated with these
descriptors or relations
• isa – class/subclass
• instance – the first object is an
instance of the class
• has – contains or has this as a
physical property
• can – has the ability to
• made of, color, texture, etc
[Figure: a semantic network representing the sentences “a canary can sing/fly”, “a canary is a bird/animal”, “a canary is a canary”, “a canary has skin”.]
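A minimal sketch of the canary network in Python, with isa links and a property lookup that walks up the hierarchy (the dictionary encoding is an assumption added here, not from the lecture):

network = {
    "animal": {"isa": None,     "has": ["skin"], "can": []},
    "bird":   {"isa": "animal", "has": [],       "can": ["fly"]},
    "canary": {"isa": "bird",   "has": [],       "can": ["sing"]},
}

def holds(node, link, value):
    """Walk the isa chain looking for the property (inheritance)."""
    while node is not None:
        if value in network[node][link]:
            return True
        node = network[node]["isa"]
    return False

print(holds("canary", "can", "fly"))   # True -- inherited from bird
print(holds("canary", "has", "skin"))  # True -- inherited from animal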
Frames
• The semantic network requires a graph representation which
may not be a very efficient use of memory
• Another representation is the frame
– the idea behind a frame was originally that it would represent
a “frame of memory” – for instance, by capturing the objects
and their attributes for a given situation or moment in time
– a frame would contain slots where a slot could contain
• identification information (including whether this frame is a
subclass of another frame)
• relationships to other frames
• descriptors of this frame
• procedural information on how to use this frame (code to be
executed)
• defaults for slots
• instance information (or an identification of whether the frame
represents a class or an instance)
Frame Example
[Figure: a partial frame representing a hotel room. The room contains a chair, bed, and phone, where the bed contains a mattress and a bed frame (not shown).]
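A toy sketch of the hotel-room frame in Python (slot names and values are assumptions for illustration; a real frame system would also carry the procedural attachments described above):

frames = {
    "hotel_room": {
        "isa": "room",
        "contains": ["hotel_chair", "hotel_bed", "hotel_phone"],
        "defaults": {"occupancy": 2},
    },
    "hotel_bed": {
        "isa": "bed",
        "contains": ["mattress", "bed_frame"],
        "descriptors": {"size": "king"},
    },
}

def slot(frame, name):
    """Read a slot, falling back to the frame's defaults if present."""
    f = frames[frame]
    return f.get(name, f.get("defaults", {}).get(name))

print(slot("hotel_room", "contains"))   # the room's parts
print(slot("hotel_room", "occupancy"))  # 2, filled in from the defaults slot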
Production Systems
• A production system is
– a set of rules (if-then or condition-action statements)
– working memory
• the current state of the problem solving, which includes new
pieces of information created by previously applied rules
– inference engine (the author calls this a “recognize-act” cycle)
• forward-chaining, backward-chaining, a combination, or some
other form of reasoning such as a sponsor-selector, or agenda-
driven scheduler
– conflict resolution strategy
• when it comes to selecting a rule, there may be several
applicable rules, which one should we select? the choice may be
based on a conflict resolution strategy such as “first rule”, “most
specific rule”, “most salient rule”, “rule with most actions”,
“random”, etc
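To make these pieces concrete, here is a minimal forward-chaining sketch in Python (the facts and rules are assumptions for illustration): working memory is a set of facts, rules are (conditions, conclusion) pairs, the recognize-act loop is the inference engine, and “first match” is the conflict resolution strategy.

rules = [
    ({"has_feathers", "lays_eggs"}, "is_bird"),      # if-then rules
    ({"is_bird", "can_sing"}, "is_songbird"),
]
memory = {"has_feathers", "lays_eggs", "can_sing"}   # working memory

fired = True
while fired:                                         # recognize-act cycle
    fired = False
    for conditions, conclusion in rules:
        if conditions <= memory and conclusion not in memory:
            memory.add(conclusion)                   # fire the rule
            fired = True
            break                                    # first-match conflict resolution

print(memory)   # now also contains is_bird and is_songbird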
Chaining
• The idea behind a production system’s reasoning is that
rules will describe steps in the problem solving space
where a rule might
– be an operation in a game like a chess move
– translate a piece of input data into an intermediate conclusion
– piece together several intermediate conclusions into a specific
conclusion
– translate a goal into substeps
• So a solution using a production system is a collection of
rules that are chained together
– forward chaining – reasoning from data to conclusions where
working memory is sought for conditions that match the left-
hand side of the given rules
– backward chaining – reasoning from goals to operations where
an initial goal is unfolded into the steps needed to solve that
goal, that is, the process is one of subgoaling
Two Example Production Systems
Example System: Water Jugs
• Problem: given a 4-gallon jug (X) and a 3-gallon jug
(Y), fill X with exactly 2 gallons of water
– assume an infinite amount of water is available
• Rules/operators
– 1. If X = 0 then X = 4 (fill X)
– 2. If Y = 0 then Y = 3 (fill Y)
– 3. If X > 0 then X = 0 (empty X)
– 4. If Y > 0 then Y = 0 (empty Y)
– 5. If X + Y >= 3 and X > 0 then X = X – (3 – Y) and Y = 3 (fill Y from X)
– 6. If X + Y >= 4 and Y > 0 then X = 4 and Y = Y – (4 – X) (fill
X from Y)
– 7. If X + Y <= 3 and X > 0 then X = 0 and Y = X + Y (empty X
into Y)
– 8. If X + Y <= 4 and Y > 0 then X = X + Y and Y = 0 (empty Y
into X)
• rule numbers used on the next slide
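Here is a sketch of these eight rules as a production system in Python, searched breadth-first so it also reports which rules fired (the encoding is an illustration added here; the rules themselves are from the slide):

from collections import deque

RULES = [  # (guard, action, name) -- the slide's eight rules
    (lambda x, y: x == 0,               lambda x, y: (4, y),            "1: fill X"),
    (lambda x, y: y == 0,               lambda x, y: (x, 3),            "2: fill Y"),
    (lambda x, y: x > 0,                lambda x, y: (0, y),            "3: empty X"),
    (lambda x, y: y > 0,                lambda x, y: (x, 0),            "4: empty Y"),
    (lambda x, y: x + y >= 3 and x > 0, lambda x, y: (x - (3 - y), 3),  "5: fill Y from X"),
    (lambda x, y: x + y >= 4 and y > 0, lambda x, y: (4, y - (4 - x)),  "6: fill X from Y"),
    (lambda x, y: x + y <= 3 and x > 0, lambda x, y: (0, x + y),        "7: empty X into Y"),
    (lambda x, y: x + y <= 4 and y > 0, lambda x, y: (x + y, 0),        "8: empty Y into X"),
]

def solve(start=(0, 0), goal_x=2):
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        (x, y), path = frontier.popleft()
        if x == goal_x:
            return path                    # the sequence of rules that fired
        for guard, action, name in RULES:
            if guard(x, y):
                nxt = action(x, y)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [name]))

print(solve())   # e.g. a six-rule firing sequence ending with 2 gallons in X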
Conflict Resolution Strategies
• In a production system, what happens when more than
one rule matches?
– a conflict resolution strategy dictates how to select from
between multiple matching rules
• Simple conflict resolution strategies include
– random
– first match
– most/least recently matched rule
– rule which has matched for the longest/shortest number of
cycles (refractoriness)
– most salient rule (each rule is given a salience before you run
the production system)
• More complex resolution strategies might
– select the rule with the most/least number of conditions
(specificity/generality)
– or most/least number of actions (biggest/smallest change to the
state)
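A sketch of a few of these strategies in Python, assuming each rule object carries a conditions list and a pre-assigned salience (a hypothetical structure for illustration):

import random

class Rule:
    def __init__(self, name, conditions, salience=0):
        self.name, self.conditions, self.salience = name, conditions, salience

def pick(matching, strategy):
    """Choose one rule from the set that matched this cycle."""
    if strategy == "random":
        return random.choice(matching)
    if strategy == "first match":
        return matching[0]
    if strategy == "most salient":
        return max(matching, key=lambda r: r.salience)
    if strategy == "most specific":            # most conditions wins
        return max(matching, key=lambda r: len(r.conditions))

rules = [Rule("r1", ["a"], salience=5), Rule("r2", ["a", "b"], salience=1)]
print(pick(rules, "most specific").name)   # r2
print(pick(rules, "most salient").name)    # r1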
MYCIN
• By the early 1970s, the production system approach was
found to be more than adequate for constructing large
scale expert systems
– in 1971, researchers at Stanford began constructing MYCIN, a
medical diagnostic system
– it contained a very large rule base
– it used backward chaining
– to deal with the uncertainty of medical knowledge, it
introduced certainty factors (sort of like probabilities)
– in 1975, it was tested against medical experts and performed
as well or better than the doctors it was compared to
(defrule 52
   if (site culture is blood)
      (gram organism is neg)
      (morphology organism is rod)
      (burn patient is serious)
   then .4
      (identity organism is pseudomonas))

In English: if the culture was taken from the patient’s blood, and the gram stain of the organism is negative, and the morphology of the organism is rods, and the patient is a serious burn patient, then conclude that the identity of the organism is pseudomonas (with .4 certainty).
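As background on certainty factors (this is the standard MYCIN-style combination formula, stated here as an aside rather than taken from the slide): when two rules support the same conclusion with positive certainty factors cf1 and cf2, the combined certainty is cf1 + cf2·(1 − cf1).

def combine(cf1, cf2):
    # Two positive certainty factors reinforce each other but never exceed 1.
    return cf1 + cf2 * (1 - cf1)

print(combine(0.4, 0.4))  # 0.64: two .4-certainty rules together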
MYCIN in Operation
• Mycin’s process starts with “diagnose-and-treat”
– repeat
• identify all rules that can provide the conclusion currently sought
• match right hand sides (that is, search for rules whose right hand
sides match anything in working memory)
• use conflict resolution to identify a single rule
• fire that rule
– find and remove a piece of knowledge which is no longer
needed
– find and modify a piece of knowledge now that more specific
information is known
– add a new subgoal (left-hand side conditions that need to be
proved)
– until the action done is added to working memory
• Mycin would first identify the illness, possibly ordering
more tests to be performed, and then given the illness,
generate a treatment
– Mycin consisted of about 600 rules
R1/XCON
• Another success story is DEC’s R1
– later renamed XCON
• This system would take customer orders and configure
specific VAX computers for those orders including
– completing the order if the order was incomplete
– how the various components (drive and tape units, motherboard(s), etc) would be placed inside the mainframe cabinet
– how the wiring would take place among the various
components
• R1 would perform forward chaining over about 10,000
rules
– over a 6 year period, it configured some 80,000 orders with a
95-98% accuracy rating
– ironically, whereas planning/design is viewed as a backward
chaining task, R1 used forward chaining because, in this
particular case, the problem is data driven, starting with user
input of the computer system’s specifications
• R1’s solutions were similar in quality to human solutions
R1 Sample Rules
• Constraint rules
– if device requires battery then select battery for device
– if select battery for device then pick battery with voltage(battery) =
voltage(device)
• Configuration rules
– if we are in the floor plan stage and there is space for a power supply
and there is no power supply available then add a power supply to
the order
– if step is configuring, propose alternatives and there is an
unconfigured device and no container was chosen and no other
device that can hold it was chosen and selecting a container wasn’t
proposed yet and no problems for selecting containers were
identified then propose selecting a container
– if the step is distributing a massbus device and there is a single port
disk drive that has not been assigned to a massbus and there are no
unassigned dual port disk drives and the number of devices that each
massbus should support is known and there is a massbus that has
been assigned at least one disk drive and that should support
additional disk drives and the type of cable needed to connect the
disk drive is known, then assign the disk drive to this massbus
Strong Slot-n-Filler Structures
• To avoid the difficulties with Frames and Nets, Schank
and Rieger offered two network-like representations that
would have implied uses and built-in semantics:
conceptual dependencies and scripts
– the conceptual dependency was derived as a form of semantic
network that would have specific types of links to be used for
representing specific pieces of information in English
sentences
• the action of the sentence
• the objects affected by the action or that brought about the action
• modifiers of both actions and objects
– they defined 11 primitive actions, called ACTs
• every possible action can be categorized as one of these 11
• an ACT would form the center of the CD, with links attaching the
objects and modifiers
Example CD
[Figure: the CD graph for this sentence.]
• The sentence is “John ate the egg”
• The INGEST act means to ingest an object (eat, drink, swallow)
– the P above the double arrow indicates past tense
– the INGEST action must have an object (the O indicates it was the object Egg) and a direction (the object went from John’s mouth to John’s insides)
– we might infer that it was “an egg” instead of “the egg” as there is nothing specific to indicate which egg was eaten
– we might also infer that John swallowed the egg whole as there is nothing to indicate that John chewed the egg!
The CD Theory ACTs
[Figure: the table of the 11 primitive ACTs.]
• Is this list complete?
– what actions are missing?
• Could we reduce this list to make it more concise?
– other researchers have developed other lists of primitive actions including just 3 – physical actions, mental actions and abstract actions
Example CD Links / Example CDs / More Examples
[Figures: additional examples of CD links and CD graphs, not reproduced here.]
Complex Example
• The sentence is “John
prevented Mary from giving
a book to Bill”
• This sentence has two
ACTs, DO and ATRANS
– DO was not in the list of 11,
but can be thought of as
“caused to happen”
• The c/ means a negative conditional; in this case it means that John caused this not to happen
• The ATRANS is a giving relationship with the object being a
Book and the action being from Mary to Bill – “Mary gave a book
to Bill”
– like with the previous example, there is no way of telling whether it is “a
book” or “the book”
Scripts
• The other structured representation developed by Schank
(along with Abelson) is the script
– a description of the typical actions that are involved in a
typical situation
• they defined a script for going to a restaurant
– scripts provide an ability for default reasoning when
information is not available that directly states that an action
occurred
– so we may assume, unless otherwise stated, that a diner at a
restaurant was served food, that the diner paid for the food, and
that the diner was served by a waiter/waitress
• A script would contain
– entry condition(s) and results (exit conditions)
– actors (the people involved)
– props (physical items at the location used by the actors)
– scenes (individual events that take place)
• The script would use the 11 ACTs from CD theory
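A toy encoding of these script components in Python (structure and values are assumptions for illustration), showing the default reasoning described above – scenes a story never mentions are assumed to have happened:

restaurant_script = {
    "entry_conditions": ["customer is hungry", "customer has money"],
    "results": ["customer is not hungry", "customer has less money"],
    "actors": ["customer", "waiter", "cook", "cashier"],
    "props": ["tables", "menu", "food", "check", "money"],
    "scenes": ["entering", "ordering", "eating", "paying", "exiting"],
}

def assume_defaults(mentioned_scenes):
    """Scenes not mentioned in the story are assumed by default."""
    return [s for s in restaurant_script["scenes"] if s not in mentioned_scenes]

print(assume_defaults(["entering", "ordering", "eating"]))  # ['paying', 'exiting']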
Restaurant Script
• The script does not
contain atypical actions
– although there are options
such as whether the
customer was pleased or
not
• There are multiple paths
through the scenes to
make for a robust script
– what would a “going to the
movies” script look like?
would it have similar
props, actors, scenes? how
about “going to class”?
Knowledge Groups
• One of the drawbacks of the knowledge
representations demonstrated thus far is that all
knowledge is grouped into a single, large
collection of representations
– the rules taken as a whole for instance don’t denote what
rules should be used in what circumstance
• Another approach is to divide the representations
into logical groupings
– this permits easier design, implementation, testing and
debugging because you know what that particular group is
supposed to do and what knowledge should go into it
• it should be noted that by distributing the knowledge, we
might use different problem solving agents for each set of
knowledge so that the knowledge is stored using different
representations
Knowledge Sources and Agents
• Which leads us to the idea of having multiple
problem solving agents
– each agent is responsible for solving some specialized
type of problem(s) and knows where to obtain its own
input
– each agent has its own knowledge sources, some
internal, some external
• since external agents may have their own forms of
representation, the agent must know
– how to find the proper agents
– how to properly communicate with these other agents
– how to interpret the information that it receives from these
agents
– how to recover from a situation where the expected agent(s)
is/are not available
What is an Agent?
• Agents are interactive problem solvers that have these
properties
– situated – the agent is part of the problem solving environment
– it can obtain its own input from its environment and it can
affect its environment through its output
– autonomous – the agent operates independently of other agents
and can control its own actions and internal states
– flexible – the agent is both responsive and proactive – it can go
out and find what it needs to solve its problem(s)
– social – the agent can interact with other agents including
humans
• Some researchers also insist that agents have
– mobility – have the ability to move from their current
environment to a new environment (e.g., migrate to another
processor)
– delegation – hand off portions of the problem to other agents
– cooperation – if multiple agents are tasked with the same
problem, can their solutions be combined?
