AINotes 1
Definition of AI:
Artificial intelligence (AI) is technology that enables computers and machines to simulate human learning,
understanding, problem solving, decision making, creativity and autonomy.
Learning: AI can learn from experience and improve performance when exposed to data sets.
Problem-solving: AI can process large amounts of data at once to find patterns and solve complex problems.
Decision-making: AI can make recommendations and enable faster, more accurate predictions.
AI can perform a variety of advanced functions, including seeing, understanding and translating spoken and written
language, analyzing data, interacting with the environment, and exercising creativity.
Types of AI:
There are three types of AI.
1. Narrow AI (NAI): Narrow AI is a type of AI that performs a dedicated task with intelligence. It is the most common
and currently available form of AI in the world of Artificial Intelligence. Narrow AI cannot perform beyond
(outside of) its field or boundaries, because it is trained only for a specific task.
Some examples of Narrow AI are playing chess, purchasing suggestions on e-commerce sites, self-driving cars,
speech recognition, and image recognition.
2. General AI: General AI is a type of intelligence which can perform any intellectual task like a human.
General AI aims to create a system that is smarter and can think on its own like a human. Currently, no such
system exists which could come under General AI and perform any task as perfectly as a human.
3. Super AI: Super AI is a level of system intelligence at which machines exceed human intelligence and can
perform any task with cognitive properties better than humans. It is expected to arise as a result of General AI.
Some of the key characteristics of Super AI include the ability to think, reason, solve puzzles, make decisions, plan,
learn, and communicate on its own.
AI Intelligent Agents:
An agent is a computer program or system that is designed to perceive its environment,
make decisions and take actions to achieve a specific goal. It is not directly controlled by a human operator.
An agent is anything that perceives its environment through sensors and acts upon it through actuators or effectors.
Sensor: A sensor is a device which detects changes in the environment and sends the information to
other electronic devices. An agent observes its environment through sensors.
Actuators: Actuators are the components of machines that convert energy into motion. Actuators are
responsible for moving and controlling a system. An actuator can be an electric motor, gears, rails, etc.
Effectors: Effectors are the devices which affect the environment. Effectors can be legs, wheels, arms,
fingers, wings, fins, and display screens.
Types of Agents:
There are seven types of agents in AI.
1. Simple Reflex Agents
2. Model-Based Reflex Agents
3. Goal-Based Agents
4. Utility-Based Agents
5. Learning Agents
6. Multi-Agent Systems
7. Hierarchical Agents
1. Simple Reflex Agents:
Simple reflex agents ignore the rest of the percept history and act only on the basis of the current percept.
The percept history is the history of all that an agent has perceived to date.
The agent function is based on the condition-action rule. A condition-action rule is a rule that maps a state
(condition) to an action. If the condition is true, then the action is taken, else not. This agent function succeeds
only when the environment is fully observable.
3. Goal-Based Agents:
These kinds of agents take decisions based on how far they are currently from their goal.
The purpose of each of their actions is to reduce the distance from the goal.
This gives the agent a way to choose among multiple possibilities, selecting the one which reaches a goal state.
The goal-based agent's behavior can easily be changed. Example: a self-driving car navigating to a destination.
4. Utility-Based Agents:
When there are multiple possible alternatives (options), utility-based agents are used to decide which one
is best. They choose actions based on a preference (utility) for each state. The agent uses a utility function to
evaluate and rank different states or outcomes. The function assigns numerical values to outcomes, representing
their desirability or utility.
5. Learning Agents:
A learning agent in AI is the type of agent that can learn from its past experiences; it has learning capabilities.
It starts to act with basic knowledge and is then able to act and adapt automatically through learning. A learning
agent has four main conceptual components:
1. Learning element: It is responsible for making improvements by learning from the environment.
2. Critic: The learning element takes feedback from the critic, which describes how well the agent is doing with
respect to a fixed performance standard.
3. Performance element: It is responsible for selecting external actions.
4. Problem generator: This component is responsible for suggesting actions that will lead to new and informative
experiences.
7. Hierarchical Agents:
Hierarchical agents are agents that are organized into a hierarchy, with high-level agents overseeing the behavior
of lower-level agents. The high-level agents provide goals and constraints, while the low-level agents carry out
specific tasks. This structure allows for more efficient and organized decision-making in complex environments.
Hierarchical agents can be implemented in a variety of applications, including robotics, manufacturing, and
transportation systems.
Characteristics of AI:
Artificial intelligence (AI) has many characteristics, including:
Learning: AI can learn from data and improve its performance over time. For example, AI models can improve
their ability to identify objects in images by analyzing new data.
Problem-solving: AI can solve complex problems, including some that are impractical for humans to work through by hand.
Reasoning: AI can think and make decisions.
Perception: AI can sense its environment.
Adaptability: AI can adapt to new situations.
Automation: AI can automate repetitive tasks.
Data handling: AI can process large amounts of data.
Natural language processing (NLP): AI can understand and communicate with humans in natural language.
Self-correction: AI can improve its accuracy over time.
Efficiency: AI can perform many tasks faster, more accurately, and more consistently than humans.
Machine learning:
A broader term that includes various techniques, including deep learning. Machine learning algorithms can
process large amounts of data, identify patterns, and predict outcomes.
Deep learning:
Uses artificial neural networks to process and analyze information. Deep learning algorithms are inspired by the
human brain and are used for complex tasks like image classification and object detection.
1. Search Strategies:
These are algorithms that explore the possible states or configurations to find a solution.
Types of search algorithms:
There are two types of search algorithms in artificial intelligence.
1. Uninformed Search:
This type does not have any additional information about the goal state.
Uninformed search is also called blind search.
These algorithms can only generate the successors and differentiate between goal states and non-goal states.
Examples include:
1. Depth-First Search
2. Breadth-First Search
3. Uniform-Cost Search
1. Breadth-First Search (BFS):
Breadth-first search (BFS) is an algorithm for traversing or searching tree or graph data structures.
This strategy explores all the nodes at the present depth level before moving on to the nodes at the next depth
level. It guarantees the shortest path in an unweighted graph.
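The level-by-level strategy above can be sketched with a FIFO queue. The graph below is a made-up example (not from the notes), with nodes named "A" through "F" for illustration:

```python
from collections import deque

def bfs_path(graph, start, goal):
    """Return the shortest path from start to goal in an unweighted graph."""
    frontier = deque([[start]])          # FIFO queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()        # oldest (shallowest) path first
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                frontier.append(path + [neighbour])
    return None                          # goal unreachable

# Hypothetical unweighted graph used only for illustration
graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["E"],
    "D": ["F"],
    "E": ["F"],
}
```

Because paths are dequeued in order of length, the first path that reaches the goal is guaranteed to be a shortest one.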
2. Informed Search: This type uses heuristics to guide the search process toward the goal more efficiently.
Examples include:
- A* Search: Combines features of BFS and heuristics to find the least-cost path to the goal.
- Greedy Best-First Search: Chooses the path that seems most promising based on a heuristic.
2. Control Strategies:
These strategies determine the order in which nodes are expanded in the search space and how the search is
conducted. They can be:
1. Static Control:
The strategy is fixed before the search begins. For example, always using BFS or DFS.
2. Dynamic Control:
The strategy can change based on the current state of the search process. For example, switching to a different
search method if the current one is not yielding results.
Production System:
A production system in AI is a type of computer program that uses a set of rules (productions) to make decisions
and solve problems. A production system is a framework for building AI applications that can reason and make
decisions based on a set of rules and a knowledge base. This approach is widely used in expert systems and other
AI applications.
3. Inference Engine:
This is the component that applies the production rules to the knowledge base to derive new information or make
decisions. It processes the rules and determines which ones are applicable based on the current facts.
Example
Problem: Measure 4 liters of water using a 3-liter jug (Jug A) and a 5-liter jug (Jug B).
Steps to Solve (states are written as (Jug A, Jug B)):
Initial State: (0, 0)
Fill Jug B: (0, 5)
Pour from Jug B to Jug A: (3, 2)
Empty Jug A: (0, 2)
Pour from Jug B to Jug A: (2, 0)
Fill Jug B: (2, 5)
Pour from Jug B to Jug A: (3, 4)
Goal Achieved: Jug B contains exactly 4 liters of water.
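The move sequence above can also be found automatically by a breadth-first search over (Jug A, Jug B) states. The sketch below is illustrative; the successor list simply encodes the fill/empty/pour moves used in the steps above:

```python
from collections import deque

def solve_jugs(cap_a=3, cap_b=5, target=4):
    """BFS over (jug_a, jug_b) states; returns the shortest state sequence."""
    start = (0, 0)
    parent = {start: None}
    queue = deque([start])
    while queue:
        a, b = queue.popleft()
        if a == target or b == target:
            path, s = [], (a, b)         # reconstruct path via parent links
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        successors = [
            (cap_a, b), (a, cap_b),      # fill a jug
            (0, b), (a, 0),              # empty a jug
            # pour B -> A, then pour A -> B
            (min(cap_a, a + b), max(0, b - (cap_a - a))),
            (max(0, a - (cap_b - b)), min(cap_b, a + b)),
        ]
        for s in successors:
            if s not in parent:
                parent[s] = (a, b)
                queue.append(s)
    return None
```

The six moves listed in the notes are in fact a shortest solution, which the BFS confirms.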
Example: (3, 3, 0) means 3 missionaries, 3 cannibals, and the boat are on the left bank.
2. Initial State: (3, 3, 0)
3. Goal State: (0, 0, 1)
Rules and Constraints
1. The boat can hold one or two people.
2. The boat cannot cross the river without passengers.
3. At no time can cannibals outnumber missionaries on either side of the river unless there are no missionaries on
that side.
Solution Approach
State-Space Search
1. Generate States: Identify all valid transitions from the current state by moving 1 or 2 people in the boat.
2. Check Validity: Ensure the new state satisfies all constraints (e.g., missionaries are not outnumbered).
3. Search Strategy:
o Use Breadth-First Search (BFS) to find the shortest sequence of moves.
o Use Depth-First Search (DFS) for any valid sequence.
Example Solution
Steps:
1. Start at (3, 3, 0):
o Move 2 cannibals to the right: (3, 1, 1)
2. (3, 1, 1):
o Move 1 cannibal back: (3, 2, 0)
3. (3, 2, 0):
o Move 2 cannibals to the right: (3, 0, 1)
4. (3, 0, 1):
o Move 1 cannibal back: (3, 1, 0)
5. (3, 1, 0):
o Move 2 missionaries to the right: (1, 1, 1)
6. (1, 1, 1):
o Move 1 missionary and 1 cannibal back: (2, 2, 0)
7. (2, 2, 0):
o Move 2 missionaries to the right: (0, 2, 1)
8. (0, 2, 1):
o Move 1 cannibal back: (0, 3, 0)
9. (0, 3, 0):
o Move 2 cannibals to the right: (0, 1, 1)
10. (0, 1, 1):
o Move 1 cannibal back: (0, 2, 0)
11. (0, 2, 0):
o Move 2 cannibals to the right: (0, 0, 1)
Goal Achieved: All missionaries and cannibals are on the right bank.
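The generate/check/search recipe above can be sketched directly in Python. This is an illustrative BFS over (missionaries_left, cannibals_left, boat) states, with `valid` encoding the "not outnumbered" constraint:

```python
from collections import deque

def valid(m, c):
    """Missionaries are never outnumbered on either bank (3 of each in total)."""
    if not (0 <= m <= 3 and 0 <= c <= 3):
        return False
    for miss, cann in ((m, c), (3 - m, 3 - c)):
        if 0 < miss < cann:              # missionaries present and outnumbered
            return False
    return True

def solve():
    """BFS over states (missionaries_left, cannibals_left, boat); boat 0 = left bank."""
    start, goal = (3, 3, 0), (0, 0, 1)
    parent = {start: None}
    queue = deque([start])
    moves = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]   # who rides in the boat
    while queue:
        state = queue.popleft()
        if state == goal:
            path = []                    # reconstruct the crossing sequence
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        m, c, b = state
        sign = -1 if b == 0 else 1       # crossing removes/adds people on the left
        for dm, dc in moves:
            nxt = (m + sign * dm, c + sign * dc, 1 - b)
            if valid(nxt[0], nxt[1]) and nxt not in parent:
                parent[nxt] = state
                queue.append(nxt)
    return None
```

The BFS finds an 11-crossing solution (12 states), matching the step count of the sequence in the notes.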
Solution Approach
The solution involves planning a sequence of actions to achieve the goal.
1. Initial State:
o Monkey is at Location A.
o Box is at Location B.
o Bananas are hanging from the ceiling at Location C.
2. Plan:
1. Move the monkey to Location B (where the box is).
2. Push the box to Location C (under the bananas).
3. Climb onto the box.
4. Grab the bananas.
Example Solution
States:
S1: Monkey at A, Box at B, Bananas at C.
S2: Monkey at B, Box at B, Bananas at C (after moving to the box).
S3: Monkey at C, Box at C, Bananas at C (after pushing the box under the bananas).
S4: Monkey on the box, Bananas at C (after climbing the box).
S5: Monkey holding the bananas (goal achieved).
Applications of AI:
Artificial intelligence (AI) has many applications, including:
Healthcare: AI can help doctors diagnose diseases and develop new treatments. AI can analyze patient data to
identify patterns and relationships that can help doctors develop treatment plans.
Automotive: AI can improve safety and efficiency in the automotive industry. For example, AI systems can enable
self-driving cars to navigate, detect obstacles, and make driving decisions.
Retail: AI can help retailers personalize the shopping experience for customers. AI can analyze customer behavior,
preferences, and purchase history to offer tailored product suggestions.
Software development: AI can automate many processes in software development, DevOps, and IT. For example,
AI-powered monitoring tools can help flag potential anomalies in real time.
Virtual assistants: AI can be used in virtual assistants like Siri and Alexa. Google Assistant is an example of a virtual
assistant that uses natural language processing to support both voice and text commands.
Security systems: AI can be used in security systems for image and facial recognition.
Chatbots: AI can be used in chatbots for customer service.
Recommendation systems: AI can be used in recommendation systems on e-commerce platforms.
Fraud detection: AI can be used in financial institutions for fraud detection.
Unit – 2
(Searching)
Searching for solutions in Artificial Intelligence (AI) refers to exploring a set of possible actions, states, or paths to
identify a way to achieve a specified goal. AI systems solve problems by navigating through a state space (the set of all
possible states) using search algorithms.
1. Machine Learning:
This involves training algorithms on large datasets to recognize patterns and make predictions. Common techniques
include supervised learning, unsupervised learning, and reinforcement learning.
2. Natural Language Processing (NLP):
This area focuses on the interaction between computers and human language. Solutions include chatbots, language
translation, sentiment analysis, and text summarization.
3. Computer Vision:
AI solutions in this field allow machines to interpret and understand visual information from the world.
Applications include facial recognition, object detection, and image classification.
4. Robotics:
AI is used to enhance the capabilities of robots, enabling them to perform tasks autonomously. This includes
everything from industrial robots in manufacturing to drones for delivery services.
5. Recommendation Systems:
These systems analyze user behavior and preferences to suggest products, services, or content. They are widely
used in e-commerce and streaming platforms.
3. Depth-Limited Search:
- Description: This is a variant of DFS that imposes a limit on the depth of the search. If the limit is reached, the
search backtracks.
- Characteristics: It prevents infinite loops and can be more efficient than standard DFS.
- Use Case: Useful when you have some knowledge about the maximum depth of the solution.
2. A* Search Algorithm:
The A* algorithm is a pathfinding algorithm used in artificial intelligence (AI) to find the most efficient route
between two points. It's a graph traversal algorithm that uses a heuristic function to estimate the cost of reaching a
goal from a given node.
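A* ranks frontier nodes by f(n) = g(n) + h(n), the cost so far plus the heuristic estimate. A minimal sketch follows; the graph and the `estimate` table are invented illustration data, with h chosen to be admissible (never overestimating):

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search: f(n) = g(n) + h(n); graph maps node -> [(neighbour, edge_cost), ...]."""
    open_heap = [(h(start), 0, start, [start])]     # (f, g, node, path)
    best_g = {start: 0}
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap) # lowest f first
        if node == goal:
            return g, path
        for nbr, cost in graph.get(node, []):
            ng = g + cost
            if ng < best_g.get(nbr, float("inf")):  # found a cheaper route to nbr
                best_g[nbr] = ng
                heapq.heappush(open_heap, (ng + h(nbr), ng, nbr, path + [nbr]))
    return None

# Hypothetical weighted graph and heuristic estimates of distance to G
graph = {
    "S": [("A", 1), ("B", 4)],
    "A": [("B", 2), ("G", 12)],
    "B": [("G", 5)],
}
estimate = {"S": 7, "A": 6, "B": 4, "G": 0}
```

With an admissible heuristic, the first time the goal is popped from the heap its cost is optimal.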
3. Best-First Search:
- Description: This algorithm selects nodes based solely on the heuristic function h(n), aiming to expand the
node that appears to be closest to the goal.
- Characteristics: It can be faster than A* in some cases but does not guarantee an optimal solution.
- Use Case: Useful when a quick solution is needed and optimality is not a concern.
4. Hill Climbing:
- Description: This is a local search algorithm that continuously moves in the direction of increasing value (or
decreasing cost) based on the heuristic.
- Characteristics: It can get stuck in local maxima, plateaus, or ridges, which might prevent finding the global
optimum.
- Use Case: Useful in optimization problems where the search space is large.
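A minimal steepest-ascent sketch of the idea, using a made-up one-peak objective (on a multimodal objective the same loop would stop at whichever local maximum is nearest the start):

```python
def hill_climb(f, start, neighbours, max_steps=1000):
    """Greedy local search: move to the best neighbour while it improves f."""
    current = start
    for _ in range(max_steps):
        best = max(neighbours(current), key=f, default=current)
        if f(best) <= f(current):
            return current              # local maximum (possibly not global)
        current = best
    return current

# Hypothetical 1-D objective with a single peak at x = 3
def f(x):
    return -(x - 3) ** 2 + 9

def neighbours(x):
    return [x - 1, x + 1]               # step left or right by 1
```

Starting from either side of the peak, the loop climbs step by step and stops once no neighbour improves the score.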
Variables:
Variables in a CSP are the objects that must have values assigned to them in order to satisfy a particular set
of constraints. Boolean, integer, and categorical variables are just a few examples of the various types
of variables. Example: student, teacher, subject.
Domains:
A domain is the range of potential values that a variable can take. Depending on the
problem, a domain may be finite or infinite. Example: time slots.
Constraints
Constraints are the rules that govern how variables relate to one another. Constraints in a
CSP define the ranges of possible values for variables. Unary constraints, binary constraints, and higher-
order constraints are examples of constraints.
Example: the same time slot is not given twice to a teacher/student.
CSP Algorithms:
The most commonly used CSP algorithms:
1. Backtracking algorithm:
The backtracking algorithm is a basic algorithm used to solve constraint satisfaction problems (CSPs) in artificial
intelligence (AI). It assigns values to variables one at a time and backtracks as soon as an assignment violates a
constraint.
2. Forward-Checking Algorithm:
The Forward-Checking Algorithm is a technique used in constraint satisfaction problems (CSPs), which are problems
where you need to find values for variables that satisfy certain constraints. This algorithm is particularly useful in
scenarios like scheduling, map coloring, and puzzle solving.
Here's how the Forward-Checking Algorithm works:
1. Variable Assignment: When you assign a value to a variable, the algorithm checks the constraints that involve that
variable and the unassigned variables.
2. Domain Reduction: For each unassigned variable that is connected to the assigned variable through a constraint,
the algorithm removes values from its domain that are inconsistent with the assigned value.
3. Early Detection of Failure: If at any point an unassigned variable has no remaining values in its domain after the
domain reduction step, the algorithm detects that the current assignment cannot lead to a solution. It then
backtracks to try a different assignment for the previous variable.
4. Backtracking: The algorithm continues this process of assigning values, reducing domains, and backtracking until
either a solution is found or all possibilities are exhausted.
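The four steps above can be sketched for the map-coloring case. This is an illustrative implementation assuming a binary "neighbouring regions must differ" constraint; the three-region map and colour names are invented:

```python
def forward_check_solve(variables, domains, neighbours):
    """Backtracking with forward checking for a 'not-equal' CSP (map colouring)."""
    def backtrack(assignment, domains):
        if len(assignment) == len(variables):
            return assignment
        var = next(v for v in variables if v not in assignment)
        for value in domains[var]:                      # step 1: try an assignment
            # step 2: prune this value from unassigned neighbours' domains
            pruned = {n: [x for x in domains[n] if x != value]
                      for n in neighbours[var] if n not in assignment}
            if all(pruned[n] for n in pruned):          # step 3: no domain wiped out
                new_domains = {**domains, var: [value], **pruned}
                result = backtrack({**assignment, var: value}, new_domains)
                if result:
                    return result
        return None                                     # step 4: backtrack

    return backtrack({}, domains)

# Hypothetical 3-region map: every pair of regions is adjacent
variables = ["WA", "NT", "SA"]
domains = {v: ["red", "green", "blue"] for v in variables}
neighbours = {"WA": ["NT", "SA"], "NT": ["WA", "SA"], "SA": ["WA", "NT"]}
```

Here forward checking prunes each neighbour's domain immediately after every assignment, so dead ends are detected before the search descends into them.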
Problems in Game Playing:
Game playing in AI faces several challenges, including:
1. Complexity of Search Space: Many games have an enormous number of possible moves and game states, making
it difficult for the AI to evaluate all possibilities. For example, chess and Go.
2. Real-Time Decision Making: In many games, players must make decisions quickly. The AI needs to balance
between exploring the search space and making timely decisions, which can be challenging.
3. Uncertainty and Incomplete Information: In some games, players may not have complete information about the
opponent's strategy or moves, adding an extra layer of complexity to the decision-making process.
4. Opponent Modeling: Understanding and predicting the opponent's behavior is crucial. Different opponents may
have different strategies, and the AI needs to adapt its approach accordingly.
5. Dynamic Environments: In some games, the environment can change rapidly, which requires the AI to
continuously adapt its strategy in response to new information.
6. Heuristic Limitations: The effectiveness of heuristics (rules of thumb used to evaluate game states) can vary
greatly, and finding the right heuristics for a specific game can be challenging.
Unit – 3
(Knowledge Representation)
Definition of Knowledge:
Knowledge refers to the information, understanding, and insights that an AI system has acquired and uses to
perform tasks, make decisions, or solve problems. It is often structured in ways that allow the system to process,
infer, and act effectively within its domain.
There are a few key concepts related to knowledge in AI:
1. Knowledge Representation: This is how information is structured and stored within an AI system. It can include
various forms such as rules, ontologies, and semantic networks, which help the AI understand and reason about the
information.
2. Knowledge Base: This is a collection of knowledge that an AI system uses to make inferences and provide answers.
It can be built from structured data (like databases) or unstructured data (like text).
3. Inference: This refers to the process by which AI systems use their knowledge to draw conclusions or make
predictions based on the information they have.
4. Learning: AI systems can acquire new knowledge through learning processes, such as machine learning, where
they improve their performance on tasks by analyzing data and identifying patterns.
Types of Knowledge:
There are 5 types of knowledge:
1. Declarative Knowledge
2. Structured Knowledge
3. Procedural Knowledge
4. Meta Knowledge
5. Heuristic Knowledge
1. Declarative Knowledge:
Declarative knowledge, also known as descriptive knowledge, is the type of knowledge which tells the basic
knowledge about something, and it is more popular than procedural knowledge. It is often described as "knowing
that". This type of knowledge is explicit and can be easily communicated or articulated. It includes information
such as dates, definitions, and concepts.
2. Procedural Knowledge:
Procedural knowledge, also known as interpretive knowledge, is the type of knowledge which clarifies how a
particular thing can be accomplished. It can be directly applied to any task. It includes rules, strategies,
procedures, agendas, etc. Procedural knowledge depends on the task to which it is applied.
In short: Procedural Knowledge means knowing how a particular thing can be accomplished, while Declarative
Knowledge means basic knowledge about something.
Predicate logic:
Predicate logic in artificial intelligence (AI) is a formal system that uses variables and quantifiers to represent
complex statements and relationships. It's also called first-order logic.
Predicate:
In predicate logic, a predicate is an expression that describes a property of objects or a relationship between
objects. It can contain variables, and when values are substituted for the variables, it becomes a proposition.
For example, P(x) is a predicate that means "x > 5".
Quantifiers:
Quantifiers are words or phrases that indicate how many elements satisfy a predicate in predicate logic. They are
used to modify variables and formalize English words like "all", "some", "any", and "every".
For example,
∀x R(x): "For all x, x is a cat." (Universal quantification)
∃y L(John, y): "There exists a y such that John loves y." (Existential quantification)
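Over a finite domain, the two quantifiers reduce to exhaustive checks, which Python expresses with `all()` and `any()`. A small sketch with made-up predicates:

```python
# Hypothetical finite domain; quantifiers become all() / any() checks
domain = [1, 2, 3, 4, 5]

def P(x):
    return x > 0        # predicate "x is positive"

def Q(x):
    return x > 4        # predicate "x is greater than 4"

forall_P = all(P(x) for x in domain)    # ∀x P(x): true iff P holds for every x
exists_Q = any(Q(x) for x in domain)    # ∃x Q(x): true iff Q holds for at least one x
```

Note this only works because the domain is finite and enumerable; over infinite domains, quantified statements require symbolic reasoning rather than exhaustive checking.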
LISP in AI:
LISP (List Processing) is a programming language that is commonly used in artificial intelligence (AI). Every Lisp
procedure is a function, and when called, it returns a data object as its value. It allows developers to create complex
AI models and algorithms. Lisp is the second-oldest high-level programming language in the world, invented
by John McCarthy in 1958 at the Massachusetts Institute of Technology (MIT).
Syntax:
(write-line string)
Example:
; this is a comment
(write-line "Hello Geeks")
PROLOG:
PROLOG is a programming language that uses logic to solve problems. It plays an important role in artificial
intelligence. Unlike many other programming languages, in Prolog logic is expressed as relations (called facts and rules).
Facts and Rules: You write down facts (like "Cats are animals") and rules (like "If something is a cat, then it is an
animal"). PROLOG uses these to draw conclusions.
Advantages of PROLOG:
1. Its data structures are close to those of language and human thinking.
2. Its execution is based on the definition of predicates.
3. It supports pattern matching and backtracking.
4. Its rules are created using recursive thinking, which is comparable to an applicative language.
5. It is declarative, compact, rational, interpretive, and modular by definition.
6. It uses simple coding to store and operate on lists of data.
Unit – 4
(Natural Language Processing)
What is NLP:
NLP stands for Natural Language Processing, which is a part of computer science, human language, and artificial
intelligence. It is the technology that is used by machines to understand, analyse, manipulate, and interpret human
languages. It helps developers to organize knowledge for performing tasks such as translation, automatic
summarization, Named Entity Recognition (NER), speech recognition, relationship extraction, and topic
segmentation.
Advantages of NLP:
NLP helps users to ask questions about any subject and get a direct response within seconds.
It does not provide unnecessary and unwanted information.
NLP helps computers to communicate with humans in their languages.
Components of NLP:
There are the following components of NLP:
1. Natural Language Understanding (NLU):
Natural Language Understanding (NLU) helps the machine to understand and analyse human language by extracting
the metadata from content such as concepts, entities, keywords, emotion, relations, and semantic roles. The most
basic form of NLU is parsing, which takes text written in natural language and converts it into a structured format that
computers can understand.
2. Natural Language Generation (NLG):
Natural Language Generation (NLG) acts as a translator that converts computerized data into a natural language
representation. It mainly involves text planning, sentence planning, and text realization.
Steps of NLP:
The main steps involved in Natural Language Processing (NLP):
1. Text Input: The process starts with inputting the text data that needs to be analyzed. This could be anything from a
sentence to a large corpus (collection) of text.
2. Text Preprocessing: This step involves cleaning and preparing the text for analysis. Common preprocessing tasks
include:
- Tokenization: Breaking down text into smaller units, like words or sentences.
- Lowercasing: Converting all text to lowercase to ensure uniformity.
- Removing Stop Words: Eliminating common words (like "and," "the," etc.) that may not add significant meaning.
- Stemming/Lemmatization: Reducing words to their base or root form (e.g., "running" to "run").
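The preprocessing tasks above can be sketched with the standard library alone. The stop-word list is a tiny illustrative sample (real pipelines use curated lists), and the suffix stripping is a deliberately crude stand-in for real stemming:

```python
import re

# Hypothetical minimal stop-word list for illustration only
STOP_WORDS = {"and", "the", "is", "a", "to"}

def preprocess(text):
    text = text.lower()                                  # lowercasing
    tokens = re.findall(r"[a-z]+", text)                 # tokenization
    tokens = [t for t in tokens if t not in STOP_WORDS]  # stop-word removal
    # crude "-ing" suffix stripping as a stand-in for real stemming
    return [t[:-3] if t.endswith("ing") else t for t in tokens]
```

In practice a library stemmer or lemmatizer would replace the last step; the point here is only the order of the stages.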
3. Feature Extraction: This involves converting the text into a numerical format that can be processed by machine
learning algorithms. Techniques include:
- Bag of Words: Representing text as a set of words and their frequencies.
- TF-IDF (Term Frequency-Inverse Document Frequency): A statistical measure that evaluates the importance of a
word in a document relative to a collection of documents.
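The TF-IDF measure above can be written out directly. This sketch uses raw term frequency and the plain log inverse document frequency (there are several common variants); documents are represented as token lists, and the tiny corpus is invented:

```python
import math

def tf_idf(term, doc, corpus):
    """TF-IDF: term frequency in doc times log inverse document frequency."""
    tf = doc.count(term) / len(doc)                 # fraction of doc that is `term`
    df = sum(1 for d in corpus if term in d)        # documents containing `term`
    idf = math.log(len(corpus) / df)                # assumes term occurs somewhere
    return tf * idf

# Hypothetical two-document corpus of pre-tokenized text
corpus = [["the", "cat", "sat"], ["the", "dog"]]
```

A term appearing in every document (like "the" here) gets idf = log(1) = 0, so it scores zero regardless of how often it occurs, which is exactly the down-weighting of common words TF-IDF is meant to provide.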
4. Model Building: In this step, machine learning or deep learning models are trained using the processed text data.
This could involve supervised learning (using labeled data) or unsupervised learning (finding patterns in unlabeled
data).
5. Text Analysis: Depending on the goal, various NLP tasks can be performed, such as:
- Sentiment Analysis: Determining the sentiment (positive, negative, neutral) of the text.
- Named Entity Recognition (NER): Identifying and classifying key entities (like names, dates, locations) in the text.
- Text Classification: Categorizing text into predefined classes or topics.
6. Output Generation: Finally, the results of the analysis are generated. This could be a summary, classification labels,
or even a response in a conversational AI system.
7. Evaluation: The performance of the NLP model is evaluated using metrics like accuracy, precision, recall, and F1
score to ensure it meets the desired standards.
Discourse Knowledge:
In Natural Language Processing (NLP), "discourse knowledge" refers to the ability of a system to understand the
meaning of a sentence or phrase by considering the broader context of the conversation or text, including previous
sentences, implied information, and overall narrative flow, allowing for a deeper interpretation beyond just individual
words or syntax.
Key points about discourse knowledge:
Beyond sentence level:
Unlike basic semantic analysis, which focuses on individual sentences, discourse analysis looks at how sentences
connect and build upon each other within a larger piece of text.
Context-dependent meaning:
Discourse knowledge helps identify how the meaning of a word or phrase can change depending on the surrounding
context.
Important aspects:
Anaphora and Cataphora: Identifying pronouns or noun phrases that refer back to previously mentioned entities
in the text (anaphora) or forward to upcoming entities (cataphora).
Topic coherence: Understanding the main topic of a conversation or document and how different sentences relate
to it.
Implicature: Recognizing implied meaning based on social and cultural norms, even if not explicitly stated.
Pragmatic Knowledge:
In Natural Language Processing (NLP), "pragmatic knowledge" refers to the ability of a system to understand the
meaning of a sentence based on its context, including the speaker's intention, social situation, and surrounding
discourse, going beyond just the literal words used, essentially allowing the system to interpret the "implied"
meaning within a conversation or text.
Key points about pragmatic knowledge in NLP:
Context-dependent meaning:
Pragmatics focuses on how the meaning of a sentence can change depending on the situation and who is speaking,
which is crucial for accurate interpretation in real-world scenarios.
Implicatures:
A key aspect of pragmatics is understanding "conversational implicatures," where the speaker conveys a meaning
that is not explicitly stated but can be inferred based on context.
Speech acts:
Analyzing the intended action behind a statement, like making a request, giving a command, or asking a question, is
another important aspect of pragmatic understanding.
Examples of pragmatic knowledge in NLP:
"It's cold in here": literally a statement about temperature, but pragmatically often an indirect request to close a
window or turn up the heating.
"Can you pass the salt?": literally a question about ability, but pragmatically a polite request.
Challenges include complex context modeling and the integration of world knowledge.
How NLP systems can incorporate pragmatic knowledge:
Large language models (LLMs):
Training models on large amounts of text data can help them learn contextual nuances and improve pragmatic
understanding.
Dialogue state tracking:
Maintaining information about the conversation flow and current topic can aid in interpreting implied meanings.
Explicit knowledge bases:
Integrating external knowledge sources about the world and social conventions can enhance pragmatic capabilities.
What is Grammar:
A grammar is a set of rules that define a language as a set of permissible word strings. It serves as a
blueprint for constructing syntactically correct sentences or meaningful sequences in a formal language.
Representation of Grammar
Any grammar can be represented by a 4-tuple <N, T, P, S>:
N – Finite non-empty set of non-terminal symbols.
T – Finite set of terminal symbols.
P – Finite non-empty set of production rules.
S – Start symbol (the symbol from which we start producing our sentences or strings).
Production rule:
A production, or production rule, in computer science is a rewrite rule specifying a symbol substitution that can be
recursively performed to generate new symbol sequences. It is of the form α -> β, where α is a non-terminal
symbol which can be replaced by β, a string of terminal and/or non-terminal symbols.
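As a small worked example, consider the invented grammar with N = {S}, T = {a, b}, P = {S -> aSb, S -> ab}, and start symbol S, which generates strings of the form aⁿbⁿ. Repeated rule application can be sketched as string rewriting:

```python
# Productions for the illustrative grammar S -> aSb | ab
productions = {"S": ["aSb", "ab"]}

def derive(steps):
    """Apply S -> aSb 'steps' times, then finish the derivation with S -> ab."""
    sentence = "S"                               # start symbol
    for _ in range(steps):
        sentence = sentence.replace("S", "aSb", 1)   # rewrite leftmost S
    return sentence.replace("S", "ab", 1)            # terminate: S -> ab
```

Each call traces one derivation: for example, two applications of S -> aSb give S => aSb => aaSbb, and the final S -> ab yields aaabbb.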
Chomsky hierarchy of Grammar:
The Chomsky hierarchy is a framework in formal language theory that classifies languages (or grammars) into four
categories based on their generative power. It provides a systematic way to understand the relationship between
languages and computational models.
Transformational Grammar:
Transformational Grammar is a theory of grammar developed by Noam Chomsky in the 1950s as part of his
groundbreaking work in linguistics. It is a framework that seeks to explain how humans produce and understand
sentences by transforming abstract underlying structures into surface structures (the sentences we actually say or
write). It starts with a simple sentence structure, called the "deep structure," and rules are applied to change this
deep structure into different forms, creating "surface structures."
Examples: For instance, the deep structure "The cat chased the mouse" can be transformed into "Did the cat chase
the mouse?" by applying specific rules.
Deep Structure vs. Surface Structure:
Deep Structure:
Represents the underlying, abstract syntactic structure of a sentence. It captures the fundamental
grammatical relationships, such as who is doing what to whom.
Surface Structure:
Represents the actual form of the sentence as spoken or written. It is derived from the deep structure
through transformational rules.
Case Grammar:
1. Semantic Roles: It categorizes the roles that nouns play in relation to the verb in a sentence. These roles are
called "cases."
Semantic Grammar:
Semantic grammar is a linguistic theory that studies the connection between syntax and meaning in a language. In
artificial intelligence (AI), semantic grammar helps machines understand the meaning of words and structures in a
language. It is commonly used in natural language processing (NLP) applications to facilitate understanding.
Top-down parsing:
o Top-down parsing is also known as recursive parsing or predictive parsing.
o It starts from the start symbol and works down to the terminals. It uses leftmost derivation.
o In top-down parsing, the parsing starts from the start symbol and transforms it into the input string.
Top-down parsers are classified into 2 types:
1. Recursive descent parser.
2. Non-recursive descent parser (table-driven predictive parser).
1. Recursive descent parser:
A recursive descent parser is also known as the brute-force parser or the backtracking parser. It generates the
parse tree using brute-force and backtracking techniques.
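A minimal recursive descent parser for a toy expression grammar might look like this (the grammar and function names are illustrative):

```python
# Recursive descent recognizer for the toy grammar:
#   E -> T '+' E | T
#   T -> '(' E ')' | digit
def parse(tokens):
    pos = 0
    def E():
        nonlocal pos
        if not T():
            return False
        if pos < len(tokens) and tokens[pos] == "+":   # try E -> T '+' E
            pos += 1
            return E()
        return True                                    # otherwise E -> T
    def T():
        nonlocal pos
        if pos < len(tokens) and tokens[pos] == "(":   # T -> '(' E ')'
            pos += 1
            if E() and pos < len(tokens) and tokens[pos] == ")":
                pos += 1
                return True
            return False
        if pos < len(tokens) and tokens[pos].isdigit():  # T -> digit
            pos += 1
            return True
        return False
    return E() and pos == len(tokens)

print(parse(list("1+(2+3)")))   # -> True
print(parse(list("1+")))        # -> False
```

Each non-terminal of the grammar becomes one function, which is what makes the technique "recursive descent."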
Bottom-Up Parser:
A bottom-up parser generates the parse tree for the given input string with the help of grammar productions by
reducing the terminals toward the start symbol. It starts from the terminals and ends at the start symbol. It uses the
rightmost derivation in reverse order.
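The reduce-toward-the-start-symbol idea can be sketched as a tiny shift-reduce loop for a toy grammar (an illustration, not a production parser):

```python
# Shift-reduce (bottom-up) recognizer for the toy grammar:
#   E -> E '+' 'n' | 'n'
def shift_reduce(tokens):
    stack = []
    tokens = list(tokens)
    while tokens or stack != ["E"]:
        if stack[-3:] == ["E", "+", "n"]:
            stack[-3:] = ["E"]              # reduce by E -> E + n
        elif stack[-1:] == ["n"]:
            stack[-1:] = ["E"]              # reduce by E -> n
        elif tokens:
            stack.append(tokens.pop(0))     # shift the next input symbol
        else:
            return False                    # stuck: input is not a sentence
    return True

print(shift_reduce("n+n"))   # -> True
print(shift_reduce("n+"))    # -> False
```

The sequence of reductions, read in reverse, is exactly the rightmost derivation mentioned above.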
Transition Network:
Transition networks are graph-based structures used in artificial intelligence (AI) to break down complex tasks into
smaller steps. They are used in natural language processing (NLP) and compiler design.
Types of transition networks
Recursive transition networks (RTNs)
Recursive Transition Networks (RTNs) are a type of finite state machine used to describe the syntax of
languages. They can handle nested structures and are used to represent complex, recursive elements in
language.
Augmented transition networks (ATNs)
Augmented Transition Networks (ATNs) are a type of transition network used for parsing sentences
in natural language processing. They extend finite state machines with recursive procedures and registers,
capturing hierarchical structures in language and making them capable of representing complex syntactic
constructs. ATNs introduce augmented features that can store and manipulate extra information, as well as
permitting recursive transitions into these networks.
Components of RTNs
1. States: Represent points in the parsing process.
2. Transitions: Connect states and are labeled with terminal symbols, non-terminal symbols, or epsilon (ε)
transitions.
3. Recursive Calls: Allow transitions to invoke other RTNs, enabling the representation of recursive grammar
rules.
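The three RTN components above (states, transitions, recursive calls) can be sketched minimally (the networks and helper names are illustrative assumptions):

```python
# Each network maps state -> list of (label, next_state). A label that names
# another network triggers a recursive call; any other label must match a word.
NETWORKS = {
    "S":  {0: [("NP", 1)], 1: [("VP", 2)], 2: []},
    "NP": {0: [("the", 1)], 1: [("cat", 2), ("mouse", 2)], 2: []},
    "VP": {0: [("chased", 1)], 1: [("NP", 2)], 2: []},
}
FINAL = {"S": 2, "NP": 2, "VP": 2}   # accepting state of each network

def traverse(net, words, pos, state=0):
    """Return the input position after accepting, or None on failure."""
    if state == FINAL[net]:
        return pos
    for label, nxt in NETWORKS[net][state]:
        if label in NETWORKS:                  # non-terminal: recursive call
            sub = traverse(label, words, pos)
            if sub is not None:
                res = traverse(net, words, sub, nxt)
                if res is not None:
                    return res
        elif pos < len(words) and words[pos] == label:   # terminal match
            res = traverse(net, words, pos + 1, nxt)
            if res is not None:
                return res
    return None

words = "the cat chased the mouse".split()
print(traverse("S", words, 0) == len(words))   # -> True (sentence accepted)
```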
Applications of NLP:
1. Sentiment Analysis: This is used to determine the emotional tone behind a series of words. Businesses often use
sentiment analysis to understand customer opinions by analyzing reviews and social media mentions.
2. Chatbots and Virtual Assistants: NLP is the backbone of chatbots and virtual assistants like Siri or Google
Assistant. These systems can understand and respond to user queries in a conversational manner.
3. Machine Translation: This application translates text from one language to another. Services like Google
Translate utilize NLP to understand the context and nuances of the source language, ensuring that the translation is as
accurate as possible.
4. Text Summarization: NLP can condense large texts into shorter summaries while retaining the main ideas. This is
useful for quickly grasping the content of lengthy articles or documents without having to read everything in detail.
5. Speech Recognition: This technology converts spoken language into text. It’s used in voice-activated systems and
transcription services, allowing users to interact with devices through voice commands.
6. Information Retrieval: NLP improves search engines by allowing them to better understand user queries and
deliver more relevant results. It helps in interpreting the intent behind search terms, making the search process more
efficient.
7. Text Classification: This involves categorizing text into predefined categories. A common example is spam
detection in emails, where NLP algorithms learn to identify and filter out unwanted messages based on their content.
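In their most basic form, several of these applications reduce to keyword- or pattern-based decisions. A minimal keyword-based text classifier in the spirit of point 7 might look like this (the keyword list is a made-up illustration, not a real spam model):

```python
# Assumed toy vocabulary of spam-indicating words.
SPAM_WORDS = {"free", "winner", "prize", "urgent"}

def classify(message):
    """Label a message 'spam' if it contains at least two spam keywords."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return "spam" if len(words & SPAM_WORDS) >= 2 else "ham"

print(classify("URGENT! You are a winner, claim your prize"))  # -> spam
print(classify("Meeting moved to 3pm tomorrow"))               # -> ham
```

Real NLP classifiers learn such indicators from data rather than using a hand-written list.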
Eliza System:
The ELIZA system is one of the earliest examples of natural language processing in artificial intelligence. Developed
by Joseph Weizenbaum in the mid-1960s, ELIZA was designed to simulate a conversation with a human by using
pattern matching and substitution techniques.
2. Pattern Matching: ELIZA used simple pattern matching to identify keywords in the user's input and generate
appropriate responses.
3. Limited Understanding: While ELIZA could create the appearance of a meaningful conversation, it did not truly
understand the content of the dialogue. Its responses were based on surface-level patterns rather than
comprehension of the underlying meaning.
4. Influence on AI: ELIZA is significant in the history of AI because it demonstrated the potential for machines to
engage in human-like conversation, even with very limited capabilities. It sparked interest in the field of natural
language processing and set the stage for more advanced conversational agents.
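ELIZA's pattern matching and substitution can be sketched with a few regular-expression rules (the rules below are simplified illustrations, not Weizenbaum's original script):

```python
import re

# Each rule pairs a pattern with a response template; captured groups
# from the user's input are substituted into the template.
RULES = [
    (r"I need (.*)",      "Why do you need {0}?"),
    (r"I am (.*)",        "How long have you been {0}?"),
    (r"(.*) mother (.*)", "Tell me more about your family."),
]

def respond(sentence):
    for pattern, template in RULES:
        m = re.match(pattern, sentence, re.IGNORECASE)
        if m:
            return template.format(*m.groups())
    return "Please go on."   # default when no pattern matches

print(respond("I need a holiday"))     # -> Why do you need a holiday?
print(respond("The weather is nice"))  # -> Please go on.
```

As the notes say, nothing here understands meaning: the system only matches surface-level patterns and echoes the user's words back.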
Lunar System:
The LUNAR System was an early artificial intelligence (AI) system developed in the 1970s by William A. Woods
at Bolt, Beranek, and Newman (BBN). It was designed to allow users to interact with a database of geological
information about the Moon using natural language queries. The system's purpose was to demonstrate the potential
of natural language processing (NLP) and query-answering systems in AI.
Lunar system refers to the Moon and its relationship with Earth. Here are some simple points:
1. The Moon: It's the only natural satellite of Earth, about 3,474 km wide, and orbits Earth at around 384,400 km
away.
2. Phases: The Moon goes through different phases like new moon, full moon, and others, based on how much
sunlight it reflects.
3. Tides: The Moon's gravity affects the ocean, causing tides—this means the water level goes up and down
regularly.
4. Exploration: Humans have sent missions to the Moon, like the Apollo missions, to learn more about it.
5. Surface: The Moon has craters, flat dark areas called maria, and highlands that show its history.
Unit – 5
Introduction to Expert System
1. User Interface:
It is an interface that helps a non-expert user communicate with the expert system to find a solution. With
the help of the user interface, the expert system interacts with the user, takes queries as input in a readable format,
and passes them to the inference engine. After getting the response from the inference engine, it displays the output
to the user.
3. Knowledge base:
o The knowledge base is a type of storage that stores knowledge acquired from different experts of a particular
domain. It is considered big storage of knowledge.
o It is similar to a database that contains information and rules of a particular domain or subject.
Steps to Develop an Expert System:
1. Define the Problem Domain: Identify the specific area where the expert system will be applied. Understanding
the problem domain is crucial for gathering relevant knowledge.
2. Knowledge Acquisition: This step involves collecting information from human experts in the field. Techniques
for knowledge acquisition can include interviews, questionnaires, and observation. The goal is to gather enough data
to create a robust knowledge base.
3. Knowledge Representation: Once the knowledge is acquired, it needs to be represented in a format that the
expert system can use. Common methods for knowledge representation include:
- Frames: Data structures that hold knowledge in a way similar to objects in programming.
- Semantic Networks: Graph structures for representing knowledge in terms of entities and their relationships.
4. Designing the Inference Engine: The inference engine is the core component that applies logical rules to the
knowledge base. It can use forward chaining or backward chaining methods to derive conclusions based on the
provided facts.
5. User Interface Development: A user-friendly interface is essential for users to interact with the expert system.
This could include input forms, graphical displays, and output reports.
6. Testing and Validation: The system must be rigorously tested to ensure it provides accurate and reliable results.
This involves comparing the system’s output with that of human experts and making adjustments as necessary.
7. Implementation: Once validated, the expert system can be deployed for use. This may involve integrating it with
existing systems or making it available as a standalone application.
8. Maintenance and Updates: An expert system requires ongoing maintenance to ensure it remains accurate and
relevant. This includes updating the knowledge base as new information becomes available and refining the
inference engine as needed.
Examples of Expert Systems:
CaDet (Cancer Decision Support Tool) is used to identify cancer in its earliest stages.
DENDRAL helps chemists identify unknown organic molecules.
DXplain is a clinical support system that diagnoses various diseases.
MYCIN identifies bacteria such as bacteremia and meningitis, and recommends antibiotics and dosages.
Structure of Expert System:
The structure of an expert system in AI typically consists of several key components, each playing a crucial role in
its functionality. Here’s a breakdown of the main components:
1. Knowledge Base: This is the core of the expert system, containing all the domain-specific knowledge. It
includes facts and rules. The knowledge can be represented in various forms, such as:
- Production Rules: IF-THEN statements that describe the relationships between different concepts.
- Frames: Data structures that hold knowledge in a structured format, similar to objects in programming.
- Semantic Networks: Graph structures that represent knowledge in terms of entities and their relationships.
2. Inference Engine: This component applies logical rules to the knowledge base to derive new information or
make decisions. It processes the input data and uses reasoning techniques, such as:
- Forward Chaining: Starting with the available facts and applying rules to infer conclusions.
- Backward Chaining: Starting with a goal and working backward to see if the available facts support it.
3. User Interface: The user interface allows users to interact with the expert system. The interface can include
forms, menus, and graphical displays.
4. Explanation Facility: This component provides users with explanations of the reasoning process. It helps users
understand how the system arrived at a particular conclusion or recommendation.
5. Knowledge Acquisition Module: This is a subsystem that helps in updating and refining the knowledge base.
6. Knowledge Base Management System (KBMS): This component manages the knowledge base, allowing for
storage, retrieval, and updating of knowledge.
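The forward-chaining strategy described under the inference engine above (start from facts, fire rules until nothing new can be derived) can be sketched minimally (the facts and rules are invented examples):

```python
# IF-THEN production rules: (set of conditions, conclusion).
RULES = [
    ({"has_fever", "has_rash"}, "suspect_measles"),
    ({"suspect_measles"},       "recommend_test"),
]

def forward_chain(facts):
    """Repeatedly fire rules whose conditions hold until no new fact appears."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)    # fire the rule, derive a new fact
                changed = True
    return facts

print(forward_chain({"has_fever", "has_rash"}))
```

Backward chaining runs the same rules in the opposite direction: it starts from a goal and recursively checks whether its conditions can be established.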
Expert systems in AI offer several benefits that make them valuable tools in various fields. Here are some of the key
advantages:
1. Consistency: Expert systems provide consistent responses and decisions based on the same set of rules and
knowledge.
2. Availability: They can operate 24/7 without fatigue, allowing users to access expert knowledge anytime.
3. Scalability: Expert systems can be scaled to handle large amounts of data and complex problems. They can be
updated with new knowledge and rules without significant downtime.
4. Cost-Effective: By automating decision-making processes, they help organizations save time and resources
and streamline processes.
5. Training and Education: They can serve as educational tools, providing training to new employees or students
by simulating expert decision-making processes and offering explanations for their reasoning.
7. Complex Problem Solving: Expert systems can analyze complex data and provide solutions that may be difficult
for humans to derive, enhancing decision-making capabilities in intricate scenarios.
Expert System Shell:
An expert system shell in AI refers to the framework or software platform that provides the infrastructure for building
and running expert systems. It serves as a foundation for developing customized expert systems by providing tools
and functionalities to represent knowledge, make inferences, and deliver intelligent advice. It makes development
much easier, since the shell contains all the generic logic required to build an expert system.
Limitations of Expert Systems:
Limited creativity: Expert systems are unable to generate innovative solutions that go beyond their programmed rules.
Limited context: Expert systems are unable to understand context beyond their programmed knowledge.
Data dependence: The accuracy of an expert system's advice depends on the quality of the data in its knowledge base.
Maintenance: Menu-driven expert systems may not be able to handle ambiguous problems well.
Incorrect responses: If the knowledge base contains errors or incorrect information, the expert system may provide
incorrect responses.
MYCIN:
MYCIN was an early backward-chaining expert system that used artificial intelligence to identify bacteria causing
severe infections. Work began on MYCIN in 1972 at Stanford University in California. MYCIN worked with a
knowledge base of about 600 rules and a simple inference engine. It would ask the doctor running the program a
long list of simple yes/no or text-based questions.
Knowledge base
MYCIN's knowledge base was made up of production rules that represented the clinical decision criteria of infectious
disease experts.
Inference engine
MYCIN used a simple inference engine to ask doctors questions and rank possible bacteria.
Explanation system
MYCIN could answer questions in simple English to explain its reasoning and recommendations.
MYCIN could:
o Identify bacteria that cause severe infections like bacteremia and meningitis
o Diagnose blood-clotting diseases
o Recommend antibiotics and adjust the dosage for a patient's weight
o Explain the reasoning behind its diagnosis and recommendations
o Allow experts to teach the system new therapeutic decision rules
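MYCIN's backward-chaining style of reasoning (start from a goal and work back to supporting facts) can be sketched as a toy example (the rules below are invented, not MYCIN's actual medical knowledge base):

```python
# goal -> list of alternative condition sets that establish it.
RULES = {
    "bacteremia":      [{"gram_negative", "rod_shaped", "blood_culture_positive"}],
    "give_antibiotic": [{"bacteremia"}],
}

def prove(goal, facts):
    """Try to establish goal from known facts by chaining backward through rules."""
    if goal in facts:
        return True
    for conditions in RULES.get(goal, []):
        if all(prove(c, facts) for c in conditions):
            return True
    return False

facts = {"gram_negative", "rod_shaped", "blood_culture_positive"}
print(prove("give_antibiotic", facts))   # -> True
```

The real MYCIN attached certainty factors to each rule instead of treating conclusions as simply true or false.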
DENDRAL:
DENDRAL was an early expert system developed in the mid-1960s at Stanford University by Edward Feigenbaum,
Bruce G. Buchanan, Joshua Lederberg, and Carl Djerassi. It was designed to assist chemists in identifying the
molecular structure of organic compounds based on mass spectrometry and other analytical data.
The main goal of DENDRAL was to automate the process of molecular structure identification. It aimed to
generate hypotheses about unknown chemical compounds from given empirical data.
Working of DENDRAL:
1. Input: The system received mass spectrometry data of an unknown organic compound.
2. Generation of Hypotheses: It generated possible molecular structures using predefined chemical rules.
3. Evaluation: Applied heuristic rules to analyze and eliminate incorrect structures.
4. Output: Suggested the most probable molecular structure of the compound.
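The four steps above follow a generate-and-test pattern, which can be sketched with a toy molecular-formula search (atomic masses are rounded and the "heuristic" is a much-simplified valence check, not DENDRAL's actual chemistry):

```python
from itertools import product

MASS = {"C": 12, "H": 1, "O": 16}   # rounded atomic masses

def candidates(target_mass, max_atoms=6):
    """Generate small C/H/O formulas matching an observed mass,
    then discard chemically implausible ones."""
    for c, h, o in product(range(max_atoms + 1), repeat=3):
        # generate hypotheses: formulas with the observed total mass
        if 12 * c + h + 16 * o == target_mass:
            # evaluate: H count cannot exceed 2C + 2 (toy valence heuristic)
            if h <= 2 * c + 2:
                yield f"C{c}H{h}O{o}"

print(list(candidates(46)))   # e.g. ethanol C2H6O1 has mass 46
```

DENDRAL worked the same way at much larger scale: chemical rules generated candidate structures, and heuristics pruned those inconsistent with the spectrum.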