Unit 1 AIML

1. INTRODUCTION
What is AI
Artificial intelligence, or AI, is technology that enables computers and machines to
simulate human intelligence and problem-solving capabilities.

Artificial intelligence (AI) is the theory and development of computer systems


capable of performing tasks that historically required human intelligence, such as
recognizing speech, making decisions, and identifying patterns. AI is an umbrella
term that encompasses a wide variety of technologies, including machine
learning, deep learning, and natural language processing (NLP).

Artificial Intelligence

❑ Modeling human cognition or mental faculties using computers


❑ Study of making computers do things which, at the moment, people are better at
❑ Making computers do things which require intelligence

More Formal Definition of AI

◼ AI is a branch of computer science which is concerned with the study and


creation of computer systems that exhibit
❑ some form of intelligence
OR
❑ those characteristics which we associate with intelligence in human
behavior
❑ AI is a broad area consisting of different fields, from machine vision and expert systems to the creation of machines that can "think".
❑ In order to classify machines as "thinking", it is necessary to define
intelligence.
Artificial intelligence examples
At the simplest level, machine learning uses algorithms trained on data sets to
create machine learning models that allow computer systems to perform tasks like
making song recommendations, identifying the fastest way to travel to a
destination, or translating text from one language to another. Some of the most
common examples of AI in use today include:

 ChatGPT: Uses large language models (LLMs) to generate text in response to
questions or comments posed to it.
 Google Translate: Uses deep learning algorithms to translate text from one
language to another.
 Netflix: Uses machine learning algorithms to create personalized recommendation
engines for users based on their previous viewing history.
 Tesla: Uses computer vision to power self-driving features on its cars.

Foundations of AI
◼ Foundation of AI is based on
❑ Mathematics
❑ Neuroscience
❑ Control Theory
❑ Linguistics

1. Foundations – Mathematics

◼ More formal logical methods


❑ Boolean logic
❑ Fuzzy logic
◼ Uncertainty
❑ Uncertainty can be handled by:
✓ Probability theory − the basis for most modern approaches to handling uncertainty in AI applications
✓ Modal and Temporal logics

2. Foundations – Neuroscience

◼ How does the brain work?


❑ Early studies (1824) relied on people with brain injuries and abnormalities to understand which parts of the brain do what
❑ More recent studies use accurate sensors to correlate brain activity to human thought
◼ By monitoring individual neurons, monkeys can now control a computer mouse using thought alone
❑ Moore’s law predicted that computers would have as many gates as humans have neurons by 2020
❑ How close are we to having a mechanical brain?
◼ Parallel computation, remapping, interconnections, …

3. Foundations – Control Theory

❑ Machines can modify their behavior in response to the environment (sense/action loop)
◼ Water-flow regulator, steam engine governor, thermostat
❑ The theory of stable feedback systems (1894)
◼ Build systems that transition from initial state to goal state with minimum energy
◼ In 1950, control theory could only describe linear systems; AI largely arose as a response to this shortcoming

4. Foundations – Linguistics

◼ Speech demonstrates so much of human intelligence


❑ Analysis of human language reveals thought taking place in ways not understood in other settings
◼ Children can create sentences they have never heard before
◼ Language and thought are believed to be tightly intertwined

5. Two Views of AI Goals

◼ AI is about duplicating what the (human) brain DOES


❑ Cognitive Science
◼ AI is about duplicating what the (human) brain SHOULD do
❑ Rationality (doing things logically)

6. Cool Stuff in AI

◼ Game playing agents


◼ Machine learning
◼ Speech
◼ Language
◼ Vision
◼ Data Mining
◼ Web agents

7. Useful Stuff

◼ Medical Diagnosis
◼ Fraud Detection
◼ Object Identification
◼ Space Shuttle Scheduling
◼ Information Retrieval

History of Artificial Intelligence


Artificial Intelligence is not a new term or a new technology for researchers. The idea is much older than you might imagine; there are even myths of mechanical men in ancient Greek and Egyptian mythology. The following are some milestones in the history of AI, tracing the journey from its origins to its development to date.
Maturation of Artificial Intelligence (1943-1952)
Between 1943 and 1952, there was notable progress in the expansion of artificial
intelligence (AI). Throughout this period, AI transitioned from a mere concept to
tangible experiments and practical applications. Here are some key events that
happened during this period:

o Year 1943: The first work which is now recognized as AI was done by
Warren McCulloch and Walter Pitts in 1943. They proposed a model
of artificial neurons.
o Year 1949: Donald Hebb demonstrated an updating rule for modifying the
connection strength between neurons. His rule is now called Hebbian
learning.
o Year 1950: Alan Turing, an English mathematician and a pioneer of machine learning, published "Computing Machinery and Intelligence", in which he proposed a test that checks a machine's ability to exhibit intelligent behavior equivalent to human intelligence, now called the Turing test.
o Year 1951: Marvin Minsky and Dean Edmonds created the initial artificial
neural network (ANN) named SNARC. They utilized 3,000 vacuum tubes to
mimic a network of 40 neurons.

The birth of Artificial Intelligence (1952-1956)


From 1952 to 1956, AI surfaced as a unique domain of investigation. During this
period, pioneers and forward-thinkers commenced the groundwork for what would
ultimately transform into a revolutionary technological domain. Here are notable
occurrences from this era:

o Year 1952: Arthur Samuel pioneered the creation of the Samuel Checkers-
Playing Program, which marked the world's first self-learning program for
playing games.
o Year 1955: Allen Newell and Herbert A. Simon created the "first artificial intelligence program", named the "Logic Theorist". The program proved 38 of 52 mathematics theorems and found new, more elegant proofs for some of them.
o Year 1956: The term "Artificial Intelligence" was first adopted by American computer scientist John McCarthy at the Dartmouth Conference, where AI was coined as an academic field for the first time.

Around that time, high-level computer languages such as FORTRAN, LISP, and COBOL were invented, and enthusiasm for AI was very high.

The golden years-Early enthusiasm (1956-1974)


The period from 1956 to 1974 is commonly known as the "Golden Age" of
artificial intelligence (AI). In this timeframe, AI researchers and innovators were
filled with enthusiasm and achieved remarkable advancements in the field. Here
are some notable events from this era:

o Year 1958: During this period, Frank Rosenblatt introduced the perceptron,
one of the early artificial neural networks with the ability to learn from data.
This invention laid the foundation for modern neural networks.
Simultaneously, John McCarthy developed the Lisp programming language,
which swiftly found favor within the AI community, becoming highly
popular among developers.
o Year 1959: Arthur Samuel is credited with introducing the phrase "machine
learning" in a pivotal paper in which he proposed that computers could be
programmed to surpass their creators in performance. Additionally, Oliver
Selfridge made a notable contribution to machine learning with his
publication "Pandemonium: A Paradigm for Learning." This work outlined a
model capable of self-improvement, enabling it to discover patterns in
events more effectively.
o Year 1964: During his time as a doctoral candidate at MIT, Daniel Bobrow
created STUDENT, one of the early programs for natural language
processing (NLP), with the specific purpose of solving algebra word
problems.
o Year 1965: The initial expert system, Dendral, was devised by Edward
Feigenbaum, Bruce G. Buchanan, Joshua Lederberg, and Carl Djerassi. It
aided organic chemists in identifying unfamiliar organic compounds.
o Year 1966: Researchers emphasized developing algorithms that could solve mathematical problems. Joseph Weizenbaum created the first chatbot
in 1966, which was named ELIZA. Furthermore, Stanford Research Institute
created Shakey, the earliest mobile intelligent robot incorporating AI,
computer vision, navigation, and NLP. It can be considered a precursor to
today's self-driving cars and drones.
o Year 1968: Terry Winograd developed SHRDLU, which was the pioneering
multimodal AI capable of following user instructions to manipulate and
reason within a world of blocks.
o Year 1969: Arthur Bryson and Yu-Chi Ho outlined a learning algorithm
known as backpropagation, which enabled the development of multilayer
artificial neural networks. This represented a significant advancement
beyond the perceptron and laid the groundwork for deep learning.
Additionally, Marvin Minsky and Seymour Papert authored the book
"Perceptrons," which elucidated the constraints of basic neural networks.
This publication led to a decline in neural network research and a resurgence
in symbolic AI research.
o Year 1972: The first intelligent humanoid robot was built in Japan, which
was named WABOT-1.
o Year 1973: James Lighthill published the report titled "Artificial
Intelligence: A General Survey," resulting in a substantial reduction in the
British government's backing for AI research.

The first AI winter (1974-1980)


The first AI winter, occurring from 1974 to 1980, was a tough period for artificial intelligence (AI). During this time, there was a substantial decrease in research funding, and AI faced a sense of letdown.

o The period between 1974 and 1980 was the first AI winter. "AI winter" refers to a period in which computer scientists dealt with a severe shortage of government funding for AI research.
o During AI winters, public interest in artificial intelligence declined.

A boom of AI (1980-1987)
Between 1980 and 1987, AI underwent a renaissance and newfound vitality after
the challenging era of the First AI Winter. Here are notable occurrences from this
timeframe:

o In 1980, the first national conference of the American Association for Artificial Intelligence (AAAI) was held at Stanford University.
o Year 1980: After the AI winter, AI came back with the "Expert System". Expert systems were programmed to emulate the decision-making
ability of a human expert. Additionally, Symbolics Lisp machines were
brought into commercial use, marking the onset of an AI resurgence.
However, in subsequent years, the Lisp machine market experienced a
significant downturn.
o Year 1981: Danny Hillis created parallel computers tailored for AI and
various computational functions, featuring an architecture akin to
contemporary GPUs.
o Year 1984: Marvin Minsky and Roger Schank introduced the phrase "AI
winter" during a gathering of the Association for the Advancement of
Artificial Intelligence. They cautioned the business world that exaggerated
expectations about AI would result in disillusionment and the eventual
downfall of the industry, which indeed occurred three years later.
o Year 1985: Judea Pearl introduced Bayesian network causal analysis,
presenting statistical methods for encoding uncertainty in computer systems.

The second AI winter (1987-1993)


o The period between 1987 and 1993 was the second AI winter.
o Investors and governments again stopped funding AI research due to high costs and disappointing results; even expert systems such as XCON proved very expensive to build and maintain.

The emergence of intelligent agents (1993-2011)


Between 1993 and 2011, there were significant leaps forward in artificial
intelligence (AI), particularly in the development of intelligent computer programs.
During this era, AI professionals shifted their emphasis from attempting to match
human intelligence to crafting pragmatic, ingenious software tailored to specific
tasks. Here are some noteworthy occurrences from this timeframe:

o Year 1997: In 1997, IBM's Deep Blue achieved a historic milestone by


defeating world chess champion Garry Kasparov, marking the first time a
computer triumphed over a reigning world chess champion. Moreover, Sepp
Hochreiter and Jürgen Schmidhuber introduced the Long Short-Term
Memory recurrent neural network, revolutionizing the capability to process
entire sequences of data such as speech or video.
o Year 2002: For the first time, AI entered the home in the form of Roomba, a robotic vacuum cleaner.
o Year 2006: By 2006, AI had entered the business world. Companies like Facebook, Twitter, and Netflix started using AI.
o Year 2009: Rajat Raina, Anand Madhavan, and Andrew Ng released the
paper titled "Utilizing Graphics Processors for Extensive Deep Unsupervised
Learning," introducing the concept of employing GPUs for the training of
expansive neural networks.
o Year 2011: Jürgen Schmidhuber, Dan Claudiu Cireșan, Ueli Meier, and
Jonathan Masci created the initial CNN that attained "superhuman"
performance by emerging as the victor in the German Traffic Sign
Recognition competition. Furthermore, Apple launched Siri, a voice-
activated personal assistant capable of generating responses and executing
actions in response to voice commands.
Deep learning, big data and artificial general intelligence
(2011-present)
From 2011 to the present moment, significant advancements have unfolded within
the artificial intelligence (AI) domain. These achievements can be attributed to the
amalgamation of deep learning, extensive data application, and the ongoing quest
for artificial general intelligence (AGI). Here are notable occurrences from this
timeframe:

o Year 2011: IBM's Watson won Jeopardy!, a quiz show in which it had to solve complex questions as well as riddles. Watson proved that it could understand natural language and solve tricky questions quickly.
o Year 2012: Google launched an Android app feature, "Google Now", which
was able to provide information to the user as a prediction. Further, Geoffrey
Hinton, Ilya Sutskever, and Alex Krizhevsky presented a deep CNN
structure that emerged victorious in the ImageNet challenge, sparking the
proliferation of research and application in the field of deep learning.
o Year 2013: China's Tianhe-2 system achieved a remarkable feat by doubling
the speed of the world's leading supercomputers to reach 33.86 petaflops. It
retained its status as the world's fastest system for the third consecutive time.
Furthermore, DeepMind unveiled deep reinforcement learning, a CNN that
acquired skills through repetitive learning and rewards, ultimately surpassing
human experts in playing games. Also, Google researcher Tomas Mikolov
and his team introduced Word2vec, a tool designed to automatically discern
the semantic connections among words.
o Year 2014: In 2014, the chatbot "Eugene Goostman" won a competition based on the famous "Turing test." Meanwhile, Ian Goodfellow and his
team pioneered generative adversarial networks (GANs), a type of machine
learning framework employed for producing images, altering pictures, and
crafting deepfakes, and Diederik Kingma and Max Welling introduced
variational autoencoders (VAEs) for generating images, videos, and text.
Also, Facebook engineered the DeepFace deep learning facial recognition
system, capable of identifying human faces in digital images with accuracy
nearly comparable to human capabilities.
o Year 2016: DeepMind's AlphaGo secured victory over the esteemed Go player Lee Sedol in Seoul, South Korea, recalling the Kasparov chess match against Deep Blue nearly two decades earlier. Meanwhile, Uber initiated a pilot program for self-driving cars in Pittsburgh, catering to a limited group of users.
o Year 2018: IBM's "Project Debater" debated complex topics with two master debaters and performed extremely well.
o In the same year, Google demonstrated an AI program, "Duplex," a virtual assistant that booked a hairdresser appointment over the phone without the person on the other end noticing that she was talking to a machine.
o Year 2021: OpenAI unveiled the DALL-E multimodal AI system, capable of producing images based on textual prompts.
o Year 2022: In November, OpenAI launched ChatGPT, offering a chat-oriented interface to its GPT-3.5 LLM.

AI has now developed to a remarkable level. Concepts such as deep learning, big data, and data science are booming. Companies like Google, Facebook, IBM, and Amazon are working with AI and creating impressive devices. The future of Artificial Intelligence is inspiring and promises ever higher intelligence.

Philosophy of artificial intelligence


The philosophy of artificial intelligence is a branch of the philosophy of mind and the philosophy of computer science that explores artificial intelligence and its implications for knowledge and understanding of intelligence, ethics, consciousness, epistemology, and free will.

Furthermore, the technology is concerned with the creation of artificial animals or


artificial people (or, at least, artificial creatures; see artificial life) so the discipline
is of considerable interest to philosophers.

These factors contributed to the emergence of the philosophy of artificial


intelligence.

The philosophy of artificial intelligence attempts to answer questions such as the following:
 Can a machine act intelligently? Can it solve any problem that a person would
solve by thinking?
 Are human intelligence and machine intelligence the same? Is the human
brain essentially a computer?
 Can a machine have a mind, mental states, and consciousness in the same sense that a human being can? Can it feel how things are? (i.e., does it have qualia?)
Questions like these reflect the divergent interests of AI researchers, cognitive
scientists and philosophers respectively. The scientific answers to these questions
depend on the definition of "intelligence" and "consciousness" and exactly which
"machines" are under discussion.

AI Future
The future of Artificial Intelligence (AI) holds tremendous promise and raises
significant challenges across various domains. Here are some key aspects of AI's
future:

1. Advancements in Machine Learning: AI algorithms, particularly in the


realms of deep learning and reinforcement learning, are expected to become
more sophisticated. This will likely lead to AI systems that can learn faster,
generalize better across different tasks, and require less labeled data.
2. AI in Healthcare: AI is poised to revolutionize healthcare through
applications such as personalized medicine, disease prediction, and medical
imaging analysis. AI-driven diagnostics and treatment recommendations
could enhance patient outcomes and reduce healthcare costs.
3. Autonomous Vehicles: The development of self-driving cars and other
autonomous vehicles continues to progress. AI is crucial for enhancing
safety, efficiency, and navigation capabilities, although regulatory and
ethical considerations remain significant hurdles.
4. Natural Language Processing: AI's ability to understand and generate
human language is improving rapidly. Future advancements could lead to
more natural interactions with virtual assistants, better language translation,
and enhanced sentiment analysis.
5. AI in Industry and Automation: AI-powered automation is expected to
transform industries such as manufacturing, logistics, and agriculture.
Autonomous robots and AI-driven systems could optimize production
processes, improve quality control, and reduce operational costs.
6. Ethical and Regulatory Challenges: As AI becomes more pervasive,
concerns about ethical implications, including bias, privacy, accountability,
and job displacement, will require careful consideration and robust
regulatory frameworks.
7. AI and Creativity: AI is already demonstrating capabilities in creative
fields such as art, music, and literature. The future may see AI systems
collaborating with humans in creative endeavors or even producing original
works independently.
8. AI Governance and Policy: International collaboration will be crucial in
establishing standards and guidelines for the responsible development and
deployment of AI technologies. Issues like intellectual property rights, cyber
security, and geopolitical implications will need to be addressed.
9. AI in Education and Training: AI-powered personalized learning
platforms could revolutionize education by adapting teaching methods to
individual student needs and providing advanced tutoring systems.
10. Superintelligent AI: Speculation about the development of artificial general intelligence (AGI) and superintelligent AI raises existential concerns and requires careful consideration of safety measures and ethical guidelines.

In summary, the future of AI holds immense potential to transform industries,


improve quality of life, and tackle complex global challenges. However, realizing
this potential will require addressing technical, ethical, and regulatory challenges
to ensure AI development is aligned with human values and societal needs.

Stages of AI

The stages of AI can be broadly categorized into three levels: Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), and Artificial Superintelligence (ASI). Here's a breakdown of each stage with examples:

1. Artificial Narrow Intelligence (ANI):


o Definition: ANI refers to AI systems that are designed and trained for
specific narrow tasks or domains. These systems excel at performing
specific tasks but lack general cognitive abilities.
o Examples:
 Image Recognition: AI systems like Google's image search or facial recognition software on smartphones.
 Natural Language Processing (NLP): Chatbots like Apple's Siri, Amazon's Alexa, or Google Assistant that can understand and respond to voice commands for specific tasks.
 Gaming: AI programs that play games like chess (e.g., IBM's Deep Blue) or Go (e.g., Google's AlphaGo).
2. Artificial General Intelligence (AGI):
o Definition: AGI refers to AI systems that have human-like cognitive
abilities and can understand, learn, and apply knowledge across a
wide range of tasks. AGI can generalize from one domain to another
and exhibit common-sense reasoning.
o Examples:
 AGI remains theoretical and has not been achieved yet. It
would represent a significant leap beyond current ANI systems,
potentially capable of performing tasks that require human-
level intelligence across various domains.
3. Artificial Superintelligence (ASI):
o Definition: ASI goes beyond human intelligence in all aspects,
including creativity, problem-solving, and social skills. It represents a
level of intelligence that far surpasses the best human minds in every
field.
o Examples:
 ASI is also theoretical at present and is the subject of
speculation and debate. Examples in fiction include sentient AI
entities depicted in movies or literature that exceed human
capabilities in every conceivable way.

It's important to note that while ANI is currently prevalent in various applications
today, achieving AGI and potentially ASI remains a subject of ongoing research
and speculation. The development of AGI and ASI raises profound ethical,
societal, and existential questions, prompting discussions about the implications
and risks associated with creating machines that surpass human intelligence.

Intelligent Agent
An AI system is composed of an agent and its environment. The agents act in their
environment. The environment may contain other agents.

What are Agent and Environment?


An agent is anything that can perceive its environment through sensors and acts
upon that environment through effectors.
 A human agent has sensory organs such as eyes, ears, nose, tongue, and skin analogous to sensors, and other organs such as hands, legs, and mouth as effectors.
 A robotic agent has cameras and infrared range finders as sensors, and various motors and actuators as effectors.
 A software agent has encoded bit strings as its programs and actions.

Agent Terminology

 Performance Measure of Agent − The criterion that determines how successful an agent is.
 Behavior of Agent − The action that the agent performs after any given sequence of percepts.
 Percept − The agent’s perceptual inputs at a given instant.
 Percept Sequence − The history of all that the agent has perceived to date.
 Agent Function − A map from the percept sequence to an action, as sketched below.
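A minimal sketch in Python of one way an agent function could be represented: a lookup table that maps the percept sequence seen so far to an action. The two-square vacuum world, its percepts, and the table entries are hypothetical, chosen only to illustrate the mapping.

# Hypothetical table-driven agent function for a two-square vacuum world.
# Keys are percept sequences (tuples of percepts); values are actions.
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}

percepts = []  # percept sequence: history of everything perceived so far

def table_driven_agent(percept):
    # Record the new percept, then look up an action for the whole sequence.
    percepts.append(percept)
    return table.get(tuple(percepts), "NoOp")

print(table_driven_agent(("A", "Dirty")))  # -> Suck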

Rationality
Rationality is the state of being reasonable, sensible, and having good judgment.
Rationality is concerned with expected actions and results depending upon what
the agent has perceived. Performing actions with the aim of obtaining useful
information is an important part of rationality.

What is Ideal Rational Agent?

An ideal rational agent is one that is capable of performing the expected actions to maximize its performance measure, on the basis of −

 Its percept sequence


 Its built-in knowledge base

Rationality of an agent depends on the following −

 The performance measure, which determines the degree of success.
 The agent’s percept sequence to date.
 The agent’s prior knowledge about the environment.
 The actions that the agent can carry out.

A rational agent always performs the right action, where the right action is the one that makes the agent most successful for the given percept sequence. The problem an agent solves is characterized by Performance Measure, Environment, Actuators, and Sensors (PEAS), as illustrated below.
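For illustration, a PEAS description can be written down as a simple structure. The following Python sketch lists plausible PEAS entries for a hypothetical automated taxi; the entries are illustrative, not exhaustive.

# Hypothetical PEAS description for an automated taxi.
peas_taxi = {
    "Performance": ["safe", "fast", "legal", "comfortable trip"],
    "Environment": ["roads", "other traffic", "pedestrians", "customers"],
    "Actuators":   ["steering", "accelerator", "brake", "signal", "horn"],
    "Sensors":     ["cameras", "sonar", "speedometer", "GPS", "odometer"],
}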

The Structure of Intelligent Agents

Agent’s structure can be viewed as −

 Agent = Architecture + Agent Program


 Architecture = the machinery that an agent executes on.
 Agent Program = an implementation of an agent function.

1)Simple Reflex Agents

 They choose actions based only on the current percept.
 They are rational only if a correct decision can be made on the basis of the current percept alone.
 Their environment is completely observable.
Condition-Action Rule − A rule that maps a state (condition) to an action, as in the sketch below.
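The following Python sketch shows a simple reflex agent program built from condition-action rules; the two-square vacuum world, its percepts, and its actions are assumed purely for illustration.

def simple_reflex_vacuum_agent(percept):
    # Decide using only the current percept (no percept history).
    location, status = percept
    # Condition-action rules:
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:  # location == "B"
        return "Left"

print(simple_reflex_vacuum_agent(("A", "Dirty")))  # -> Suck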

2)Model Based Reflex Agents

They use a model of the world to choose their actions. They maintain an internal
state.

Model − Knowledge about “how things happen in the world”.


Internal State − A representation of the unobserved aspects of the current state, based on the percept history (see the sketch below).
Updating the state requires the information about −
 How the world evolves.
 How the agent’s actions affect the world.
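A minimal Python sketch of a model-based reflex agent, assuming the same hypothetical two-square vacuum world as above: the internal state records what has been observed so far, and a trivial "model" fills in the square the agent cannot currently see.

class ModelBasedReflexVacuumAgent:
    def __init__(self):
        self.state = {}  # internal state: last known status of each square

    def act(self, percept):
        location, status = percept
        self.state[location] = status  # update internal state from the percept
        if status == "Dirty":
            return "Suck"
        other = "B" if location == "A" else "A"
        # Consult the internal state for the square we cannot observe now.
        if self.state.get(other) != "Clean":
            return "Right" if location == "A" else "Left"
        return "NoOp"

agent = ModelBasedReflexVacuumAgent()
print(agent.act(("A", "Clean")))  # -> Right (B's status is still unknown)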
3)Goal Based Agents

They choose their actions in order to achieve goals (see the sketch below). The goal-based approach is more flexible than the reflex approach, since the knowledge supporting a decision is explicitly modeled and can therefore be modified.

Goal − It is the description of desirable situations.
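As a sketch of the goal-based idea in Python, the agent below searches for a sequence of states that reaches a goal before acting; the one-dimensional world and its step actions are hypothetical.

from collections import deque

def plan_to_goal(start, goal, neighbors):
    # Breadth-first search for a path of states from start to goal.
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path  # a plan that achieves the goal
        for nxt in neighbors(path[-1]):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # no plan found

# Hypothetical 1-D world where each action moves one step left or right.
print(plan_to_goal(0, 3, lambda s: [s - 1, s + 1]))  # -> [0, 1, 2, 3]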


4)Utility Based Agents

They choose actions based on a preference (utility) for each state.

Goals are inadequate when −

 There are conflicting goals, of which only a few can be achieved.
 Goals have some uncertainty of being achieved, and you need to weigh the likelihood of success against the importance of a goal.
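A minimal Python sketch of a utility-based choice: each action's predicted outcome state is scored by a utility function, and the agent picks the action with the highest utility. The outcomes and utility values are invented for illustration.

# Hypothetical predicted outcome of each action, and utilities per outcome.
predicted_outcome = {"walk": "arrive tired", "bus": "arrive on time", "taxi": "arrive broke"}
utility = {"arrive tired": 2, "arrive on time": 8, "arrive broke": 5}

def utility_based_choice(actions):
    # Pick the action whose predicted outcome has the highest utility.
    return max(actions, key=lambda a: utility[predicted_outcome[a]])

print(utility_based_choice(["walk", "bus", "taxi"]))  # -> bus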

The Nature of Environments


Some programs operate in an entirely artificial environment confined to keyboard input, databases, computer file systems, and character output on a screen. In contrast, some software agents (software robots or softbots) exist in rich, unlimited softbot domains. The simulator has a very detailed, complex environment, and the software agent needs to choose from a long array of actions in real time. A softbot designed to scan a customer's online preferences and show the customer interesting items works in a real as well as an artificial environment.
The most famous artificial environment is the Turing Test environment, in which one real agent and one artificial agent are tested on equal ground. This is a very challenging environment, as it is highly difficult for a software agent to perform as well as a human.

Turing Test

The success of a system's intelligent behavior can be measured with the Turing Test.

Two persons and the machine to be evaluated participate in the test. One of the two persons plays the role of the tester. Each of them sits in a different room. The tester is unaware of who is the machine and who is the human. The tester types questions and sends them to both, receiving typed responses in return.

The test aims at fooling the tester. If the tester fails to distinguish the machine’s responses from the human’s, the machine is said to be intelligent.

Properties of Environment

The environment has multifold properties −

 Discrete / Continuous − If there are a limited number of distinct, clearly


defined, states of the environment, the environment is discrete (For example,
chess); otherwise it is continuous (For example, driving).
 Observable / Partially Observable − If it is possible to determine the
complete state of the environment at each time point from the percepts it is
observable; otherwise it is only partially observable.
 Static / Dynamic − If the environment does not change while an agent is
acting, then it is static; otherwise it is dynamic.
 Single agent / Multiple agents − The environment may contain other agents
which may be of the same or different kind as that of the agent.
 Accessible / Inaccessible − If the agent’s sensory apparatus can have access
to the complete state of the environment, then the environment is accessible
to that agent.
 Deterministic / Non-deterministic − If the next state of the environment is
completely determined by the current state and the actions of the agent, then
the environment is deterministic; otherwise it is non-deterministic.
 Episodic / Non-episodic − In an episodic environment, each episode
consists of the agent perceiving and then acting. The quality of its action
depends just on the episode itself. Subsequent episodes do not depend on the
actions in the previous episodes. Episodic environments are much simpler
because the agent does not need to think ahead.
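To make these properties concrete, the Python sketch below classifies two classic tasks along the dimensions above; the values are the commonly cited textbook classifications, slightly simplified.

# Commonly cited (simplified) classification of two classic task environments.
environments = {
    "chess": {
        "discrete": True,  "observable": "fully",     "static": True,
        "multi_agent": True, "deterministic": True,   "episodic": False,
    },
    "taxi driving": {
        "discrete": False, "observable": "partially", "static": False,
        "multi_agent": True, "deterministic": False,  "episodic": False,
    },
}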