Planning and Machine Learning

The document discusses planning systems in artificial intelligence, focusing on Forward State Space Planning (FSSP) and Backward State Space Planning (BSSP), along with their advantages and disadvantages. It also explains the Block World Problem as a practical example of planning, introduces STRIPS as a foundational automated planning system, and outlines the significance of machine learning, including its types: supervised, unsupervised, and reinforcement learning. Additionally, it highlights the importance of machine learning in data processing, innovation, and automation across various sectors.

PLANNING AND MACHINE LEARNING

Planning System
For any planning system, we need a domain description, action specifications, and a
goal description. A plan is a sequence of actions; each action has its own set of
preconditions that must be satisfied before the action is performed, and a set of
effects, which may be positive (facts added) or negative (facts deleted).
At the basic level there are two approaches: Forward State Space Planning (FSSP) and
Backward State Space Planning (BSSP).
1. Forward State Space Planning (FSSP)
FSSP behaves like forward state-space search. Given a start state S in some domain,
we perform an applicable action and obtain a new state S' (which includes some new
conditions). This step is called progression, and it is repeated until the goal state is
reached. The actions must be applicable in the current state.
Disadvantage: large branching factor.
Advantage: the algorithm is sound.
2. Backward State Space Planning (BSSP)
BSSP behaves like backward state-space search. Here we move from the goal state g
to a sub-goal g' by finding the action that must be performed last to achieve the goal.
This step is called regression (moving back to the previous goal or sub-goal). The
sub-goals must also be checked for consistency. The actions must be relevant to the
goal in this case.
Disadvantage: not a sound algorithm (inconsistencies can arise).
Advantage: small branching factor (very small compared to FSSP).
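As a rough sketch (not from the original text; the predicate and action names below are invented), progression (the FSSP step) and regression (the BSSP step) can each be written as a small set operation over states and goals:

```python
# Progression (FSSP): apply an action's effects to a state.
# Regression (BSSP): compute the sub-goal that must hold before an action.
def progress(state, action):
    """Forward step: the action is applicable iff its preconditions hold."""
    if not action["pre"] <= state:
        return None                      # not applicable
    return (state - action["neg"]) | action["add"]

def regress(goal, action):
    """Backward step: the action is relevant iff it adds part of the goal."""
    if not (action["add"] & goal):
        return None                      # not relevant
    if action["neg"] & goal:
        return None                      # would undo part of the goal
    return (goal - action["add"]) | action["pre"]

# An invented pickup action in a toy blocks domain.
pickup_b1 = {"pre": {"ontable(b1)", "clear(b1)", "armempty"},
             "add": {"hold(b1)"},
             "neg": {"ontable(b1)", "armempty"}}

s = {"ontable(b1)", "clear(b1)", "ontable(b2)", "clear(b2)", "armempty"}
s2 = progress(s, pickup_b1)           # forward from the start state
g = regress({"hold(b1)"}, pickup_b1)  # backward from a goal
```

Note how regression returns exactly the action's preconditions here: the whole goal was produced by the action, so what remains to achieve is what the action itself required.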

PLANNING
Planning refers to the process of computing several steps of a problem-solving
process before executing any of them. Planning is useful as a problem-solving
technique for non-decomposable problems.
Components of a Planning System:
Any general problem-solving system needs elementary techniques to perform the
following functions:
 Choose the best rule (based on heuristics) to apply next.
 Apply the chosen rule to produce a new problem state.
 Detect when a solution has been found.
 Detect dead ends so that new directions can be explored.
To choose the rules:
 first isolate a set of differences between the desired goal state and the current state;
 identify the rules that are relevant to reducing these differences;
 if several rules are found, apply heuristic information to choose among them.
To apply the rules:
 In a simple problem-solving system, applying rules is easy, because each rule
specifies the complete problem state that would result from its application.
 In complex problems, we deal with rules that specify only a small part of the
complete problem state.
Let us consider the famous Block World Problem, which helps to show the
importance of planning in artificial intelligence systems.
The block world environment has:
 Square blocks of the same size.
 Blocks that can be stacked one upon another.
 A flat surface (table) on which blocks can be placed.
 A robot arm that can manipulate the blocks; it can hold only one block at a time.
In the block world problem, a state is described by a set of predicates representing
the facts that are true in that state. For every action, we must describe each of the
changes it makes to the state description; in addition, a statement that everything
else remains unchanged is also necessary. The robot can perform four types of
operations in the block world environment:
UNSTACK (X, Y): [US (X, Y)]
 Pick up X from its current position on block Y. The arm must be empty and X must
have no block on top of it.
STACK (X, Y): [S (X, Y)]
 Place block X on block Y. The arm must be holding X, and the top of Y must be clear.
PICKUP (X): [PU (X)]
 Pick up X from the table and hold it. Initially the arm must be empty and the top of X
must be clear.
PUTDOWN (X): [PD (X)]
 Put block X down on the table. The arm must have been holding block X.
Along with the operations, some predicates are used to describe the environment
clearly. Those predicates are:

 ON(X, Y) - Block X is on block Y.
 ONT(X) - Block X is on the table.
 CL(X) - The top of X is clear.
 HOLD(X) - The robot arm is holding X.
 ARMEMPTY - The robot arm is empty.
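The four operators can be transcribed as STRIPS-style rules with precondition, add, and delete lists (a sketch: the list encoding is one common convention, and the Python representation here is our own, not from the text):

```python
# Each operator maps block names to a rule with three sets of facts:
# "pre" must hold before the action, "add" becomes true, "del" becomes false.
def US(x, y):   # UNSTACK(X, Y)
    return {"pre": {f"ON({x},{y})", f"CL({x})", "ARMEMPTY"},
            "add": {f"HOLD({x})", f"CL({y})"},
            "del": {f"ON({x},{y})", "ARMEMPTY"}}

def S(x, y):    # STACK(X, Y)
    return {"pre": {f"HOLD({x})", f"CL({y})"},
            "add": {f"ON({x},{y})", f"CL({x})", "ARMEMPTY"},
            "del": {f"HOLD({x})", f"CL({y})"}}

def PU(x):      # PICKUP(X)
    return {"pre": {f"ONT({x})", f"CL({x})", "ARMEMPTY"},
            "add": {f"HOLD({x})"},
            "del": {f"ONT({x})", "ARMEMPTY"}}

def PD(x):      # PUTDOWN(X)
    return {"pre": {f"HOLD({x})"},
            "add": {f"ONT({x})", "ARMEMPTY"},
            "del": {f"HOLD({x})"}}
```

Notice that STACK and UNSTACK are exact inverses: each one's add list is the other's delete list.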

Initial State

Armempty
clear(block2)
ontable(block2)
ontable(block1)
clear(block1)

Goal State

Armempty
ontable(block2)
on(block1, block2)
clear(block1)
We have to generate a plan to reach the goal state from the given initial state. In
this example the initial state has two blocks, Block1 and Block2, both placed on the
table. To reach the goal state, first we perform

PICKUP(Block1)

After each and every operation we need to check whether the goal state has been
reached. Here the environment now looks like:

Hold(block1)
Clear(Block2)
OnTable(Block2)

This is not the goal state, so we have to continue. Next, Block1 needs to be placed
on Block2; to achieve this, perform the operation STACK(Block1, Block2). After this
operation the environment looks like:
ArmEmpty, On(Block1,Block2), Clear(Block1), OnTable(Block2)

We have reached the goal state. The plan for reaching it is:

PICKUP(Block1) and STACK(Block1,Block2)
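The two-step plan can be checked mechanically (a minimal sketch, using the precondition/add/delete reading of the two operators given earlier; the helper function is our own):

```python
# Apply one operator: verify preconditions, delete old facts, add new ones.
def apply_op(state, pre, add, delete):
    assert pre <= state, "preconditions not met"
    return (state - delete) | add

state = {"armempty", "clear(block1)", "clear(block2)",
         "ontable(block1)", "ontable(block2)"}

# PICKUP(Block1)
state = apply_op(state,
                 pre={"ontable(block1)", "clear(block1)", "armempty"},
                 add={"hold(block1)"},
                 delete={"ontable(block1)", "armempty"})

# STACK(Block1, Block2)
state = apply_op(state,
                 pre={"hold(block1)", "clear(block2)"},
                 add={"on(block1,block2)", "clear(block1)", "armempty"},
                 delete={"hold(block1)", "clear(block2)"})

goal = {"armempty", "ontable(block2)", "on(block1,block2)", "clear(block1)"}
print(goal <= state)   # True: every goal predicate holds
```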

Basic plan generation systems


STRIPS
STRIPS, the Stanford Research Institute Problem Solver, is a representation
language and algorithm for automated planning in artificial intelligence. It was
developed by Richard Fikes and Nils Nilsson at the Stanford Research Institute in
1971. STRIPS uses a state-space representation of the world to plan actions that will
achieve a given goal. The system represents the current state of the world as a set
of propositions, or facts, and defines actions as transformations between states. It
then uses a search algorithm to find a sequence of actions that leads from the initial
state to the desired goal state. STRIPS has been widely used in robotics, game
playing, and other applications where automated planning is required.
The key features of STRIPS include:
 State-space representation: STRIPS represents the world as a set of propositions
that describe the current state of the world.
 Actions: STRIPS defines actions as transformations between states, with each action
having preconditions and effects that define how it changes the state of the world.
 Search algorithm: STRIPS uses a search algorithm to find a sequence of actions that
will lead from the initial state to the desired goal state.
 Planning domain: STRIPS defines a planning domain as a set of propositions,
actions, and constraints that define the possible states and actions in a particular
problem domain.
 Problem instance: A problem instance is a specific instance of a planning domain,
with an initial state and a desired goal state.
 Plan generation: STRIPS generates a plan by finding a sequence of actions that will
transform the initial state into the desired goal state.
 Plan execution: Once a plan has been generated, STRIPS executes it by performing
each action in turn until the goal state is reached.
How does STRIPS work?
STRIPS works by representing the world as a set of propositions that describe the
current state of the world. It defines actions as transformations between states, with
each action having preconditions and effects that define how it changes the state of
the world.
To plan a sequence of actions to achieve a given goal, STRIPS uses a search algorithm
that explores the possible states and actions in the planning domain. The algorithm
starts from the initial state and applies each action in turn, checking whether the
resulting state satisfies the preconditions of the next action in the sequence. If it does,
the algorithm continues to apply the next action until the goal state is reached.
If the algorithm encounters a state that cannot be transformed into the goal state by
any available actions, it backtracks and tries a different sequence of actions. This
process continues until a plan is found that will transform the initial state into the
desired goal state.
Once a plan has been generated, STRIPS executes it by performing each action in turn
until the goal state is reached. If an unexpected event occurs during execution, such
as a failure to perform an action or a change in the world state, STRIPS can replan and
generate a new sequence of actions that will achieve the goal state under the new
circumstances.
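The search loop described above can be sketched in a few lines (an illustrative toy, not the original STRIPS implementation: a breadth-first search over states, with invented action and fact names):

```python
# Breadth-first plan search: apply any action whose preconditions hold,
# stop when a state satisfying the goal is reached.
from collections import deque

def plan(init, goal, actions):
    """actions: name -> (pre, add, delete), each a set of facts."""
    frontier = deque([(frozenset(init), [])])
    seen = {frozenset(init)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:
            return steps
        for name, (pre, add, delete) in actions.items():
            if pre <= state:
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None   # exhausted the state space: no plan exists

acts = {
    "PICKUP(b1)": ({"ont(b1)", "cl(b1)", "empty"},
                   {"hold(b1)"}, {"ont(b1)", "empty"}),
    "STACK(b1,b2)": ({"hold(b1)", "cl(b2)"},
                     {"on(b1,b2)", "cl(b1)", "empty"},
                     {"hold(b1)", "cl(b2)"}),
}
p = plan({"ont(b1)", "ont(b2)", "cl(b1)", "cl(b2)", "empty"},
         {"on(b1,b2)"}, acts)
```

Because the search is breadth-first, the first plan found is also a shortest one; the backtracking described above corresponds to the frontier moving on to sibling states when a branch dead-ends.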
Advanced plan generation systems


K-STRIPS
Modal Operator K :
We are familiar with the use of the connectives ∧ and ∨ in logic. Think of these
connectives as operators that construct more complex formulas from simpler
components. Here, we want to construct a formula whose intended meaning is that a
certain agent knows a certain proposition.
The components consist of a term denoting the agent and a formula denoting the
proposition that the agent knows. To accomplish this, the modal operator K is
introduced. For example, to say that Robot (the name of an agent) knows that block
A is on block B, we write
K(Robot, On(A,B))
The sentence formed by combining K with the term Robot and the formula On(A,B)
is a new formula, the intended meaning of which is "Robot knows that block A is on
block B".
The words "knowledge" and "belief" differ in meaning: an agent can believe a false
proposition, but it cannot know anything that is false.
Some examples,
K(Agent1, K(Agent2, On(A,B))) means Agent1 knows that Agent2 knows that A is on
B.
K(Agent1, On(A,B)) ∨ K(Agent1, On(A,C)) means that either Agent1 knows that A is
on B or it knows that A is on C.
K(Agent1, On(A,B)) ∨ K(Agent1, ¬On(A,B)) means that Agent1 knows whether or not
A is on B.
Knowledge Axioms:
The operators ∧ and ∨ have compositional semantics (the truth value of a compound
formula depends on the truth values of its components), but the semantics of K is
not compositional. The truth value of K(Agent1, On(A,B)), for example, cannot
necessarily be determined from the properties of K, the denotation of Agent1, and
the truth value of On(A,B). The K operator is said to be referentially opaque.
Example in Planning Speech Action:
We can treat speech acts just like other agent actions. Our agent can use a plan-
generating system to make plans comprising speech acts and other actions. To do
so, it needs a model of the effects of these actions.
Consider, for example, Tell(A, φ), where A is an agent and φ is a formula that is true.
We could model the effects of that action by the STRIPS rule :
Tell( A, φ ) :
Precondition : Next_to(A) ∧ φ ∧ ¬K(A, φ)
Delete : ¬K(A, φ)
Add : K(A, φ)
The precondition Next_to(A) ensures that our agent is close to agent A to enable
communication.
The precondition φ is imposed to ensure that our agent actually believes φ before it
can inform another agent about the truth.
The precondition ¬K(A, φ) ensures that our agent does not communicate redundant
information.
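The Tell rule above can be transcribed directly as data, the same way the block-world operators were (a hedged sketch; the string encoding of the predicates is invented for illustration):

```python
# Tell(A, φ) as a STRIPS-style rule: precondition, delete list, add list.
def tell(a, phi):
    return {"pre":    {f"Next_to({a})", phi, f"not K({a},{phi})"},
            "delete": {f"not K({a},{phi})"},
            "add":    {f"K({a},{phi})"}}

# Telling agent A1 that block A is on block B.
rule = tell("A1", "On(A,B)")
```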

Machine Learning
Machine learning is a subset of AI, which uses algorithms that learn from data to make
predictions. These predictions can be generated through supervised learning, where
algorithms learn patterns from existing data, or unsupervised learning, where they
discover general patterns in data.
The Importance of Machine Learning
Here are some reasons why it’s so essential in the modern world:
 Data processing. One of the primary reasons machine learning is so important is its
ability to handle and make sense of large volumes of data. With the explosion of
digital data from social media, sensors, and other sources, traditional data analysis
methods have become inadequate. Machine learning algorithms can process these
vast amounts of data, uncover hidden patterns, and provide valuable insights that can
drive decision-making.
 Driving innovation. Machine learning is driving innovation and efficiency across
various sectors. Here are a few examples:
 Healthcare. Algorithms are used to predict disease outbreaks, personalize patient
treatment plans, and improve medical imaging accuracy.
 Finance. Machine learning is used for credit scoring, algorithmic trading, and fraud
detection.
 Retail. Recommendation systems, supply chains, and customer service can all benefit
from machine learning.
 The techniques used also find applications in sectors as diverse as agriculture,
education, and entertainment.
 Enabling automation. Machine learning is a key enabler of automation. By learning
from data and improving over time, machine learning algorithms can perform
previously manual tasks, freeing humans to focus on more complex and creative
tasks. This not only increases efficiency but also opens up new possibilities for
innovation.

Types of Machine Learning

Machine learning can be broadly classified into three types based on the nature of the
learning system and the data available: supervised learning, unsupervised learning,
and reinforcement learning. Let's delve into each of these:

Supervised learning

Supervised learning is the most common type of machine learning. In this approach,
the model is trained on a labeled dataset. In other words, the data is accompanied by
a label that the model is trying to predict. This could be anything from a category
label to a real-valued number.

The model learns a mapping between the input (features) and the output (label)
during the training process. Once trained, the model can predict the output for new,
unseen data.

Common examples of supervised learning algorithms include linear regression for
regression problems, and logistic regression, decision trees, and support vector
machines for classification problems. In practical terms, this could look like an
image-recognition process: given a dataset of images in which each picture is
labeled as "cat," "dog," and so on, a supervised model can recognize and categorize
new images accurately.
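The learn-a-mapping-then-predict loop can be shown with a tiny classifier (a pure-Python sketch with invented toy data; a nearest-centroid rule stands in for the heavier algorithms named above):

```python
# Supervised learning in miniature: fit on labeled examples, predict labels
# for unseen inputs by distance to each class centroid.
def train(examples):
    """examples: list of (features, label). Returns label -> centroid."""
    sums, counts = {}, {}
    for x, y in examples:
        acc = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in acc] for y, acc in sums.items()}

def predict(model, x):
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(x, c))
    return min(model, key=lambda y: dist(model[y]))

labeled = [([1.0, 1.0], "cat"), ([1.2, 0.8], "cat"),
           ([5.0, 5.0], "dog"), ([4.8, 5.2], "dog")]
model = train(labeled)
print(predict(model, [1.1, 0.9]))   # cat
```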

Unsupervised learning
Unsupervised learning, on the other hand, involves training the model on an
unlabeled dataset. The model is left to find patterns and relationships in the data on
its own.

This type of learning is often used for clustering and dimensionality reduction.
Clustering involves grouping similar data points together, while dimensionality
reduction involves reducing the number of random variables under consideration by
obtaining a set of principal variables.

Common examples of unsupervised learning algorithms include k-means for
clustering problems and Principal Component Analysis (PCA) for dimensionality
reduction problems. Again, in practical terms: in the field of marketing,
unsupervised learning is often used to segment a company's customer base. By
examining purchasing patterns, demographic data, and other information, the
algorithm can group customers into segments that exhibit similar behaviors without
any pre-existing labels.
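Clustering without labels can be shown in a few lines (a sketch of k-means with k = 2 on invented one-dimensional data; real uses would be multi-dimensional):

```python
# k-means in miniature: assign each point to its nearest center, then move
# each center to the mean of its cluster, and repeat.
def kmeans(points, centers, iters=10):
    for _ in range(iters):
        clusters = {c: [] for c in range(len(centers))}
        for p in points:
            nearest = min(range(len(centers)),
                          key=lambda c: abs(p - centers[c]))
            clusters[nearest].append(p)
        centers = [sum(ps) / len(ps) if ps else centers[c]
                   for c, ps in clusters.items()]
    return centers, clusters

points = [1.0, 1.2, 0.8, 9.0, 9.5, 8.5]
centers, clusters = kmeans(points, centers=[0.0, 10.0])
# The points separate into a low group (near 1) and a high group (near 9)
# with no labels ever provided.
```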

Reinforcement learning

Reinforcement learning is a type of machine learning where an agent learns to make
decisions by interacting with its environment. The agent is rewarded or penalized
(with points) for the actions it takes, and its goal is to maximize the total reward.

Unlike supervised and unsupervised learning, reinforcement learning is particularly
suited to problems where the data is sequential and the decision made at each step
can affect future outcomes.

Common examples of reinforcement learning include game playing, robotics,
resource management, and many more.
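The reward-driven loop can be sketched with tabular Q-learning on a toy problem (entirely invented: a five-state corridor where the only reward is at the right end; all parameter values are illustrative):

```python
# Q-learning: the agent updates a table of action values from the rewards
# it receives, and its greedy policy improves over episodes.
import random

def q_learning(episodes=500, alpha=0.5, gamma=0.9, eps=0.2):
    random.seed(0)
    q = {(s, a): 0.0 for s in range(5) for a in (-1, 1)}
    for _ in range(episodes):
        s = 0
        while s != 4:
            # epsilon-greedy: explore sometimes, otherwise act greedily
            a = random.choice((-1, 1)) if random.random() < eps else \
                max((-1, 1), key=lambda a: q[(s, a)])
            s2 = min(4, max(0, s + a))
            r = 1.0 if s2 == 4 else 0.0
            best_next = max(q[(s2, -1)], q[(s2, 1)]) if s2 != 4 else 0.0
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

q = q_learning()
policy = [max((-1, 1), key=lambda a: q[(s, a)]) for s in range(4)]
# The learned policy moves right toward the reward.
```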

Supervised Learning vs. Unsupervised Learning

 Supervised learning algorithms are trained using labeled data; unsupervised
learning algorithms are trained using unlabeled data.
 A supervised model takes direct feedback to check whether it is predicting the
correct output; an unsupervised model takes no feedback.
 A supervised model predicts the output; an unsupervised model finds hidden
patterns in the data.
 In supervised learning, input data is provided to the model along with the output;
in unsupervised learning, only input data is provided.
 The goal of supervised learning is to train the model so that it can predict the
output for new data; the goal of unsupervised learning is to find hidden patterns and
useful insights in an unknown dataset.
 Supervised learning needs supervision to train the model; unsupervised learning
does not.
 Supervised learning problems can be categorized into classification and
regression; unsupervised learning problems into clustering and association.
 Supervised learning suits cases where we know the inputs as well as the
corresponding outputs; unsupervised learning suits cases where we have only input
data and no corresponding output data.
 A supervised model generally produces accurate results; an unsupervised model
may give less accurate results, since no labeled answers guide it.
 Supervised learning is further from "true" artificial intelligence, since we must
first train the model for each kind of data before it can predict the correct output;
unsupervised learning is closer, as it learns much as a child learns daily routines
from experience.
 Typical supervised algorithms include linear regression, logistic regression,
support vector machines, multi-class classification, decision trees, and Bayesian
logic; typical unsupervised algorithms include k-means clustering and the Apriori
algorithm.

Adaptive Learning
The fourth generation of machine intelligence, adaptive learning, creates the first truly
integrated human and machine learning environment. For text analytics, this has
given us the most accurate analytics to date, allowing us to get actionable information
in many areas for the first time. In the examples we will share here, we show that
adaptive learning is 95% accurate in predicting people’s intention to purchase a car.
Adaptive learning correlates with actual sales, unlike any previous approach to
Machine Intelligence.
Adaptive learning combines the previous generations of rule-based, simple machine
learning, and deep learning approaches to machine intelligence. Human analysts are
optimally engaged in making the machine intelligence smarter, faster, and easier to
interpret, building on a network of the previous generations of machine intelligence.
The first generation of machine intelligence meant that people manually created
rules. For example, in text analytics someone might create a rule that the word “Ford”
followed by “Focus” meant that “Ford” referred to a car, and they would create a
separate rule that “Ford” preceded by “Harrison” meant that “Ford” referred to a
person.
The rule-based approach is very time consuming and not very accurate. Even after an
analyst has exhausted all the words and phrases they can think of, there are always
other contexts and new innovations that aren't captured. For one of our clients,
their expert analysts were able to capture only 11% of the documents they wanted
to analyze using rules: this is clearly too limited.
The dominant form of machine intelligence today is simple machine learning. Simple
machine learning uses statistical methods to make decisions about data processing.
For example, a sentence might have the word “Ford” labeled as a car, and the
machine learning algorithm will learn by itself that the following word “Focus” is
evidence that “Ford” is a car in this context.
Simple machine learning can be fast, provided that you already have labeled
examples for ‘supervised learning’. It also tends to be more accurate, because
statistics are usually better than human intuition in deciding which features (like
words and phrases) matter. The major drawback for supervised machine learning is
that you need the labeled examples: if you have too few labels or the labels aren’t
representative of the entire data set, then the accuracy is low or limited to a specific
domain.
There has been a recent rise in the use of machine learning that learns more
sophisticated relationships between features, known as deep learning. For example, if
you had the sentence “We Will Let Harrison Ford Focus on Star Wars”, there is
conflicting evidence between “Harrison” and “Focus” about whether “Ford” is a person
or a car.
Deep learning can automatically learn how to use combinations of features when
making a decision. For simple machine learning, a human has to tell the algorithm
which combination of features to consider. Deep learning often cuts down on the
amount of human time needed and typically gets up to 5% more accurate results than
simple machine learning for text analytics, although only when applied to data from
the same sources as it learned from.
Adaptive learning brings human analysts into the process at every step. This is in
contrast to rule-based, simple machine learning and deep learning approaches, where
the humans only create rules and label data at the start of the process. For example, if
you had the sentence “We Will Help Tom Ford Escape from New York”, and your
system hadn’t seen any examples of “Tom Ford” or “Ford Escape”, you will need
human input to build the knowledge.
Adaptive learning systems require the least human effort because they only require
human input when it matters most and continually expand their knowledge when new
information is encountered. As we show here, they are also the most accurate. They
combine the three other types of machine intelligence, adding new types of
‘unsupervised machine learning’ and methods for optimizing the input from multiple,
possibly disagreeing, humans.
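The idea of asking for human input only when it matters most can be sketched in a few lines (entirely invented data and names, not the authors' system): the learner answers from its evidence when the evidence is decisive, defers to a human when it is unseen or ambiguous, and folds the answer back into its knowledge.

```python
# Adaptive-learning sketch: route a "Ford" disambiguation to a human only
# when the evidence for the surrounding context word is missing or tied.
def classify_ford(context, evidence, ask_human):
    """evidence: context word -> (car_votes, person_votes)."""
    car, person = evidence.get(context, (0, 0))
    if car == person:                 # unseen or ambiguous: defer to a human
        label = ask_human(context)
        c, p = evidence.get(context, (0, 0))
        evidence[context] = (c + (label == "car"), p + (label == "person"))
        return label
    return "car" if car > person else "person"

evidence = {"Focus": (5, 0), "Harrison": (0, 7)}
human = lambda ctx: "car"             # stands in for a human analyst

classify_ford("Escape", evidence, human)   # unseen: asks the human once
classify_ford("Escape", evidence, human)   # now answered from evidence
```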

Explanation-Based Learning
Explanation-based learning (EBL) in artificial intelligence is a problem-solving
method in which an agent learns by analyzing specific situations and connecting
them to previously acquired knowledge. The agent then applies what it has learned
to solve similar problems. Rather than relying solely on statistical analysis, EBL
algorithms incorporate logical reasoning and domain knowledge to make predictions
and identify patterns.
Key Characteristics of EBL
1. Use of Domain Knowledge: EBL relies heavily on pre-existing domain
knowledge to explain why a particular example is a valid instance of a concept.
This knowledge helps the system to generalize the learned concept to new,
similar situations.
2. Focused Learning: EBL focuses on understanding the essential features of an
example that are necessary to achieve a goal or solve a problem. This contrasts
with other learning methods that may treat all features equally or rely on
statistical correlations.
3. Efficiency: Since EBL can learn from a single example by generalizing from it, it
is computationally efficient compared to other learning methods that require
large datasets for training.
How Does Explanation-Based Learning Work?
Explanation-Based Learning follows a systematic process that involves the following
steps:
1. Input Example: The learning process begins with a single example that the
system needs to learn from. This example is typically a positive instance of a
concept that the system needs to understand.
2. Domain Knowledge: The system uses domain knowledge, which includes
rules, concepts, and relationships relevant to the problem domain. This
knowledge is crucial for explaining why the example is valid.
3. Explanation Generation: The system generates an explanation for why the
example satisfies the concept. This involves identifying the relevant features
and their relationships that make the example a valid instance.
4. Generalization: Once the explanation is generated, the system generalizes it
to form a broader concept that can apply to other similar examples. This
generalization is typically in the form of a rule or a set of rules that describe the
concept.
5. Learning Outcome: The outcome of EBL is a generalized rule or concept that
can be applied to new situations. The system can now use this rule to identify or
solve similar problems in the future.
Example of Explanation-Based Learning in AI
Scenario: Diagnosing a Faulty Component in a Car Engine
Context: Imagine you have an AI system designed to diagnose problems in car
engines. One day, the system is given a specific example where the engine fails to
start. After analyzing the case, the system learns that the failure was due to a faulty
ignition coil.
Step 1: Input Example
The system is provided with a scenario where a car engine fails to start. The
diagnostic information indicates that the cause is a faulty ignition coil.
Step 2: Use of Domain Knowledge
The AI system has pre-existing domain knowledge about car engines. It knows how
the ignition system works, the role of the ignition coil, and the conditions under which
an engine would fail to start.
Step 3: Explanation Generation
Using this domain knowledge, the system generates an explanation for why the
engine failure occurred:
 Ignition System Knowledge: The system understands that the ignition coil is
responsible for converting the battery’s low voltage to the high voltage needed
to create a spark in the spark plugs.
 Faulty Coil Impact: It explains that if the ignition coil is faulty, it will fail to
generate the necessary high voltage, resulting in no spark, which prevents the
engine from starting.
Step 4: Generalization
The system then generalizes this explanation to form a rule:
 General Rule: “If the engine fails to start and the ignition coil is faulty, then the
cause of the failure is likely due to the ignition coil not providing the necessary
voltage to the spark plugs.”
Step 5: Learning Outcome
The AI system has now learned a new diagnostic rule that can be applied to future
cases:
 Future Application: In future diagnostics, if the system encounters a similar
scenario where the engine fails to start, it can use this learned rule to quickly
check the ignition coil as a potential cause.
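The worked example above can be sketched in code (a toy, with invented predicate names; the two domain rules stand in for real engine knowledge). The key EBL move is that the explanation keeps only the facts a proof actually rests on, so irrelevant features of the example are dropped during generalization:

```python
# Domain knowledge: each conclusion maps to the premises that explain it.
domain_rules = {
    "no_spark":     {"faulty(ignition_coil)"},
    "engine_fails": {"no_spark"},
}

def explain(goal, facts):
    """Return the leaf facts a proof of `goal` rests on (the explanation)."""
    if goal in facts:
        return {goal}
    leaves = set()
    for premise in domain_rules.get(goal, set()):
        leaves |= explain(premise, facts)
    return leaves

# One observed example, including irrelevant features.
example_facts = {"faulty(ignition_coil)", "colour(red)", "brand(ford)"}
explanation = explain("engine_fails", example_facts)

# Generalization: only the explanatory features enter the learned rule;
# colour and brand played no part in the proof and are dropped.
learned_rule = (frozenset(explanation), "engine_fails")
```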
Applications of Explanation-Based Learning
Explanation-Based Learning is particularly useful in domains where understanding the
reasoning behind decisions is critical.
Some of the notable applications of EBL include:
 Medical Diagnosis: EBL can be used in medical diagnosis systems to learn
from specific cases and generalize the underlying principles for diagnosing
similar conditions in other patients.
 Legal Reasoning: In legal systems, EBL can help in understanding the
principles behind legal precedents and applying them to new cases with similar
circumstances.
 Automated Planning: EBL is useful in automated planning systems, where it
can learn from successful plans and generalize the steps required to achieve
similar goals in different contexts.
 Natural Language Processing: EBL can be applied in natural language
processing tasks where understanding the structure and meaning behind
language is more important than statistical correlations.
Advantages of Explanation-Based Learning
 Efficiency in Learning: EBL can learn effectively from a single example,
making it efficient in situations where data is scarce or expensive to obtain.
 Understanding and Generalization: EBL focuses on understanding the
rationale behind examples, leading to more robust generalizations that can be
applied to a wide range of situations.
 Interpretable Models: The rules or concepts learned through EBL are often
more interpretable than those learned through other methods, making it easier
to understand and trust the system’s decisions.
Challenges and Limitations
 Dependency on Domain Knowledge: EBL relies heavily on accurate and
comprehensive domain knowledge. If the domain knowledge is incomplete or
incorrect, the system may generate flawed explanations and generalizations.
 Limited to Well-Defined Problems: EBL is most effective in well-defined
problem domains where the rules and relationships are clear. It may struggle in
more complex or ambiguous domains.
 Complexity of Explanation Generation: Generating explanations can be
computationally intensive, especially in domains with complex relationships and
a large number of features.

Inductive Learning
 Inductive learning, also known as inductive reasoning or inductive inference, is a
type of learning that involves generalizing from specific instances or examples to
make broader generalizations or predictions. It is a fundamental approach in
machine learning and is closely related to the concept of inductive bias.
 In inductive learning, the learner seeks to infer general patterns or rules from a
set of observed examples or data points. The process involves identifying common
features or properties among the examples and using them to induce a hypothesis
or model that can accurately classify or predict new, unseen instances.
 The inductive learning process typically follows these steps:
 Data collection: Gathering a set of labeled examples or instances that
represent the problem domain. For example, in a spam email classification task,
the data would consist of emails labeled as spam or non-spam.
 Hypothesis space: Defining the set of possible hypotheses or models that the
learner can consider. This is often determined by the chosen learning algorithm
and its associated inductive bias.
 Hypothesis generation: Constructing potential hypotheses based on the
observed examples. The learner examines the features or attributes of the
instances and generates hypotheses that explain the relationships or patterns
observed in the data.
 Hypothesis evaluation: Assessing the generated hypotheses using evaluation
metrics or validation techniques. This involves testing the hypotheses on new,
unseen examples to measure their predictive accuracy.
 Hypothesis refinement: Iteratively refining the hypotheses based on
feedback from the evaluation step. The learner updates or revises the
hypotheses to improve their performance and generalize better to new
instances.
 Generalization: Applying the learned hypothesis or model to classify or predict
new, unseen instances that were not part of the training data.
 The key challenge in inductive learning is finding a good balance between
overfitting and underfitting. Overfitting occurs when the learner creates a
hypothesis that fits the training data too closely but fails to generalize well to
new instances. Underfitting, on the other hand, happens when the learner’s
hypothesis is too simplistic and fails to capture the underlying patterns in the
data.
 Inductive learning algorithms, such as decision trees, naive Bayes, and support
vector machines, leverage inductive bias and follow a principled approach to
generalize from specific instances to make accurate predictions on new data.
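The steps above can be illustrated with the classic Find-S algorithm, one of the simplest inductive learners (a sketch; the attribute data is invented). It builds the most specific hypothesis consistent with the positive examples, generalizing an attribute to "any value" only when forced to:

```python
# Find-S: generalize a hypothesis from specific positive examples.
def find_s(examples):
    """examples: list of (attribute_tuple, label). Returns the most
    specific consistent hypothesis, with '?' meaning 'any value'."""
    h = None
    for x, label in examples:
        if label != "yes":
            continue                      # Find-S ignores negative examples
        if h is None:
            h = list(x)                   # first positive: copy it exactly
        else:
            h = [a if a == b else "?" for a, b in zip(h, x)]
    return h

data = [(("sunny", "warm", "high"), "yes"),
        (("sunny", "warm", "low"),  "yes"),
        (("rainy", "cold", "high"), "no")]
h = find_s(data)
print(h)   # ['sunny', 'warm', '?']
```

The third attribute disagreed across the two positive examples, so it was generalized to "?", while the first two were kept specific: an over-general hypothesis here would risk underfitting, an over-specific one overfitting, which is exactly the balance discussed above.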
