
MODULE III

CLASSIFICATION TECHNIQUES
 Classification is a technique in which similar data points are grouped into one class based on shared characteristics. In classification, a classifier (model) is built to predict the class label attribute; the basic task of classification is to forecast the target class for each data instance.
 Classification is the process of recognizing, understanding, and grouping ideas and objects into preset categories or “sub-
populations.” Using pre-categorized training datasets, machine learning programs use a variety of algorithms to classify
future datasets into categories.
 Classification algorithms in machine learning use input training data to predict the likelihood that subsequent data will fall
into one of the predetermined categories. One of the most common uses of classification is filtering emails into “spam” or
“non-spam.”
 In short, classification is a form of “pattern recognition,” with classification algorithms applied to the training data to find
the same pattern (similar words or sentiments, number sequences, etc.) in future sets of data.

Classification Algorithms:
 Logistic Regression
 Naive Bayes
 K-Nearest Neighbors
 Decision Tree
 Support Vector Machines

Logistic Regression
Logistic regression is a statistical method used to predict a binary outcome: either something happens, or it does not. This can be expressed as Yes/No, Pass/Fail, Alive/Dead, etc.
Independent variables are analyzed to determine the binary outcome with the results falling into one of two categories. The
independent variables can be categorical or numeric, but the dependent variable is always categorical. Written like this:
P(Y=1|X) or P(Y=0|X)
It calculates the probability of dependent variable Y, given independent variable X.
This can be used to calculate the probability of a word having a positive or negative connotation (0, 1, or on a scale between). Or it
can be used to determine the object contained in a photo (tree, flower, grass, etc.), with each object given a probability between 0
and 1.
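
Under the hood, logistic regression applies the logistic (sigmoid) function to a linear combination of the inputs, P(Y=1|X) = 1 / (1 + e^-(b0 + b1*X)). A minimal sketch, assuming scikit-learn is available and using an invented toy dataset:

    from sklearn.linear_model import LogisticRegression
    import numpy as np

    # Invented toy data: hours studied (X) vs. pass/fail outcome (Y)
    X = np.array([[1], [2], [3], [4], [5], [6]])
    y = np.array([0, 0, 0, 1, 1, 1])

    model = LogisticRegression()
    model.fit(X, y)

    # predict_proba returns [P(Y=0|X), P(Y=1|X)] for each input
    print(model.predict_proba(np.array([[3.5]])))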

Naive Bayes
Naive Bayes calculates the probability that a data point belongs to a certain category or does not. In text analysis, it can be used to categorize words or phrases as belonging to a preset "tag" (classification) or not. For example:

To decide whether or not a phrase should be tagged as "sports," you need to calculate:

P(A|B) = P(B|A) * P(A) / P(B)

That is, the probability of A, if B is true, is equal to the probability of B, if A is true, times the probability of A being true, divided by the probability of B being true.

K-Nearest Neighbors

o K-nearest neighbors (k-NN) is a pattern recognition algorithm that uses training datasets to find the k closest relatives of future examples.
o When k-NN is used in classification, a data point is placed in the category of its nearest neighbors: if k = 1, it is assigned to the class of its single nearest neighbor; for larger k, the class is decided by a plurality vote of its neighbors.

Decision Tree

o A decision tree is a supervised learning algorithm that is well suited to classification problems, as it is able to order classes at a precise level. It works like a flow chart, separating data points into two similar categories at a time, from the "tree trunk" to "branches" to "leaves," where the categories become progressively more specific. This creates categories within categories, allowing for organic classification with limited human supervision.

Random Forest
o The random forest algorithm is an extension of the decision tree: you first construct a multitude of decision trees from the training data, then combine their predictions as a "random forest."
o Essentially, it aggregates the votes of many trees to classify each new example. Random forest models are helpful because they remedy the single decision tree's tendency to "force" data points into a category unnecessarily, i.e., to overfit.

K-NN Classification

o K-Nearest Neighbour is one of the simplest machine learning algorithms, based on the supervised learning technique.
o The K-NN algorithm assumes similarity between the new case/data and the available cases, and puts the new case into the category most similar to the available categories.
o The K-NN algorithm stores all the available data and classifies a new data point based on similarity. This means that when new data appears, it can easily be classified into a well-suited category using the K-NN algorithm.
o The K-NN algorithm can be used for regression as well as classification, but it is mostly used for classification problems.
o K-NN is a non-parametric algorithm, which means it makes no assumption about the underlying data.
o It is also called a lazy learner algorithm because it does not learn from the training set immediately; instead, it stores the dataset and performs an action on it at classification time.
o At the training phase, the KNN algorithm just stores the dataset; when it gets new data, it classifies that data into the category most similar to the new data.
o Example: Suppose we have an image of a creature that looks similar to both a cat and a dog, and we want to know whether it is a cat or a dog. For this identification we can use the KNN algorithm, since it works on a similarity measure. Our KNN model will find the features of the new image most similar to the cat and dog images and, based on the most similar features, put it in either the cat or the dog category.

Why do we need a K-NN Algorithm?

Suppose there are two categories, Category A and Category B, and we have a new data point x1. Which of these categories does the data point belong to? To solve this type of problem, we need a K-NN algorithm. With the help of K-NN, we can easily identify the category or class of a particular data point. Consider the below diagram:
How does K-NN work?

The K-NN working can be explained on the basis of the below algorithm:

o Step-1: Select the number K of neighbors.
o Step-2: Calculate the Euclidean distance between the new data point and each training point.
o Step-3: Take the K nearest neighbors according to the calculated Euclidean distance.
o Step-4: Among these K neighbors, count the number of data points in each category.
o Step-5: Assign the new data point to the category for which the number of neighbors is maximum.
o Step-6: Our model is ready.

Suppose we have a new data point and we need to put it in the required category. Consider the below image:

o Firstly, we will choose the number of neighbors: k = 5.
o Next, we will calculate the Euclidean distance between the data points. The Euclidean distance is the distance between two points, familiar from geometry. For points (x1, y1) and (x2, y2) it can be calculated as:

d = sqrt((x2 - x1)^2 + (y2 - y1)^2)

o By calculating the Euclidean distance we get the nearest neighbors: three nearest neighbors in category A and two nearest neighbors in category B. Consider the below image:
o Since 3 of the 5 nearest neighbors are from category A, the new data point must belong to category A.
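
A minimal sketch of this whole procedure, assuming scikit-learn and an invented two-feature dataset with categories A and B:

    from sklearn.neighbors import KNeighborsClassifier

    # Invented training data: [x, y] coordinates labeled 'A' or 'B'
    X_train = [[1, 2], [2, 3], [3, 3], [6, 5], [7, 7], [8, 6]]
    y_train = ['A', 'A', 'A', 'B', 'B', 'B']

    # k = 5 neighbors; Euclidean distance is the default metric
    knn = KNeighborsClassifier(n_neighbors=5)
    knn.fit(X_train, y_train)

    # The new point's 5 nearest neighbors are 3 A's and 2 B's -> class A
    print(knn.predict([[3, 4]]))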

Advantages of KNN Algorithm:


o It is simple to implement.
o It is robust to noisy training data.
o It can be more effective if the training data is large.

Disadvantages of KNN Algorithm:


o The value of K always needs to be determined, which can sometimes be complex.
o The computation cost is high because the distance to every training sample must be calculated.

Support Vector Machines

A support vector machine (SVM) uses algorithms to train and classify data within degrees of polarity, taking it a degree beyond X/Y prediction.

For a simple visual explanation, we'll use two tags, red and blue, with two data features, X and Y, and train our classifier to output an X/Y coordinate as either red or blue.

The SVM then assigns a hyperplane that best separates the tags. In two dimensions this is simply a line: anything on one side of the line is red, and anything on the other side is blue. In sentiment analysis, for example, the two sides would be positive and negative.

To make the classifier as accurate as possible, the best hyperplane is the one with the largest margin, that is, the largest distance between it and the nearest points of each tag:

However, as data sets become more complex, it may not be possible to draw a single straight line to classify the data into two camps:

With SVM, such data can still be separated. Imagine the data above in three dimensions, with a Z-axis added, so that the two classes can be divided by a plane. Mapped back to two dimensions, the best hyperplane becomes a circle:
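
A minimal sketch of this idea, assuming scikit-learn; the RBF kernel plays the role of the implicit mapping to a higher dimension, so the boundary learned in the original 2-D space can be a circle:

    import numpy as np
    from sklearn.svm import SVC

    # Invented toy data: one class clustered at the origin,
    # the other in a ring around it (not separable by a straight line)
    rng = np.random.default_rng(0)
    inner = rng.normal(0, 0.5, size=(20, 2))
    angles = rng.uniform(0, 2 * np.pi, size=20)
    outer = np.c_[3 * np.cos(angles), 3 * np.sin(angles)]
    X = np.vstack([inner, outer])
    y = ['blue'] * 20 + ['red'] * 20

    # An RBF kernel lets the SVM separate the ring from the cluster
    clf = SVC(kernel='rbf')
    clf.fit(X, y)
    print(clf.predict([[0, 0], [3, 0]]))  # expected: ['blue' 'red']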

Decision Tree

o Decision Tree is a Supervised learning technique that can be used for both classification and Regression problems, but mostly it
is preferred for solving Classification problems. It is a tree-structured classifier, where internal nodes represent the features of a
dataset, branches represent the decision rules and each leaf node represents the outcome.
o In a decision tree, there are two types of nodes: the Decision Node and the Leaf Node. Decision nodes are used to make decisions and have multiple branches, whereas leaf nodes are the outputs of those decisions and do not contain any further branches.
o The decisions or the test are performed on the basis of features of the given dataset.
o It is a graphical representation for getting all the possible solutions to a problem/decision based on given conditions.
o It is called a decision tree because, similar to a tree, it starts with the root node, which expands on further branches and constructs
a tree-like structure.
o In order to build a tree, we use the CART algorithm, which stands for Classification and Regression Tree algorithm.
o A decision tree simply asks a question and, based on the answer (Yes/No), splits further into subtrees.
o Below diagram explains the general structure of a decision tree:

Note: A decision tree can contain categorical data (YES/NO) as well as numeric data.
Why use Decision Trees?

There are various algorithms in Machine learning, so choosing the best algorithm for the given dataset and problem is the main point to
remember while creating a machine learning model. Below are the two reasons for using the Decision tree:

o Decision Trees usually mimic human thinking ability while making a decision, so it is easy to understand.
o The logic behind the decision tree can be easily understood because it shows a tree-like structure.

Decision Tree Terminologies


 Root Node: Root node is from where the decision tree starts. It represents the entire dataset, which
further gets divided into two or more homogeneous sets.

 Leaf Node: Leaf nodes are the final output nodes; the tree cannot be segregated further after reaching a leaf node.

 Splitting: Splitting is the process of dividing the decision node/root node into sub-nodes according to
the given conditions.

 Branch/Sub-Tree: A subtree formed by splitting a node of the tree.

 Pruning: Pruning is the process of removing the unwanted branches from the tree.

 Parent/Child node: The root node of the tree is called the parent node, and other nodes are called
the child nodes.

How does the Decision Tree algorithm Work?

In a decision tree, for predicting the class of the given dataset, the algorithm starts from the root node of the tree. This algorithm compares
the values of root attribute with the record (real dataset) attribute and, based on the comparison, follows the branch and jumps to the next
node.

For the next node, the algorithm again compares the attribute value with the other sub-nodes and moves further. It continues this process until it reaches a leaf node of the tree. The complete process can be better understood using the below algorithm:

o Step-1: Begin the tree with the root node, say S, which contains the complete dataset.
o Step-2: Find the best attribute in the dataset using an Attribute Selection Measure (ASM).
o Step-3: Divide S into subsets that contain the possible values of the best attribute.
o Step-4: Generate the decision tree node that contains the best attribute.
o Step-5: Recursively make new decision trees using the subsets of the dataset created in Step-3. Continue this process until a stage is reached where the nodes cannot be classified further; these final nodes are called leaf nodes.

Example: Suppose there is a candidate who has a job offer and wants to decide whether he should accept it or not. To solve this problem, the decision tree starts with the root node (the Salary attribute, selected by ASM). The root node splits further into the next decision node (distance from the office) and one leaf node, based on the corresponding labels. The next decision node further splits into one decision node (cab facility) and one leaf node. Finally, the decision node splits into two leaf nodes (Accepted offer and Declined offer). Consider the below diagram:
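
A minimal sketch of this, assuming scikit-learn (whose decision trees implement CART) and an invented, pre-encoded version of the job-offer data:

    from sklearn.tree import DecisionTreeClassifier

    # Hypothetical encoded job-offer data:
    # columns = [salary_high, office_near, cab_facility], each 0/1
    X = [[1, 1, 1], [1, 1, 0], [1, 0, 1],
         [1, 0, 0], [0, 1, 1], [0, 0, 0]]
    y = ['Accepted', 'Accepted', 'Accepted',
         'Declined', 'Declined', 'Declined']

    # criterion='gini' is the CART default; 'entropy' uses information gain
    tree = DecisionTreeClassifier(criterion='gini')
    tree.fit(X, y)

    # High salary, nearby office, no cab facility
    print(tree.predict([[1, 1, 0]]))  # expected: ['Accepted']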

Attribute Selection Measures

While implementing a decision tree, the main issue is how to select the best attribute for the root node and for the sub-nodes. To solve such problems there is a technique called the Attribute Selection Measure, or ASM. With this measure, we can easily select the best attribute for the nodes of the tree. There are two popular ASM techniques:

o Information Gain
o Gini Index

1. Information Gain:

o Information gain is the measurement of changes in entropy after the segmentation of a dataset based on an attribute.
o It calculates how much information a feature provides us about a class.
o According to the value of information gain, we split the node and build the decision tree.
o A decision tree algorithm always tries to maximize the value of information gain, and a node/attribute having the highest
information gain is split first. It can be calculated using the below formula:

Information Gain = Entropy(S) - [(Weighted Avg) * Entropy(each feature)]

where Entropy(S) = -∑j Pj log2(Pj) is the impurity of set S, and Pj is the proportion of examples in S that belong to class j.

2. Gini Index:

o Gini index is a measure of impurity or purity used while creating a decision tree in the CART(Classification and Regression Tree)
algorithm.
o An attribute with a low Gini index should be preferred over one with a high Gini index.
o It only creates binary splits, and the CART algorithm uses the Gini index to create binary splits.
o Gini index can be calculated using the below formula:

Gini Index = 1 - ∑j Pj^2
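
A small plain-Python illustration of both measures, using an invented parent node of 10 "Yes" and 4 "No" examples and one hypothetical candidate split:

    import math

    def entropy(counts):
        # Entropy(S) = -sum_j Pj * log2(Pj), in bits
        total = sum(counts)
        return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

    def gini(counts):
        # Gini Index = 1 - sum_j Pj^2
        total = sum(counts)
        return 1 - sum((c / total) ** 2 for c in counts)

    parent = [10, 4]                # 10 Yes, 4 No
    children = [[7, 1], [3, 3]]     # a hypothetical split into two subsets

    n = sum(parent)
    weighted = sum(sum(c) / n * entropy(c) for c in children)
    info_gain = entropy(parent) - weighted

    print(f"Entropy(S)       = {entropy(parent):.3f}")
    print(f"Information Gain = {info_gain:.3f}")
    print(f"Gini Index(S)    = {gini(parent):.3f}")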


Naïve Bayes Classifier Algorithm

o The Naïve Bayes algorithm is a supervised learning algorithm based on Bayes' theorem and used for solving classification problems.
o It is mainly used in text classification with high-dimensional training datasets.
o The Naïve Bayes classifier is one of the simplest and most effective classification algorithms; it helps build fast machine learning models that can make quick predictions.
o It is a probabilistic classifier, which means it predicts on the basis of the probability of an object.
o Some popular applications of the Naïve Bayes algorithm are spam filtering, sentiment analysis, and classifying articles.

Why is it called Naïve Bayes?


o The name Naïve Bayes is composed of two words, Naïve and Bayes, which can be described as:

o Naïve: It is called naïve because it assumes that the occurrence of a certain feature is independent of the occurrence of other features. For example, if a fruit is identified on the basis of color, shape, and taste, then a red, spherical, and sweet fruit is recognized as an apple; each feature individually contributes to identifying it as an apple, without depending on the others.

o Bayes: It is called Bayes because it depends on the principle of Bayes' theorem.

Bayes' Theorem:

o Bayes' theorem, also known as Bayes' rule or Bayes' law, is used to determine the probability of a hypothesis with prior knowledge. It depends on conditional probability.

o The formula for Bayes' theorem is given as:

P(A|B) = P(B|A) * P(A) / P(B)

Where,

P(A|B) is the Posterior probability: the probability of hypothesis A given the observed event B.

P(B|A) is the Likelihood probability: the probability of the evidence given that hypothesis A is true.

P(A) is the Prior probability: the probability of the hypothesis before observing the evidence.

P(B) is the Marginal probability: the probability of the evidence.

Working of Naïve Bayes' Classifier:

Working of Naïve Bayes' Classifier can be understood with the help of the below example:

Suppose we have a dataset of weather conditions and a corresponding target variable "Play". Using this dataset, we need to decide whether we should play on a particular day according to the weather conditions. To solve this problem, we need to follow the below steps:

1. Convert the given dataset into frequency tables.


2. Generate Likelihood table by finding the probabilities of given features.
3. Now, use Bayes theorem to calculate the posterior probability.

Problem: If the weather is sunny, should the player play or not?

Solution: To solve this, first consider the below dataset:

    Outlook    Play

0 Rainy Yes

1 Sunny Yes

2 Overcast Yes

3 Overcast Yes

4 Sunny No

5 Rainy Yes

6 Sunny Yes

7 Overcast Yes

8 Rainy No

9 Sunny No

10 Sunny Yes

11 Rainy No

12 Overcast Yes

13 Overcast Yes

Frequency table for the Weather Conditions:

Weather Yes No

Overcast 5 0
Rainy 2 2

Sunny 3 2

Total 10 4

Applying Bayes' theorem:

P(Yes|Sunny)= P(Sunny|Yes)*P(Yes)/P(Sunny)

P(Sunny|Yes)= 3/10= 0.3

P(Sunny)= 0.35

P(Yes)=0.71

So P(Yes|Sunny) = 0.3*0.71/0.35= 0.60

P(No|Sunny)= P(Sunny|No)*P(No)/P(Sunny)

P(Sunny|No) = 2/4 = 0.5

P(No)= 0.29

P(Sunny)= 0.35

So P(No|Sunny)= 0.5*0.29/0.35 = 0.41

As we can see from the above calculation, P(Yes|Sunny) > P(No|Sunny).

Hence, on a sunny day, the player can play the game.
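
A minimal plain-Python sketch that reproduces this calculation from the counts in the frequency table above (the exact posteriors are 0.60 and 0.40; the 0.41 in the text comes from rounding the intermediate probabilities):

    # Counts from the frequency table (14 days total)
    total, yes, no = 14, 10, 4
    sunny, sunny_yes, sunny_no = 5, 3, 2

    p_yes, p_no = yes / total, no / total      # P(Yes), P(No)
    p_sunny = sunny / total                    # P(Sunny)
    p_sunny_given_yes = sunny_yes / yes        # P(Sunny|Yes) = 0.3
    p_sunny_given_no = sunny_no / no           # P(Sunny|No)  = 0.5

    # Bayes' theorem: P(class|Sunny) = P(Sunny|class) * P(class) / P(Sunny)
    p_yes_given_sunny = p_sunny_given_yes * p_yes / p_sunny
    p_no_given_sunny = p_sunny_given_no * p_no / p_sunny

    print(f"P(Yes|Sunny) = {p_yes_given_sunny:.2f}")  # 0.60
    print(f"P(No|Sunny)  = {p_no_given_sunny:.2f}")   # 0.40
    print("Play" if p_yes_given_sunny > p_no_given_sunny else "Do not play")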

Advantages of Naïve Bayes Classifier:

o Naïve Bayes is one of the fastest and easiest ML algorithms for predicting the class of a dataset.
o It can be used for binary as well as multi-class classification.
o It performs well in multi-class predictions compared to many other algorithms.
o It is the most popular choice for text classification problems.

Disadvantages of Naïve Bayes Classifier:

o Naive Bayes assumes that all features are independent or unrelated, so it cannot learn the relationship between features.

Applications of Naïve Bayes Classifier:

o It is used for Credit Scoring.


o It is used in medical data classification.
o It can be used in real-time predictions because Naïve Bayes Classifier is an eager learner.
o It is used in Text classification such as Spam filtering and Sentiment analysis.

Types of Naïve Bayes Model:

There are three types of Naive Bayes Model, which are given below:

o Gaussian: The Gaussian model assumes that features follow a normal distribution. This means that if predictors take continuous values instead of discrete ones, the model assumes these values are sampled from a Gaussian distribution.
o Multinomial: The Multinomial Naïve Bayes classifier is used when the data is multinomially distributed. It is primarily used for document classification problems, i.e., deciding which category a particular document belongs to, such as sports, politics, or education. The classifier uses the frequency of words as the predictors.
o Bernoulli: The Bernoulli classifier works similarly to the Multinomial classifier, but the predictors are independent Boolean variables, such as whether a particular word is present in a document or not. This model is also well known for document classification tasks.
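
A minimal sketch of the multinomial variant on an invented toy corpus, assuming scikit-learn (GaussianNB and BernoulliNB are the continuous and Boolean counterparts):

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    # Invented toy documents with category labels
    docs = ["the team won the match", "the election results are in",
            "the striker scored a goal", "parliament passed the bill"]
    labels = ["sports", "politics", "sports", "politics"]

    # Multinomial NB uses word frequencies as predictors
    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(docs)

    clf = MultinomialNB()
    clf.fit(X, labels)

    new_doc = vectorizer.transform(["the striker won the match"])
    print(clf.predict(new_doc))  # expected: ['sports']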

Random Forest Algorithm


Random Forest is a popular machine learning algorithm that belongs to the supervised learning technique. It can be used for both
Classification and Regression problems in ML. It is based on the concept of ensemble learning, which is a process of combining multiple
classifiers to solve a complex problem and to improve the performance of the model.

As the name suggests, "Random Forest is a classifier that contains a number of decision trees on various subsets of the given dataset and takes the average to improve the predictive accuracy of that dataset." Instead of relying on one decision tree, the random forest takes the prediction from each tree and, based on the majority vote of those predictions, predicts the final output.

A greater number of trees in the forest leads to higher accuracy and helps prevent overfitting.

The below diagram explains the working of the Random Forest algorithm:

Note: To better understand the Random Forest Algorithm, you should have knowledge of the Decision Tree Algorithm.

Assumptions for Random Forest

Since the random forest combines multiple trees to predict the class of the dataset, it is possible that some decision trees may predict the
correct output, while others may not. But together, all the trees predict the correct output. Therefore, below are two assumptions for a better
Random forest classifier:
o There should be some actual (informative) values in the feature variables of the dataset, so that the classifier can predict accurate results rather than guesses.
o The predictions from the individual trees must have very low correlations with each other.

Why use Random Forest?

Below are some points that explain why we should use the Random Forest algorithm:

<="" li="">

o It takes less training time as compared to other algorithms.


o It predicts output with high accuracy, and it runs efficiently even on large datasets.
o It can also maintain accuracy when a large proportion of data is missing.

How does Random Forest algorithm work?

Random Forest works in two phases: the first is to create the random forest by combining N decision trees, and the second is to make predictions with each tree created in the first phase.

The Working process can be explained in the below steps and diagram:

Step-1: Select random K data points from the training set.

Step-2: Build the decision trees associated with the selected data points (Subsets).

Step-3: Choose the number N for decision trees that you want to build.

Step-4: Repeat Step 1 & 2.

Step-5: For new data points, find the predictions of each decision tree, and assign the new data points to the category that wins the majority
votes.

The working of the algorithm can be better understood by the below example:

Example: Suppose there is a dataset that contains multiple fruit images, and this dataset is given to the Random Forest classifier. The dataset is divided into subsets and given to each decision tree. During the training phase, each decision tree produces a prediction result, and when a new data point occurs, the Random Forest classifier predicts the final decision based on the majority of the results. Consider the below image:
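
A minimal sketch of the two phases, assuming scikit-learn; n_estimators is the number N of trees above, and the data is generated rather than real fruit images:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    # Invented toy dataset standing in for the fruit-image features
    X, y = make_classification(n_samples=100, n_features=4, random_state=0)

    # Phase 1: build N = 10 trees, each on a random bootstrap subset of the data
    forest = RandomForestClassifier(n_estimators=10, random_state=0)
    forest.fit(X, y)

    # Phase 2: every tree votes; the majority vote is the final prediction
    print(forest.predict(X[:3]))
    print(forest.predict_proba(X[:3]))  # fraction of trees voting for each class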
Applications of Random Forest

There are mainly four sectors where Random Forest is mostly used:

1. Banking: The banking sector mostly uses this algorithm for identifying loan risk.
2. Medicine: With the help of this algorithm, disease trends and risks of the disease can be identified.
3. Land Use: We can identify the areas of similar land use by this algorithm.
4. Marketing: Marketing trends can be identified using this algorithm.

Advantages of Random Forest


o Random Forest is capable of performing both Classification and Regression tasks.
o It is capable of handling large datasets with high dimensionality.
o It enhances the accuracy of the model and prevents the overfitting issue.

Disadvantages of Random Forest


o Although random forest can be used for both classification and regression tasks, it is less suitable for regression tasks.
