5 SVM

The Support Vector Machine (SVM) algorithm aims to create the optimal hyperplane that separates different classes in n-dimensional space, using support vectors to determine this boundary. SVM can handle both linearly separable data with Linear SVM and non-linearly separable data with Non-Linear SVM by transforming the data into higher dimensions. Key concepts include hyperplanes, support vectors, margins, and kernels, which facilitate effective classification and handling of outliers.


The goal of the SVM algorithm is to create the best line or decision boundary that can segregate n-dimensional space into classes so that we can easily put a new data point in the correct category in the future. This best decision boundary is called a hyperplane.
SVM chooses the extreme points/vectors that help in creating the hyperplane. These extreme cases are called support vectors, and hence the algorithm is termed a Support Vector Machine.

Consider the diagram below, in which two different categories are classified using a decision boundary or hyperplane:
Suppose we see a strange cat that also has some features of dogs. If we want a model that can accurately identify whether it is a cat or a dog, such a model can be created using the SVM algorithm.

We first train our model with many images of cats and dogs so that it can learn the different features of cats and dogs, and then we test it on this strange creature.

The SVM creates a decision boundary between the two classes (cat and dog) and chooses the extreme cases (support vectors) from each. On the basis of these support vectors, it will classify the new example as a cat.
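A minimal sketch of this workflow using scikit-learn, assuming the cat and dog images have already been reduced to numeric feature vectors (the two features and every value below are hypothetical, for illustration only):

from sklearn.svm import SVC

# Hypothetical 2-feature descriptions of training images
# (e.g., ear pointiness, snout length) -- illustrative values only.
X_train = [[0.9, 0.2], [0.8, 0.3], [0.7, 0.1],   # cats
           [0.2, 0.9], [0.3, 0.8], [0.1, 0.7]]   # dogs
y_train = ["cat", "cat", "cat", "dog", "dog", "dog"]

clf = SVC(kernel="linear")
clf.fit(X_train, y_train)

# The "strange creature": mostly cat-like, with one dog-like feature.
print(clf.predict([[0.7, 0.4]]))   # -> ['cat']
print(clf.support_vectors_)        # the extreme points that define the boundary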
Types of SVM
Linear SVM: Linear SVM is used for linearly separable data: if a dataset can be classified into two classes using a single straight line, then such data is termed linearly separable, and the classifier used is called the Linear SVM classifier.

Non-linear SVM: Non-Linear SVM is used for non-linearly separable data: if a dataset cannot be classified using a straight line, then such data is termed non-linear, and the classifier used is called the Non-linear SVM classifier.
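With scikit-learn, for example, the two cases differ only in the choice of kernel; a minimal sketch on a synthetic dataset:

from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Concentric circles: a classic non-linearly separable dataset.
X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

linear_clf = SVC(kernel="linear").fit(X, y)  # Linear SVM: one straight line
rbf_clf = SVC(kernel="rbf").fit(X, y)        # Non-linear SVM: RBF kernel

# No straight line separates the circles, so the linear model scores poorly,
# while the RBF kernel separates them almost perfectly.
print(linear_clf.score(X, y), rbf_clf.score(X, y))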
Hyperplane and Support Vectors in the SVM algorithm:

Hyperplane: There can be multiple lines/decision boundaries to segregate the classes in n-dimensional space, but we need to find the best decision boundary that helps to classify the data points. This best boundary is known as the hyperplane of SVM.

The dimensions of the hyperplane depend on the features present in


the dataset, which means if there are 2 features, then hyperplane will be
a straight line. And if there are 3 features, then hyperplane will be a 2-
dimension plane.
Non-Linear SVM:
If data is linearly arranged, then we can separate it with a straight line, but for non-linear data we cannot draw a single straight line. Consider the image below:
To separate these data points, we need to add one more dimension. For linear data we used the two dimensions x and y, so for non-linear data we add a third dimension z. It can be calculated as:

z = x² + y²
Since we are now in 3-d space, the decision boundary looks like a plane parallel to the x-y plane. If we convert it back to 2-d space by setting z = 1, the boundary becomes a circle of radius 1 (x² + y² = 1):
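A small NumPy sketch of this trick, using made-up points on an inner circle and an outer ring (illustrative data only):

import numpy as np

rng = np.random.default_rng(0)
angles = rng.uniform(0, 2 * np.pi, 100)

# Inner-circle class vs. outer-ring class: not separable by a line in (x, y).
inner = np.c_[0.5 * np.cos(angles[:50]), 0.5 * np.sin(angles[:50])]
outer = np.c_[2.0 * np.cos(angles[50:]), 2.0 * np.sin(angles[50:])]

# Add the third dimension z = x^2 + y^2.
z_inner = (inner ** 2).sum(axis=1)   # all 0.25
z_outer = (outer ** 2).sum(axis=1)   # all 4.0

# In 3-d the two classes are now separable by the plane z = 1.
print(z_inner.max() < 1 < z_outer.min())   # True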
Support Vector Machine Terminology

Hyperplane: The hyperplane is the decision boundary used to separate data points of different classes in a feature space. For linear classification, this is a linear equation represented as w·x + b = 0.

Support Vectors: Support vectors are the data points closest to the hyperplane. These points are critical in determining the position of the hyperplane and the width of the margin.

Margin: The margin is the distance between the support vectors and the hyperplane. The primary goal of the SVM algorithm is to maximize this margin, as a wider margin typically results in better classification performance.

Kernel: The kernel is a mathematical function used in SVM to map input data into a higher-dimensional feature
space. This allows the SVM to find a hyperplane in cases where data points are not linearly separable in the
original space. Common kernel functions include linear, polynomial, radial basis function (RBF), and sigmoid.

Hard Margin: A hard margin refers to the maximum-margin hyperplane that perfectly separates
the data points of different classes without any misclassifications.

Soft Margin: When data contains outliers or is not perfectly separable, SVM uses the soft margin technique. This method introduces a slack variable for each data point to allow some misclassifications.
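In scikit-learn, for instance, this trade-off is controlled by the regularization parameter C: a very large C approximates a hard margin, while a small C yields a softer margin that tolerates more slack. A minimal sketch on made-up, slightly overlapping data:

from sklearn.svm import SVC

# Two classes whose last points stray toward each other -- not perfectly separable.
X = [[0, 0], [1, 1], [1.5, 0.5],   # class 0
     [2, 2], [3, 3], [0.5, 1.5]]   # class 1
y = [0, 0, 0, 1, 1, 1]

hard_ish = SVC(kernel="linear", C=1e6).fit(X, y)  # near-hard margin: errors heavily penalized
soft = SVC(kernel="linear", C=0.1).fit(X, y)      # soft margin: slack is cheap

# A softer margin typically leaves more points inside the margin,
# so more of them end up as support vectors.
print(len(hard_ish.support_), len(soft.support_))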
The equation for the linear hyperplane can be written as:

w·x + b = 0

where w is the weight vector normal to the hyperplane and b is the bias. The distance between a data point x_i and the decision boundary can be calculated as:

d_i = (w·x_i + b) / ||w||

where ||w|| is the Euclidean norm of the weight vector.
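A quick numeric check of these two formulas, fitting a linear SVM on tiny made-up data and computing the signed distances by hand:

import numpy as np
from sklearn.svm import SVC

X = np.array([[0.0, 0.0], [1.0, 1.0],   # class 0
              [3.0, 3.0], [4.0, 4.0]])  # class 1
y = np.array([0, 0, 1, 1])

clf = SVC(kernel="linear").fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]  # the w and b of w.x + b = 0

# Signed distance d_i = (w . x_i + b) / ||w|| for every point.
d = (X @ w + b) / np.linalg.norm(w)
print(d)  # the support vectors (1,1) and (3,3) lie at distance -1/||w|| and +1/||w||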
