
GAUSSIAN NAIVE BAYES

UNIT-III S4 SLO-1
INTRODUCTION TO NAIVE BAYES CLASSIFIER
Definition: Naive Bayes is a probabilistic classification algorithm based on Bayes’ Theorem with an assumption of independence among features.

Bayes’ Theorem: Provides a way to update the probability estimate for a hypothesis as more evidence is acquired.

Naivety Assumption: Assumes that the features are independent given the class label.

Formula: P(C|X) = P(X|C) · P(C) / P(X), where C is the class and X represents the features.

Type: Probabilistic classifier.

Independence Assumption: Features are independent given the class.
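A minimal Python sketch of the decision rule these bullets describe (the function and argument names here are illustrative, not from any library): the posterior for each class is the prior times the product of per-feature likelihoods, which is exactly where the independence assumption enters.

def naive_bayes_posteriors(x, priors, likelihood_fns):
    # x: feature vector; priors: {class: P(C)};
    # likelihood_fns: {class: [one P(x_i|C) function per feature]}.
    scores = {}
    for c, prior in priors.items():
        score = prior
        for xi, lik in zip(x, likelihood_fns[c]):
            score *= lik(xi)  # naive assumption: features independent given C
        scores[c] = score
    # Normalize by the evidence P(X) so the posteriors sum to 1.
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}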


GAUSSIAN NAIVE BAYES - OVERVIEW

Gaussian Naive Bayes: A variant of Naive Bayes where the features are assumed to follow a Gaussian (normal) distribution.

Feature Distribution: For each class, the features are modeled using a Gaussian distribution: X_i ~ N(μ_i, σ_i²), where μ_i is the mean and σ_i² is the variance of feature i in class C.

Objective: Calculate the posterior probability of a class given the features and classify based on the highest probability.

Assumption: Features are Gaussian-distributed.

Parameters: Mean and variance of each feature for each class.
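The Gaussian assumption fixes the form of each per-feature likelihood: P(X_i = x | C) is evaluated with the normal density. A one-line Python sketch (gaussian_pdf is an illustrative name):

import math

def gaussian_pdf(x, mu, var):
    # Normal density N(x; mu, var), used as the per-feature likelihood P(X_i|C)
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)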


TRAINING AND PREDICTION WITH GAUSSIAN NAIVE BAYES

Training:
Calculate the mean (μ) and variance (σ²) of each feature for each class from the training data.

Estimate the class prior probabilities P(C).

Prediction:
For a new data point, compute the posterior probability for each class using:

P(C|X) ∝ P(C) · ∏_i P(X_i|C)


Choose the class with the highest posterior probability.

Training: Compute mean and variance of features, estimate class priors.

Prediction: Use posterior probability to classify.
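Putting the two steps together, here is a compact from-scratch sketch (assuming NumPy arrays X of shape n_samples × n_features and a label vector y; fit_gaussian_nb and predict_gaussian_nb are illustrative names). Prediction works in log space because the product of many small densities underflows:

import numpy as np

def fit_gaussian_nb(X, y):
    # Training: estimate prior, per-feature means, and variances for each class.
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (
            len(Xc) / len(X),        # prior P(C)
            Xc.mean(axis=0),         # per-feature means μ
            Xc.var(axis=0) + 1e-9,   # per-feature variances σ² (small floor for stability)
        )
    return params

def predict_gaussian_nb(X, params):
    # Prediction: pick the class with the highest log-posterior for each row of X.
    classes = list(params)
    log_post = np.empty((len(X), len(classes)))
    for j, c in enumerate(classes):
        prior, mu, var = params[c]
        # log P(C) + Σ_i log N(x_i; μ_i, σ_i²)
        log_lik = -0.5 * (np.log(2 * np.pi * var) + (X - mu) ** 2 / var).sum(axis=1)
        log_post[:, j] = np.log(prior) + log_lik
    return np.array(classes)[log_post.argmax(axis=1)]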


APPLICATIONS AND ADVANTAGES OF GAUSSIAN NAIVE BAYES

Applications: Used in text classification (spam detection), medical diagnosis, and any scenario where feature independence is a reasonable assumption.

Advantages:
Simplicity and Efficiency: Fast to train and predict.

Good Performance with Large Datasets: Performs well even with relatively simple assumptions.

Limitations:
Independence Assumption: Assumes features are independent, which may not hold in real-world data.

Gaussian Assumption: Assumes features follow a Gaussian distribution, which might not always be true.

Strengths: Fast, simple, effective with large datasets.

Weaknesses: Assumes feature independence and Gaussian distribution.
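For completeness, the same model is available off the shelf. A short usage sketch with scikit-learn’s GaussianNB on the iris dataset, which stands in here for any continuous-feature classification problem:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GaussianNB()
model.fit(X_train, y_train)          # learns per-class means, variances, and priors
print(model.predict(X_test[:5]))     # predicted class labels
print(model.score(X_test, y_test))   # mean accuracy on held-out data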
