Presentation 3

Unsupervised learning is a machine learning approach that utilizes unlabeled data for tasks such as clustering and dimensionality reduction. Key algorithms include k-Means, hierarchical clustering, and various dimensionality reduction techniques like PCA and t-SNE. Additionally, reinforcement learning and ensemble learning are discussed, highlighting their applications and algorithms, including Q-learning and boosting methods.

Unsupervised Learning

• Unsupervised learning is a type of machine learning where the algorithm is trained on unlabeled data, meaning there are no predefined outputs or target variables.
• It is mainly used for clustering, dimensionality reduction, anomaly detection, and association rule learning.
• Below is a list of key unsupervised learning algorithms:
1. Unsupervised Learning Algorithms

• Clustering (grouping similar data points)
• Dimensionality Reduction (reducing feature space while preserving information)
Sigmoid Function in Unsupervised Learning:

• It maps values between 0 and 1, making it useful in probabilistic models.
• Used in autoencoders, GMMs, and RBMs for representation learning.
Unsupervised Learning Algorithms

Clustering Algorithms (grouping similar data points):
• k-Means Clustering
• Hierarchical Clustering
• DBSCAN (Density-Based Spatial Clustering)
• Gaussian Mixture Models (GMM)
• Mean Shift Clustering
• Affinity Propagation
Unsupervised Learning Algorithms

Dimensionality Reduction Models (reducing feature space while preserving information):
• Principal Component Analysis (PCA)
• Linear Discriminant Analysis (LDA)
• t-Distributed Stochastic Neighbor Embedding (t-SNE)
• Autoencoders
• Independent Component Analysis (ICA)
• Factor Analysis
k-Means Clustering in Unsupervised Learning

• k-Means is a centroid-based clustering algorithm that partitions data into K clusters.
• It minimizes the sum of squared distances between data points and their cluster centroids (the within-cluster sum of squares, J = Σ_k Σ_{x∈C_k} ‖x - μ_k‖²).
k-Means Clustering in Unsupervised Learning

Sl. No.   Age   Amount
C1        20    500
C2        40    1000
C3        30    800
C4        18    300
C5        28    200
C6        35    600
C7        45    100
C8        50    2000
k-Means Clustering in Unsupervised Learning

Suppose the initial centroids are chosen as C1 = {20, 500} and C2 = {40, 1000}.

Sl. No.   Age   Amount
C1        20    500
C2        40    1000
C3        30    800
C4        18    300
C5        28    200
C6        35    600
C7        45    100
C8        50    2000

First calculate the distance between each data point and the two centroids, assign each point to its nearest centroid, then recompute each centroid as the mean of its assigned points, as sketched below.
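A minimal sketch of that single assignment step in NumPy (assuming plain Euclidean distance on the raw Age/Amount values; in practice the columns would usually be scaled first, since Amount dominates the distance):

import numpy as np

# Toy data from the table above: (Age, Amount) for C1..C8
points = np.array([[20, 500], [40, 1000], [30, 800], [18, 300],
                   [28, 200], [35, 600], [45, 100], [50, 2000]])
centroids = np.array([[20, 500], [40, 1000]])  # initial centroids C1, C2

# Assign each point to its nearest centroid (Euclidean distance)
dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
labels = dists.argmin(axis=1)

# Recompute each centroid as the mean of its assigned points
new_centroids = np.array([points[labels == k].mean(axis=0) for k in range(2)])
print(labels)         # nearest-centroid index (0 or 1) for each row
print(new_centroids)  # updated centroids after one k-Means iteration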
k-Means Clustering in Unsupervised Learning

from sklearn.cluster import KMeans
import numpy as np

X = np.array([[1, 2], [2, 3], [3, 4], [5, 7], [6, 8], [7, 9]])

# Fit k-Means with two clusters
kmeans = KMeans(n_clusters=2, random_state=0, n_init=10).fit(X)
print(kmeans.cluster_centers_)  # coordinates of the two cluster centroids
Hierarchical Clustering in Unsupervised Learning

• It creates a hierarchical structure of clusters, visualized with dendrograms.
• It can be agglomerative (bottom-up) or divisive (top-down).

Fields of Application:
• Document clustering
• Social network analysis
• Gene expression analysis

For example, take the points 1, 5, 8, 10, 19, 20:
• {19, 20} is merged first, since its points differ by only 1.
• {8, 10} is merged next.
• Merging each remaining point into its nearest cluster leaves two clusters: {19, 20} and {1, {5, {8, 10}}}.
Hierarchical Clustering in Unsupervised Learning

from scipy.cluster.hierarchy import dendrogram, linkage
import matplotlib.pyplot as plt
import numpy as np

X = np.array([[1, 2], [2, 3], [3, 4], [5, 7], [6, 8], [7, 9]])

# Perform agglomerative hierarchical clustering with Ward linkage
linked = linkage(X, method='ward')

# Plot the dendrogram of the merge hierarchy
dendrogram(linked)
plt.show()
Machine Learning

• Supervised Learning
• Unsupervised Learning
• Reinforcement Learning
• Ensemble Learning
Reinforcement Learning

• Reinforcement Learning (RL) is a type of machine learning where an agent learns to make decisions by interacting with an environment.
• The agent receives rewards or penalties based on its actions and aims to maximize the cumulative reward over time.
Reinforcement Learning Algorithms in MACHINE LEARNING

Key Concepts in RL:
• Agent – The learner or decision-maker.
• Environment – The system the agent interacts with.
• State (S) – A specific situation in the environment.
• Action (A) – The choices available to the agent.
• Reward (R) – Feedback given to the agent for an action.
• Policy (π) – The strategy the agent follows to decide actions.
• Value Function (V) – Measures the expected return of a state.
• Q-Value (Q-function) – Measures the expected return of an action in a state.
Reinforcement Learning Algorithms in MACHINE LEARNING

Fields of Use of RL (with Examples):
• Gaming – AI agents learn to play video games (e.g., AlphaGo, Dota 2 AI).
• Robotics – Robots learn motor skills (e.g., Boston Dynamics robots).
• Finance – RL optimizes stock trading strategies.
• Healthcare – AI suggests personalized treatment plans.
• Autonomous Vehicles – Self-driving cars learn navigation (e.g., Tesla’s Autopilot).
Reinforcement Learning Algorithms

• Q-Learning
• Deep Q-Networks (DQN)
• Policy Gradient Methods
• Actor-Critic Models
• Monte Carlo Methods
• SARSA (State-Action-Reward-State-Action)
• Proximal Policy Optimization (PPO)
• Trust Region Policy Optimization (TRPO)
Reinforcement Learning Algorithms

1. Model-Free Algorithms
• These do not require knowledge of the environment's dynamics.
• Q-Learning (off-policy) – Example: a self-driving car learns to drive by trial and error (a minimal sketch follows this list).
• SARSA (State-Action-Reward-State-Action) (on-policy) – Example: a robot learns to walk while continuously updating its policy.
• Deep Q-Network (DQN) – Example: AI playing Atari games using neural networks.
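A minimal tabular Q-Learning sketch (an illustrative toy example, not from the slides; the one-dimensional corridor environment, learning rate, and discount factor are all assumptions):

import numpy as np

# Toy 1-D corridor: states 0..4, reward 1 for reaching state 4
n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.9
rng = np.random.default_rng(0)

for _ in range(500):                 # episodes
    s = 0
    while s != 4:
        # Behave randomly; Q-Learning is off-policy, so it still
        # learns the values of the greedy policy
        a = rng.integers(n_actions)
        s_next = min(s + 1, 4) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == 4 else 0.0
        # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q)  # learned action values; "right" should dominate in every state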
Reinforcement Learning Algorithms

2. Policy-Based Algorithms
• These directly optimize the policy.
• REINFORCE – Example: optimizing robotic arm movements in a factory.
• Actor-Critic Methods – Example: AI in gaming, where one network suggests actions (Actor) and another evaluates them (Critic).

3. Model-Based Algorithms
• These use a model to predict future states.
• Monte Carlo Tree Search (MCTS) – Example: AlphaGo, which defeated humans in the game of Go.
Reinforcement Learning Algorithms

Sigmoid Function:
The sigmoid function is a mathematical function used in machine learning and neural networks.

Formula: σ(x) = 1 / (1 + e^(-x))

• Range: (0, 1)
• Use Case: Converts any input into a probability-like value.
• Example: Used in logistic regression and binary classification (e.g., email spam detection).
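A minimal numeric sketch of the sigmoid in NumPy (the sample inputs are arbitrary):

import numpy as np

def sigmoid(x):
    # sigma(x) = 1 / (1 + e^(-x)): squashes any real input into (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

print(sigmoid(np.array([-5.0, 0.0, 5.0])))  # ~[0.0067, 0.5, 0.9933]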
Ensemble Learning

• Ensemble learning is a machine learning technique where multiple models (often called "weak learners" or "base models") are trained and combined to solve the same problem, improving the overall accuracy and robustness of predictions.
• The main idea is that a group of weak models, when combined properly, can perform better than any single strong model.
Ensemble Learning in MACHINE LEARNING

What is Ensemble Learning?
Ensemble Learning in MACHINE LEARNING

Types of Ensemble Learning:

1. Bagging (Bootstrap Aggregating)
2. Boosting
3. Stacking (Stacked Generalization)
Ensemble Learning in MACHINE LEARNING

Types of Ensemble Learning:

1. Bagging (Bootstrap Aggregating):
• Bagging reduces variance by training multiple models on different subsets of the data.
• Example: Random Forest, where multiple decision trees are trained on random subsets of the data, and their outputs are averaged or voted upon (see the sketch below).
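A minimal Random Forest sketch with scikit-learn (the toy dataset and parameters are assumptions for illustration):

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 100 decision trees, each trained on a bootstrap sample of the data;
# their votes are aggregated into the final prediction
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_train, y_train)
print("test accuracy:", forest.score(X_test, y_test))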
Ensemble Learning in MACHINE LEARNING

Types of Ensemble Learning:

2. Boosting:
• Boosting reduces bias by training models sequentially, where each model focuses on the errors made by the previous one.
• Example: AdaBoost, Gradient Boosting, XGBoost, etc. (see the sketch below).
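A minimal boosting sketch using scikit-learn's GradientBoostingClassifier (the dataset and parameters are illustrative assumptions):

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Trees are added sequentially; each new tree fits the errors
# left by the ensemble built so far
booster = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1,
                                     random_state=0)
booster.fit(X_train, y_train)
print("test accuracy:", booster.score(X_test, y_test))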
Ensemble Learning in MACHINE LEARNING

Types of Ensemble Learning:

3. Stacking (Stacked Generalization):
• Stacking combines multiple diverse models and trains a meta-model to learn how to best combine their predictions.
• Example: Using a logistic regression model as a meta-learner over multiple base learners like decision trees, SVMs, and neural networks (see the sketch below).
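A minimal stacking sketch matching the example above: logistic regression as the meta-learner over a decision tree and an SVM (all parameters are illustrative assumptions):

from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Base learners make predictions; the logistic-regression meta-model
# learns how to best combine them
stack = StackingClassifier(
    estimators=[('tree', DecisionTreeClassifier(random_state=0)),
                ('svm', SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression())
stack.fit(X_train, y_train)
print("test accuracy:", stack.score(X_test, y_test))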
Ensemble Learning in MACHINE LEARNING

Sigmoid Function in Ensemble Learning:

The sigmoid function is commonly used in ensemble learning when dealing with logistic regression, boosting, and neural networks. It transforms any real-valued number into a range between 0 and 1, making it useful for probability estimation.

Sigmoid Function: σ(x) = 1 / (1 + e^(-x))

Where:
• x is the input to the function (often a linear combination of model predictions).
• e is Euler’s number (~2.718).
Ensemble Learning in MACHINE LEARNING

Overfitting:
• Split 100% of the dataset into 70% for training and 30% for testing.
• If the model overfits the training data, it will not work properly on the test set, as sketched below.
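A minimal sketch of the 70/30 split and the overfitting symptom (scikit-learn; the dataset and the unpruned tree are illustrative assumptions):

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=8, random_state=0)
# 70% of the data for training, 30% held out for testing
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

# An unconstrained tree can memorize the training set...
tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("train accuracy:", tree.score(X_train, y_train))  # typically 1.0
print("test accuracy:", tree.score(X_test, y_test))     # noticeably lower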
Ensemble Learning in MACHINE LEARNING

Applications of Ensemble Learning:

1. Image Recognition & Computer Vision
• Facial recognition (e.g., an ensemble of CNN models)
• Object detection (e.g., using ensemble-based deep learning models)

2. Healthcare & Medical Diagnosis
• Cancer detection (e.g., combining multiple machine learning models for accurate classification)
• Disease prediction using patient data

3. Finance & Stock Market Prediction
• Credit scoring and risk analysis (e.g., an ensemble of decision trees)
• Algorithmic trading using multiple machine learning models

4. Natural Language Processing (NLP)
• Sentiment analysis (e.g., combining BERT, LSTMs, and decision trees)
• Spam detection (e.g., bagging-based classification)

5. Fraud Detection
• Detecting fraudulent transactions using ensemble methods like Random Forest and XGBoost

6. Autonomous Vehicles
• Self-driving cars use ensemble learning to combine data from multiple sensors
Deep Learning

• Deep Learning is a subset of Machine Learning that uses artificial neural networks to model and solve complex problems.
• Inspired by the human brain, deep learning algorithms consist of multiple layers of neurons that process data hierarchically.
• It is widely used in image recognition, natural language processing, robotics, healthcare, finance, and many other fields.
Deep Learning Algorithms

• Multi-Layer Perceptron (MLP) (a minimal sketch follows this list)
• Convolutional Neural Networks (CNN)
• Recurrent Neural Networks (RNN)
• Long Short-Term Memory (LSTM)
• Gated Recurrent Units (GRU)
• Transformers (BERT, GPT, T5)
• Generative Adversarial Networks (GAN)
• Variational Autoencoders (VAE)
• Deep Belief Networks (DBN)
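A minimal Multi-Layer Perceptron sketch with scikit-learn (the two hidden layer sizes and the toy make_moons dataset are assumptions chosen for illustration):

from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Two hidden layers of neurons process the data hierarchically
mlp = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
mlp.fit(X_train, y_train)
print("test accuracy:", mlp.score(X_test, y_test))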
