Presentation 3
Unsupervised Learning
• Unsupervised learning is a type of machine
learning where the algorithm is trained on
unlabeled data, meaning there are no
predefined outputs or target variables.
• It is mainly used for clustering,
dimensionality reduction, anomaly
detection, and association rule learning.
• Below is a list of key unsupervised learning
algorithms:
1. Unsupervised Learning Algorithms
• Dimensionality Reduction
(reducing feature space while preserving information)
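As an illustration of dimensionality reduction, here is a minimal PCA sketch; scikit-learn and the toy 3-D points are illustrative assumptions, not taken from the slide.

import numpy as np
from sklearn.decomposition import PCA

# Six hypothetical points in 3-D, reduced to 2-D while keeping most of the variance.
X = np.array([[1.0, 2.0, 3.0],
              [2.0, 3.1, 4.0],
              [3.0, 4.2, 5.1],
              [5.0, 7.0, 8.9],
              [6.0, 8.1, 10.0],
              [7.0, 9.0, 11.2]])

pca = PCA(n_components=2)
X_2d = pca.fit_transform(X)            # project onto the top 2 principal components
print(X_2d.shape)                      # (6, 2)
print(pca.explained_variance_ratio_)   # share of variance kept by each component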
Example data points for clustering (two feature values per item):
C1: 20, 500
C2: 40, 1000
C3: 30, 800
C4: 18, 300
C5: 28, 200
C6: 35, 600
C7: 45, 100
C8: 50, 2000
k-Means Clustering in Unsupervised Learning
X = np.array([[1, 2], [2, 3], [3, 4], [5, 7], [6, 8], [7, 9]])
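The line above only defines the sample points; a minimal runnable k-means sketch might look like the following (scikit-learn's KMeans is an assumption, it is not shown on the slide).

import numpy as np
from sklearn.cluster import KMeans

X = np.array([[1, 2], [2, 3], [3, 4], [5, 7], [6, 8], [7, 9]])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)     # cluster index assigned to each point
print(labels)                      # e.g. the first three points vs. the last three
print(kmeans.cluster_centers_)     # coordinates of the two centroids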
Field of Application:
• Document clustering
• Social network analysis
• Gene expression analysis
For example, agglomerative clustering of the values 1, 5, 8, 10, 19, 20 merges the closest values first (19 and 20, then 8 and 10), building up a hierarchy.
Hierarchical Clustering in Unsupervised Learning
import numpy as np, matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram, linkage
X = np.array([[1, 2], [2, 3], [3, 4], [5, 7], [6, 8], [7, 9]])
linked = linkage(X, method='ward')  # build the merge hierarchy (Ward linkage assumed)
dendrogram(linked)                  # visualize the hierarchy as a dendrogram
plt.show()
Diagram: Machine Learning branches into Supervised, Unsupervised, Reinforcement, and Ensemble Learning.
Reinforcement
Learning
• Reinforcement Learning (RL) is a type of
machine learning where an agent learns to
make decisions by interacting with an
environment.
• The agent receives rewards or penalties
based on its actions and aims to maximize
the cumulative reward over time.
Reinforcement Learning Algorithms in MACHINE LEARNING
• 1. Model-Free Algorithms
• These do not require knowledge of the
environment's dynamics.
• Q-Learning (Off-policy) – Example: A self-driving car learns to drive by trial and error (a minimal tabular sketch follows this list).
• SARSA (State-Action-Reward-State-Action)
(On-policy) – Example: A robot learns to walk
while continuously updating its policy.
• Deep Q-Network (DQN) – Example: AI playing
Atari games using neural networks.
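A minimal tabular Q-learning sketch; the corridor environment, rewards, and hyperparameters are illustrative assumptions, not taken from the slides.

import numpy as np

n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1   # learning rate, discount factor, exploration rate

rng = np.random.default_rng(0)
for _ in range(500):                # episodes
    s = 0
    while s != n_states - 1:        # terminal state at the right end of the corridor
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s_next = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Off-policy Q-learning update: bootstrap on the greedy next action.
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q.argmax(axis=1))             # learned greedy policy: should prefer "right"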
Reinforcement Learning
Algorithms
• 2. Policy-Based Algorithms
• These directly optimize the policy.
• REINFORCE – Example: Optimizing robotic arm movements in a factory (a toy sketch follows after this list).
• Actor-Critic Methods – Example: AI in gaming, where one network
suggests actions (Actor) and another evaluates them (Critic).
3. Model-Based Algorithms
These use a model to predict future states.
• Monte Carlo Tree Search (MCTS) – Example: AlphaGo, which defeated
humans in the game of Go.
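A toy REINFORCE-style sketch; the two-armed bandit, softmax policy, and reward probabilities are illustrative assumptions, not from the slides.

import numpy as np

rng = np.random.default_rng(0)
true_rewards = [0.2, 0.8]        # hypothetical success probability of each arm
prefs = np.zeros(2)              # policy parameters (action preferences)
alpha = 0.1                      # learning rate

for _ in range(2000):
    probs = np.exp(prefs) / np.exp(prefs).sum()   # softmax policy
    a = rng.choice(2, p=probs)                    # sample an action from the policy
    r = 1.0 if rng.random() < true_rewards[a] else 0.0
    # Policy-gradient update: theta += alpha * reward * grad(log pi(a)),
    # which for a softmax policy is (indicator of chosen action - probs).
    grad = -probs
    grad[a] += 1.0
    prefs += alpha * r * grad

print(probs)   # should end up favouring the better arm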
Sigmoid Function:
The sigmoid function is a mathematical function used in machine learning and neural networks.
Formula: σ(x) = 1 / (1 + e^(-x))
• Range: (0,1)
• Use Case: Converts any input into a probability-like value.
• Example: Used in logistic regression and binary classification (e.g., email spam detection).
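A minimal sketch of the formula above in NumPy (the helper name sigmoid is just illustrative):

import numpy as np

def sigmoid(x):
    # Squashes any real-valued input into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

print(sigmoid(0.0))                         # 0.5
print(sigmoid(np.array([-5.0, 0.0, 5.0])))  # values near 0, exactly 0.5, near 1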
Ensemble Learning
• Ensemble learning is a machine learning
technique where multiple models (often called
"weak learners" or "base models") are trained
and combined to solve the same problem,
improving the overall accuracy and robustness
of predictions.
• The main idea is that a group of weak models,
when combined properly, can perform better
than any single strong model.
Ensemble Learning in MACHINE LEARNING
What is
Ensemble Learning?
Ensemble Learning in MACHINE LEARNING
1. Bagging (Bootstrap Aggregating):
• Bagging reduces variance by training multiple models on different subsets of the data.
• Example: Random Forest, where multiple decision trees are trained on random subsets of the data, and
their outputs are averaged or voted upon.
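A minimal bagging sketch, assuming scikit-learn's RandomForestClassifier and its bundled iris data (neither is specified on the slide):

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# 100 trees, each trained on a bootstrap sample of the training data.
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_train, y_train)
print(forest.score(X_test, y_test))   # accuracy of the combined (voted) trees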
Ensemble Learning in MACHINE LEARNING
2. Boosting:
• Boosting reduces bias by training models sequentially, where each model focuses on the errors
made by the previous one.
• Example: AdaBoost, Gradient Boosting, XGBoost, etc.
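A minimal boosting sketch along the same lines, assuming scikit-learn's GradientBoostingClassifier and its bundled breast-cancer data (AdaBoost or XGBoost would be used analogously):

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Trees are added sequentially; each new tree fits the errors of the ensemble so far.
boost = GradientBoostingClassifier(n_estimators=200, learning_rate=0.1, random_state=0)
boost.fit(X_train, y_train)
print(boost.score(X_test, y_test))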
Ensemble Learning in MACHINE LEARNING
3. Stacking:
• Stacking combines multiple diverse models and trains a meta-model to learn how to best combine their
predictions.
• Example: Using a logistic regression model as a meta-learner over multiple base learners like decision
trees, SVMs, and neural networks.
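A minimal stacking sketch with logistic regression as the meta-learner over a decision tree and an SVM, as described above (scikit-learn and the iris data are assumptions):

from sklearn.datasets import load_iris
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

base_learners = [
    ("tree", DecisionTreeClassifier(random_state=0)),
    ("svm", SVC(probability=True, random_state=0)),
]
# The meta-learner is trained on the base learners' predictions.
stack = StackingClassifier(estimators=base_learners,
                           final_estimator=LogisticRegression(max_iter=1000))
stack.fit(X_train, y_train)
print(stack.score(X_test, y_test))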
Ensemble Learning in MACHINE LEARNING
The sigmoid function is commonly used in ensemble learning when dealing with logistic regression, boosting,
and neural networks. It transforms any real-valued number into a range between 0 and 1, making it useful for
probability estimation.
Sigmoid Function:
σ(x) = 1 / (1 + e^(-x))
Where:
• x is the input to the function (often a linear combination of model predictions).
• e is Euler’s number (~2.718).
Ensemble Learning in MACHINE LEARNING
Overfitting
• The full (100%) dataset is typically split into 70% for training and 30% for testing.
• An overfitted model fits the training data very well but does not work properly on the unseen test data.
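A minimal sketch of the 70/30 split and an overfitting check; scikit-learn, the breast-cancer data, and the unpruned decision tree are illustrative assumptions.

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

tree = DecisionTreeClassifier(random_state=0)   # fully grown tree, prone to overfitting
tree.fit(X_train, y_train)
print(tree.score(X_train, y_train))   # typically ~1.0 on the training data
print(tree.score(X_test, y_test))     # noticeably lower on the held-out 30%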
Ensemble Learning in MACHINE LEARNING
• Cancer detection (e.g., combining multiple machine learning models for accurate
classification)
• Disease prediction using patient data
Ensemble Learning in MACHINE LEARNING
5. Fraud Detection
• Detecting fraudulent transactions using ensemble methods like Random Forest and
XGBoost.
Ensemble Learning in MACHINE LEARNING
6. Autonomous Vehicles