Ensemble Learning Techniques 12 Marks
Ensemble learning is a machine learning technique in which multiple models (called base learners or
weak learners) are combined to solve a problem and improve overall performance. The goal is
to build a strong model that achieves better accuracy, generalization, and robustness than any
single base learner on its own.

1. Bagging (Bootstrap Aggregating):
- Bagging trains multiple models independently on different subsets of the data, drawn using
bootstrap sampling (random sampling with replacement).
- Each model is trained in parallel, and their outputs are combined (majority vote for
classification, averaging for regression) to make the final prediction.
Steps:
1. Draw several bootstrap samples from the training set.
2. Train one base model on each sample, independently and in parallel.
3. Combine the models' predictions by majority vote (classification) or averaging (regression).
Advantage:
- Reduces variance and helps prevent overfitting (Random Forest is the classic example).
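The bagging procedure can be sketched in plain Python. The toy 1-D dataset, the threshold "stump" weak learner, and the ensemble size below are all hypothetical choices for illustration, not part of any standard library:

```python
import random

def bootstrap_sample(data, rng):
    # Draw len(data) points with replacement (bootstrap sampling)
    return [rng.choice(data) for _ in data]

def train_stump(sample):
    # Weak learner: threshold at the mean feature value
    thr = sum(x for x, _ in sample) / len(sample)
    # Keep the polarity that fits the sample better
    correct = sum(1 for x, y in sample if (x >= thr) == (y == 1))
    polarity = 1 if correct >= len(sample) / 2 else -1
    return thr, polarity

def stump_predict(model, x):
    thr, polarity = model
    pred = 1 if x >= thr else 0
    return pred if polarity == 1 else 1 - pred

def bagging_predict(models, x):
    # Majority vote over the independently trained models
    votes = sum(stump_predict(m, x) for m in models)
    return 1 if votes > len(models) / 2 else 0

# Hypothetical 1-D dataset: (feature, label)
data = [(1, 0), (2, 0), (3, 0), (7, 1), (8, 1), (9, 1)]
rng = random.Random(0)
models = [train_stump(bootstrap_sample(data, rng)) for _ in range(5)]
```

In practice a library implementation such as scikit-learn's BaggingClassifier would be used instead of hand-rolled stumps.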
2. Boosting:
- Boosting is a sequential technique in which each new model is trained to correct the errors of
the previous ones.
- Misclassified examples receive higher weight, so later models focus on the hard cases
(e.g., AdaBoost, Gradient Boosting).
Steps:
1. Train an initial model on the training data.
2. Increase the weights of the examples it misclassified.
3. Train the next model on the re-weighted data; repeat for a fixed number of rounds.
4. Combine all models with a weighted vote, giving more accurate models larger weights.
Advantage:
- Reduces bias and typically yields higher accuracy than any single weak learner.
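The boosting steps above can be illustrated with a minimal AdaBoost-style sketch; the threshold stump, the toy dataset, and the three-round setting are illustrative assumptions:

```python
import math

def stump(thr, pol, x):
    # Weak learner output in {-1, +1}
    return pol if x >= thr else -pol

def best_stump(data, w):
    # Pick the threshold/polarity pair with the lowest weighted error
    xs = sorted({x for x, _ in data})
    best = None
    for thr in [(a + b) / 2 for a, b in zip(xs, xs[1:])]:
        for pol in (1, -1):
            err = sum(wi for (x, y), wi in zip(data, w) if stump(thr, pol, x) != y)
            if best is None or err < best[0]:
                best = (err, thr, pol)
    return best

def adaboost(data, rounds=3):
    n = len(data)
    w = [1.0 / n] * n            # start with uniform example weights
    ensemble = []
    for _ in range(rounds):
        err, thr, pol = best_stump(data, w)
        err = max(err, 1e-10)    # guard against log(0)
        alpha = 0.5 * math.log((1 - err) / err)   # weight of this model
        ensemble.append((alpha, thr, pol))
        # Up-weight misclassified points, down-weight correct ones
        w = [wi * math.exp(-alpha * y * stump(thr, pol, x))
             for (x, y), wi in zip(data, w)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    # Weighted vote of all sequentially trained models
    score = sum(alpha * stump(thr, pol, x) for alpha, thr, pol in ensemble)
    return 1 if score >= 0 else -1

# Hypothetical 1-D dataset with labels in {-1, +1}
data = [(1, -1), (2, -1), (3, -1), (7, 1), (8, 1), (9, 1)]
ensemble = adaboost(data)
```

Each round re-weights the data so the next stump concentrates on the points the current ensemble gets wrong, which is exactly the sequential error-correction described above.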
3. Stacking (Stacked Generalization):
- Stacking combines multiple base models of different types and uses a meta-model to make the
final prediction.
- Unlike bagging and boosting, stacking can combine diverse algorithms (e.g., Decision Trees, SVM,
KNN).
Steps:
1. Train several base models of different types on the training data.
2. Collect their predictions as new features (ideally out-of-fold predictions, to avoid leakage).
3. Train a meta-model on these predictions to produce the final output.
Advantage:
- Combines the complementary strengths of diverse algorithms, often beating any single model.
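A minimal stacking sketch in plain Python follows. The two threshold-stump base models, the perceptron meta-model, and the toy 2-D dataset are all hypothetical choices; a real implementation would also use out-of-fold base predictions to avoid leakage, which this sketch omits for brevity:

```python
def fit_stump(points, labels, feature):
    # Base learner: threshold one feature at its mean
    thr = sum(p[feature] for p in points) / len(points)
    acc = sum((p[feature] >= thr) == (y == 1) for p, y in zip(points, labels))
    pol = 1 if acc >= len(points) / 2 else -1
    return feature, thr, pol

def stump_out(model, p):
    feature, thr, pol = model
    raw = 1 if p[feature] >= thr else 0
    return raw if pol == 1 else 1 - raw

def predict_meta(w, feats):
    s = w[0] + sum(wi * f for wi, f in zip(w[1:], feats))
    return 1 if s >= 0 else 0

def fit_meta(meta_X, labels, epochs=20):
    # Meta-model: a simple perceptron over the base models' outputs
    w = [0.0] * (1 + len(meta_X[0]))
    for _ in range(epochs):
        for feats, y in zip(meta_X, labels):
            err = y - predict_meta(w, feats)
            if err:
                w[0] += err
                for i, f in enumerate(feats):
                    w[i + 1] += err * f
    return w

def stacking_fit(points, labels):
    bases = [fit_stump(points, labels, 0), fit_stump(points, labels, 1)]
    # Base predictions become the meta-model's training features
    meta_X = [[stump_out(b, p) for b in bases] for p in points]
    return bases, fit_meta(meta_X, labels)

def stacking_predict(model, p):
    bases, meta = model
    return predict_meta(meta, [stump_out(b, p) for b in bases])

# Hypothetical 2-D dataset: two features per point
points = [(1, 1), (2, 2), (3, 1), (7, 8), (8, 7), (9, 9)]
labels = [0, 0, 0, 1, 1, 1]
model = stacking_fit(points, labels)
```

The key difference from bagging and boosting is visible in stacking_fit: the meta-model is trained on the base models' predictions rather than on the raw features.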
Conclusion:
Ensemble learning is a core technique in machine learning that enhances model performance by
combining several models rather than relying on one. These techniques are widely used in
industry for applications such as fraud detection, medical diagnosis, and recommendation systems.