The two paradigms of ensemble methods are:
- Sequential ensemble methods, where base learners are trained one after another and each learner depends on its predecessors (boosting is the canonical example).
- Parallel ensemble methods, where base learners are trained independently of one another (bagging is the canonical example).

The two most widely used ensemble methods in machine learning, one from each paradigm, are:
- Bagging (Bootstrap Aggregating): Bagging trains multiple instances of the same base learning algorithm on different subsets of the training data. Each subset is a bootstrap sample, i.e., it is drawn from the original training data by sampling with replacement. Because the models are trained independently, this is a parallel ensemble method. After training, the individual predictions are combined (by averaging for regression or majority voting for classification) to make the final prediction. A minimal sketch follows this list.
- Boosting: Boosting sequentially trains multiple weak learners (typically shallow decision trees or other simple models), where each subsequent model focuses on the mistakes made by the previous ones. In classic boosting algorithms such as AdaBoost, the weights of the training instances are adjusted at each iteration so that misclassified instances receive higher weights, forcing subsequent models to pay more attention to them. The final prediction is a weighted combination of the predictions of all weak learners. A sketch of this reweighting loop appears after the closing paragraph below.
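
A minimal bagging sketch, assuming scikit-learn decision trees as the base learners and integer class labels (both are illustrative assumptions, not part of the text above):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def bagging_fit(X, y, n_estimators=25, random_state=0):
    """Train n_estimators trees, each on a bootstrap sample of (X, y)."""
    rng = np.random.default_rng(random_state)
    n = len(y)
    models = []
    for _ in range(n_estimators):
        # Bootstrap sample: draw n indices uniformly with replacement.
        idx = rng.integers(0, n, size=n)
        tree = DecisionTreeClassifier()  # assumed base learner
        tree.fit(X[idx], y[idx])
        models.append(tree)
    return models

def bagging_predict(X, models):
    """Combine the ensemble by majority vote (assumes integer class labels)."""
    preds = np.stack([m.predict(X) for m in models])  # shape: (n_estimators, n_samples)
    vote = lambda column: np.bincount(column).argmax()
    return np.apply_along_axis(vote, axis=0, arr=preds)
```

For regression the vote would simply be replaced by a mean over `preds`; scikit-learn's `BaggingClassifier` and `BaggingRegressor` provide production implementations of the same idea.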
These ensemble methods improve the predictive performance and robustness of machine learning models, either by aggregating many independently trained models (bagging) or by iteratively refining weak models (boosting).
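
The boosting sketch below illustrates the instance-reweighting loop described above, in the style of AdaBoost with decision stumps; the function names and the -1/+1 label encoding are illustrative assumptions:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_fit(X, y, n_rounds=50):
    """AdaBoost-style boosting; y is assumed to be encoded as -1/+1."""
    n = len(y)
    w = np.full(n, 1.0 / n)                  # start with uniform instance weights
    learners, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)  # weak learner: a decision stump
        stump.fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.sum(w[pred != y])           # weighted training error (weights sum to 1)
        if err >= 0.5:                       # no better than chance: stop early
            break
        alpha = 0.5 * np.log((1 - err) / (err + 1e-12))  # learner's weight in the final vote
        w *= np.exp(-alpha * y * pred)       # upweight misclassified, downweight correct instances
        w /= w.sum()                         # renormalise so weights sum to 1
        learners.append(stump)
        alphas.append(alpha)
    return learners, alphas

def adaboost_predict(X, learners, alphas):
    """Final prediction: sign of the weighted sum of weak-learner votes."""
    agg = sum(a * m.predict(X) for a, m in zip(alphas, learners))
    return np.sign(agg)
```

scikit-learn's `AdaBoostClassifier`, and gradient-boosting libraries such as XGBoost, implement the same sequential idea in more refined forms.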