" /> Ensemble Learning - CISMeF





Preferred Label : Ensemble Learning;

MeSH definition : A method in machine learning where several base learners are combined to create an optimal learning model. This can be used for a variety of tasks, including forecasting, classification, and function approximation.;
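To make this definition concrete, here is a minimal sketch, assuming Python with scikit-learn is available, of combining several heterogeneous base learners into a single classifier by majority vote (the synthetic dataset and the particular estimators are illustrative choices, not part of the MeSH definition):

# Minimal ensemble-by-majority-vote sketch (illustrative; assumes scikit-learn).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import VotingClassifier

# Synthetic data standing in for any real task (hypothetical example data).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Three base learners; the ensemble predicts the majority class among them.
ensemble = VotingClassifier(
    estimators=[
        ("logreg", LogisticRegression(max_iter=1000)),
        ("tree", DecisionTreeClassifier(max_depth=3, random_state=0)),
        ("nb", GaussianNB()),
    ],
    voting="hard",  # "hard" = majority vote over predicted class labels
)
ensemble.fit(X_train, y_train)
print(f"ensemble test accuracy: {ensemble.score(X_test, y_test):.3f}")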

CISMeF definition : Machine learning (ML) techniques that combine multiple models to improve the overall predictive performance compared to using a single model. This involves training a set of base models, such as neural networks, and then aggregating their predictions to make the final prediction. Some common ensemble methods include bagging (i.e., training multiple models on different subsets of the training data and averaging their predictions), boosting (i.e., training models sequentially where each new model focuses on correcting the errors of the previous model), and stacking (i.e., using the predictions of multiple base models as input features for a higher-level “meta-model” that learns how to best combine them).; Source: Adapted from: Dietterich, T.G. (2000). Ensemble Methods in Machine Learning. In: Multiple Classifier Systems. MCS 2000. Lecture Notes in Computer Science. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45014-9_1; IEEE Standards. (2022). IEEE Standard for Performance and Safety Evaluation of Artificial Intelligence Based Medical Devices: Terminology (IEEE Std 2802-2022). https://standards.ieee.org/ieee/2802/7460/;
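The three strategies named in this definition (bagging, boosting, stacking) can be sketched side by side, again assuming Python with scikit-learn; the specific estimators, hyperparameters, and synthetic dataset below are illustrative assumptions:

# Illustrative comparison of bagging, boosting, and stacking (assumes scikit-learn).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import BaggingClassifier, AdaBoostClassifier, StackingClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    # Bagging: each tree fits a bootstrap resample of the training data;
    # their predictions are averaged (voted) at inference time.
    "bagging": BaggingClassifier(n_estimators=50, random_state=0),
    # Boosting: models are trained sequentially, re-weighting the points
    # the previous models misclassified.
    "boosting": AdaBoostClassifier(n_estimators=50, random_state=0),
    # Stacking: base models' predictions become input features for a
    # higher-level "meta-model" (here, a logistic regression).
    "stacking": StackingClassifier(
        estimators=[
            ("tree", DecisionTreeClassifier(max_depth=3, random_state=0)),
            ("logreg", LogisticRegression(max_iter=1000)),
        ],
        final_estimator=LogisticRegression(),
    ),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: test accuracy = {model.score(X_test, y_test):.3f}")

As a rule of thumb, bagging mainly reduces variance, boosting mainly reduces bias, and stacking lets the meta-model learn how much to trust each base model; which strategy helps most depends on the base learners and the data.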

MeSH synonym : Ensemble Learnings; Learning, Ensemble; Ensemble Model Aggregation; Aggregation, Ensemble Model; Ensemble Model Aggregations; Model Aggregation, Ensemble;

CISMeF synonym : Ensemble Methods;


31/05/2025



© Rouen University Hospital. Any partial or total use of this material must mention the source.