Gradient boosting is a widely used machine learning algorithm for tabular regression, classification, and ranking. However, most open-source implementations of gradient boosting, such as XGBoost and LightGBM, use decision trees as the sole base estimator. This paper, for the first time, takes an alternative path: rather than relying on a single static base estimator (usually a decision tree), MSBoost trains a list of models in parallel on the residual errors of the previous layer and then selects the model with the least validation error as the base estimator for that layer. MSBoost achieves state-of-the-art results compared to other gradient boosting implementations on 50+ tabular regression and classification datasets. Furthermore, ablation studies show that MSBoost is particularly effective on small and noisy datasets. It therefore has significant practical impact in tabular machine learning domains where obtaining large, high-quality datasets is not feasible.
Agnij-Moitra/MSBoost
About
MSBoost is a gradient boosting algorithm that improves performance by selecting the best model from multiple parallel-trained models for each layer, excelling in small and noisy datasets.
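To make the layer-wise model selection described above concrete, here is a minimal regression-only sketch of the idea, not the repository's actual implementation: each boosting layer fits several candidate learners on the previous layer's residuals and keeps the one with the least validation error. The candidate pool, learning rate, validation split, and the function names `msboost_fit` / `msboost_predict` are all illustrative assumptions.

```python
# Sketch of model-selection boosting under a squared-error objective;
# not the MSBoost repository's actual code.
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor

# Hypothetical candidate pool; MSBoost's actual model list may differ.
CANDIDATES = [
    DecisionTreeRegressor(max_depth=3),
    Ridge(alpha=1.0),
    KNeighborsRegressor(n_neighbors=5),
]

def msboost_fit(X, y, n_layers=50, lr=0.1, seed=0):
    X, y = np.asarray(X), np.asarray(y, dtype=float)
    X_tr, X_val, y_tr, y_val = train_test_split(
        X, y, test_size=0.2, random_state=seed
    )
    init = y_tr.mean()                  # constant initial prediction
    f_tr = np.full(len(y_tr), init)
    f_val = np.full(len(y_val), init)
    layers = []
    for _ in range(n_layers):
        resid = y_tr - f_tr             # residual errors of the previous layer
        best, best_err = None, np.inf
        # Fit every candidate on the residuals (parallelizable, e.g. with
        # joblib) and keep the one with the least validation error.
        for proto in CANDIDATES:
            model = clone(proto).fit(X_tr, resid)
            err = mean_squared_error(y_val - f_val, model.predict(X_val))
            if err < best_err:
                best, best_err = model, err
        f_tr += lr * best.predict(X_tr)
        f_val += lr * best.predict(X_val)
        layers.append(best)
    return init, layers

def msboost_predict(init, layers, X, lr=0.1):  # lr must match the fit call
    pred = np.full(len(np.asarray(X)), init)
    for model in layers:
        pred += lr * model.predict(np.asarray(X))
    return pred
```

With `X, y` from, say, `sklearn.datasets.fetch_california_housing(return_X_y=True)`, calling `init, layers = msboost_fit(X, y)` followed by `msboost_predict(init, layers, X)` produces the ensemble's predictions. Selecting the per-layer learner by held-out error rather than training error is what the paper credits for the robustness on small and noisy datasets.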