
Boosting: AdaBoost, Gradient Boosted Trees, XGBoost

Boosting is an ensemble learning technique that combines weak learners to create a strong model, sequentially focusing on hard-to-predict samples. AdaBoost (Adaptive Boosting) pioneered boosting for classification, while Gradient Boosted Trees (GBT) and XGBoost extend it to general loss functions and scalable implementations. In 2025, boosting methods like XGBoost remain dominant in structured data tasks, Kaggle competitions, and as components in hybrid systems with large language models (LLMs) for feature engineering and prediction.

This lecture in the "Foundations for AI/ML" series (core-ml cluster) builds on ensemble methods like random forests and decision trees, exploring boosting algorithms, their theoretical foundations, bias-variance reduction, and applications. We’ll provide intuitive explanations, mathematical insights, and practical implementations in Python (scikit-learn/XGBoost) and Rust (xgboost-rs), ensuring a rigorous guide aligned with 2025 ML trends.


1. Motivation and Intuition

Boosting transforms weak learners (slightly better than random) into strong models by sequential training, with each learner focusing on previous errors. AdaBoost adjusts sample weights, GBT fits residuals, and XGBoost adds regularization and scalability.

Why Boosting in 2025?

  • Accuracy: Often outperforms single models.
  • Bias Reduction: Focuses on hard samples.
  • Interpretability: Feature importance from aggregated trees.
  • Modern Applications: XGBoost for tabular data, LLM feature boosting.

Real-World Examples

  • Finance: Credit risk prediction.
  • Medicine: Disease prognosis.
  • AI Pipelines: Boosting on LLM embeddings for classification.
  • Sustainability: Predicting energy consumption.

INFO

Boosting is like a relay race—each weak learner passes the baton, focusing on remaining challenges for overall victory.


2. Mathematical Formulation

Boosting combines M weak learners f_m(x) into an additive model F(x) = sum_{m=1}^{M} α_m f_m(x), where the α_m are learner weights.

AdaBoost: for binary classification, each weak learner outputs f_m(x) ∈ {−1, +1} and is weighted by its accuracy.

GBT: minimizes a general loss L by fitting each f_m to the pseudo-residuals -∂L/∂F.

XGBoost: adds explicit regularization, shrinkage, and a second-order approximation of the loss.
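
For reference, the three variants can be written compactly in one place (a restatement of the quantities defined above):

latex
% Additive ensemble of M weak learners
F(x) = \sum_{m=1}^{M} \alpha_m f_m(x)

% AdaBoost: minimize the exponential loss with f_m(x) \in \{-1, +1\}
L_{\mathrm{exp}} = \sum_{i=1}^{n} \exp\bigl(-y_i F(x_i)\bigr)

% GBT: fit f_m to the negative gradient (pseudo-residuals)
r_{im} = -\left. \frac{\partial L(y_i, F(x_i))}{\partial F(x_i)} \right|_{F = F_{m-1}}

% XGBoost: regularized objective
J = \sum_{i=1}^{n} L(y_i, \hat{y}_i) + \sum_{m=1}^{M} \Omega(f_m),
\qquad \Omega(f) = \gamma T + \tfrac{1}{2} \lambda \lVert w \rVert^{2}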

Bias-Variance

  • Boosting primarily reduces bias through sequential error correction; variance stays moderate when shrinkage and regularization are used (see Section 6).

3. AdaBoost: Adaptive Boosting

Algorithm (binary classification):

  1. Initialize weights w_i = 1/n.
  2. For m=1 to M:
    • Fit weak learner f_m on weighted data.
    • Error ε_m = sum w_i I(y_i ≠ f_m(x_i)).
    • α_m = (1/2) log((1-ε_m)/ε_m).
    • Update w_i *= exp(-α_m y_i f_m(x_i)), normalize.
  3. F(x) = sign(sum α_m f_m(x)).
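
A minimal from-scratch sketch of these steps, using scikit-learn decision stumps as the weak learners and labels in {-1, +1} (the function and variable names here are illustrative, not from a library):

python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_classification

def adaboost_fit(X, y, M=50):
    """Train AdaBoost with decision stumps; y must be in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)                       # step 1: uniform weights
    learners, alphas = [], []
    for _ in range(M):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)          # fit weak learner on weighted data
        pred = stump.predict(X)
        eps = np.sum(w * (pred != y))             # weighted error
        eps = np.clip(eps, 1e-10, 1 - 1e-10)      # guard against division by zero / log(0)
        alpha = 0.5 * np.log((1 - eps) / eps)     # learner weight
        w *= np.exp(-alpha * y * pred)            # re-weight samples
        w /= w.sum()                              # normalize
        learners.append(stump)
        alphas.append(alpha)
    return learners, alphas

def adaboost_predict(X, learners, alphas):
    agg = sum(a * clf.predict(X) for a, clf in zip(alphas, learners))
    return np.sign(agg)

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
y = 2 * y - 1                                      # map {0, 1} -> {-1, +1}
learners, alphas = adaboost_fit(X, y, M=30)
print("Training accuracy:", np.mean(adaboost_predict(X, learners, alphas) == y))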

Derivation

AdaBoost is forward stagewise additive modeling under the exponential loss L = sum_i exp(-y_i F(x_i)); minimizing this loss one learner at a time yields the weight update and the α_m formula, as sketched below.
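
Concretely, the stagewise minimization that produces the α_m formula can be written as follows (standard derivation, in the notation above):

latex
% Forward stagewise step at round m, with unnormalized weights
% w_i^{(m)} = \exp(-y_i F_{m-1}(x_i)):
(\alpha_m, f_m) = \arg\min_{\alpha, f} \sum_{i} w_i^{(m)} e^{-\alpha y_i f(x_i)}
               = \arg\min_{\alpha, f} \Bigl( e^{-\alpha} \sum_{y_i = f(x_i)} w_i^{(m)}
                 + e^{\alpha} \sum_{y_i \neq f(x_i)} w_i^{(m)} \Bigr)

% Setting the derivative with respect to \alpha to zero, with \varepsilon_m
% the weighted error rate of f_m, gives
\alpha_m = \tfrac{1}{2} \log \frac{1 - \varepsilon_m}{\varepsilon_m}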

ML Insight

  • Focuses on hard samples via weights.

4. Gradient Boosted Trees (GBT)

Algorithm:

  1. Initialize F_0 = argmin_c sum L(y_i, c).
  2. For m=1 to M:
    • Compute pseudo-residuals r_{im} = -∂L(y_i, F_{m-1}(x_i)) / ∂F_{m-1}(x_i).
    • Fit a tree f_m to predict the r_{im}.
    • α_m = argmin_α sum_i L(y_i, F_{m-1}(x_i) + α f_m(x_i)).
    • F_m = F_{m-1} + α_m f_m.
  3. Output F_M.

For squared loss, r_{im} = y_i - F_{m-1}(x_i) and the line search gives α_m = 1 (in practice scaled by a learning rate η).
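
A minimal sketch of this loop for squared loss, with shallow regression trees as base learners and a fixed shrinkage factor standing in for the exact line search (names are illustrative):

python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.datasets import make_regression

def gbt_fit(X, y, M=100, lr=0.1, max_depth=3):
    """Gradient boosting for squared loss: each tree fits the current residuals."""
    F0 = y.mean()                        # step 1: constant model argmin_c sum (y_i - c)^2
    F = np.full(len(y), F0)
    trees = []
    for _ in range(M):
        residuals = y - F                # pseudo-residuals for squared loss
        tree = DecisionTreeRegressor(max_depth=max_depth)
        tree.fit(X, residuals)
        F += lr * tree.predict(X)        # shrinkage lr in place of the exact line search
        trees.append(tree)
    return F0, trees

def gbt_predict(X, F0, trees, lr=0.1):
    return F0 + lr * sum(t.predict(X) for t in trees)

X, y = make_regression(n_samples=300, n_features=5, noise=10.0, random_state=0)
F0, trees = gbt_fit(X, y, M=100, lr=0.1)
pred = gbt_predict(X, F0, trees, lr=0.1)
print("Training MSE:", np.mean((y - pred) ** 2))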

Derivation

Gradient boosting is functional gradient descent: each f_m approximates the negative gradient of the loss in function space. For squared loss this reduces to fitting residuals (equivalently, Gaussian maximum likelihood); XGBoost later refines the step with a second-order Taylor expansion.


5. XGBoost: Extreme Gradient Boosting

XGBoost: Optimized GBT with:

  • Regularization: Ω(f) = γ T + (1/2) λ ||w||², where T is the number of leaves and w the vector of leaf weights.
  • Second-order approximation: uses first derivatives g_i and second derivatives h_i of the loss.
  • Shrinkage: learning rate η scales each new tree.
  • Column subsampling, parallelization.

Objective:

J = sum_i L(y_i, ŷ_i) + sum_m Ω(f_m)
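
Expanding the loss to second order (with the g_i, h_i above) and writing w_j for the weight of leaf j, the per-round objective has a closed-form solution for the leaf weights; this is the standard XGBoost derivation:

latex
% Second-order approximation of the round-m objective (constants dropped):
\tilde{J}^{(m)} \approx \sum_{i=1}^{n} \Bigl[ g_i f_m(x_i) + \tfrac{1}{2} h_i f_m(x_i)^{2} \Bigr]
                + \gamma T + \tfrac{1}{2} \lambda \sum_{j=1}^{T} w_j^{2}

% With G_j = \sum_{i \in I_j} g_i and H_j = \sum_{i \in I_j} h_i over the samples in leaf j,
% the optimal leaf weight and the resulting objective are
w_j^{*} = -\frac{G_j}{H_j + \lambda},
\qquad
\tilde{J}^{(m)*} = -\tfrac{1}{2} \sum_{j=1}^{T} \frac{G_j^{2}}{H_j + \lambda} + \gamma T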

ML Insight

  • XGBoost's speed comes from cache-aware data access, sparsity-aware split finding, and parallelized tree construction.
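
These ingredients map onto hyperparameters of the Python XGBClassifier roughly as follows (a configuration sketch; the values are illustrative, not tuned):

python
from xgboost import XGBClassifier

# Rough correspondence between the regularized objective and XGBoost hyperparameters.
model = XGBClassifier(
    n_estimators=200,      # M: number of boosting rounds
    learning_rate=0.1,     # eta: shrinkage applied to each new tree
    max_depth=3,           # depth of each tree (indirectly bounds the number of leaves T)
    gamma=0.1,             # per-leaf complexity penalty (gamma * T in Omega)
    reg_lambda=1.0,        # L2 penalty lambda on leaf weights w
    subsample=0.8,         # row subsampling per boosting round
    colsample_bytree=0.8,  # column subsampling per tree
    random_state=0,
)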

6. Bias-Variance in Boosting

  • AdaBoost: Reduces bias, can overfit (variance).
  • GBT/XGBoost: Reduces bias sequentially, regularization controls variance.

In 2025, a common pattern is to run boosting on features derived from LLM embeddings.
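
This behavior is easy to visualize by tracking test error as rounds are added, for example with scikit-learn's staged_predict (a small illustrative sketch on synthetic data):

python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, flip_y=0.1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

gbt = GradientBoostingClassifier(n_estimators=300, learning_rate=0.1, max_depth=2, random_state=0)
gbt.fit(X_tr, y_tr)

# Test error after each boosting round: it typically falls quickly (bias reduction)
# and can creep back up if far too many rounds are added (variance / overfitting).
test_err = [1 - np.mean(pred == y_te) for pred in gbt.staged_predict(X_te)]
print("Best round:", int(np.argmin(test_err)) + 1, "test error:", min(test_err))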


7. Applications in Machine Learning (2025)

  1. Classification: Sentiment analysis, fraud detection.
  2. Regression: Price prediction.
  3. Ranking: Search engines (LambdaMART-style objectives; see the sketch after this list).
  4. Anomaly Detection: Boosted models scoring rare events.
  5. Hybrid Systems: XGBoost on LLM-derived features.
  6. Edge AI: Lightweight boosted models on devices.
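
For the ranking use case (item 3 above), XGBoost exposes a learning-to-rank estimator; a minimal sketch with synthetic relevance labels follows (data and settings are illustrative, and depending on the xgboost version queries are specified via group sizes or per-row qid):

python
import numpy as np
from xgboost import XGBRanker

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                 # 200 documents, 10 features
y = rng.integers(0, 5, size=200)               # relevance grades 0-4
group = [100, 100]                             # two queries, 100 documents each

ranker = XGBRanker(objective="rank:pairwise",  # LambdaMART-style pairwise objective
                   n_estimators=100, learning_rate=0.1, max_depth=3)
ranker.fit(X, y, group=group)                  # group sizes delimit the queries
scores = ranker.predict(X[:100])               # ranking scores for the first query
print("Top document index:", int(np.argmax(scores)))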

Challenges

  • Overfitting: Regularization and early stopping are essential.
  • Computation: XGBoost parallelizes within each round, but very large datasets are still costly.
  • Imbalanced Data: Use a weighted loss or class weighting (see the sketch below).
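
For the imbalanced-data challenge, one common remedy is to up-weight the minority class, for example via scale_pos_weight in binary problems (a sketch on synthetic data):

python
import numpy as np
from xgboost import XGBClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# Synthetic data with roughly 5% positives.
X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# scale_pos_weight ~ (#negatives / #positives) up-weights the minority class in the loss.
spw = (y_tr == 0).sum() / (y_tr == 1).sum()
clf = XGBClassifier(n_estimators=200, learning_rate=0.1, scale_pos_weight=spw, random_state=0)
clf.fit(X_tr, y_tr)
print("F1 (positive class):", f1_score(y_te, clf.predict(X_te)))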

8. Numerical Implementations

We implement AdaBoost, GBT, and XGBoost classifiers on the Iris dataset in Python, and XGBoost in Rust.

python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier
from xgboost import XGBClassifier
from sklearn.metrics import accuracy_score
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

# AdaBoost
iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, test_size=0.2, random_state=0)

ada = AdaBoostClassifier(n_estimators=50, learning_rate=1.0, random_state=0)
ada.fit(X_train, y_train)
y_pred = ada.predict(X_test)
print("AdaBoost Accuracy:", accuracy_score(y_test, y_pred))

# GBT
gbt = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1, max_depth=3, random_state=0)
gbt.fit(X_train, y_train)
y_pred_gbt = gbt.predict(X_test)
print("GBT Accuracy:", accuracy_score(y_test, y_pred_gbt))

# XGBoost
xgb = XGBClassifier(n_estimators=100, learning_rate=0.1, max_depth=3, random_state=0)
xgb.fit(X_train, y_train)
y_pred_xgb = xgb.predict(X_test)
print("XGBoost Accuracy:", accuracy_score(y_test, y_pred_xgb))

# Feature importance
importances = xgb.feature_importances_
print("XGBoost Importances:", importances)
rust
// NOTE: exact crate and API names differ between Rust XGBoost bindings
// (xgboost vs xgboost-rs); treat this as a sketch and consult the crate docs.
use xgboost::{Booster, DMatrix};
use ndarray::{Array1, Array2};

fn main() {
    // Placeholder data standing in for the Iris dataset (XGBoost expects f32 values).
    let x_train: Array2<f32> = Array2::zeros((120, 4));
    let y_train: Array1<f32> = Array1::zeros(120);
    let x_test: Array2<f32> = Array2::zeros((30, 4));
    let y_test: Array1<f32> = Array1::zeros(30);

    // Build DMatrix objects and attach training labels
    // (check the crate docs for the exact signatures).
    let mut dtrain = DMatrix::from_dense(x_train.as_slice().unwrap(), x_train.nrows()).unwrap();
    dtrain.set_labels(y_train.as_slice().unwrap()).unwrap();
    let dtest = DMatrix::from_dense(x_test.as_slice().unwrap(), x_test.nrows()).unwrap();

    // Booster parameters: multiclass softmax over 3 classes, depth-3 trees, eta = 0.1.
    let mut params = std::collections::HashMap::new();
    params.insert("max_depth".to_string(), "3".to_string());
    params.insert("eta".to_string(), "0.1".to_string());
    params.insert("objective".to_string(), "multi:softmax".to_string());
    params.insert("num_class".to_string(), "3".to_string());

    // Train for 100 boosting rounds and predict class labels on the test set.
    let booster = Booster::train(&dtrain, &params, 100, &[]).unwrap();
    let preds = booster.predict(&dtest).unwrap();
    let accuracy = preds
        .iter()
        .zip(y_test.iter())
        .filter(|(p, t)| p.round() as i32 == **t as i32)
        .count() as f64
        / y_test.len() as f64;
    println!("XGBoost Accuracy: {}", accuracy);
}

Dependencies (Cargo.toml):

toml
[dependencies]
xgboost-rs = "0.3.3"
ndarray = "0.15.6"

These examples cover AdaBoost, GBT, and XGBoost in Python, with XGBoost in Rust via the xgboost-rs bindings.


9. Numerical Stability and High Dimensions

  • Boosting Stability: Sequential fitting with shrinkage gives smoother, lower-variance predictions than a single deep tree.
  • XGBoost in High Dimensions: Column subsampling and sparsity-aware splits mitigate the curse of dimensionality.
  • Overfitting: Controlled with regularization and early stopping, as sketched below.

In 2025, numerical stability also matters for distributed boosting in federated ML settings.
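
Early stopping monitors a validation metric and stops adding rounds once it stops improving. A sketch with the XGBoost sklearn wrapper (recent xgboost versions accept early_stopping_rounds in the constructor; older ones take it in fit()):

python
from xgboost import XGBClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

# Stop once the validation log-loss has not improved for 20 consecutive rounds.
clf = XGBClassifier(n_estimators=1000, learning_rate=0.05, max_depth=3,
                    early_stopping_rounds=20, eval_metric="logloss", random_state=0)
clf.fit(X_tr, y_tr, eval_set=[(X_val, y_val)], verbose=False)
print("Best iteration:", clf.best_iteration)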


10. Case Study: Iris Dataset (Classification)

We train a gradient boosted classifier on Iris, report per-class metrics, inspect feature importances, and visualize the first boosting stage.

python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report
from sklearn.tree import plot_tree
import matplotlib.pyplot as plt

# Load dataset
iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, test_size=0.2, random_state=42)

# Train Gradient Boosted Trees
gbt = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1, max_depth=3, random_state=0)
gbt.fit(X_train, y_train)

# Evaluate
y_pred = gbt.predict(X_test)
print(classification_report(y_test, y_pred))

# Feature importance
importances = gbt.feature_importances_
plt.bar(iris.feature_names, importances)
plt.title("Feature Importance")
plt.show()

# Visualize the first boosting stage (the regression tree for class 0)
plt.figure(figsize=(12, 8))
plot_tree(gbt.estimators_[0, 0], feature_names=iris.feature_names, filled=True)
plt.title("Gradient Boosted Tree (stage 1, class 0)")
plt.show()
Note: the linfa ecosystem does not currently ship a gradient boosting implementation, so the Rust version of this case study reuses the xgboost-rs setup from Section 8 (multi:softmax on the Iris features); feature importance and tree visualization are easiest from Python.


11. Under the Hood Insights

  • Sequential Fitting: Each learner targets what the current ensemble gets wrong (re-weighted samples in AdaBoost, pseudo-residuals in GBT/XGBoost).
  • Shrinkage: A small learning rate η scales each tree's contribution; more rounds, smoother fits.
  • Regularization: XGBoost penalizes tree complexity via γ per leaf and λ on leaf weights.
  • Subsampling: Row and column subsampling decorrelate trees and speed up training.
  • Validation: Unlike bagging, boosting has no out-of-bag samples; use a held-out set or early stopping instead.

12. Limitations

  • Computation: Many boosting rounds are costly, and rounds cannot be parallelized across each other.
  • Noise Sensitivity: AdaBoost in particular over-weights mislabeled or outlier samples.
  • Interpretability: Less transparent than a single tree; aggregated feature importance helps.
  • High Dimensions: Benefits from feature selection and column subsampling.
  • Imbalanced Data: Needs a weighted loss or resampling (Section 7).

13. Summary

Boosting methods such as AdaBoost, gradient boosted trees, and XGBoost turn weak learners into strong models by sequentially correcting errors, achieving a large reduction in bias while shrinkage and regularization keep variance in check. In 2025, their dominance on tabular data, use in ranking and risk models, and role in hybrids with LLM-derived features keep them central to applied ML. Regularization, shrinkage, and early stopping remain the key levers against overfitting.


Further Reading

  • Freund & Schapire, "A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting" (1997).
  • Friedman, "Greedy Function Approximation: A Gradient Boosting Machine" (2001).
  • Friedman, "Stochastic Gradient Boosting" (2002).
  • Chen & Guestrin, "XGBoost: A Scalable Tree Boosting System" (2016).
  • Hastie, Tibshirani & Friedman, The Elements of Statistical Learning (Ch. 10, Boosting and Additive Trees).
  • XGBoost docs: xgboost.readthedocs.io; linfa: github.com/rust-ml/linfa.