Monday, 10 January 2005: 11:00 AM
Predicting Good Probabilities with Supervised Learning
Poster PDF (115.0 kB)
We present the results of an empirical evaluation of the probabilities predicted by seven supervised learning algorithms: SVMs, neural nets, decision trees, memory-based learning, bagged trees, boosted trees, and boosted stumps. For each algorithm we test many different parameter settings, for a total of 2000 different models on each problem. Experiments with seven test problems suggest that neural nets and bagged decision trees are the best learning methods for predicting well-calibrated probabilities. Although SVMs and boosted trees are not well calibrated, they achieve excellent performance on other metrics such as accuracy and area under the ROC curve. We analyze the predictions made by these models and show that the predicted values are distorted probabilities. To correct this distortion, we apply two methods for calibrating probabilities: Platt Scaling and Isotonic Regression. Calibration significantly improves the performance of boosted trees and SVMs; after calibration, these two methods outperform neural nets and bagged decision trees and become the best learning methods for predicting calibrated posterior probabilities.
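The two calibration methods named above lend themselves to a brief illustration. The sketch below is not part of the original poster; it is a minimal example, assuming scikit-learn and a synthetic dataset standing in for one of the test problems, of mapping an SVM's uncalibrated decision values to probabilities with Platt Scaling (a sigmoid fit on a held-out calibration set) and with Isotonic Regression (a monotone non-decreasing map). Note that scikit-learn's LogisticRegression adds regularization that Platt's original sigmoid fit does not, so this is an approximation.

# Hypothetical sketch (not from the poster): calibrating SVM decision
# values with Platt Scaling and Isotonic Regression via scikit-learn.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression
from sklearn.isotonic import IsotonicRegression
from sklearn.metrics import brier_score_loss

# Synthetic binary problem (assumed; the poster's seven test problems are not public here).
X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

# SVMs output uncalibrated margins (signed distances to the separating hyperplane).
svm = LinearSVC(C=1.0).fit(X_train, y_train)
margin_cal = svm.decision_function(X_cal)
margin_test = svm.decision_function(X_test)

# Platt Scaling: fit a sigmoid p = 1 / (1 + exp(A*f + B)) to the margins
# on a held-out calibration set (approximated here by logistic regression).
platt = LogisticRegression().fit(margin_cal.reshape(-1, 1), y_cal)
p_platt = platt.predict_proba(margin_test.reshape(-1, 1))[:, 1]

# Isotonic Regression: fit a monotone non-decreasing map from margin to probability.
iso = IsotonicRegression(out_of_bounds="clip").fit(margin_cal, y_cal)
p_iso = iso.predict(margin_test)

# Brier score (mean squared error of the probabilities): lower is better calibrated.
print("Brier score, Platt:    %.4f" % brier_score_loss(y_test, p_platt))
print("Brier score, Isotonic: %.4f" % brier_score_loss(y_test, p_iso))

Platt Scaling, being a two-parameter sigmoid, tends to work well with little calibration data; Isotonic Regression is more flexible but needs more held-out examples to avoid overfitting.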