Consequences of Variability in Classifier Performance Estimates

Authors: 
Troy Raeder, T. Ryan Hoens, and Nitesh V. Chawla
Citation: 
In Proceedings of the 2010 IEEE 10th International Conference on Data Mining (ICDM). IEEE, 2010.
Publication Date: 
December 2010

The prevailing approach to evaluating classifiers in the machine learning community is to compare the performance of several algorithms across a series of usually unrelated data sets. Beyond this basic framework, however, evaluation methodologies vary widely along many dimensions. We show that, depending on the stability and similarity of the algorithms being compared, these sometimes-arbitrary methodological choices can have a significant impact on the conclusions of any study, including the results of statistical tests. In particular, we show that the choice of performance metrics and data sets, the type of cross-validation employed, and the number of iterations of cross-validation run all have a significant, and often predictable, effect. Based on these results, we offer a series of recommendations for achieving consistent, reproducible results in classifier performance comparisons.
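The kind of variability the paper studies can be seen directly: repeating k-fold cross-validation with different random fold assignments yields different performance estimates for the same classifier on the same data. The following is a minimal pure-Python sketch of this effect; the synthetic one-dimensional data and the nearest-centroid classifier are illustrative assumptions, not taken from the paper.

```python
import random

def make_data(n=200, seed=0):
    # Synthetic binary data: class c drawn from a Gaussian centered at c.
    rng = random.Random(seed)
    return [(rng.gauss(label, 1.0), label)
            for label in (rng.randint(0, 1) for _ in range(n))]

def centroid_classifier(train):
    # Predict the class whose training mean is nearest to x.
    means = {c: sum(x for x, y in train if y == c) /
                sum(1 for _, y in train if y == c) for c in (0, 1)}
    return lambda x: min((abs(x - means[c]), c) for c in (0, 1))[1]

def kfold_accuracy(data, k=10, seed=0):
    # One run of k-fold cross-validation; `seed` controls the fold assignment.
    rng = random.Random(seed)
    order = data[:]
    rng.shuffle(order)
    folds = [order[i::k] for i in range(k)]
    accs = []
    for i in range(k):
        test = folds[i]
        train = [p for j in range(k) if j != i for p in folds[j]]
        predict = centroid_classifier(train)
        accs.append(sum(predict(x) == y for x, y in test) / len(test))
    return sum(accs) / k

data = make_data()
# Same data, same classifier, different fold assignments: the accuracy
# estimate changes from run to run.
estimates = [kfold_accuracy(data, seed=s) for s in range(5)]
print(estimates)
```

The spread of `estimates` across seeds is exactly the source of variability that makes single-run cross-validation comparisons fragile; averaging over many repetitions, as the paper recommends, shrinks it.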