Building Decision Trees for the Multi-class Imbalance Problem

Authors: 
T. Ryan Hoens, Qi Qian, Nitesh V. Chawla, and Zhi-Hua Zhou
Citation: 
Advances in Knowledge Discovery and Data Mining, Lecture Notes in Computer Science, Volume 7301, 2012, pp. 122-134
Publication Date: 
December 2012

Learning from imbalanced datasets is a pervasive problem in a wide variety of real-world applications. In imbalanced datasets, the class of interest generally accounts for a small fraction of the total instances, yet misclassifying such instances is often expensive. While there is a significant body of research on the class imbalance problem for binary-class datasets, multi-class datasets have received considerably less attention. This is partly because the multi-class imbalance problem is often much harder than its binary counterpart, as the relative frequency and cost of each class can vary widely from dataset to dataset. In this paper we study the multi-class imbalance problem as it relates to decision trees (specifically C4.4 and HDDT) and develop a new multi-class splitting criterion. Our experiments show that multi-class Hellinger distance decision trees, when combined with decomposition techniques, outperform C4.4.
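To give a sense of the splitting criterion involved, the following is a minimal sketch of the binary-class Hellinger distance that HDDT uses to score a candidate split, computed from per-partition class counts. This is an illustrative reconstruction of the standard binary criterion, not the paper's multi-class extension; the function name and count-list representation are my own choices.

```python
import math

def hellinger_distance(pos_counts, neg_counts):
    """Hellinger distance between the positive- and negative-class
    distributions across the partitions induced by a candidate split.

    pos_counts[j] / neg_counts[j]: number of positive / negative
    training instances falling into partition j of the split.
    """
    total_pos = sum(pos_counts)
    total_neg = sum(neg_counts)
    acc = 0.0
    for p, n in zip(pos_counts, neg_counts):
        # Compare within-class proportions, not raw counts, so the
        # score does not depend on the overall class priors.
        acc += (math.sqrt(p / total_pos) - math.sqrt(n / total_neg)) ** 2
    return math.sqrt(acc)

# A split that perfectly separates the classes scores sqrt(2),
# the maximum; an uninformative split scores 0.
print(hellinger_distance([10, 0], [0, 10]))  # → 1.414...
print(hellinger_distance([5, 5], [5, 5]))    # → 0.0
```

Because the score depends only on within-class proportions, scaling one class's counts (e.g. `[0, 1000]` instead of `[0, 10]`) leaves it unchanged; this skew-insensitivity is the property that motivates using Hellinger distance over entropy-based criteria on imbalanced data.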