Abstract
Cascades of boosted ensembles have become popular in the object detection community following their highly successful introduction in the face detector of Viola and Jones [1]. In this paper, we explore several aspects of this architecture that have not yet received adequate attention: the decision points of cascade stages, faster ensemble learning, and stronger weak hypotheses. We present a novel strategy for determining the appropriate balance between false positive and detection rates in the individual stages of the cascade, based on a probabilistic model of the overall cascade's performance. To improve the training time of individual stages, we explore the use of feature filtering before the application of AdaBoost. Finally, we show that the use of stronger weak hypotheses based on CART can significantly improve upon the standard face detection results on the CMU-MIT data set.