Abstract
The aim of this paper is to leverage foreground segmentation to improve classification performance on weakly annotated datasets – those with no additional annotation other than class labels. We introduce TriCoS, a new co-segmentation algorithm that looks at all training images jointly and automatically segments out the most class-discriminative foregrounds for each image. Ultimately, those foreground segmentations are used to train a classification system. TriCoS solves the co-segmentation problem by minimizing losses at three different levels: the category level for foreground/background consistency across images belonging to the same category, the image level for spatial continuity within each image, and the dataset level for discrimination between classes. In an extensive set of experiments, we evaluate the algorithm on three benchmark datasets: the UCSD-Caltech Birds-200-2010, the Stanford Dogs, and the Oxford Flowers 102. With the help of a modern image classifier, we show superior performance compared to previously published classification methods and other co-segmentation methods.