Abstract
Transfer learning can counter the heavy-tailed nature of the distribution of training examples over object classes. Here, we study transfer learning for object class detection. Starting from the intuition that “what makes a good detector” should manifest itself in the form of repeatable statistics over existing “good” detectors, we design a low-level feature model that can be used as a prior for learning new object class models from scarce training data. Our priors are structured, capturing dependencies both on the level of individual features and spatially neighboring pairs of features. We confirm experimentally the connection between the information captured by our priors and “good” detectors, as well as the connection to transfer learning from sources of different quality. We give an in-depth analysis of our priors on a subset of the challenging PASCAL VOC 2007 data set and demonstrate improved average performance over all 20 classes, achieved without manual intervention.