Learning Spatial Context: Using Stuff to Find Things


Abstract

The sliding window approach of detecting rigid objects (such as cars) is predicated on the belief that the object can be identified from the appearance in a small region around the object. Other types of objects of amorphous spatial extent (e.g., trees, sky), however, are more naturally classified based on texture or color. In this paper, we seek to combine recognition of these two types of objects into a system that leverages "context" toward improving detection. In particular, we cluster image regions based on their ability to serve as context for the detection of objects. Rather than providing an explicit training set with region labels, our method automatically groups regions based on both their appearance and their relationships to the detections in the image. We show that our things and stuff (TAS) context model produces meaningful clusters that are readily interpretable, and helps improve our detection ability over state-of-the-art detectors. We also present a method for learning the active set of relationships for a particular dataset. We present results on object detection in images from the PASCAL VOC 2005/2006 datasets and on the task of overhead car detection in satellite images, demonstrating significant improvements over state-of-the-art detectors.
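To make the core idea concrete, here is a minimal sketch of rescoring candidate "thing" detections using unsupervised clusters of "stuff" regions. This is not the paper's TAS model, which is a generative graphical model that learns latent region clusters jointly with detection labels via EM; it is a simplified discriminative approximation, and all function names, feature choices (k-means on region descriptors, a logistic regression rescorer), and parameters below are illustrative assumptions.

```python
# Simplified sketch: cluster image regions ("stuff") by appearance, then
# rescore object detections ("things") with a context feature built from
# the clusters of nearby regions. A discriminative stand-in for the
# generative TAS model; every name and constant here is an assumption.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def context_features(det_boxes, det_scores, region_centers, region_feats,
                     k=5, radius=100.0):
    """Return [detector_score, normalized histogram of nearby region clusters]."""
    clusters = KMeans(n_clusters=k, n_init=10).fit_predict(region_feats)
    feats = []
    for (x, y, w, h), s in zip(det_boxes, det_scores):
        cx, cy = x + w / 2.0, y + h / 2.0
        # distance from each region center to the detection center
        d = np.hypot(region_centers[:, 0] - cx, region_centers[:, 1] - cy)
        near = clusters[d < radius]
        hist = np.bincount(near, minlength=k).astype(float)
        hist /= max(hist.sum(), 1.0)  # normalize over nearby regions
        feats.append(np.concatenate([[s], hist]))
    return np.array(feats)

if __name__ == "__main__":
    # Synthetic stand-in data: 20 candidate boxes, 200 segmented regions.
    rng = np.random.default_rng(0)
    boxes = rng.uniform(0, 400, size=(20, 4))       # (x, y, w, h)
    scores = rng.uniform(size=20)                   # base detector scores
    centers = rng.uniform(0, 500, size=(200, 2))    # region centroids
    rfeats = rng.normal(size=(200, 8))              # e.g. color/texture descriptors
    labels = rng.integers(0, 2, size=20)            # ground-truth thing / not-thing

    X = context_features(boxes, scores, centers, rfeats)
    clf = LogisticRegression(max_iter=1000).fit(X, labels)
    print(clf.predict_proba(X)[:, 1])               # context-rescored detections
```

The design point this sketch shares with the paper is that the region clusters are never labeled by hand: context such as "cars tend to sit on road-like regions" emerges from the clustering, and the rescorer learns which clusters are informative.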

