Undoing the Damage of Dataset Bias

2020-04-02

Abstract

The presence of bias in existing object recognition datasets is now well-known in the computer vision community. While it remains in question whether creating an unbiased dataset is possible given limited resources, in this work we propose a discriminative framework that directly exploits dataset bias during training. In particular, our model learns two sets of weights: (1) bias vectors associated with each individual dataset, and (2) visual world weights that are common to all datasets, which are learned by undoing the associated bias from each dataset. The visual world weights are expected to be our best possible approximation to the object model trained on an unbiased dataset, and thus tend to have good generalization ability. We demonstrate the effectiveness of our model by applying the learned weights to a novel, unseen dataset, and report superior results for both classification and detection tasks compared to a classical SVM that does not account for the presence of bias. Overall, we find that it is beneficial to explicitly account for bias when combining multiple datasets.
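The two-set-of-weights idea from the abstract can be sketched as a joint max-margin objective: each dataset i is classified with w_i = w_vw + Δ_i, where the shared visual world weights w_vw are regularized on their own and each dataset-specific bias vector Δ_i is penalized toward zero, so the common signal is absorbed by w_vw. The following is a minimal illustrative sketch (not the authors' implementation; the function name, hyperparameters, and plain subgradient descent are assumptions for clarity):

```python
import numpy as np

def train_undo_bias(datasets, dim, lam=1.0, C=1.0, lr=0.01, epochs=200):
    """Jointly learn visual-world weights and per-dataset bias vectors.

    datasets: list of (X, y) pairs with labels y in {-1, +1}.
    Objective (sketch): 0.5*||w_vw||^2 + 0.5*lam*sum_i ||delta_i||^2
                        + C * sum hinge losses using w_vw + delta_i.
    """
    n = len(datasets)
    w_vw = np.zeros(dim)
    deltas = [np.zeros(dim) for _ in range(n)]
    for _ in range(epochs):
        g_vw = w_vw.copy()               # gradient of 0.5*||w_vw||^2
        g_d = [lam * d for d in deltas]  # gradient of 0.5*lam*||delta_i||^2
        for i, (X, y) in enumerate(datasets):
            margins = y * (X @ (w_vw + deltas[i]))
            viol = margins < 1           # hinge-loss subgradient on violators
            g = -C * (y[viol, None] * X[viol]).sum(axis=0)
            g_vw += g                    # shared weights see every dataset
            g_d[i] += g                  # bias vector sees only its own
        w_vw -= lr * g_vw
        for i in range(n):
            deltas[i] -= lr * g_d[i]
    return w_vw, deltas
```

Because every hinge term contributes to w_vw while only dataset i's terms touch Δ_i, signal that generalizes across datasets accumulates in w_vw and dataset-specific quirks are pushed into the Δ_i; at test time on an unseen dataset, one would classify with w_vw alone.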
