Learning Aligned Cross-Modal Representations from Weakly Aligned Data

2019-12-20

Abstract

People can recognize scenes across many different modalities beyond natural images. In this paper, we investigate how to learn cross-modal scene representations that transfer across modalities. To study this problem, we introduce a new cross-modal scene dataset. While convolutional neural networks can categorize cross-modal scenes well, they also learn an intermediate representation that is not aligned across modalities, which is undesirable for cross-modal transfer applications. We present methods to regularize cross-modal convolutional neural networks so that they have a shared representation that is agnostic of the modality. Our experiments suggest that our scene representation can help transfer representations across modalities for retrieval. Moreover, our visualizations suggest that units emerge in the shared representation that tend to activate on consistent concepts independently of the modality.

Figure 1: Can you recognize scenes across different modalities? (Panels show real and clip-art versions of scenes such as Bedroom and Kindergarten classroom.)
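To make the regularization idea in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of one way to encourage a modality-agnostic shared representation: each modality gets its own early encoder, the late layers are shared, and a moment-matching penalty pulls the shared feature distributions of the two modalities together. The architecture, feature dimensions, class count, loss weight, and the moment-matching penalty itself are illustrative assumptions, not the paper's exact regularizer.

```python
import torch
import torch.nn as nn

class CrossModalNet(nn.Module):
    """Hypothetical two-modality network: modality-specific early
    encoders feeding shared late layers that produce the aligned
    cross-modal representation."""

    def __init__(self, feat_dim=512, num_classes=205):
        super().__init__()
        # Placeholder modality-specific encoders (real ones would be CNNs).
        self.enc_real = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, feat_dim), nn.ReLU())
        self.enc_clipart = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, feat_dim), nn.ReLU())
        # Shared late layers: the representation we want to be modality-agnostic.
        self.shared = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU())
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, x, modality):
        h = self.enc_real(x) if modality == "real" else self.enc_clipart(x)
        z = self.shared(h)  # shared cross-modal representation
        return z, self.classifier(z)

def stat_alignment_loss(z_a, z_b):
    """Penalize gaps in the first two feature moments across modalities,
    a simple stand-in for a modality-agnostic regularizer."""
    mean_gap = (z_a.mean(0) - z_b.mean(0)).pow(2).sum()
    var_gap = (z_a.var(0) - z_b.var(0)).pow(2).sum()
    return mean_gap + var_gap

# Usage sketch: classify both modalities with a shared label space and
# add the alignment penalty so retrieval can transfer across modalities.
model = CrossModalNet()
ce = nn.CrossEntropyLoss()
x_real = torch.randn(8, 3, 64, 64); y_real = torch.randint(0, 205, (8,))
x_clip = torch.randn(8, 3, 64, 64); y_clip = torch.randint(0, 205, (8,))
z_r, logits_r = model(x_real, "real")
z_c, logits_c = model(x_clip, "clipart")
loss = ce(logits_r, y_real) + ce(logits_c, y_clip) + 0.1 * stat_alignment_loss(z_r, z_c)
loss.backward()
```

The design intuition is that sharing the late layers while penalizing distribution gaps between modalities is what allows a query embedded from one modality (e.g., a clip-art bedroom) to retrieve nearest neighbors from another (e.g., real bedroom photos) in the shared space.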

