
Semi-Supervised Multi-Modal Learning with Incomplete Modalities

2019-11-08

Abstract: In real-world applications, data often come with multiple modalities, and researchers have proposed multi-modal learning approaches for integrating the information from different modalities. Most previous multi-modal methods assume that training examples come with complete modalities. However, due to failures in data collection, self-deficiencies, and various other reasons, multi-modal examples in real applications often have incomplete feature representations. In this paper, the incomplete feature representation issue in multi-modal learning is referred to as incomplete modalities, and we propose a semi-supervised multi-modal learning method aimed at this incomplete-modality issue (SLIM). SLIM can exploit extrinsic information from unlabeled data to counteract the insufficiencies brought by incomplete modalities in a semi-supervised scenario. Moreover, SLIM casts the problem into a unified framework that can be treated as either a classifier or a clustering learner, integrating the intrinsic consistencies across modalities with the extrinsic unlabeled information. As SLIM can extract the most discriminative predictors for each modality, experiments on 15 real-world multi-modal datasets validate the effectiveness of our method.
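The abstract describes a setting with two interacting difficulties: examples may be missing entire modalities, and only a few examples carry labels. The sketch below is not the SLIM algorithm; it is a minimal illustration, under assumed names and shapes, of how such data can be represented (per-modality feature matrices, a modality-presence mask, and -1 for unlabeled examples) together with a naive per-modality baseline of the kind SLIM is designed to improve upon.

```python
# Illustrative sketch only (not the SLIM method from the paper): it builds a toy
# two-modality dataset with missing modalities and sparse labels, then runs a
# naive baseline that trains one classifier per modality and averages whatever
# predictions are available. All variable names and sizes are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d1, d2 = 200, 10, 15                      # 200 examples, two modalities

# Two feature matrices (one per modality) sharing a common binary label.
y_true = rng.integers(0, 2, size=n)
X1 = rng.normal(size=(n, d1)) + y_true[:, None]
X2 = rng.normal(size=(n, d2)) - y_true[:, None]

# Incomplete modalities: a presence mask records which modalities each example has.
present = rng.random((n, 2)) > 0.3           # roughly 30% of modality entries missing

# Semi-supervised setting: only a small fraction of labels are observed (-1 = unlabeled).
y = np.full(n, -1)
labeled = rng.random(n) < 0.1
y[labeled] = y_true[labeled]

# Naive baseline: fit a classifier per modality on labeled examples that actually
# have that modality, then average class probabilities over available modalities.
preds = np.zeros((n, 2))
counts = np.zeros(n)
for m, X in enumerate([X1, X2]):
    idx = labeled & present[:, m]
    clf = LogisticRegression().fit(X[idx], y[idx])
    has_m = present[:, m]
    preds[has_m] += clf.predict_proba(X[has_m])
    counts[has_m] += 1

y_hat = (preds[:, 1] / np.maximum(counts, 1)) > 0.5
acc = (y_hat[~labeled] == y_true[~labeled]).mean()
print(f"baseline accuracy on unlabeled examples: {acc:.2f}")
```

This baseline ignores both the consistency between modalities and the structure of the unlabeled data, which is precisely the extrinsic information the abstract says SLIM exploits.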
