
DM2C: Deep Mixed-Modal Clustering

2020-02-19

Abstract

Data exhibiting multiple modalities are ubiquitous in real-world clustering tasks. Most existing methods, however, pose a strong assumption that pairing information across modalities is available for all instances. In this paper, we consider a more challenging task where each instance is represented in only one modality, which we call mixed-modal data. Without any extra pairing supervision across modalities, it is difficult to find a universal semantic space for all of them. To tackle this problem, we present an adversarial learning framework for clustering with mixed-modal data. Instead of transforming all the samples into a joint modality-independent space, our framework learns the mappings across individual modality spaces by virtue of cycle-consistency. Through these mappings, we can easily unify all the samples into a single modality space and perform the clustering there. Evaluations on several real-world mixed-modal datasets demonstrate the superiority of our proposed framework.
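The unification step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the cross-modal mapping `g_ab` stands in for a generator that DM2C would train adversarially with a cycle-consistency loss (here it is just a fixed random linear map), and the clustering is a bare-bones Lloyd's k-means rather than the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy mixed-modal data: each sample lives in exactly ONE modality space.
# Modality A is 4-d, modality B is 6-d (dimensions are illustrative).
n_a, n_b, d_a, d_b = 30, 30, 4, 6
X_a = rng.normal(size=(n_a, d_a))
X_b = rng.normal(size=(n_b, d_b))

# Hypothetical cross-modal mapping g_ab: A-space -> B-space. In DM2C this
# would be learned adversarially with cycle-consistency; we use a fixed
# random linear map purely to illustrate the unification step.
W_ab = rng.normal(size=(d_a, d_b))

def g_ab(x):
    return x @ W_ab

# Unify: translate every A-sample into B-space, then cluster everything there.
Z = np.vstack([g_ab(X_a), X_b])  # all 60 samples now share B-space

def kmeans(Z, k, iters=50, seed=0):
    """Minimal Lloyd's k-means, for illustration only."""
    r = np.random.default_rng(seed)
    centers = Z[r.choice(len(Z), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((Z[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = Z[labels == j].mean(axis=0)
    return labels

labels = kmeans(Z, k=3)
print(labels.shape)  # one cluster id per sample, across both modalities
```

Because every sample ends up in a single space, any off-the-shelf clustering algorithm can then be applied without modality-pairing supervision.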

