Unsupervised Disentangled Representation Learning with Analogical Relations

2019-11-08

Abstract: Learning disentangled representations of the interpretable generative factors of data is one of the foundations for allowing artificial intelligence to think like people. In this paper, we propose an analogical training strategy for unsupervised disentangled representation learning in generative models. Analogy is one of the typical cognitive processes, and our strategy is based on the observation that sample pairs in which one differs from the other in a single generative factor exhibit the same analogical relation. The generator is therefore trained to generate sample pairs from which a designed classifier can identify the underlying analogical relation. In addition, we propose a disentanglement metric called the subspace score, which is inspired by subspace learning methods and does not require supervised information. Experiments show that our training strategy allows generative models to find the disentangled factors, and that our methods achieve performance competitive with state-of-the-art methods.
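The core construction the abstract describes is a pair of latent codes that differ in exactly one generative factor, with a classifier tasked with identifying which factor changed. A minimal sketch of that pair construction (not the paper's actual model; the latent distribution, dimensionality, and function names here are illustrative assumptions):

```python
import numpy as np

def sample_analogical_pair(dim=5, rng=None):
    """Sample a latent pair (z, z2) that differs in exactly one factor.

    Returns the pair together with the index k of the changed factor,
    which serves as the classifier's training target.
    """
    rng = rng or np.random.default_rng()
    z = rng.uniform(-1.0, 1.0, dim)   # assumed uniform latent prior
    k = int(rng.integers(dim))        # pick one factor to vary
    z2 = z.copy()
    z2[k] = rng.uniform(-1.0, 1.0)    # resample only that factor
    return z, z2, k

def identify_changed_factor(z, z2):
    """Idealized classifier target: the coordinate where the pair differs.

    In the paper's setting a learned classifier would infer this from the
    *generated samples*; here we read it off the latents directly.
    """
    return int(np.argmax(np.abs(z - z2)))
```

In the actual training loop, the generator would map `z` and `z2` to samples, and the classifier would be trained to recover `k` from the sample pair, pushing each latent coordinate to control one distinct factor of variation.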
