
Multi-mapping Image-to-Image Translation via Learning Disentanglement

2020-02-25

Abstract

Recent advances in image-to-image translation have approached the one-to-many mapping problem from two directions: multi-modal translation and multi-domain translation. However, existing methods consider only one of these two perspectives, so neither can solve the other's problem. To address this issue, we propose a novel unified model that bridges the two objectives. First, we disentangle input images into latent representations using an encoder-decoder architecture with conditional adversarial training in the feature space. Then, we encourage the generator to learn multi-mappings via random cross-domain translation. As a result, we can manipulate different parts of the latent representations to perform multi-modal and multi-domain translation simultaneously. Experiments demonstrate that our method outperforms state-of-the-art methods. Code will be available at https://github.com/Xiaoming-Yu/DMIT.
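The pipeline described in the abstract factors an image into a domain-invariant content representation and a separate style code, then decodes the content together with a randomly drawn style and a randomly chosen target-domain label, so that a single generator covers both multi-modal and multi-domain mappings. Below is a minimal sketch of that idea; the module names, layer sizes, 64x64 image size, and three-domain setup are illustrative assumptions rather than the authors' released DMIT code, and the adversarial and reconstruction losses are omitted.

```python
# Minimal PyTorch sketch of the disentanglement idea above. All shapes,
# module names, and the three-domain setup are illustrative assumptions,
# not the authors' released implementation.
import torch
import torch.nn as nn

class ContentEncoder(nn.Module):
    """Maps an image to a domain-invariant content feature map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class StyleEncoder(nn.Module):
    """Maps an image to a low-dimensional style code (the multi-modal part)."""
    def __init__(self, style_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, style_dim),
        )
    def forward(self, x):
        return self.net(x)

class Generator(nn.Module):
    """Decodes content conditioned on a style code and a one-hot domain label."""
    def __init__(self, style_dim=8, num_domains=3):
        super().__init__()
        self.fc = nn.Linear(style_dim + num_domains, 128)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, content, style, domain):
        cond = self.fc(torch.cat([style, domain], dim=1))
        # Broadcast the style/domain condition over the spatial content map.
        h = content + cond[:, :, None, None]
        return self.net(h)

# Random cross-domain translation: pair each content code with a style
# sampled from a prior and a randomly chosen target-domain label.
E_c, E_s, G = ContentEncoder(), StyleEncoder(), Generator()
x = torch.randn(4, 3, 64, 64)              # a batch of input images
content = E_c(x)
style = torch.randn(4, 8)                  # style drawn from the prior
domain = nn.functional.one_hot(
    torch.randint(0, 3, (4,)), num_classes=3).float()  # random target domain
fake = G(content, style, domain)           # (4, 3, 64, 64) translated images
```

Because the style code and the domain label occupy separate parts of the conditioning input, varying one while holding the other fixed yields multi-modal outputs within a domain or translations across domains, which is the manipulation the abstract describes.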

