
Theoretical Analysis of Image-to-Image Translation with Adversarial Learning

2020-03-11

Abstract

Recently, a unified model for image-to-image translation tasks within the adversarial learning framework (Isola et al., 2017) has aroused widespread research interest among computer vision practitioners. Its reported empirical success, however, lacks a solid theoretical interpretation of its inherent mechanism. In this paper, we reformulate the model from a brand-new geometrical perspective and eventually reach a full interpretation of several interesting but previously unexplained empirical phenomena from their experiments. Furthermore, by extending the definition of generalization for generative adversarial nets (Arora et al., 2017) to a broader sense, we derive a condition to control the generalization capability of the model. Based on the derived condition, we also propose several practical suggestions on model design and dataset construction as guidance for further empirical research.
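For context, the unified model of Isola et al. (2017) that the abstract refers to trains a conditional GAN with an added L1 reconstruction term. A standard formulation of that objective (reproduced here for reference; it is not quoted from this paper's text) is:

```latex
\mathcal{L}_{\mathrm{cGAN}}(G, D)
  = \mathbb{E}_{x,y}\!\left[\log D(x, y)\right]
  + \mathbb{E}_{x,z}\!\left[\log\bigl(1 - D(x, G(x, z))\bigr)\right],
\qquad
\mathcal{L}_{L1}(G)
  = \mathbb{E}_{x,y,z}\!\left[\lVert y - G(x, z) \rVert_1\right],
```

with the generator obtained as $G^{*} = \arg\min_{G}\max_{D} \, \mathcal{L}_{\mathrm{cGAN}}(G, D) + \lambda\,\mathcal{L}_{L1}(G)$, where $x$ is the input image, $y$ the target image, $z$ a noise input, and $\lambda$ a weighting hyperparameter.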
