YOU ONLY TRAIN ONCE: LOSS-CONDITIONAL TRAINING OF DEEP NETWORKS

2020-01-02

Abstract

In many machine learning problems, loss functions are weighted sums of several terms. A typical approach to dealing with these is to train multiple separate models with different selections of weights and then either choose the best one according to some criterion or keep multiple models if it is desirable to maintain a diverse set of solutions. This is inefficient both at training and at inference time. We propose a method that allows replacing multiple models trained on one loss function each by a single model trained on a distribution of losses. At test time a model trained this way can be conditioned to generate outputs corresponding to any loss from the training distribution of losses. We demonstrate this approach on three tasks with parametrized losses: β-VAE, learned image compression, and fast style transfer.
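
To make the idea concrete, below is a minimal sketch of loss-conditional training, assuming a PyTorch setup with a toy autoencoder whose loss is reconstruction plus λ times a regularizer. The class name `LossConditionalAE`, the layer sizes, and the log-uniform sampling range for λ are illustrative assumptions, not the paper's exact setup; the paper describes a more elaborate conditioning mechanism than plain input concatenation.

```python
# Minimal sketch of loss-conditional training, assuming PyTorch.
# Class name, layer sizes, and the sampling range are illustrative only.
import torch
import torch.nn as nn


class LossConditionalAE(nn.Module):
    """Toy autoencoder that also receives the loss weight as an input."""

    def __init__(self, dim=784, latent=32):
        super().__init__()
        # The extra input feature carries log(lam), letting the network
        # adapt its behaviour to the loss it is asked to optimize.
        self.enc = nn.Sequential(nn.Linear(dim + 1, 256), nn.ReLU(),
                                 nn.Linear(256, latent))
        self.dec = nn.Sequential(nn.Linear(latent + 1, 256), nn.ReLU(),
                                 nn.Linear(256, dim))

    def forward(self, x, log_lam):
        cond = log_lam.expand(x.size(0), 1)   # broadcast the weight over the batch
        z = self.enc(torch.cat([x, cond], dim=1))
        return self.dec(torch.cat([z, cond], dim=1)), z


model = LossConditionalAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(1000):
    x = torch.rand(64, 784)                   # stand-in for a real data batch
    # Sample a fresh loss weight each step, log-uniform in [1e-3, 1e1] (assumed range).
    log_lam = (torch.rand(1, 1) * 4.0 - 3.0) * torch.log(torch.tensor(10.0))
    lam = log_lam.exp()

    recon, z = model(x, log_lam)
    # Weighted-sum loss: reconstruction + lam * regularizer, evaluated at the
    # same lam the network was conditioned on.
    loss = ((recon - x) ** 2).mean() + lam * (z ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# At test time the single trained model covers the whole range of weights, e.g.
# recon, _ = model(x, torch.log(torch.tensor([[0.5]])))
```

Because λ is resampled every step and fed to the network, the single set of weights is optimized for the expected loss over the sampling distribution, which is what lets one model stand in for a family of models each trained on a fixed weighting.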
