
Disentangled Sequential Autoencoder

2020-03-16

Abstract

We present a VAE architecture for encoding and generating high-dimensional sequential data, such as video or audio. Our deep generative model learns a latent representation of the data which is split into a static and a dynamic part, allowing us to approximately disentangle latent time-dependent features (dynamics) from features which are preserved over time (content). This architecture gives us partial control over generating content and dynamics by conditioning on either one of these sets of features. In our experiments on artificially generated cartoon video clips and voice recordings, we show that we can convert the content of a given sequence into that of another by such content swapping. For audio, this allows us to convert a male speaker into a female speaker and vice versa, while for video we can separately manipulate shapes and dynamics. Furthermore, we give empirical evidence for the hypothesis that stochastic RNNs as latent state models are more efficient at compressing and generating long sequences than deterministic ones, which may be relevant for applications in video compression.
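The abstract only sketches the architecture at a high level. As a rough illustration of the static/dynamic split it describes, here is a minimal PyTorch sketch: one static latent `f` per sequence, one dynamic latent `z_t` per time step, and a decoder that conditions on both. All module names, layer choices, and dimensions here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

def reparameterize(mu, logvar):
    # Sample z ~ N(mu, sigma^2) via the reparameterization trick.
    std = torch.exp(0.5 * logvar)
    return mu + std * torch.randn_like(std)

class DisentangledSeqAE(nn.Module):
    """Illustrative sketch (not the paper's code): a per-sequence
    static latent f and per-frame dynamic latents z_t."""

    def __init__(self, x_dim=64, h_dim=128, f_dim=16, z_dim=8):
        super().__init__()
        # Shared frame-level encoder.
        self.frame_enc = nn.Linear(x_dim, h_dim)
        # Static path: summarize the whole sequence into f.
        self.f_rnn = nn.LSTM(h_dim, h_dim, batch_first=True)
        self.f_mu = nn.Linear(h_dim, f_dim)
        self.f_logvar = nn.Linear(h_dim, f_dim)
        # Dynamic path: a stochastic RNN posterior over z_t.
        self.z_rnn = nn.LSTM(h_dim, h_dim, batch_first=True)
        self.z_mu = nn.Linear(h_dim, z_dim)
        self.z_logvar = nn.Linear(h_dim, z_dim)
        # Decoder reconstructs each frame from [f, z_t].
        self.dec = nn.Sequential(
            nn.Linear(f_dim + z_dim, h_dim), nn.ReLU(),
            nn.Linear(h_dim, x_dim),
        )

    def forward(self, x):                      # x: (B, T, x_dim)
        h = torch.relu(self.frame_enc(x))      # (B, T, h_dim)
        # Static latent from the final hidden state of a sequence RNN.
        _, (h_T, _) = self.f_rnn(h)
        f_mu, f_logvar = self.f_mu(h_T[-1]), self.f_logvar(h_T[-1])
        f = reparameterize(f_mu, f_logvar)     # (B, f_dim)
        # Dynamic latents, one sample per time step.
        h_z, _ = self.z_rnn(h)
        z_mu, z_logvar = self.z_mu(h_z), self.z_logvar(h_z)
        z = reparameterize(z_mu, z_logvar)     # (B, T, z_dim)
        # Tile f over time and decode each frame from [f, z_t].
        f_tiled = f.unsqueeze(1).expand(-1, x.size(1), -1)
        x_hat = self.dec(torch.cat([f_tiled, z], dim=-1))
        return x_hat, (f_mu, f_logvar), (z_mu, z_logvar)
```

In this sketch, the content swapping mentioned in the abstract amounts to decoding the dynamic latents `z` of one sequence together with the static latent `f` of another; training would add KL terms for both `f` and `z` to the reconstruction loss, as in a standard VAE objective.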
