Stochastic Video Generation with a Learned Prior

2020-03-19

Abstract

Generating video frames that accurately predict future world states is challenging. Existing approaches either fail to capture the full distribution of outcomes, yield blurry generations, or both. In this paper we introduce a video generation model with a learned prior over stochastic latent variables at each time step. Video frames are generated by drawing samples from this prior and combining them with a deterministic estimate of the future frame. The approach is simple and easily trained end-to-end on a variety of datasets. Sample generations are both varied and sharp, even many frames into the future, and compare favorably to those from existing approaches.
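The generation loop described in the abstract — sample a latent from a prior conditioned on past frames, then combine it with a deterministic next-frame estimate — can be sketched in a few lines. The sketch below is illustrative only: the dimensions, the random linear maps standing in for the learned prior and frame-predictor networks, and the `tanh` combination are all assumptions, not the paper's architecture (which uses recurrent networks trained with a variational objective).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration (not from the paper).
FRAME_DIM, LATENT_DIM = 16, 4

# Stand-in "networks": random linear maps playing the roles of the
# learned prior p(z_t | x_{1:t-1}) and the frame predictor.
W_prior_mu = rng.normal(size=(LATENT_DIM, FRAME_DIM)) * 0.1
W_prior_logvar = rng.normal(size=(LATENT_DIM, FRAME_DIM)) * 0.1
W_pred = rng.normal(size=(FRAME_DIM, FRAME_DIM + LATENT_DIM)) * 0.1

def generate(x_prev, n_steps):
    """Roll the model forward: at each step, sample z_t from the
    conditional prior, then combine it with a deterministic estimate
    of the next frame."""
    frames = []
    x = x_prev
    for _ in range(n_steps):
        mu = W_prior_mu @ x                  # prior mean
        logvar = W_prior_logvar @ x          # prior log-variance
        # Reparameterized Gaussian sample from the learned prior.
        z = mu + np.exp(0.5 * logvar) * rng.normal(size=LATENT_DIM)
        # Combine previous frame and latent into the next frame.
        x = np.tanh(W_pred @ np.concatenate([x, z]))
        frames.append(x)
    return frames

frames = generate(rng.normal(size=FRAME_DIM), n_steps=5)
print(len(frames), frames[0].shape)  # 5 (16,)
```

Because each step draws a fresh `z`, repeated rollouts from the same initial frame diverge, which is how the model produces varied (rather than single-mode or blurred) futures.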

