Resource Paper: A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music


2020-03-16

Abstract

The Variational Autoencoder (VAE) has proven to be an effective model for producing semantically meaningful latent representations for natural data. However, it has thus far seen limited application to sequential data, and, as we demonstrate, existing recurrent VAE models have difficulty modeling sequences with long-term structure. To address this issue, we propose the use of a hierarchical decoder, which first outputs embeddings for subsequences of the input and then uses these embeddings to generate each subsequence independently. This structure encourages the model to utilize its latent code, thereby avoiding the "posterior collapse" problem which remains an issue for recurrent VAEs. We apply this architecture to modeling sequences of musical notes and find that it exhibits dramatically better sampling, interpolation, and reconstruction performance than a "flat" baseline model. An implementation of our "MusicVAE" is available online.
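The abstract's hierarchical decoding scheme can be illustrated with a toy numpy sketch: a "conductor" RNN first turns the latent code into one embedding per subsequence, and a sub-decoder RNN, re-initialized for each subsequence, then generates tokens conditioned only on that embedding. All names, dimensions, and the simple tanh-RNN cells below are illustrative assumptions, not the paper's actual architecture or trained weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical, chosen only for illustration).
Z, H, E, V = 8, 16, 8, 10   # latent, hidden, embedding, vocabulary sizes
U, T = 4, 4                  # subsequences per sequence, tokens per subsequence

# Randomly initialized weights stand in for trained parameters.
W_cond = rng.normal(0, 0.1, (H, Z + H))   # conductor RNN cell
W_emb  = rng.normal(0, 0.1, (E, H))       # conductor hidden -> subsequence embedding
W_dec  = rng.normal(0, 0.1, (H, E + H))   # per-subsequence decoder RNN cell
W_out  = rng.normal(0, 0.1, (V, H))       # decoder hidden -> token logits

def hierarchical_decode(z):
    """Decode latent z into U subsequences of T tokens each."""
    tokens = []
    h_c = np.zeros(H)
    for _ in range(U):
        # Conductor step: one embedding c per subsequence.
        h_c = np.tanh(W_cond @ np.concatenate([z, h_c]))
        c = W_emb @ h_c
        # The sub-decoder state is reset for every subsequence, so tokens
        # can depend on the past only through c (and hence through z) --
        # the structural pressure that discourages posterior collapse.
        h_d = np.zeros(H)
        for _ in range(T):
            h_d = np.tanh(W_dec @ np.concatenate([c, h_d]))
            tokens.append(int(np.argmax(W_out @ h_d)))
    return tokens

seq = hierarchical_decode(rng.normal(size=Z))
print(len(seq))  # U * T = 16 tokens
```

A "flat" baseline would instead run a single decoder RNN over all U * T steps, letting it model the sequence autoregressively and largely ignore z; the reset at each subsequence boundary is what forces the latent code to carry long-term structure.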

