Resource paper: A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music

2019-08-22

Abstract: The Variational Autoencoder (VAE) has proven to be an effective model for producing semantically meaningful latent representations for natural data. However, it has thus far seen limited application to sequential data, and, as we demonstrate, existing recurrent VAE models have difficulty modeling sequences with long-term structure. To address this issue, we propose the use of a hierarchical decoder, which first outputs embeddings for subsequences of the input and then uses these embeddings to generate each subsequence independently. This structure encourages the model to utilize its latent code, thereby avoiding the "posterior collapse" problem which remains an issue for recurrent VAEs. We apply this architecture to modeling sequences of musical notes and find that it exhibits dramatically better sampling, interpolation, and reconstruction performance than a "flat" baseline model. An implementation of our "MusicVAE" is available online.
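The two-level decoding idea described in the abstract can be sketched as follows. This is a minimal illustrative toy, not the paper's actual architecture: MusicVAE uses LSTM "conductor" and note decoders with autoregressive, softmax-sampled outputs, whereas here both levels are plain tanh RNNs with made-up dimensions, and the latent code z is fed directly at every conductor step. The point is only the structure: z drives a top-level RNN that emits one embedding per subsequence, and each subsequence is then decoded independently from its embedding alone.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not the paper's hyperparameters)
LATENT_DIM = 8        # size of the latent code z
EMBED_DIM = 6         # size of each subsequence ("conductor") embedding
NOTE_DIM = 4          # size of one decoded note vector
NUM_SUBSEQS = 3       # e.g. bars in a phrase
NOTES_PER_SUBSEQ = 5  # notes decoded per bar

def simple_rnn(x_seq, h0, W_x, W_h):
    """Minimal tanh RNN: returns the hidden state at every step."""
    h, states = h0, []
    for x in x_seq:
        h = np.tanh(W_x @ x + W_h @ h)
        states.append(h)
    return states

def hierarchical_decode(z, params):
    # Level 1 ("conductor"): unroll an RNN driven by z to get one
    # embedding per subsequence of the output.
    embeds = simple_rnn([z] * NUM_SUBSEQS, np.zeros(EMBED_DIM),
                        params["Wc_x"], params["Wc_h"])
    # Level 2: decode each subsequence independently, conditioned only
    # on its own embedding -- never on the other subsequences. This is
    # what forces information to flow through z and the embeddings.
    notes = []
    for e in embeds:
        notes.extend(simple_rnn([e] * NOTES_PER_SUBSEQ, np.zeros(NOTE_DIM),
                                params["Wn_x"], params["Wn_h"]))
    return np.array(notes)  # shape: (NUM_SUBSEQS * NOTES_PER_SUBSEQ, NOTE_DIM)

params = {
    "Wc_x": rng.normal(size=(EMBED_DIM, LATENT_DIM)),
    "Wc_h": rng.normal(size=(EMBED_DIM, EMBED_DIM)),
    "Wn_x": rng.normal(size=(NOTE_DIM, EMBED_DIM)),
    "Wn_h": rng.normal(size=(NOTE_DIM, NOTE_DIM)),
}

z = rng.normal(size=LATENT_DIM)
output = hierarchical_decode(z, params)
```

A "flat" baseline would instead run one RNN over all 15 steps from z, letting the recurrent state carry most of the information and leaving z underused; the independence of the level-2 decoders is what discourages that posterior collapse.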

