Unsupervised Learning of Spoken Language with Visual Context

2020-02-07

Abstract 

Humans learn to speak before they can read or write, so why can’t computers do the same? In this paper, we present a deep neural network model capable of rudimentary spoken language acquisition using untranscribed audio training data, whose only supervision comes in the form of contextually relevant visual images. We describe the collection of our data comprised of over 120,000 spoken audio captions for the Places image dataset and evaluate our model on an image search and annotation task. We also provide some visualizations which suggest that our model is learning to recognize meaningful words within the caption spectrograms.
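The abstract describes mapping untranscribed audio captions and images into a shared space so that captions can retrieve their matching images. A minimal sketch of that kind of dual-encoder retrieval is below; the linear projections, feature dimensions, and random inputs are purely illustrative stand-ins, not the paper's actual network or data.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(x, W):
    """Project features into the shared embedding space and L2-normalize."""
    z = x @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

# Hypothetical dimensions: 40-d spectrogram-like features, 512-d image
# features, and a shared 128-d embedding space. In the paper these
# projections would be learned deep encoders, not random matrices.
W_audio = rng.standard_normal((40, 128))
W_image = rng.standard_normal((512, 128))

audio = rng.standard_normal((5, 40))    # 5 spoken-caption feature vectors
images = rng.standard_normal((5, 512))  # 5 candidate-image feature vectors

A = embed(audio, W_audio)
V = embed(images, W_image)

# Similarity matrix: S[i, j] = cosine similarity of caption i and image j.
S = A @ V.T

# Image search: for each caption, rank the candidate images by similarity.
ranking = np.argsort(-S, axis=1)
print(ranking[0])  # image indices for caption 0, best match first
```

Training such a model typically pushes the similarity of matched caption–image pairs above that of mismatched pairs (e.g. with a margin ranking loss), so that ranking by `S` performs the image search and annotation tasks evaluated in the paper.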

