
Jointly Discovering Visual Objects and Spoken Words from Raw Sensory Input

2019-10-22
Abstract. In this paper, we explore neural network models that learn to associate segments of spoken audio captions with the semantically relevant portions of natural images that they refer to. We demonstrate that these audio-visual associative localizations emerge from network-internal representations learned as a by-product of training to perform an image-audio retrieval task. Our models operate directly on the image pixels and speech waveform, and do not rely on any conventional supervision in the form of labels, segmentations, or alignments between the modalities during training. We perform analysis using the Places 205 and ADE20k datasets, demonstrating that our models implicitly learn semantically coupled object and word detectors.
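To make the retrieval setup described in the abstract concrete, the following is a minimal sketch in Python/PyTorch of a dual-encoder that scores an image against a spoken caption by comparing a spatial grid of visual embeddings with a temporal sequence of audio embeddings (a matchmap-style pooled dot product). The layer sizes, the log-mel spectrogram front end (used here as a stand-in for the raw waveform input), and the pooling choice are illustrative assumptions, not the authors' exact architecture.

```python
# Illustrative sketch of matchmap-style audio-visual retrieval scoring.
# All architectural details below are assumptions for demonstration only.
import torch
import torch.nn as nn


class ImageEncoder(nn.Module):
    """Maps raw pixels (B, 3, H, W) to a spatial grid of embeddings (B, D, H', W')."""
    def __init__(self, dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, dim, 3, stride=2, padding=1),
        )

    def forward(self, images):
        return self.net(images)


class AudioEncoder(nn.Module):
    """Maps a spectrogram (B, 1, F, T) to a temporal sequence of embeddings (B, D, T')."""
    def __init__(self, dim=512, n_mels=40):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, (n_mels, 5), padding=(0, 2)), nn.ReLU(),
            nn.Conv2d(64, 128, (1, 5), stride=(1, 2), padding=(0, 2)), nn.ReLU(),
            nn.Conv2d(128, dim, (1, 5), stride=(1, 2), padding=(0, 2)),
        )

    def forward(self, specs):
        # Collapse the (now singleton) frequency axis, leaving (B, D, T').
        return self.net(specs).squeeze(2)


def matchmap_score(img_feats, aud_feats):
    """Similarity between one image grid (D, H, W) and one audio sequence (D, T):
    a dot product at every (h, w, t) cell, max-pooled over time and averaged over space."""
    matchmap = torch.einsum('dhw,dt->hwt', img_feats, aud_feats)
    return matchmap.max(dim=2).values.mean()
```

In practice, such encoders would be trained with a ranking loss over matched and mismatched image-caption pairs within a batch, so that, as the abstract describes, the word- and object-level localizations emerge without any labels, segmentations, or cross-modal alignments.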

