Abstract. In this paper, we explore neural network models that learn to associate segments of spoken audio captions with the semantically relevant portions
of natural images that they refer to. We demonstrate that these audio-visual associative localizations emerge from network-internal representations learned as a
by-product of training to perform an image-audio retrieval task. Our models operate directly on the image pixels and speech waveform, and do not rely on any conventional supervision in the form of labels, segmentations, or alignments between
the modalities during training. We perform analysis using the Places 205 and
ADE20k datasets demonstrating that our models implicitly learn semanticallycoupled object and word detectors