Abstract. We introduce PixelPlayer, a system that, by leveraging large
amounts of unlabeled videos, learns to locate image regions that produce sounds and to separate the input sounds into a set of components that
represent the sound from each pixel. Our approach capitalizes on the
natural synchronization of the visual and audio modalities to learn models that jointly parse sounds and images, without requiring additional
manual supervision. Experimental results on a newly collected MUSIC
dataset show that our proposed Mix-and-Separate framework outperforms several baselines on source separation. Qualitative results suggest
that our model learns to ground sounds in vision, enabling applications such
as independently adjusting the volume of sound sources.