Music imagery information retrieval (MIIR) is an emerging field of research at the intersection of cognitive neuroscience and music information retrieval. The goal is to identify music pieces from brain signals recorded while they were either perceived or imagined. MIIR systems may one day be able to recognize a song merely as we think of it. Being able to reliably distinguish between just two different imagined music pieces would already allow us to build a music-based brain-computer interface (BCI) for patients who can communicate only by modulating their own neural activity.
As a step towards such technology, we present a public domain dataset of electroencephalography (EEG) recordings taken during music perception and imagination. We acquired this data during an ongoing study that has so far comprised 10 subjects listening to and imagining 12 short music fragments – each 7s-16s long – taken from well-known pieces. These stimuli were selected from different genres and systematically span several musical dimensions such as meter, tempo and the presence of lyrics. This allows various retrieval and classification scenarios to be addressed, such as stimulus recognition, tempo estimation, meter recognition, and lyrics detection.
The dataset is primarily intended to enable music information retrieval researchers interested in these new MIIR challenges to easily test and adapt their existing approaches for music analysis – such as fingerprinting, beat tracking or cross-modal synchronization – on this new kind of data. We also hope that the OpenMIIR dataset will facilitate stronger interdisciplinary collaboration between music information retrieval researchers and neuroscientists.