Abstract
What if we could effectively read the mind and transfer
human visual capabilities to computer vision methods? In
this paper, we address this question by developing the first
visual object classifier driven by human brain signals. In particular, we employ EEG data evoked by visual
object stimuli, combined with Recurrent Neural Networks
(RNNs), to learn a discriminative brain activity manifold of
visual categories in an effort to read the mind. Afterward,
we transfer the learned capabilities to machines by training
a Convolutional Neural Network (CNN)–based regressor
to project images onto the learned manifold, thus allowing
machines to employ human brain–based features for automated visual classification. We use a 128-channel EEG with
active electrodes to record the brain activity of several subjects
as they view images of 40 ImageNet object classes. The
proposed RNN-based approach for discriminating object
classes using brain signals reaches an average accuracy of
about 83%, which greatly outperforms existing methods attempting to learn EEG visual object representations. As for
automated object categorization, our human brain–driven
approach obtains competitive performance, comparable to
that achieved by powerful CNN models, and is also able
to generalize across different visual datasets.
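As a concrete illustration of the two-stage pipeline summarized above, the following is a minimal sketch, assuming a PyTorch implementation; the layer sizes, the single-layer LSTM, the toy CNN backbone, and all names (EEGEncoder, ImageRegressor) are illustrative assumptions, not the paper's actual architecture or hyperparameters.

```python
# Minimal sketch of the two-stage pipeline; all sizes and names are
# illustrative assumptions, not the paper's reported configuration.
import torch
import torch.nn as nn

class EEGEncoder(nn.Module):
    """Stage 1: RNN that maps an EEG sequence (time x 128 channels)
    to a manifold embedding plus class logits for the 40 categories."""
    def __init__(self, n_channels=128, hidden=128, embed_dim=128, n_classes=40):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.to_embed = nn.Linear(hidden, embed_dim)
        self.classifier = nn.Linear(embed_dim, n_classes)

    def forward(self, eeg):                   # eeg: (batch, time, channels)
        _, (h, _) = self.lstm(eeg)            # final hidden state summarizes the trial
        z = self.to_embed(h[-1])              # brain activity embedding
        return z, self.classifier(z)

class ImageRegressor(nn.Module):
    """Stage 2: CNN that regresses an image onto the learned EEG
    embedding space (a real system would use a deeper backbone)."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.regress = nn.Linear(64, embed_dim)

    def forward(self, img):                   # img: (batch, 3, H, W)
        return self.regress(self.features(img))

# Stage 1 would train EEGEncoder with cross-entropy on the 40 classes;
# Stage 2 freezes it and fits ImageRegressor to its embeddings:
encoder, regressor = EEGEncoder(), ImageRegressor()
eeg = torch.randn(8, 440, 128)                # 8 trials, 440 time steps, 128 channels
img = torch.randn(8, 3, 224, 224)             # the corresponding stimulus images
with torch.no_grad():
    target_z, _ = encoder(eeg)                # frozen brain-derived targets
loss = nn.functional.mse_loss(regressor(img), target_z)
loss.backward()
```

Once trained, the regressor alone maps any image into the brain-derived feature space, where a simple classifier over those embeddings performs the automated visual categorization described above.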