Abstract
What is the relationship between the sentence representations learned by deep recurrent models and those encoded by the brain? Is there any correspondence between the hidden layers of these recurrent models and brain regions during sentence processing? Can these deep models be used to synthesize brain data that can then be used in other extrinsic tasks? We investigate these questions using sentences with simple syntax and semantics (e.g., The bone was eaten by the dog.).
We consider multiple neural network architectures, including the recently proposed ELMo and BERT. We use magnetoencephalography (MEG) brain recordings collected from human subjects as they read these simple sentences.