Abstract
Recently, much progress has been made in learning general-purpose sentence representations that can be used across domains. However, most existing models treat each word in a sentence equally. In contrast, extensive studies have shown that humans read sentences efficiently by making a sequence of fixations and saccades.
This motivates us to improve sentence representations by assigning different weights to the vectors of the component words, which can be viewed as an attention mechanism over single sentences. To that end, we propose two novel attention models in which the attention weights are derived from significant predictors of human reading time, i.e., Surprisal, POS tags, and CCG supertags. Extensive experiments demonstrate that the proposed methods significantly improve upon state-of-the-art sentence representation models.