Abstract
Automatic grammatical error correction
(GEC) research has made remarkable progress
in the past decade. However, all existing approaches to GEC correct errors by considering
a single sentence alone and ignoring crucial
cross-sentence context. Some errors can be corrected reliably only by using cross-sentence context, and models can also benefit from the additional contextual information when correcting other errors. In this paper, we address
this serious limitation of existing approaches
and improve strong neural encoder-decoder
models by appropriately modeling wider
contexts. We employ an auxiliary encoder that encodes the previous sentences and incorporate this encoding into the decoder via attention and
gating mechanisms. Our approach results
in statistically significant improvements
in overall GEC performance over strong
baselines across multiple test sets. Analysis of our cross-sentence GEC model on a synthetic dataset shows that it performs well on verb tense corrections that require cross-sentence context.
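The following is a minimal sketch of the kind of mechanism the abstract describes: an auxiliary encoder over the previous sentences whose output is attended to and gated into a decoder state. All module and variable names here are hypothetical, and the sketch is illustrative rather than the paper's exact architecture.

```python
# Hypothetical sketch, not the paper's implementation: an auxiliary encoder
# over previous-sentence embeddings, fused into a decoder state via
# attention and a gating mechanism.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AuxiliaryContextFusion(nn.Module):
    def __init__(self, hidden_dim: int):
        super().__init__()
        # Auxiliary encoder over embeddings of the previous sentences.
        self.aux_encoder = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        # Bilinear attention scoring the decoder state against context states.
        self.attn = nn.Linear(hidden_dim, hidden_dim, bias=False)
        # Gate controlling how much context flows into the decoder state.
        self.gate = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, dec_state, prev_sent_embeds):
        # dec_state: (batch, hidden); prev_sent_embeds: (batch, ctx_len, hidden)
        ctx_states, _ = self.aux_encoder(prev_sent_embeds)   # (batch, ctx_len, hidden)
        scores = torch.bmm(ctx_states, self.attn(dec_state).unsqueeze(2))
        weights = F.softmax(scores, dim=1)                   # attention over context
        context = (weights * ctx_states).sum(dim=1)          # (batch, hidden)
        g = torch.sigmoid(self.gate(torch.cat([dec_state, context], dim=-1)))
        # Gated combination of the original decoder state and the context.
        return g * dec_state + (1.0 - g) * context
```

A usage example under the same assumptions: for one decoder step with a batch of 8 and hidden size 512, `AuxiliaryContextFusion(512)(torch.randn(8, 512), torch.randn(8, 40, 512))` returns a fused decoder state of shape `(8, 512)`, where 40 is the number of context tokens drawn from the previous sentences.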