Abstract
Wikipedia can fairly be described as a behemoth, considering the sheer volume of content that is added to or removed from its several projects every minute. This creates immense scope in the field of natural language processing for developing automated tools for content moderation and review. In this paper
we propose the Self Attentive Revision Encoder (StRE), which leverages the orthographic similarity of lexical units to predict the quality of new edits. In contrast to existing approaches, which primarily employ features such as page reputation, editor activity, or rule-based heuristics, we utilize the textual content of the edits, which we believe carries superior signatures of their quality. More specifically, we deploy deep encoders to generate representations of the edits from their textual content, which we then leverage to infer quality. We
further contribute a novel dataset containing ∼21M revisions across 32K Wikipedia pages and demonstrate that StRE outperforms existing methods by a significant margin of at least 17% and at most 103%. Our pre-trained model achieves these results after retraining on a set as small as 20% of the edits in a wikipage. To the best of our knowledge, this is also the first attempt to apply deep language models to the enormous domain of automated content moderation and review in Wikipedia.