Abstract
There are many different ways in which external information might be used in an NLP task.
This paper investigates how external syntactic information can be used most effectively
in the Semantic Role Labeling (SRL) task.
We evaluate three different ways of encoding syntactic parses and three different ways
of injecting them into a state-of-the-art neural
ELMo-based SRL sequence labeling model.
We show that using a constituency representation as input features improves performance
the most, achieving a new state-of-the-art for
non-ensemble SRL models on the in-domain
CoNLL’05 and CoNLL’12 benchmarks.