Abstract
We experiment with two recent contextualized word embedding methods (ELMo and
BERT) in the context of open-domain argument search. For the first time, we show
how to leverage the power of contextualized word embeddings to classify and cluster
topic-dependent arguments, achieving impressive results on both tasks and across multiple
datasets. For argument classification, we improve the state-of-the-art for the UKP Sentential Argument Mining Corpus by 20.8 percentage points and for the IBM Debater - Evidence
Sentences dataset by 7.4 percentage points.
For the understudied task of argument clustering, we propose a pre-training step that improves over strong baselines by 7.8 percentage points on a novel dataset, and by 12.3 percentage points on the Argument Facet Similarity (AFS) Corpus.