SherLIiC: A Typed Event-Focused Lexical Inference Benchmark for Evaluating Natural Language Inference
Abstract
We present SherLIiC,
a testbed for lexical inference in context (LIiC), consisting of 3985
manually annotated inference rule candidates
(InfCands), accompanied by (i) ~960k unlabeled InfCands, and (ii) ~190k typed textual relations between Freebase entities extracted from the large entity-linked corpus
ClueWeb09. Each InfCand consists of one of
these relations, expressed as a lemmatized dependency path, and two argument placeholders, each linked to one or more Freebase types.
Due to our candidate selection process based
on strong distributional evidence, SherLIiC is
much harder than existing testbeds because
distributional evidence is of little utility in the
classification of InfCands. We also show that,
due to its construction, many of SherLIiC’s
correct InfCands are novel and missing from
existing rule bases. We evaluate a number of
strong baselines on SherLIiC, ranging from semantic vector space models to state-of-the-art
neural models of natural language inference
(NLI). We show that SherLIiC poses a tough
challenge to existing NLI systems.