Abstract
Benefiting from the excellent ability of neural networks to learn semantic representations, existing studies on entity linking (EL) have resorted to neural networks to exploit both the local mention-to-entity compatibility and the global interdependence between different EL decisions for target entity disambiguation. However, most neural collective EL methods depend entirely upon neural networks to automatically model the semantic dependencies between different EL decisions, lacking guidance from external knowledge. In this paper, we propose a novel end-to-end neural network with recurrent random-walk layers for collective EL, which introduces external knowledge to model the semantic interdependence between different EL decisions. Specifically, we first establish a model based on local context features, and then stack random-walk layers to propagate the evidence of related EL decisions into high-probability decisions, where the semantic interdependence between candidate entities is mainly induced from an external knowledge base. Finally, a semantic regularizer that preserves the consistency of collective EL decisions is incorporated into the conventional objective function, so that the external knowledge base can be fully exploited in collective EL. Experimental results and in-depth analysis on various datasets show that our model achieves better performance than other state-of-the-art models. Our code and data are released at
https://github.com/DeepLearnXMU/RRWEL
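To make the propagation step concrete, the sketch below illustrates the general idea of stacked random-walk layers: local candidate-entity scores are repeatedly mixed with scores propagated through a knowledge-base-derived relatedness matrix. The function name, the restart mixing rule, and all parameter values are our own assumptions for illustration, not the released RRWEL implementation.

```python
# Minimal sketch (assumed names and update rule, not the released RRWEL code):
# propagate local candidate scores through a KB-induced transition matrix so
# that evidence from confident EL decisions reinforces related candidates.
import numpy as np

def random_walk_layers(local_scores, transition, num_layers=3, restart=0.5):
    """local_scores: (n,) probabilities from the local (context) model.
    transition: (n, n) row-normalized semantic relatedness between candidate
    entities, induced from the external knowledge base."""
    scores = local_scores.copy()
    for _ in range(num_layers):
        # one random-walk layer: propagate along KB links, then mix back
        # with the local evidence (random walk with restart)
        scores = restart * local_scores + (1.0 - restart) * transition.T @ scores
        scores /= scores.sum()  # keep a valid probability distribution
    return scores

# Toy usage: three candidate entities, with the KB linking candidates 0 and 2.
local = np.array([0.5, 0.3, 0.2])
T = np.array([[0.2, 0.2, 0.6],
              [0.3, 0.4, 0.3],
              [0.6, 0.2, 0.2]])
print(random_walk_layers(local, T))
```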