Abstract
Due to limited labeled resources, cross-domain named entity recognition (NER) remains a challenging task. Most existing work considers a supervised setting, making use of labeled data for both the source and target domains. A disadvantage of such methods is that they cannot be trained for domains without NER data. To address this issue, we consider using cross-domain language modeling (LM) as a bridge across domains for NER domain adaptation, performing cross-domain and cross-task knowledge transfer by designing a novel parameter generation network. Results show that our method can effectively extract domain differences from cross-domain LM contrast, allowing unsupervised domain adaptation while also giving state-of-the-art results among supervised domain adaptation methods.