Abstract
Cross-lingual transfer, where a high-resource transfer language is used to improve the accuracy of a low-resource task language, is now an invaluable tool for improving performance of natural language processing (NLP) on low-resource languages. However, given a particular task language, it is not clear which language to transfer from, and the standard strategy is to select languages based on ad hoc criteria, usually the intuition of the experimenter. Since a large number of features contribute to the success of cross-lingual transfer (including phylogenetic similarity, typological properties, lexical overlap, and size of available data), even the most enlightened experimenter rarely considers all of these factors for the particular task at hand. In this paper, we frame the task of automatically selecting optimal transfer languages as a ranking problem, and build models that use the aforementioned features to perform this prediction. In experiments on representative NLP tasks, we demonstrate that our model predicts good transfer languages much better than ad hoc baselines that consider single features in isolation, and we glean insights on which features are most informative for each NLP task, which may inform future ad hoc selection even without use of our method.
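The abstract does not name a specific ranking model, so the sketch below is only one hedged illustration of how "transfer language selection as a ranking problem" might be set up: a learning-to-rank model trained on (task language, candidate transfer language) pairs, here using LightGBM's LambdaRank objective. The feature set, relevance labels, and group construction are assumptions for illustration, not the paper's actual setup.

```python
import numpy as np
import lightgbm as lgb

# Illustrative sketch only: the abstract does not specify a model.
# Each row describes one (task language, candidate transfer language) pair,
# with features mirroring those named in the abstract: phylogenetic
# similarity, typological similarity, lexical overlap, transfer data size.
rng = np.random.default_rng(0)
n_task_langs, n_candidates = 10, 20
X = rng.random((n_task_langs * n_candidates, 4))  # 4 assumed features per pair

# Graded relevance label per pair, e.g. derived from how well the candidate
# transfer language performed for that task language (synthetic here).
y = rng.integers(0, 5, size=n_task_langs * n_candidates)

# Group sizes: all candidates for one task language form one ranking group.
group = [n_candidates] * n_task_langs

ranker = lgb.LGBMRanker(objective="lambdarank", n_estimators=100)
ranker.fit(X, y, group=group)

# At test time, score all candidate transfer languages for a new task
# language and pick the top-ranked ones.
X_new = rng.random((n_candidates, 4))
scores = ranker.predict(X_new)
top3 = np.argsort(scores)[::-1][:3]  # indices of top-3 candidates
print("top-3 candidate transfer languages (by index):", top3)
```

Framing the problem this way means the model only needs to order candidate transfer languages correctly within each task-language group, rather than predict absolute downstream accuracy, which is why a ranking objective is a natural fit for this selection task.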