Abstract
Embedding a clause inside another (“the girl
[who likes cars [that run fast]] has arrived”)
is a fundamental resource that has been argued to be a key driver of linguistic expressiveness. As such, it plays a central role in
fundamental debates on what makes human
language unique, and how it might have
evolved. Empirical evidence on the prevalence and the limits of embeddings has however been based on either laboratory setups or
corpus data of relatively limited size. We introduce here a collection of large, dependency-parsed written corpora in 17 languages that allows us, for the first time, to capture clausal
embeddings through dependency graphs and assess their distribution. Our results indicate
that there is no evidence for hard constraints
on embedding depth: the tail of depth distributions is heavy. Moreover, although deeply
embedded clauses tend to be shorter, suggesting processing load issues, complex sentences
with many embeddings do not display a bias towards shallower embeddings. Taken together,
the results suggest that deep embeddings are
not disfavored in written language. More generally, our study illustrates how resources and
methods from latest-generation big-data NLP
can provide new perspectives on fundamental
questions in theoretical linguistics.
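The abstract's core measurement idea can be sketched concretely. A minimal, hypothetical illustration (not the authors' actual pipeline): given a Universal Dependencies-style parse, count the clausal relations (e.g., acl:relcl, ccomp, advcl) on each token's path to the root; the sentence's embedding depth is the maximum such count. The relation set and encoding below are assumptions for illustration.

```python
# Hypothetical sketch: estimating clausal embedding depth from a
# dependency parse encoded as 1-based head indices (0 = root) plus
# UD-style dependency relation labels. The set of relations treated
# as clausal is an assumption, not taken from the paper.
CLAUSAL = {"acl", "acl:relcl", "ccomp", "xcomp", "advcl", "csubj"}

def embedding_depth(heads, deprels):
    """Return the maximum number of clausal relations on any
    token-to-root path (a proxy for clausal embedding depth)."""
    def depth(i):  # i is a 1-based token index
        d = 0
        while heads[i - 1] != 0:          # walk up until the root
            if deprels[i - 1] in CLAUSAL:  # crossing a clause boundary
                d += 1
            i = heads[i - 1]
        return d
    return max(depth(i) for i in range(1, len(heads) + 1))

# "the girl who likes cars that run fast has arrived"
# tokens: 1:the 2:girl 3:who 4:likes 5:cars 6:that 7:run 8:fast 9:has 10:arrived
heads   = [2, 10, 4, 2, 4, 7, 5, 7, 10, 0]
deprels = ["det", "nsubj", "nsubj", "acl:relcl", "obj",
           "nsubj", "acl:relcl", "advmod", "aux", "root"]
print(embedding_depth(heads, deprels))  # -> 2: "that run fast" inside "who likes cars"
```

Applied at corpus scale, this per-sentence statistic yields the depth distributions whose heavy tails the abstract describes.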