Colorless green recurrent networks dream hierarchically

Authors
  • Kristina Gulordava (Universitat Pompeu Fabra)
  • Piotr Bojanowski (Facebook AI Research, Paris)
  • Edouard Grave (Facebook AI Research, New York)
  • Tal Linzen (Johns Hopkins University)
  • Marco Baroni (Facebook AI Research, Paris)

Abstract

Recurrent neural networks (RNNs) have achieved impressive results in a variety of linguistic processing tasks, suggesting that they can induce non-trivial properties of language. We investigate here to what extent RNNs learn to track abstract hierarchical syntactic structure. We test whether RNNs trained with a generic language modeling objective in four languages (Italian, English, Hebrew, Russian) can predict long-distance number agreement in various constructions. We include in our evaluation nonsensical sentences, where RNNs cannot rely on semantic or lexical cues ("The colorless green ideas I ate with the chair sleep furiously"), and, for Italian, we compare model performance to human intuitions. Our language-model-trained RNNs make reliable predictions about long-distance agreement and do not lag much behind human performance. We thus bring support to the hypothesis that RNNs are not just shallow-pattern extractors but also acquire deeper grammatical competence.
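
The evaluation protocol the abstract describes can be pictured concretely: feed the language model the sentence up to the target verb, then check whether it assigns higher probability to the correctly inflected verb form than to the wrong-number form. The sketch below is our own minimal illustration of this comparison, not the authors' released code; the ToyLM class, the prefers_grammatical function, and the toy vocabulary are hypothetical stand-ins, assuming a PyTorch-style word-level LSTM language model.

    import torch
    import torch.nn as nn

    class ToyLM(nn.Module):
        # Word-level LSTM language model; a stand-in for the paper's trained LMs.
        def __init__(self, vocab_size, emb_dim=32, hid_dim=64):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True)
            self.out = nn.Linear(hid_dim, vocab_size)

        def forward(self, token_ids):
            hidden, _ = self.lstm(self.embed(token_ids))
            return self.out(hidden)  # per-position logits over the vocabulary

    def prefers_grammatical(lm, vocab, prefix, correct_verb, wrong_verb):
        # Score the two candidate verb forms as continuations of the prefix
        # and report whether the grammatical one receives higher probability.
        ids = torch.tensor([[vocab[w] for w in prefix.split()]])
        with torch.no_grad():
            logits = lm(ids)[0, -1]  # next-word logits after the last prefix token
        log_probs = torch.log_softmax(logits, dim=-1)
        return bool(log_probs[vocab[correct_verb]] > log_probs[vocab[wrong_verb]])

    # Usage on the abstract's nonce sentence. The model here is untrained
    # (random weights), so its answer is arbitrary; the real evaluation
    # applies the same comparison with fully trained language models.
    words = "the colorless green ideas i ate with chair sleep sleeps".split()
    vocab = {w: i for i, w in enumerate(words)}
    lm = ToyLM(len(vocab))
    print(prefers_grammatical(lm, vocab,
                              "the colorless green ideas i ate with the chair",
                              "sleep", "sleeps"))

An agreement item counts as correctly handled when this comparison favors the grammatical form; accuracy over many such items, including the nonce ones where lexical semantics cannot help, is the quantity the abstract reports against human performance.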

Keywords: recurrent neural networks, syntax, number agreement, dependency treebanks

How to Cite:

Gulordava, K., Bojanowski, P., Grave, E., Linzen, T., & Baroni, M. (2019). "Colorless green recurrent networks dream hierarchically". Society for Computation in Linguistics, 2(1), 363–364. doi: https://doi.org/10.7275/zb8y-wg03

Published on 01 Jan 2019