
What Don't RNN Language Models Learn About Filler-Gap Dependencies?

Author
  • Rui P. Chaves (University at Buffalo)

Abstract

In a series of experiments, Wilcox et al. (2018, 2019) provide evidence suggesting that general-purpose state-of-the-art LSTM RNN language models have learned not only English filler-gap dependencies, but also some of their associated 'island' constraints (Ross 1967). In the present paper, I cast doubt on such claims and argue that, upon closer inspection, filler-gap dependencies and their associated island constraints are learned only very imperfectly. I conjecture that the LSTM RNN models in question have more likely learned surface statistical regularities in the dataset rather than higher-level abstract generalizations about the linguistic mechanisms underlying filler-gap constructions.

Keywords: Islands, Computational Psycholinguistics, Experimental Syntax, LSTM RNN

How to Cite:

Chaves, R. P. (2020) "What Don't RNN Language Models Learn About Filler-Gap Dependencies?", Society for Computation in Linguistics 3(1), 20-30. doi: https://doi.org/10.7275/f7yj-1n62


Published on
01 Jan 2020