How Well Do LSTM Language Models Learn Filler-gap Dependencies?

Authors
  • Satoru Ozaki (Carnegie Mellon University)
  • Dan Yurovsky (Carnegie Mellon University)
  • Lori Levin (Carnegie Mellon University)

Abstract

This paper revisits the question of what LSTMs know about the syntax of filler-gap dependencies in English. One contribution of this paper is to adjust the metrics used by Wilcox et al. (2018) and show that their language models (LMs) learn embedded wh-questions -- a kind of filler-gap dependency -- better than originally claimed. Another contribution is to examine four additional filler-gap dependency constructions to see whether LMs perform equally well on all types of filler-gap dependencies. We find that different constructions are learned to different extents, and that performance correlates with the frequency of each construction in the Penn Treebank Wall Street Journal corpus.

Keywords: syntax, psycholinguistics, RNN, LSTM

How to Cite:

Ozaki, S., Yurovsky, D. & Levin, L., (2022) “How Well Do LSTM Language Models Learn Filler-gap Dependencies?”, Society for Computation in Linguistics 5(1), 76-88. doi: https://doi.org/10.7275/414y-1893

Published on
01 Feb 2022