Paper

On Evaluating the Generalization of LSTM Models in Formal Languages

Authors
  • Mirac Suzgun (Harvard University)
  • Yonatan Belinkov (Harvard University)
  • Stuart M. Shieber (Harvard University)

Abstract

Recurrent Neural Networks (RNNs) are theoretically Turing-complete and have established themselves as a dominant model for language processing. Yet there remains uncertainty regarding their language learning capabilities. In this paper, we empirically evaluate the capability of Long Short-Term Memory (LSTM) networks, a popular extension of simple RNNs, to inductively learn simple formal languages, in particular a^n b^n, a^n b^n c^n, and a^n b^n c^n d^n. We investigate the influence of various aspects of learning, such as the training data regime and model capacity, on generalization to unobserved samples. We find striking differences in model performance under different training settings and highlight the need for careful analysis and assessment when making claims about the learning capabilities of neural network models.
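For readers unfamiliar with these languages, the sketch below (illustrative only, not the authors' code) shows how member strings of a^n b^n, a^n b^n c^n, and a^n b^n c^n d^n can be generated as character sequences of the kind typically used to train and test such models; the function name, symbol choices, and language keys are assumptions for demonstration.

```python
# Illustrative sketch: enumerate the length-n member of each formal language
# studied in the paper. Not the authors' implementation; names are hypothetical.

def sample(language: str, n: int) -> str:
    """Return the unique member of the language with exponent n,
    e.g. sample("anbn", 2) == "aabb"."""
    symbols = {"anbn": "ab", "anbncn": "abc", "anbncndn": "abcd"}[language]
    return "".join(ch * n for ch in symbols)

if __name__ == "__main__":
    for lang in ("anbn", "anbncn", "anbncndn"):
        print(lang, "->", sample(lang, 3))
```

A generalization test in this setting would train on strings up to some exponent n and evaluate on strings with larger, unobserved n.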

Keywords: LSTM, RNN, Long Short-Term Memory network, CSL, CFL, context-sensitive, context-free, evaluation, generalization, formal languages, distribution, hidden units, window sizes, windows, uniform, U-shaped, beta-binomial

How to Cite:

Suzgun, M., Belinkov, Y. & Shieber, S. M. (2019) “On Evaluating the Generalization of LSTM Models in Formal Languages”, Society for Computation in Linguistics 2(1), 277-286. doi: https://doi.org/10.7275/s02b-4d91

Published on
01 Jan 2019