A Closer Look at the Performance of Neural Language Models on Reflexive Anaphor Licensing

Authors
  • Jennifer Hu (Massachusetts Institute of Technology)
  • Sherry Y. Chen (Massachusetts Institute of Technology)
  • Roger P. Levy (Massachusetts Institute of Technology)

Abstract

An emerging line of work uses psycholinguistic methods to evaluate the syntactic generalizations acquired by neural language models (NLMs). While this approach has shown NLMs to be capable of learning a wide range of linguistic knowledge, confounds in the design of previous experiments may have obscured the potential of NLMs to learn certain grammatical phenomena. Here we re-evaluate the performance of a range of NLMs on reflexive anaphor licensing. Under our paradigm, the models consistently show stronger evidence of learning than reported in previous work. Our approach demonstrates the value of well-controlled psycholinguistic methods in gaining a fine-grained understanding of NLM learning potential.

Keywords: language models, syntax, psycholinguistics, reflexive anaphor licensing
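
For readers unfamiliar with the evaluation approach the abstract alludes to, the sketch below (not the authors' code) illustrates one common surprisal-based psycholinguistic paradigm, assuming a Hugging Face GPT-2 model and hypothetical stimuli: a reflexive licensed by its local antecedent should receive lower surprisal than an unlicensed one.

    import math
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    def surprisal(prefix: str, target: str) -> float:
        """Total surprisal (bits) of `target` conditioned on `prefix`."""
        prefix_ids = tokenizer.encode(prefix)
        target_ids = tokenizer.encode(target)
        input_ids = torch.tensor([prefix_ids + target_ids])
        with torch.no_grad():
            log_probs = torch.log_softmax(model(input_ids).logits, dim=-1)
        bits = 0.0
        for i, tok in enumerate(target_ids):
            # The logits at position p predict the token at position p + 1.
            bits -= log_probs[0, len(prefix_ids) + i - 1, tok].item() / math.log(2)
        return bits

    # Hypothetical minimal pair (not the paper's stimuli): the reflexive must
    # agree with the local subject "surgeon", not the distractor "lawyers".
    gram = surprisal("The surgeon near the lawyers hurt", " himself")
    ungram = surprisal("The surgeon near the lawyers hurt", " themselves")
    print(f"himself: {gram:.2f} bits, themselves: {ungram:.2f} bits")

In this framing, evidence of learning is a consistently lower surprisal for the licensed reflexive across minimal pairs; the paper's contribution is a more carefully controlled version of such comparisons.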

How to Cite:

Hu, J., Chen, S. Y., & Levy, R. P. (2020). “A Closer Look at the Performance of Neural Language Models on Reflexive Anaphor Licensing”. Society for Computation in Linguistics 3(1), 382–392. doi: https://doi.org/10.7275/67qw-mf84

Published on
01 Jan 2020