Language Models Can Learn Exceptions to Syntactic Rules

Authors
  • Cara Su-Yi Leong (New York University)
  • Tal Linzen (New York University)

Abstract

Artificial neural networks can generalize productively to novel contexts. Can they also learn exceptions to those productive rules? We explore this question using the case of restrictions on English passivization (e.g., the fact that "The vacation lasted five days" is grammatical, but "*Five days was lasted by the vacation" is not). We collect human acceptability judgments for passive sentences with a range of verbs, and show that the probability distribution defined by GPT-2, a language model, matches the human judgments with high correlation. We also show that the relative acceptability of a verb in the active vs. passive voice is positively correlated with the relative frequency of its occurrence in those voices. These results provide preliminary support for the entrenchment hypothesis, according to which learners track and use the distributional properties of their input to learn negative exceptions to rules. At the same time, this hypothesis fails to explain the magnitude of unpassivizability demonstrated by certain individual verbs, suggesting that other cues to exceptionality are available in the linguistic input.

Keywords: passivization, negative evidence, language modeling, exceptions, syntactic generalization, entrenchment
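
To make the abstract's method more concrete, the sketch below shows one plausible way to compare GPT-2 sentence probabilities for active/passive pairs against human acceptability ratings. This is a hypothetical illustration using the Hugging Face transformers library, not the authors' released code; the example sentences and the numeric ratings are placeholders, not the paper's stimuli or data.

```python
# Hypothetical sketch: score active/passive pairs with GPT-2 and correlate the
# model's relative passive acceptability with (placeholder) human ratings.
import torch
from scipy.stats import spearmanr
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_log_prob(sentence: str) -> float:
    """Total log probability GPT-2 assigns to a sentence."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=input_ids, the model returns mean cross-entropy per predicted token.
        loss = model(ids, labels=ids).loss
    return -loss.item() * (ids.size(1) - 1)

# Illustrative items only; ratings are invented placeholders.
items = [
    {"active": "The vacation lasted five days.",
     "passive": "Five days was lasted by the vacation.",
     "human_passive_rating": 1.5},
    {"active": "The dog chased the cat.",
     "passive": "The cat was chased by the dog.",
     "human_passive_rating": 6.8},
    {"active": "The book cost ten dollars.",
     "passive": "Ten dollars was cost by the book.",
     "human_passive_rating": 2.1},
]

# Relative acceptability of the passive as a log-probability difference.
model_scores = [sentence_log_prob(x["passive"]) - sentence_log_prob(x["active"])
                for x in items]
human_scores = [x["human_passive_rating"] for x in items]
print(spearmanr(model_scores, human_scores))
```

The log-probability difference (passive minus active) is one simple way to operationalize "relative acceptability of a verb in the active vs. passive voice"; other scoring choices (e.g., per-token normalization) are possible.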

How to Cite:

Leong, C. & Linzen, T. (2023) “Language Models Can Learn Exceptions to Syntactic Rules”, Society for Computation in Linguistics 6(1), 133-144. doi: https://doi.org/10.7275/h25z-0y75


Published on
01 Jun 2023