Extended Abstract

Testing for Grammatical Category Abstraction in Neural Language Models

Authors
  • Najoung Kim (Johns Hopkins University)
  • Paul Smolensky (Johns Hopkins University / Microsoft Research, Redmond)

Abstract

We propose a new method, inspired by human developmental studies, for probing pretrained neural language models' ability to abstract grammatical categories (parts of speech) and generalize them to novel contexts. Our method does not require training a separate classifier, bypassing the methodological questions raised in the recent literature about the validity of diagnostic classifiers as probes. The results of our experiment testing BERT-large suggest that it can make category-based generalizations to a degree, but this capacity is still limited in several respects.
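To make the classifier-free probing setup concrete, the sketch below reads grammatical preferences directly off a pretrained model's masked-LM head, so no diagnostic classifier is trained. It is a minimal illustration, not the paper's actual experiment: it assumes the HuggingFace transformers library, and the model name, example sentence, and candidate words are illustrative stand-ins.

  # Minimal sketch of classifier-free probing: score candidate words at a
  # [MASK] position using only the pretrained masked-LM head. Assumes the
  # HuggingFace `transformers` library; sentence and candidates are
  # illustrative, not the paper's stimuli.
  import torch
  from transformers import BertTokenizer, BertForMaskedLM

  tokenizer = BertTokenizer.from_pretrained("bert-large-cased")
  model = BertForMaskedLM.from_pretrained("bert-large-cased")
  model.eval()

  def mask_logprobs(sentence, candidates):
      # Tokenize and locate the [MASK] position in the input.
      inputs = tokenizer(sentence, return_tensors="pt")
      mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
      # Read the pretrained LM's distribution at the masked position.
      with torch.no_grad():
          logits = model(**inputs).logits[0, mask_pos]
      log_probs = torch.log_softmax(logits, dim=-1)
      ids = tokenizer.convert_tokens_to_ids(candidates)
      return {w: log_probs[i].item() for w, i in zip(candidates, ids)}

  # Compare a noun and a verb as fillers of a noun-licensing context.
  print(mask_logprobs("The boy saw a [MASK] .", ["dog", "ran"]))

A higher log-probability for the noun than the verb in this context would reflect a category-sensitive preference, read off the model's own predictions rather than a trained probe.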

Keywords: grammatical categories, generalization, part-of-speech, neural language models, BERT

How to Cite:

Kim, N. & Smolensky, P. (2021) “Testing for Grammatical Category Abstraction in Neural Language Models”, Society for Computation in Linguistics 4(1), 467–470. doi: https://doi.org/10.7275/2nb8-ag59


Published on 01 Jan 2021