When Classifying Arguments, BERT Doesn't Care About Word Order... Except When It Matters

Authors
  • Isabel Papadimitriou (Stanford University)
  • Richard Futrell (University of California, Irvine)
  • Kyle Mahowald (University of Texas at Austin)

Abstract

We probe nouns in BERT contextual embedding space for grammatical role (subject vs. object of a clause), and examine how probing results vary between prototypical examples, where the role matches what we would expect from seeing that word in the context, and non-prototypical examples, where the role is mostly imparted by the context. In this way, we engage with the contrast that has arisen in the literature between studies that show contextual models to be grammatically sensitive, and others that show these models to be robust to changes in word order. Our experiments yield three results: 1) grammatical role is recovered in later layers for difficult non-prototypical cases, while prototypical cases are accurate without many layers of context; 2) when we switch the subject and the object of a sentence around (e.g., The chef cut the onion vs. The onion cut the chef), we see that the same word (e.g., onion) can be fluently identified as both a subject and an object; 3) subjecthood probing breaks down if we ablate local word order by shuffling words locally, breaking grammaticality.
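The abstract describes layer-wise probing of noun representations for subject vs. object status. Below is a minimal sketch of how such a probe could be set up with the Hugging Face transformers library and scikit-learn; the toy sentences, the choice of a logistic-regression probe, and the single-wordpiece assumption are illustrative and not taken from the paper.

```python
# A minimal sketch of layer-wise subjecthood probing (illustrative only;
# the paper's data, probe architecture, and evaluation differ).
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from transformers import BertModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

# (sentence, target noun, label): 1 = subject, 0 = object. Toy examples.
examples = [
    ("The chef cut the onion.", "chef", 1),
    ("The chef cut the onion.", "onion", 0),
    ("The onion cut the chef.", "onion", 1),
    ("The onion cut the chef.", "chef", 0),
]

def noun_vectors(sentence, noun):
    """Return one vector per BERT layer for the target noun.
    Assumes the noun is a single wordpiece in the BERT vocabulary."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden_states = model(**enc).hidden_states  # embeddings + 12 layers
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
    position = tokens.index(noun)
    return [layer[0, position].numpy() for layer in hidden_states]

# Collect per-layer features and fit one probe per layer.
per_layer_X, labels = None, []
for sentence, noun, label in examples:
    vecs = noun_vectors(sentence, noun)
    if per_layer_X is None:
        per_layer_X = [[] for _ in vecs]
    for i, v in enumerate(vecs):
        per_layer_X[i].append(v)
    labels.append(label)

for i, X in enumerate(per_layer_X):
    probe = LogisticRegression(max_iter=1000).fit(np.array(X), labels)
    print(f"layer {i}: train accuracy = {probe.score(np.array(X), labels):.2f}")
```

In practice, one probe would be trained per layer on held-out data, and comparing accuracy on prototypical vs. non-prototypical (role-swapped or shuffled) nouns across layers is what yields the kind of results summarized above.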

Keywords: Contextual embeddings, BERT, grammatical role, subjecthood, word order, verb arguments, prototype

How to Cite:

Papadimitriou, I., Futrell, R., & Mahowald, K. (2022). "When Classifying Arguments, BERT Doesn't Care About Word Order... Except When It Matters". Society for Computation in Linguistics 5(1), 203-205. doi: https://doi.org/10.7275/tvzb-rz76

Published on
01 Feb 2022