Does a Neural Model Understand the De Re / De Dicto Distinction?

Authors
  • Gaurav Kamath (McGill University)
  • Laurestine Bradford (McGill University)

Abstract

Neural network language models (NNLMs) are often casually said to "understand" language, but what linguistic structures do they really learn? We pose this question in the context of de re / de dicto ambiguities. Nouns and determiner phrases in intensional contexts, such as belief, desire, and modality, are subject to referential ambiguities. The phrase "Lilo believes an alien is on the loose," for example, has two interpretations: one ("de re") in which she believes, of a specific entity that happens to be an alien, that it is on the loose, and another ("de dicto") in which she believes some unspecified alien is on the loose. In this paper we confront an NNLM with contexts producing de re / de dicto ambiguities. We use coreference resolution to investigate which interpretive possibilities the model captures. We find that while RoBERTa is sensitive to the fact that intensional predicates and indefinite determiners each change coreference possibilities, it does not grasp how the two interact with each other, and hence misses a deeper level of semantic structure. This inquiry is novel in its cross-disciplinary approach to philosophy, semantics and NLP, bringing formal semantic insight to an active research area testing the nature of NNLMs' linguistic "understanding."
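To make the probing method concrete, the sketch below shows one way to query RoBERTa about coreference into an intensional context: mask a later mention of the indefinite and inspect which continuations the model prefers. This is only a minimal illustration, assuming the Hugging Face transformers library and the roberta-base checkpoint; the example sentence and scoring scheme are illustrative assumptions, not the paper's actual stimuli or evaluation.

```python
# Minimal sketch (not the authors' exact setup): probe RoBERTa's preferences
# for a later mention of an indefinite introduced under "believes".
# A definite or pronominal follow-up mention is most natural on a de re
# reading, so the model's predictions for the masked determiner hint at
# which interpretation it is tracking.
from transformers import pipeline

fill = pipeline("fill-mask", model="roberta-base")

# Illustrative context: an indefinite ("an alien") under an intensional verb,
# followed by a second mention whose determiner is masked.
context = ("Lilo believes an alien is on the loose. "
           "Stitch saw <mask> alien downtown yesterday.")

# Print the model's top candidates for the masked slot with their scores.
for candidate in fill(context, top_k=5):
    print(f"{candidate['token_str']!r}: {candidate['score']:.4f}")
```

Comparing such predictions across matched contexts (e.g., with and without the intensional predicate, or with definite vs. indefinite antecedents) is one way to test whether the model's coreference behavior reflects the de re / de dicto contrast rather than surface cues alone.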

Keywords: neural language models, intensionality, model interpretability, semantics, scope ambiguity

How to Cite:

Kamath, G. & Bradford, L. (2023) “Does a Neural Model Understand the De Re / De Dicto Distinction?”, Society for Computation in Linguistics 6(1), 69-84. doi: https://doi.org/10.7275/7286-6n89

Published on
01 Jun 2023