Pragmatic Competence in LLMs: The Case of Eliciture

Authors
  • Dingyi Pan (UC San Diego)
  • Andrew Kehler (UC San Diego)

Abstract

While much of the linguistic research on large language models (LLMs) has focused on evaluating their syntactic and semantic abilities, fewer studies have examined their skills in the domain of pragmatics. In this study, we focus on conversational elicitures, a type of non-mandated pragmatic inference where the felicity of the utterance is not at stake. This type of pragmatic enrichment involves a potential causal inference between the proposition denoted by a matrix clause and another derived from a relative clause modifying a direct object. For instance, from the sentence “Melissa detests the children who are arrogant and rude,” one can infer that Melissa detests the children because they are arrogant and rude, rather than interpreting the two propositions as unrelated facts. In this paper, we investigate whether LLMs can draw such inferences and use them in downstream syntactic processing tasks, in this case predicting high/low relative clause attachment. Our results suggest that larger and more recent models exhibit these capabilities.

Keywords: Computational pragmatics, Pragmatic reasoning, Large language models

How to Cite:

Pan, D. & Kehler, A. (2025) “Pragmatic Competence in LLMs: The Case of Eliciture”, Society for Computation in Linguistics 8(1): 43. doi: https://doi.org/10.7275/scil.3177


Published on
2025-06-14

Peer Reviewed