Extended Abstract

Do language models know how to be polite?

Authors
  • Soo-Hwan Lee (New York University)
  • Shaonan Wang (New York University)

Abstract

Politeness is often associated with a degree of formality that the speaker conveys to the addressee of a conversation. There are multiple ways to convey politeness in natural language. Languages such as Korean and Japanese, for instance, have politeness markers that appear in certain positions inside a given sentence. Sometimes, the absence of these politeness markers renders an utterance inappropriate. This work focuses on a particular case in which a politeness marker can be realized only when its dependency requirement is satisfied. While language model (LM) performance on syntactic dependencies such as filler-gap dependencies (Wilcox et al., 2018), subject-verb agreement (Linzen et al., 2016), anaphor binding (Hu et al., 2020), and control phenomena (Lee and Schuster, 2022) has been explored in recent years, little work has been done on non-syntactic dependencies that reflect politeness, or pragmatic effects more generally. The phenomenon at issue is unique in that the dependency is not fulfilled by any of the commonly assumed syntactic displacement or agreement patterns observed elsewhere in the human grammar. Our results suggest that the overall performance of Transformer-based LMs such as GPT-2 and the variants of BERT on this dependency test is unexpectedly poor. Since their performance is around or below chance accuracy on the main task of our experiment, we posit that these pretrained LMs fail to fully capture the politeness phenomenon in Korean. The performance of ChatGPT on a related task, however, is significantly better than that of its predecessors. While it is tempting to conclude that ChatGPT is better suited to capturing this specific phenomenon, we show that the model is right for the wrong reason: it merely selects the sentence that ends with the politeness marker, instead of recognizing the true dependency between the cue word and the target word.
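
For concreteness, the kind of minimal-pair test described above can be sketched as follows. This is an illustrative sketch rather than the authors' code: the model name and the two placeholder sentences are assumptions, and the criterion is simply that the LM assigns a higher total log-probability to the sentence whose politeness dependency is satisfied than to a minimally different sentence where it is not.

```python
# Illustrative sketch of a minimal-pair evaluation (not the authors' code).
# A causal LM counts as correct when it assigns a higher total log-probability
# to the sentence in which the politeness marker's dependency is satisfied.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "skt/kogpt2-base-v2"  # hypothetical choice of a Korean GPT-2 variant
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def sentence_logprob(sentence: str) -> float:
    """Total log-probability of `sentence` under the causal LM."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    # out.loss is the mean negative log-likelihood over the predicted tokens,
    # so multiply back by the number of predicted tokens to get the total.
    return -out.loss.item() * (ids.size(1) - 1)

# Placeholder minimal pair: replace with Korean sentences in which the cue word
# licensing the politeness marker is present vs. absent.
licensed = "<sentence with the cue word and the politeness marker>"
unlicensed = "<sentence with the politeness marker but no cue word>"

correct = sentence_logprob(licensed) > sentence_logprob(unlicensed)
print("Model prefers the licensed sentence:", correct)
```

The log-probability criterion here follows the standard minimal-pair paradigm of Linzen et al. (2016) and Wilcox et al. (2018), applied to the politeness dependency rather than a syntactic one.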

Keywords: linguistic dependency, politeness, Korean, language models

How to Cite:

Lee, S. & Wang, S. (2023) "Do language models know how to be polite?", Society for Computation in Linguistics 6(1), 375-378. doi: https://doi.org/10.7275/8621-5w02


Published on
01 Jun 2023