semantic-features: A User-Friendly Tool for Studying Contextual Word Embeddings in Interpretable Semantic Spaces

Authors
  • Jwalanthi Ranganathan (The University of Texas at Austin)
  • Rohan Jha (The University of Texas at Austin)
  • Kanishka Misra (Toyota Technological Institute at Chicago)
  • Kyle Mahowald

Abstract

We introduce semantic-features, an extensible, easy-to-use library based on Chronis et al. (2023) for studying contextualized word embeddings of LMs by projecting them into interpretable spaces. We apply this tool in an experiment measuring the contextual effect of the choice of dative construction (prepositional or double object) on the semantic interpretation of utterances (Bresnan, 2007). Specifically, we test whether “London” in “I sent London the letter.” is more likely to be interpreted as an animate referent (e.g., as the name of a person) than in “I sent the letter to London.” To this end, we devise a dataset of 450 sentence pairs, one in each dative construction, with recipients that are ambiguous between person-hood and place-hood. By applying semantic-features, we show that the contextualized word embeddings of three masked language models exhibit the expected sensitivities. This leaves us optimistic about the usefulness of our tool.
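The core idea, projecting a contextualized embedding onto interpretable semantic dimensions and comparing scores across the two dative constructions, can be sketched as below. This is a minimal illustration with randomly initialized stand-ins, not the semantic-features API: the projection matrix, feature names, and embeddings are all hypothetical placeholders for what the library would supply.

```python
import numpy as np

# Stand-ins for real model outputs and a learned projection (assumptions).
rng = np.random.default_rng(0)
d_model, features = 768, ["animate", "place", "concrete"]
W = rng.standard_normal((len(features), d_model))  # interpretable projection (stub)

# Contextual embeddings of "London" in each dative construction (stubs).
emb_do = rng.standard_normal(d_model)  # "I sent London the letter."
emb_pp = rng.standard_normal(d_model)  # "I sent the letter to London."

# Project into the interpretable space and compare feature scores.
scores_do = W @ emb_do
scores_pp = W @ emb_pp
delta_animate = scores_do[features.index("animate")] - scores_pp[features.index("animate")]
print(scores_do.shape, float(delta_animate))
```

In the actual experiment, a positive animacy difference for the double-object construction would reflect the hypothesized contextual effect; here the numbers are meaningless since the inputs are random.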

Keywords: language model analysis, contextualized word embeddings, distributional semantics, dative, neural networks

How to Cite:

Ranganathan, J., Jha, R., Misra, K. & Mahowald, K., (2025) “semantic-features: A User-Friendly Tool for Studying Contextual Word Embeddings in Interpretable Semantic Spaces”, Society for Computation in Linguistics 8(1): 44. doi: https://doi.org/10.7275/scil.3182


Published on 2025-06-13

Peer Reviewed