
What do you mean, BERT? Assessing BERT as a Distributional Semantics Model

Authors
  • Timothee Mickus (Université de Lorraine, CNRS, ATILF)
  • Denis Paperno (Utrecht University)
  • Mathieu Constant (Université de Lorraine, CNRS, ATILF)
  • Kees van Deemter (Utrecht University)

Abstract

Contextualized word embeddings, i.e. vector representations for words in context, are naturally seen as an extension of previous non-contextual distributional semantic models. In this work, we focus on BERT, a deep neural network that produces contextualized embeddings and has set the state of the art in several semantic tasks, and study the semantic coherence of its embedding space. While showing a tendency towards coherence, BERT does not fully live up to the natural expectations for a semantic vector space. In particular, we find that the position of the sentence in which a word occurs, despite carrying no meaning of its own, leaves a noticeable trace on the word embeddings and disturbs similarity relationships.
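The setup described above rests on extracting token-level vectors from BERT and comparing them by similarity. As a rough illustration of that idea (a minimal sketch, not the authors' pipeline; the model name bert-base-uncased, the example sentences, and the single-wordpiece assumption are all illustrative choices), the following snippet uses the Hugging Face transformers library to embed the same word at two different positions in a sentence and measure their cosine similarity:

    # Sketch: contextual embeddings for one word in two contexts, compared
    # by cosine similarity. Model and sentences are illustrative assumptions.
    import torch
    from transformers import BertTokenizer, BertModel

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    model = BertModel.from_pretrained("bert-base-uncased")
    model.eval()

    def word_embedding(sentence: str, word: str) -> torch.Tensor:
        """Return the contextual embedding of `word`'s first occurrence in `sentence`."""
        enc = tokenizer(sentence, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**enc).last_hidden_state[0]   # (seq_len, hidden_size)
        tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
        idx = tokens.index(word)                          # assumes the word is one wordpiece
        return hidden[idx]

    # Same word, earlier vs. later position in an otherwise similar context.
    v1 = word_embedding("The bank approved the loan.", "bank")
    v2 = word_embedding("After a long wait, the bank approved the loan.", "bank")
    print(torch.cosine_similarity(v1, v2, dim=0).item())

If sentence position carried no trace, such pairs would be expected to stay very close; the paper reports that positional information does in fact leave a measurable imprint on the vectors.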

Keywords: distributional semantics, contextualized word embeddings, neural networks

How to Cite:

Mickus, T., Paperno, D., Constant, M., & van Deemter, K. (2020). "What do you mean, BERT? Assessing BERT as a Distributional Semantics Model". Society for Computation in Linguistics 3(1), 350-361. doi: https://doi.org/10.7275/t778-ja71


Published on
01 Jan 2020