Paper

On the difficulty of a distributional semantics of spoken language

Authors
  • Grzegorz Chrupała (Tilburg University)
  • Lieke Gelderloos (Tilburg University)
  • Ákos Kádár (Tilburg University)
  • Afra Alishahi (Tilburg University)

Abstract

In the domain of unsupervised learning, most work on speech has focused on discovering low-level constructs such as phoneme inventories or word-like units. In contrast, for written language there is a large body of work on unsupervised induction of semantic representations of words, whole sentences, and longer texts. In this study we examine the challenges of adapting these approaches from written to spoken language. We conjecture that unsupervised learning of the semantics of spoken language becomes feasible if we abstract from the surface variability. We simulate this setting with a dataset of utterances spoken by a realistic but uniform synthetic voice. We evaluate two simple unsupervised models which, to varying degrees of success, learn semantic representations of speech fragments. Finally, we present inconclusive results on human speech, and discuss the challenges inherent in learning distributional semantic representations from unrestricted natural spoken language.
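
The abstract does not detail the two unsupervised models it mentions. As a rough illustration only, and not the authors' architectures, the sketch below shows one common way to learn embeddings of speech fragments without transcriptions: a GRU encoder over MFCC frames trained with a contrastive objective that pulls together fragments drawn from the same utterance. All names, shapes, and hyperparameters are illustrative assumptions.

    # Illustrative sketch (not the paper's models): contrastive learning of
    # speech-fragment embeddings from MFCC features, with no transcriptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SpeechEncoder(nn.Module):
        def __init__(self, n_mfcc=13, hidden=256):
            super().__init__()
            self.rnn = nn.GRU(n_mfcc, hidden, batch_first=True)

        def forward(self, x):                      # x: (batch, frames, n_mfcc)
            _, h = self.rnn(x)                     # h: (1, batch, hidden)
            return F.normalize(h.squeeze(0), dim=-1)  # unit-length embeddings

    def contrastive_loss(a, b, temperature=0.1):
        # a[i] and b[i] embed two fragments of the same utterance; other rows
        # in the batch serve as negatives.
        logits = a @ b.t() / temperature           # (batch, batch) similarities
        targets = torch.arange(a.size(0))
        return F.cross_entropy(logits, targets)

    if __name__ == "__main__":
        enc = SpeechEncoder()
        opt = torch.optim.Adam(enc.parameters(), lr=1e-3)
        # Random stand-ins for MFCC fragments; real input would come from a corpus.
        frag_a = torch.randn(32, 100, 13)
        frag_b = torch.randn(32, 100, 13)
        loss = contrastive_loss(enc(frag_a), enc(frag_b))
        loss.backward()
        opt.step()
        print(f"toy loss: {loss.item():.3f}")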

Keywords: speech recognition, distributional semantics, unsupervised learning, representation learning

How to Cite:

Chrupała, G., Gelderloos, L., Kádár, Á., & Alishahi, A. (2019). "On the difficulty of a distributional semantics of spoken language", Society for Computation in Linguistics 2(1), 167-173. doi: https://doi.org/10.7275/extq-7546

Published on
01 Jan 2019