What representations do RNNs learn and use from morpho-phonological processes?: an exploration of PCA and PC neutralizations on Turkish vowel harmony

Authors
  • Jane Li (Johns Hopkins University)
  • Kyle Rawlins (Johns Hopkins University)
  • Paul Smolensky (Johns Hopkins University)

Abstract

Recurrent neural networks (RNNs) have demonstrated success in capturing human intuitions about inflectional morpho-phonology. However, it remains unclear what kinds of internal generalizations they form from observing instances of morpho-phonological alternation. In this study, we examine whether phonological features are represented in an RNN's learned phoneme embeddings, and whether those representations are used when inflecting novel stems. Using Turkish's complex vowel harmony as a test case, we found a consistent mapping of the [± front] and [± round] features in the principal component (PC) subspace of the phoneme embeddings. However, when we altered the embeddings so that the [± front] or [± round] distinctions were lost, the RNN still generated the same outputs as with the unaltered embeddings. This suggests that the distinctions encoded in the embeddings are overlooked or outweighed by other information in the stem, or that symbolic manipulation is computed elsewhere in the system.
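The PC-neutralization manipulation described above can be sketched as follows. This is a minimal illustration, not the paper's actual setup: the vowel count, embedding dimensionality, and random stand-in data are assumptions, and the real analysis would use the RNN's trained phoneme embeddings.

```python
import numpy as np

# Hypothetical phoneme embeddings: 8 vowels x 16 dimensions (random stand-in data).
rng = np.random.default_rng(0)
E = rng.normal(size=(8, 16))

# PCA via SVD on the mean-centered embeddings; rows of Vt are the PC directions.
mu = E.mean(axis=0)
U, S, Vt = np.linalg.svd(E - mu, full_matrices=False)
scores = (E - mu) @ Vt.T  # PC coordinates of each phoneme

# "Neutralize" PC k: collapse every phoneme to the same value on that component,
# erasing whatever contrast (e.g. [± front]) that PC encodes.
k = 0
scores_neutral = scores.copy()
scores_neutral[:, k] = scores[:, k].mean()  # ~0, since scores are centered
E_neutral = scores_neutral @ Vt + mu

# Verify: the neutralized embeddings no longer vary along PC k.
print(np.allclose(((E_neutral - mu) @ Vt.T)[:, k], 0.0))  # True
```

The neutralized embeddings `E_neutral` would then be substituted for the originals at inference time; if the RNN's outputs change, the model is using the contrast carried by that PC.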

Keywords: morpho-phonology, phonological generalization, phonological learning, recurrent neural networks

How to Cite:

Li, J., Rawlins, K. & Smolensky, P., (2024) “What representations do RNNs learn and use from morpho-phonological processes?: an exploration of PCA and PC neutralizations on Turkish vowel harmony”, Society for Computation in Linguistics 7(1), 289–290. doi: https://doi.org/10.7275/scil.2165


Published on
24 Jun 2024
Peer Reviewed