CNNs that robustly compute vowel harmony do not explicitly represent phonological tiers

Authors
  • Jane Li (Johns Hopkins University)
  • Alan Tiantian Zhou (Johns Hopkins University)

Abstract

Linguistic and model-theoretic analyses of long-distance phonology postulate the existence of phonological tiers (Goldsmith, 1976; Heinz et al., 2011). For example, vowel harmony may be analyzed as a process that projects vowels (but not consonants) onto a tier and ensures that all sounds on the tier share the same feature (e.g., [±front] in Turkish vowel harmony; Clements et al., 1982). Li and Zhou (under review) recently demonstrated that convolutional neural networks (CNNs) trained on a toy example of vowel harmony (§2) over short strings robustly generalize the pattern to much longer strings. One explanation is that these CNNs have independently recovered an “algorithm” consistent with the tier-projection analysis. Alternatively, they may have uncovered an approximation of this system, or an entirely different system that robustly generalizes to long lengths. This work investigates these hypotheses via various interpretability methods. In particular, we search for evidence of a “strong” implementation of tier projection, in which these CNNs exactly implement the tier-projection and feature-matching analyses described above.
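
To make the tier-projection analysis above concrete, here is a minimal sketch of the projection-and-matching computation on a toy inventory. It is not taken from the paper: the vowel inventory, feature assignments, and function names are illustrative assumptions.

```python
# Minimal sketch of the tier-projection analysis of vowel harmony described
# above. The toy inventory, feature assignments, and function names are
# illustrative assumptions, not details from the paper.

FRONT_VOWELS = set("ie")   # [+front] vowels in the toy inventory
BACK_VOWELS = set("ou")    # [-front] vowels
VOWELS = FRONT_VOWELS | BACK_VOWELS

def project_tier(string: str) -> str:
    """Project vowels onto a tier, discarding consonants."""
    return "".join(segment for segment in string if segment in VOWELS)

def is_harmonic(string: str) -> bool:
    """A string is well-formed iff all tier elements agree in [±front]."""
    tier = project_tier(string)
    return all(v in FRONT_VOWELS for v in tier) or \
           all(v in BACK_VOWELS for v in tier)

assert is_harmonic("tpie")        # vowels i, e: all [+front]
assert not is_harmonic("tipo")    # vowels i, o: features clash
```

Note that under this formulation a string with no vowels is vacuously harmonic, since both agreement checks succeed on an empty tier; intervening consonants never affect well-formedness, which is what makes the dependency long-distance.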

Keywords: phonological learning, neural network learning, phonology

How to Cite:

Li, J. & Zhou, A. T. (2025). “CNNs that robustly compute vowel harmony do not explicitly represent phonological tiers”. Society for Computation in Linguistics 8(1): 48. doi: https://doi.org/10.7275/scil.3189

Published on 2025-06-14

Peer Reviewed