Learning Covert URs via Disparity Minimization

Author
  • Jonathan Charles Paramore (University of California, Santa Cruz)

Abstract

When considering the acquisition of underlying representations (URs), two challenges are commonly levied against the inclusion of abstract URs in phonological theory: (1) permitting abstract URs causes the search space of potential URs to grow to a computationally intractable degree, and (2) learners have no recourse through which to prefer minimally abstract URs over increasingly abstract alternatives when both model the data equally well. This paper directly addresses the second issue by implementing a MaxEnt learner equipped with a bias that penalizes disparities between UR inputs and their corresponding outputs. By favoring mappings with minimal divergence, the bias generates a preference for minimally abstract URs when competing candidates perform equally well in modeling the data. In addition, the paper proposes a conceptual framework for addressing the first issue, in which the space of potential URs is organized so that candidates are considered serially, beginning with those that exhibit the fewest disparities. This method offers a potential strategy for avoiding the added compute time introduced by permitting UR abstraction.
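
To make the abstract's two mechanisms concrete, here is a minimal sketch; it is not the paper's implementation. The toy segment inventory, the substitution-only treatment of disparities, the bias weight `lam`, and the names `disparity_count`, `candidate_urs`, and `biased_loss` are all illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from itertools import combinations, product

def disparity_count(ur, sr):
    """Count segmental mismatches between a UR and its surface form.
    Simplification: equal-length strings, substitution-only disparities."""
    return sum(1 for u, s in zip(ur, sr) if u != s)

def candidate_urs(sr, alphabet, max_disparities=2):
    """Yield candidate URs serially, fewest disparities first: the fully
    faithful UR (0 disparities), then all one-disparity URs, and so on."""
    for k in range(max_disparities + 1):
        for positions in combinations(range(len(sr)), k):
            for segments in product(alphabet, repeat=k):
                # Substituting a segment for itself is not a disparity; skip.
                if any(seg == sr[pos] for pos, seg in zip(positions, segments)):
                    continue
                ur = list(sr)
                for pos, seg in zip(positions, segments):
                    ur[pos] = seg
                yield "".join(ur), k

def biased_loss(weights, violations, winner, n_disparities, lam=1.0):
    """MaxEnt negative log-likelihood for one tableau, plus a penalty on
    UR-SR disparities; `lam` is a hypothetical bias weight, not a value
    from the paper. `violations` is a candidates-by-constraints matrix
    of violation counts."""
    harmonies = -violations @ weights
    log_z = np.log(np.exp(harmonies).sum())
    return (log_z - harmonies[winner]) + lam * n_disparities

# Toy usage: candidate URs for surface [pat] over a 4-segment inventory,
# enumerated in order of increasing abstractness.
for ur, k in candidate_urs("pat", alphabet="ptab", max_disparities=1):
    assert disparity_count(ur, "pat") == k
    print(f"/{ur}/ -> [pat], disparities = {k}")

# Toy objective: two hypothetical constraints, two candidates, one disparity.
w = np.array([2.0, 1.0])
V = np.array([[0, 1], [2, 0]], dtype=float)
print(biased_loss(w, V, winner=0, n_disparities=1))
```

Under this ordering, the fully faithful UR is always evaluated first, so abstraction is entertained only when more faithful candidates fail; inside the objective, the `lam` term plays the analogous tie-breaking role between equally successful URs.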

Keywords: Covert URs, learnability, underlying representation, Punjabi, MaxLex, MaxEnt

How to Cite:

Paramore, J. C. (2025). "Learning Covert URs via Disparity Minimization". Society for Computation in Linguistics 8(1): 29. doi: https://doi.org/10.7275/scil.3205

Published on 2025-06-13

Peer Reviewed