Conditions on abruptness in a gradient-ascent Maximum Entropy learner

Author
  • Elliott Moreton (University of North Carolina, Chapel Hill)

Abstract

When does a gradual learning rule yield gradual learning performance? This paper studies a gradient-ascent Maximum Entropy phonotactic learner, as applied to two-alternative forced-choice performance expressed as log-odds. The main result is that slow initial performance cannot accelerate later if the initial weights are near zero, but can if they are not. Stated another way, abruptness in this learner is an effect of transfer, either from Universal Grammar in the form of an initial weighting, or from previous learning in the form of an acquired weighting.
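The core of the result can be illustrated with a toy version of the learner. The sketch below assumes a single binary constraint and a Bernoulli training distribution (`p_target` and `eta` are illustrative choices, not the paper's settings): the MaxEnt probability of the marked form is a logistic function of the weight, gradient ascent on expected log-likelihood uses the familiar observed-minus-expected update, and two-alternative forced-choice performance in log-odds is just the weight itself. Starting from a zero weight, the resulting log-odds curve only decelerates, consistent with the claim that acceleration requires a nonzero initial weighting.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train(w0, p_target=0.9, eta=0.5, steps=50):
    """Gradient ascent on expected log-likelihood for a one-constraint
    MaxEnt grammar (illustrative toy, not the paper's full model).
    Returns the trajectory of the weight w, which here equals the
    2AFC performance in log-odds."""
    w = w0
    traj = [w]
    for _ in range(steps):
        # MaxEnt gradient: observed minus expected constraint violations
        w += eta * (p_target - sigmoid(w))
        traj.append(w)
    return traj

traj = train(w0=0.0)
# First differences = learning speed; second differences = acceleration.
diffs = [b - a for a, b in zip(traj, traj[1:])]
accel = [b - a for a, b in zip(diffs, diffs[1:])]
print(all(d2 <= 1e-12 for d2 in accel))  # concave curve: no late acceleration
```

In this one-constraint toy, the update `p_target - sigmoid(w)` shrinks monotonically as `w` rises toward its target, so a zero-initialized run can only slow down; an abrupt (accelerating) curve would have to come from weights transferred in from elsewhere, as the abstract states.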

Keywords: abrupt, gradual, learning, Maximum Entropy, gradient ascent, Replicator, transfer, harmony

How to Cite:

Moreton, E. (2018) “Conditions on abruptness in a gradient-ascent Maximum Entropy learner”, Society for Computation in Linguistics 1(1), 113-124. doi: https://doi.org/10.7275/R5XG9PBX

Published on
01 Jan 2018