A Generalized Algorithm for Learning Positive and Negative Grammars with Unconventional String Models
Abstract
This paper introduces an algorithm for learning positive and negative grammars with enriched representational models. In conventional model-theoretic treatments of strings, each position belongs to exactly one unary relation. Strother-Garcia et al. (2016) introduce unconventional string models, in which multiple positions can have shared properties, and demonstrate their utility for grammatical inference. Chandlee et al. (2019) develop this approach for learning negative grammars. Here we show that by fixing $k$, the size of the elements in the grammar, Chandlee et al.'s approach can be further generalized to learn both positive and negative grammars over unconventional string models. We prove that this algorithm finds the most general grammars which cover the data.
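To make the learning setting concrete, the following is a minimal sketch of the conventional case only, assuming ordinary string models in which each position is a single alphabet symbol: a positive strictly $k$-local grammar collects the $k$-factors attested in the data, while a negative grammar forbids every logically possible $k$-factor that is unattested. The function names and the boundary symbol are illustrative, not from the paper; the paper's contribution generalizes this picture to unconventional (enriched) string models and to finding the most general grammars that cover the data.

from itertools import product

def k_factors(string, k, boundary="#"):
    # All contiguous substrings of length k, with (k-1) boundary symbols
    # padded on each side so edge contexts are represented.
    padded = boundary * (k - 1) + string + boundary * (k - 1)
    return {padded[i:i + k] for i in range(len(padded) - k + 1)}

def learn_positive(data, k):
    # Positive grammar: the set of k-factors attested anywhere in the data.
    grammar = set()
    for word in data:
        grammar |= k_factors(word, k)
    return grammar

def learn_negative(data, alphabet, k, boundary="#"):
    # Negative grammar: every possible k-factor over the (boundary-augmented)
    # alphabet that is not attested in the data.
    attested = learn_positive(data, k)
    possible = {"".join(t) for t in product(alphabet | {boundary}, repeat=k)}
    return possible - attested

# Example: with k = 2 and data {"ab", "abab"}, the positive grammar is
# {"#a", "ab", "ba", "b#"} and the negative grammar is its complement.
data = {"ab", "abab"}
print(sorted(learn_positive(data, 2)))
print(sorted(learn_negative(data, {"a", "b"}, 2)))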
Keywords: mathematical linguistics, computational learning theory, positive grammars
How to Cite:
Payne, S. (2024) “A Generalized Algorithm for Learning Positive and Negative Grammars with Unconventional String Models”, Society for Computation in Linguistics 7(1), 75–85. doi: https://doi.org/10.7275/scil.2132