
Learning Argument Structures with Recurrent Neural Network Grammars

Authors
  • Ryo Yoshida (The University of Tokyo)
  • Yohei Oseki (The University of Tokyo)

Abstract

In targeted syntactic evaluations, the syntactic competence of LMs has been investigated through various syntactic phenomena, among which argument structure has been one of the important domains. However, previous literature has tested argument structures exclusively in head-initial languages, where they may be readily predicted from the lexical information of verbs, potentially overestimating the syntactic competence of LMs. In this paper, we explore whether LMs can learn argument structures in head-final languages, which could be more challenging: during incremental sentence processing, argument structures must be predicted before the verb is encountered, so syntactic information should carry more weight relative to lexical information. Specifically, we examined the double accusative constraint and the double dative constraint in Japanese with sequential and hierarchical LMs: an n-gram model, an LSTM, GPT-2, and an RNNG. Our results demonstrated that the double accusative constraint is captured by all LMs, whereas the double dative constraint is successfully explained only by the hierarchical model. In addition, we probed incremental sentence processing by LMs through the lens of surprisal, and suggested that the hierarchical model may capture the deep semantic roles that verbs assign to arguments, while the sequential models appear to be influenced by surface case alignments.
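
The evaluation paradigm summarized above compares LM surprisal on minimal pairs that differ in grammaticality. Below is a minimal sketch of such a comparison, assuming a Hugging Face causal LM; the model name and the example pair are illustrative placeholders, not the models or stimuli used in the paper.

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

MODEL_NAME = "your-japanese-causal-lm"  # placeholder; any autoregressive LM checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def total_surprisal(sentence: str) -> float:
    """Sum of per-token surprisals (negative log2 probabilities) under the LM."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Token t is predicted from tokens < t, so align logits[t-1] with ids[t].
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    token_log_probs = log_probs.gather(1, ids[0, 1:].unsqueeze(1)).squeeze(1)
    return -(token_log_probs.sum() / torch.log(torch.tensor(2.0))).item()

# Illustrative minimal pair for the double accusative ("double-o") constraint.
violation = "太郎が花子を本を読ませた。"  # two accusative (-o) arguments: degraded
control = "太郎が花子に本を読ませた。"    # dative causee: acceptable
print(total_surprisal(violation), total_surprisal(control))
# The constraint is taken to be captured if the LM assigns higher surprisal to the violation.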

Keywords: Japanese, argument structure, grammaticality, acceptability, probability, structure, language model

How to Cite:

Yoshida, R. & Oseki, Y. (2022) "Learning Argument Structures with Recurrent Neural Network Grammars", Society for Computation in Linguistics 5(1), 101-111. doi: https://doi.org/10.7275/kne0-hc86


Published on
01 Feb 2022