Jabberwocky Parsing: Dependency Parsing with Lexical Noise

Authors
  • Jungo Kasai (University of Washington)
  • Robert Frank (Yale University)

Abstract

Parsing models have long benefited from lexical information, and current state-of-the-art neural network models for dependency parsing achieve substantial improvements from distributed representations of words. At the same time, humans can easily parse sentences with unknown or even novel words, as in Lewis Carroll’s poem Jabberwocky. In this paper, we carry out jabberwocky parsing experiments, exploring how robust a state-of-the-art neural network parser is to the absence of lexical information. We find that current parsing models, at least under usual training regimens, are in fact overly dependent on lexical information and perform poorly in the jabberwocky context. We also demonstrate that the technique of word dropout drastically improves parsing robustness in this setting and yields significant improvements in out-of-domain parsing.
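As an illustration of the word-dropout technique the abstract refers to, the sketch below shows a common frequency-dependent variant: during training, each word is replaced by an unknown-word symbol with probability inversely related to its corpus frequency, so the parser cannot rely solely on lexical identity. The function name, the `alpha` hyperparameter, and the `<unk>` symbol are illustrative assumptions, not the paper's exact setup.

```python
import random

def word_dropout(tokens, counts, alpha=0.25, unk="<unk>", rng=random):
    """Replace each token w with `unk` with probability alpha / (alpha + count(w)).

    Rare words are dropped more often than frequent ones, forcing the
    model to lean on context (e.g., POS and surrounding words) rather
    than memorized lexical identities. `alpha` controls overall dropout
    strength; these choices are illustrative, not the paper's settings.
    """
    out = []
    for w in tokens:
        p = alpha / (alpha + counts.get(w, 0))
        out.append(unk if rng.random() < p else w)
    return out
```

Applied to a training sentence, frequent words like "the" survive almost always, while hapax legomena are frequently masked, which is what makes the trained parser more robust when lexical information is absent at test time.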

Keywords: syntax, neural networks, parsing

How to Cite:

Kasai, J., & Frank, R. (2019). “Jabberwocky Parsing: Dependency Parsing with Lexical Noise”. Society for Computation in Linguistics 2(1), 113–123. doi: https://doi.org/10.7275/h12q-k754


Published on
01 Jan 2019