Verb Argument Structure Alternations in Word and Sentence Embeddings
- Katharina Kann (New York University)
- Alex Warstadt (New York University)
- Adina Williams (New York University)
- Samuel R. Bowman (New York University)
Abstract
Verbs occur in different syntactic environments, or frames. We investigate whether artificial neural networks encode grammatical distinctions necessary for inferring the idiosyncratic frame-selectional properties of verbs. We introduce five datasets, collectively called FAVA, containing in aggregate nearly 10k sentences labeled for grammatical acceptability, illustrating different verbal argument structure alternations. We then test whether models can distinguish acceptable English verb–frame combinations from unacceptable ones using a sentence embedding alone. For converging evidence, we further construct LAVA, a corresponding word-level dataset, and investigate whether the same syntactic features can be extracted from word embeddings. Our models perform reliable classifications for some verbal alternations but not others, suggesting that while these representations do encode fine-grained lexical information, it is incomplete or can be hard to extract. Further, differences between the word- and sentence-level models show that some information present in word embeddings is not passed on to the downstream sentence embeddings.
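The sentence-level evaluation summarized above amounts to training a supervised acceptability classifier on top of fixed sentence embeddings. The sketch below is not the authors' implementation; it is a minimal illustration of such a probe, in which random vectors stand in for embeddings from a pretrained sentence encoder and random labels stand in for FAVA's acceptability judgments.

```python
# Minimal sketch of an acceptability probe over fixed sentence embeddings.
# NOTE: illustrative only, not the authors' code. The random vectors below
# stand in for embeddings produced by a pretrained sentence encoder, and the
# random labels stand in for FAVA acceptability judgments.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

n_sentences, dim = 1000, 300              # placeholder corpus size and embedding size
X = rng.normal(size=(n_sentences, dim))   # stand-in for sentence embeddings
y = rng.integers(0, 2, size=n_sentences)  # stand-in for acceptable (1) / unacceptable (0)

# Simple random split by example; the paper's evaluation splits are organized
# around verbs and alternations, which this sketch does not reproduce.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

probe = LogisticRegression(max_iter=1000)  # linear probe on top of the embeddings
probe.fit(X_train, y_train)

print("accuracy:", accuracy_score(y_test, probe.predict(X_test)))
```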
Keywords: Artificial neural networks, word embeddings, sentence embeddings, verb argument structure alternations
How to Cite:
Kann, K., Warstadt, A., Williams, A., & Bowman, S. R. (2019) “Verb Argument Structure Alternations in Word and Sentence Embeddings”, Society for Computation in Linguistics 2(1), 287–297. doi: https://doi.org/10.7275/q5js-4y86