Tensor Product Decomposition Networks: Uncovering Representations of Structure Learned by Neural Networks

Authors
  • Richard T. McCoy (Johns Hopkins University)
  • Tal Linzen (Johns Hopkins University)
  • Ewan Dunbar (Université Paris Diderot - Sorbonne Paris Cité)
  • Paul Smolensky (Johns Hopkins University)

Abstract

We introduce the Tensor Product Decomposition Network (TPDN), an analysis technique for uncovering compositional structure in the vector representations used by neural networks. The inner workings of neural networks are notoriously difficult to understand; in particular, it is far from clear how they perform so well on tasks that depend on compositional structure even though their continuous vector representations display no obvious such structure. Using TPDNs, we show that the representations of these networks can be closely approximated by Tensor Product Representations, an interpretable form of structure that lends significant insight into the workings of these otherwise opaque models.
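To make the core construct concrete, the following is a minimal sketch of a Tensor Product Representation, in which a symbol sequence is encoded as a single continuous vector by binding each filler (symbol) vector to a role (position) vector with an outer product and summing the bindings. The names, dimensions, and positional role scheme here are illustrative assumptions, not details taken from the paper:

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative sizes for the filler and role embedding spaces.
    D_FILLER, D_ROLE = 8, 4

    # Hypothetical embeddings: one filler vector per symbol and one role
    # vector per sequence position (a simple positional role scheme).
    fillers = {sym: rng.normal(size=D_FILLER) for sym in "0123456789"}
    roles = [rng.normal(size=D_ROLE) for _ in range(6)]

    def tensor_product_representation(sequence):
        # Encode the sequence as sum_i f_i (outer product) r_i, then
        # flatten the resulting matrix into one continuous vector.
        T = np.zeros((D_FILLER, D_ROLE))
        for i, sym in enumerate(sequence):
            T += np.outer(fillers[sym], roles[i])  # bind filler to its role
        return T.reshape(-1)

    vec = tensor_product_representation("31415")
    print(vec.shape)  # (32,): the whole sequence lives in a single vector

A TPDN, roughly speaking, fits filler and role embeddings of this form so that the summed bindings approximate the vectors produced by a trained network.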

Keywords: compositionality, neural networks, symbolic structure, tensor product representations

How to Cite:

McCoy, R. T., Linzen, T., Dunbar, E. & Smolensky, P., (2020) “Tensor Product Decomposition Networks: Uncovering Representations of Structure Learned by Neural Networks”, Society for Computation in Linguistics 3(1), 474-475. doi: https://doi.org/10.7275/v6an-nf79


Published on: 01 Jan 2020