Learning Syntactic Agreement with Deep Neural Networks

Conference contribution
Authors Jean-Philippe Bernardy
Shalom Lappin
Published in Israel Seminar on Computational Linguistics, September 25, 2017
Publication year 2017
Published at Department of Philosophy, Linguistics and Theory of Science
Language en
Links clasp.gu.se/digitalAssets/1657/1657...
Keywords Deep Learning, Syntactic Agreement
Subject categories Electrical Engineering, Electronic Engineering, Information Engineering

Abstract

We consider the extent to which different deep neural network (DNN) configurations can learn syntactic relations, by taking up Linzen et al.'s (2016) work on subject-verb agreement with LSTM RNNs. We test their methods on a much larger corpus than they used (a ∼24 million example part of the WaCky corpus, instead of their ∼1.35 million example corpus, both drawn from Wikipedia). We experiment with several different DNN architectures (LSTM RNNs, GRUs, and CNNs), and alternative parameter settings for these systems (vocabulary size, training-to-test ratio, number of layers, memory size, dropout rate, and lexical embedding dimension size). We also try out our own unsupervised DNN language model. Our results are broadly compatible with those that Linzen et al. report. However, we discovered some interesting, and in some cases, surprising features of DNNs and language models in their performance of the agreement learning task. In particular, we found that DNNs require large vocabularies to form substantive lexical embeddings in order to learn structural patterns. This finding has significant consequences for our understanding of the way in which DNNs represent syntactic information.
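The sketch below illustrates, in rough outline, the kind of supervised number-prediction setup the abstract describes: an LSTM reads the words preceding a verb and predicts whether the verb should be singular or plural. It is not the authors' code; the PyTorch framing and all hyperparameter values (vocabulary size, embedding dimension, memory size, dropout rate) are hypothetical placeholders standing in for the parameter settings the paper varies.

```python
# Illustrative sketch only: an LSTM classifier that reads the token prefix
# preceding a verb and predicts the verb's number (singular vs. plural).
# Vocabulary size, embedding dimension, hidden size, and dropout are
# placeholder values, not the settings reported in the paper.
import torch
import torch.nn as nn

class AgreementLSTM(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=50, hidden_dim=50, dropout=0.2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.dropout = nn.Dropout(dropout)
        self.out = nn.Linear(hidden_dim, 2)  # two classes: singular, plural

    def forward(self, token_ids):
        # token_ids: (batch, prefix_length) word indices up to, but not including, the verb
        embedded = self.embed(token_ids)
        _, (hidden, _) = self.lstm(embedded)
        # Use the final hidden state to score the two number classes.
        return self.out(self.dropout(hidden[-1]))

# Example forward pass on a dummy batch of two 12-token prefixes.
model = AgreementLSTM()
dummy_prefixes = torch.randint(0, 10000, (2, 12))
logits = model(dummy_prefixes)
print(logits.shape)  # torch.Size([2, 2])
```

Swapping the nn.LSTM layer for nn.GRU, or for a stack of convolutional layers over the embedded prefix, gives the GRU and CNN variants mentioned in the abstract; the unsupervised language-model variant would instead be trained to predict the next word and then be scored on whether it assigns higher probability to the correctly inflected verb form.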
