TY - JOUR
AU - Wilcox, Ethan
AU - Levy, Roger
AU - Morita, Takashi
AU - Futrell, Richard
AD - Department of Linguistics, Harvard University, wilcoxeg@g.harvard.edu
AD - Department of Brain and Cognitive Sciences, MIT, rplevy@mit.edu
AD - Primate Research Institute, Kyoto University; Department of Linguistics and Philosophy, MIT, tmorita@alum.mit.edu
AD - Department of Language Science, UC Irvine, rfutrell@uci.edu
AB - RNN language models have achieved state-of-the-art perplexity results and have proven useful in a suite of NLP tasks, but it is as yet unclear what syntactic generalizations they learn. Here we investigate whether state-of-the-art RNN language models represent long-distance filler–gap dependencies and constraints on them. Examining RNN behavior on experimentally controlled sentences designed to expose filler–gap dependencies, we show …
TI - What do RNN Language Models Learn about Filler–Gap Dependencies?
JF - Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP
DO - 10.18653/v1/w18-5423
DA - 2018-01-01
UR - https://www.deepdyve.com/lp/unpaywall/what-do-rnn-language-models-learn-about-filler-gap-dependencies-AY7aqhlCCY
DP - DeepDyve
ER -