TY - JOUR
AU - Blohm, Matthias
AU - Jagfeld, Glorianna
AU - Sood, Ekta
AU - Yu, Xiang
AU - Vu, Ngoc Thang
AD - Institute for Natural Language Processing (IMS), Universität Stuttgart, Germany; {blohmms,jagfelga,soodea,xiangyu,thangvu}@ims.uni-stuttgart.de
TI - Comparing Attention-Based Convolutional and Recurrent Neural Networks: Success and Limitations in Machine Reading Comprehension
AB - We propose a machine reading comprehension model based on the compare-aggregate framework with two-staged attention that achieves state-of-the-art results on the MovieQA question answering dataset. To investigate the limitations of our model as well as the behavioral difference between convolutional and recurrent neural networks, we generate adversarial examples to confuse the model and compare to human performance. Furthermore, we assess the generalizability of our model by analyzing ...
JF - Proceedings of the 22nd Conference on Computational Natural Language Learning
DO - 10.18653/v1/k18-1011
DA - 2018-01-01
UR - https://www.deepdyve.com/lp/unpaywall/comparing-attention-based-convolutional-and-recurrent-neural-networks-M07M6MTd5A
DP - DeepDyve
ER -