TY - JOUR
AU - Damonte, Marco
AU - Cohen, Shay B.
AD - School of Informatics, University of Edinburgh, 10 Crichton Street, Edinburgh EH8 9AB, UK
TI - Structural Neural Encoders for AMR-to-text Generation
AB - AMR-to-text generation is a problem recently introduced to the NLP community, in which the goal is to generate sentences from Abstract Meaning Representation (AMR) graphs. Sequence-to-sequence models can be used to this end by converting the AMR graphs to strings. Approaching the problem while working directly with graphs requires the use of graph-to-sequence models that encode the AMR graph into a vector representation. Such encoding has been shown to be beneficial in the past, and unlike sequential encoding, it allows us to explicitly capture reentrant structures in the AMR graphs. We investigate the extent to which reentrancies (nodes with multiple parents) have an impact on AMR-to-text generation by comparing graph encoders to tree encoders, where reentrancies are not preserved. We show that improvements in the treatment of reentrancies […]
JF - Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)
DO - 10.18653/v1/n19-1366
DA - 2019-01-01
UR - https://www.deepdyve.com/lp/unpaywall/structural-neural-encoders-for-PKm6JywUjG
DP - DeepDyve
ER -
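
Note: the abstract turns on the notion of a reentrancy, a node with multiple parents, which is what distinguishes AMR graphs from the trees that sequential encoders effectively see. Below is a minimal Python sketch of that distinction, using the "eat-01 / he / pizza / finger" example and its linearization from the figure on the paper's first page; the dictionary encoding and the depth-first linearize routine are illustrative assumptions, not the authors' implementation.

    from collections import Counter

    # AMR for "He ate the pizza with his fingers" as an adjacency list.
    # 'he' is reentrant: it is the :arg0 of eat-01 and the :part-of
    # target of finger, i.e. it has two parents.
    amr = {
        "eat-01": [(":arg0", "he"), (":arg1", "pizza"), (":instrument", "finger")],
        "finger": [(":part-of", "he")],
        "pizza": [],
        "he": [],
    }

    def linearize(graph, node):
        """Depth-first linearization into the token sequence a
        sequence-to-sequence model would consume."""
        tokens = [node]
        for rel, child in graph[node]:
            tokens.append(rel)
            tokens.extend(linearize(graph, child))
        return tokens

    print(" ".join(linearize(amr, "eat-01")))
    # -> eat-01 :arg0 he :arg1 pizza :instrument finger :part-of he
    # "he" is emitted twice: a tree or sequence encoder treats the two
    # mentions as unrelated tokens, while a graph encoder can keep them
    # tied to a single node.

    # Detecting reentrancies: any node with more than one parent.
    parents = Counter(child for edges in amr.values() for _, child in edges)
    print([n for n, c in parents.items() if c > 1])  # -> ['he']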