An in-depth look at Euclidean disk embeddings for structure preserving parsing

Federico Fancellu, Lan Xiao, Allan Jepson, Afsaneh Fazly


Abstract
Preserving the structural properties of trees or graphs when embedding them into a metric space allows for a high degree of interpretability, and has been shown to be beneficial for downstream tasks (e.g., hypernym detection, natural language inference, multimodal retrieval). However, whereas the majority of prior work looks at using structure-preserving embeddings when encoding a structure given as input, e.g., WordNet (Fellbaum, 1998), there is little exploration of how to use such embeddings when predicting one. We address this gap for two structure generation tasks, namely dependency and semantic parsing. We test the applicability of disk embeddings (Suzuki et al., 2019), which have been proposed for embedding Directed Acyclic Graphs (DAGs) but have not been tested on tasks that generate such structures. Our experimental results show that for both tasks the original disk embedding formulation leads to much worse performance than non-structure-preserving baselines. We propose enhancements to this formulation and show that they almost close the performance gap for dependency parsing. However, the gap remains notable for semantic parsing due to the complexity of meaning representation graphs, suggesting a challenge for generating interpretable semantic parse representations.
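The disk-embedding formulation being probed can be sketched in a few lines: in the Euclidean case, each node is embedded as a disk (a center vector plus a radius), and the ancestor relation of the DAG is modeled by disk inclusion. The snippet below is a minimal illustration of that containment test, assuming the formulation of Suzuki et al. (2019); the function name and toy coordinates are illustrative, not taken from the paper's code.

```python
import numpy as np

# Minimal sketch of Euclidean disk embeddings (after Suzuki et al., 2019).
# Each node is a disk D(c, r): a center c in R^n and a radius r.
# The DAG's partial order is modeled by disk inclusion:
#   x dominates y  iff  D(x) contains D(y),
#   i.e.  ||c_x - c_y|| <= r_x - r_y.
# Names and the eps slack are illustrative assumptions, not the authors' code.

def contains(center_x, radius_x, center_y, radius_y, eps=0.0):
    """True if disk x contains disk y (node x dominates node y)."""
    return np.linalg.norm(center_x - center_y) <= radius_x - radius_y + eps

# Toy example: a two-node chain root -> child.
root_c, root_r = np.array([0.0, 0.0]), 2.0
child_c, child_r = np.array([0.5, 0.0]), 1.0

print(contains(root_c, root_r, child_c, child_r))   # True: root dominates child
print(contains(child_c, child_r, root_c, root_r))   # False: child does not dominate root
```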
Anthology ID:
2021.blackboxnlp-1.21
Volume:
Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP
Month:
November
Year:
2021
Address:
Punta Cana, Dominican Republic
Venue:
BlackboxNLP
Publisher:
Association for Computational Linguistics
Pages:
283–295
URL:
https://aclanthology.org/2021.blackboxnlp-1.21
DOI:
10.18653/v1/2021.blackboxnlp-1.21
Cite (ACL):
Federico Fancellu, Lan Xiao, Allan Jepson, and Afsaneh Fazly. 2021. An in-depth look at Euclidean disk embeddings for structure preserving parsing. In Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 283–295, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
An in-depth look at Euclidean disk embeddings for structure preserving parsing (Fancellu et al., BlackboxNLP 2021)
PDF:
https://preview.aclanthology.org/ingestion-script-update/2021.blackboxnlp-1.21.pdf