Nhan Dam
2020
Scene Graph Modification Based on Natural Language Commands
Xuanli He | Quan Hung Tran | Gholamreza Haffari | Walter Chang | Zhe Lin | Trung Bui | Franck Dernoncourt | Nhan Dam
Findings of the Association for Computational Linguistics: EMNLP 2020
Structured representations like graphs and parse trees play a crucial role in many Natural Language Processing systems. In recent years, advances in multi-turn user interfaces have created a need to control and update these structured representations as new information arrives. Although many efforts have focused on improving the performance of parsers that map text to graphs or parse trees, very few have explored the problem of directly manipulating these representations. In this paper, we explore the novel problem of graph modification, where a system must learn how to update an existing scene graph given a new user command. Our novel models, based on a graph-based sparse transformer and cross-attention information fusion, outperform previous systems adapted from the machine translation and graph generation literature. We further contribute our large graph modification datasets to the research community to encourage future research on this new problem.
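To illustrate the cross-attention information fusion idea mentioned in the abstract, here is a minimal PyTorch sketch in which scene-graph node encodings query the command-token encodings. All dimensions, layer choices, and names are assumptions for illustration, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Fuse scene-graph node encodings with command-token encodings.

    Hypothetical sketch: hidden size, head count, and the residual +
    LayerNorm arrangement are assumptions, not the published model.
    """

    def __init__(self, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, node_states: torch.Tensor, token_states: torch.Tensor) -> torch.Tensor:
        # Each graph node attends over the command tokens, pulling in
        # the parts of the user command relevant to that node; the
        # result is merged back into the node encoding residually.
        fused, _ = self.attn(query=node_states, key=token_states, value=token_states)
        return self.norm(node_states + fused)

# Toy usage: a scene graph with 5 nodes and a command of 8 tokens.
nodes = torch.randn(1, 5, 256)
tokens = torch.randn(1, 8, 256)
out = CrossAttentionFusion()(nodes, tokens)
print(out.shape)  # torch.Size([1, 5, 256])
```

Having nodes act as queries (rather than tokens) is one plausible design choice here: the updated graph representation keeps one state per node, which a downstream decoder can then use to add, delete, or relabel nodes.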
Explain by Evidence: An Explainable Memory-based Neural Network for Question Answering
Quan Hung Tran | Nhan Dam | Tuan Lai | Franck Dernoncourt | Trung Le | Nham Le | Dinh Phung
Proceedings of the 28th International Conference on Computational Linguistics
Interpretability and explainability of deep neural models remain challenging due to their size and complexity. Much previous work has focused on visualizing the internal components of neural networks to represent them through human-friendly concepts. In real life, on the other hand, humans making a decision tend to rely on similar situations encountered in the past. We therefore argue that one way to make a model interpretable and explainable is to design it so that it explicitly connects the current sample to previously seen samples and bases its decision on them. In this work, we design one such model: an explainable, evidence-based memory network architecture that learns to summarize the dataset and extract supporting evidence for its decisions. The model achieves state-of-the-art performance on two popular question answering datasets, TrecQA and WikiQA. Through further analysis, we show that the model can reliably trace errors made in the validation step back to the training instances that may have caused them. We believe this error-tracing capability could be beneficial for improving dataset quality in many applications.
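As a rough sketch of the evidence-retrieval step described above, the snippet below traces a query back to its nearest memorized training examples, which serve as the "supporting evidence" for a prediction. The encoder, cosine similarity measure, and function name are assumptions for illustration, not the paper's exact mechanism.

```python
import torch
import torch.nn.functional as F

def retrieve_evidence(query_vec: torch.Tensor,
                      memory_vecs: torch.Tensor,
                      k: int = 3):
    """Return the indices and scores of the k training examples whose
    encodings are most similar to the query encoding.

    Hypothetical sketch: assumes examples have already been encoded
    into fixed-size vectors by some sentence encoder.
    """
    # Cosine similarity between the query and every memorized example.
    sims = F.cosine_similarity(query_vec.unsqueeze(0), memory_vecs, dim=1)
    topk = torch.topk(sims, k)
    return topk.indices, topk.values

# Toy usage: 100 memorized training encodings, one validation query.
memory = torch.randn(100, 128)
query = torch.randn(128)
idx, scores = retrieve_evidence(query, memory)
print(idx, scores)  # the 3 training instances the decision is traced to
```

In this spirit, a misclassified validation example can be traced back to the retrieved training instances, which is the error-tracing use case the abstract highlights.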