Minh C. Phan
2019
Robust Representation Learning of Biomedical Names
Minh C. Phan | Aixin Sun | Yi Tay
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Biomedical concepts are often mentioned in medical documents under different name variations (synonyms). This mismatch between surface forms makes it difficult to learn effective representations and can in turn render downstream applications ineffective or unreliable. This paper proposes a new framework for learning robust representations of biomedical names and terms. The idea behind our approach is to encode contextual meaning, conceptual meaning, and the similarity between synonyms during representation learning. Via extensive experiments, we show that our proposed method outperforms other baselines on a battery of retrieval, similarity, and relatedness benchmarks. Moreover, our method also computes meaningful representations for unseen names, giving it high practical utility in real-world applications.
Simple and Effective Curriculum Pointer-Generator Networks for Reading Comprehension over Long Narratives
Yi Tay | Shuohang Wang | Anh Tuan Luu | Jie Fu | Minh C. Phan | Xingdi Yuan | Jinfeng Rao | Siu Cheung Hui | Aston Zhang
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
This paper tackles the problem of reading comprehension over long narratives, where documents easily span thousands of tokens. We propose a curriculum learning (CL) based Pointer-Generator framework for reading and sampling over large documents, enabling diverse training of the neural model based on the notion of alternating contextual difficulty. This can be interpreted as a form of domain randomization and/or generative pretraining. To this end, the Pointer-Generator softens the requirement that the answer appear within the context, allowing us to construct diverse training samples. Additionally, we propose a new Introspective Alignment Layer (IAL), which reasons over decomposed alignments using block-based self-attention. We evaluate our proposed method on the NarrativeQA reading comprehension benchmark, achieving state-of-the-art performance and improving over existing baselines by 51% relative on BLEU-4 and 17% relative on ROUGE-L. Extensive ablations confirm the effectiveness of the proposed IAL and CL components.