Abstract
Extracting biomedical relations from large corpora of scientific documents is a challenging natural language processing task. Existing approaches usually focus on identifying a relation either in a single sentence (mention-level) or across an entire corpus (pair-level). In both cases, recent methods have achieved strong results by learning a point estimate to represent the relation; this is then used as the input to a relation classifier. However, the relation expressed in text between a pair of biomedical entities is often more complex than can be captured by a point estimate. To address this issue, we propose a latent variable model with an arbitrarily flexible distribution to represent the relation between an entity pair. Additionally, our model provides a unified architecture for both mention-level and pair-level relation extraction. We demonstrate that our model achieves results competitive with strong baselines for both tasks while having fewer parameters and being significantly faster to train. We make our code publicly available.
- Anthology ID:
- 2020.sustainlp-1.3
- Volume:
- Proceedings of SustaiNLP: Workshop on Simple and Efficient Natural Language Processing
- Month:
- November
- Year:
- 2020
- Address:
- Online
- Venue:
- sustainlp
- Publisher:
- Association for Computational Linguistics
- Pages:
- 19–28
- URL:
- https://aclanthology.org/2020.sustainlp-1.3
- DOI:
- 10.18653/v1/2020.sustainlp-1.3
- Cite (ACL):
- Harshil Shah and Julien Fauqueur. 2020. Learning Informative Representations of Biomedical Relations with Latent Variable Models. In Proceedings of SustaiNLP: Workshop on Simple and Efficient Natural Language Processing, pages 19–28, Online. Association for Computational Linguistics.
- Cite (Informal):
- Learning Informative Representations of Biomedical Relations with Latent Variable Models (Shah & Fauqueur, sustainlp 2020)
- PDF:
- https://preview.aclanthology.org/paclic-22-ingestion/2020.sustainlp-1.3.pdf
- Code
- BenevolentAI/RELVM
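The linked RELVM repository contains the authors' implementation. As a minimal illustrative sketch of the core idea in the abstract, the snippet below represents the relation between an entity pair as a distribution over a latent vector rather than a single point estimate. This is not the paper's model: the diagonal-Gaussian posterior, the fixed random "encoder" weights, and all dimensions are assumptions made purely for illustration.

```python
# Hypothetical sketch (not the authors' RELVM code): model the relation
# between an entity pair as a latent Gaussian instead of a point estimate.
import numpy as np

rng = np.random.default_rng(0)
IN, DIM = 16, 8  # feature and latent dimensions (illustrative choices)

# Stand-ins for learned encoder weights.
W_mu = rng.standard_normal((DIM, IN)) * 0.1
W_lv = rng.standard_normal((DIM, IN)) * 0.1

def encode(x):
    """q(z | pair): mean and log-variance of a diagonal Gaussian latent."""
    return W_mu @ x, W_lv @ x

def sample(mu, logvar, n=1):
    """Reparameterized samples z = mu + sigma * eps, one per row."""
    eps = rng.standard_normal((n, mu.size))
    return mu + np.exp(0.5 * logvar) * eps

def kl_to_standard_normal(mu, logvar):
    """KL(q(z|x) || N(0, I)), the usual regularizer in a VAE-style objective."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

x = rng.standard_normal(IN)      # features for one entity-pair mention
mu, logvar = encode(x)
z = sample(mu, logvar, n=5)      # several plausible relation representations
print(z.shape)                   # (5, 8)
```

Each row of `z` is a different plausible representation of the same relation, which is what allows a distribution to capture more complexity than a single point estimate; a downstream relation classifier could consume these samples (or the distribution's parameters) instead of one fixed vector.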