Abstract
Linking biomedical entity mentions to terminologies of chemicals, diseases, genes, and adverse drug reactions is a challenging task that often requires non-syntactic interpretation. A large number of biomedical corpora and state-of-the-art models have been introduced in the past five years. However, there are no general guidelines for evaluating models on these corpora in single- and cross-terminology settings. In this work, we perform a comparative evaluation of various benchmarks and study the efficiency of state-of-the-art neural architectures based on Bidirectional Encoder Representations from Transformers (BERT) for linking three entity types across three domains: research abstracts, drug labels, and user-generated texts on drug therapy in English. We have made the source code and results available at https://github.com/insilicomedicine/Fair-Evaluation-BERT.
- Anthology ID:
- 2020.coling-main.588
- Volume:
- Proceedings of the 28th International Conference on Computational Linguistics
- Month:
- December
- Year:
- 2020
- Address:
- Barcelona, Spain (Online)
- Venue:
- COLING
- Publisher:
- International Committee on Computational Linguistics
- Pages:
- 6710–6716
- URL:
- https://aclanthology.org/2020.coling-main.588
- DOI:
- 10.18653/v1/2020.coling-main.588
- Cite (ACL):
- Elena Tutubalina, Artur Kadurin, and Zulfat Miftahutdinov. 2020. Fair Evaluation in Concept Normalization: a Large-scale Comparative Analysis for BERT-based Models. In Proceedings of the 28th International Conference on Computational Linguistics, pages 6710–6716, Barcelona, Spain (Online). International Committee on Computational Linguistics.
- Cite (Informal):
- Fair Evaluation in Concept Normalization: a Large-scale Comparative Analysis for BERT-based Models (Tutubalina et al., COLING 2020)
- PDF:
- https://preview.aclanthology.org/nodalida-main-page/2020.coling-main.588.pdf
- Code
- insilicomedicine/Fair-Evaluation-BERT
- Data
- BC5CDR, NCBI Disease
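BERT-based concept normalization systems like those evaluated here typically embed a mention and all terminology concept names into a shared vector space, then link the mention to its nearest concept. The sketch below illustrates that retrieval step only, with toy vectors standing in for BERT embeddings and hypothetical MeSH-style concept IDs; it is not the authors' exact pipeline.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def link_mention(mention_vec, concept_index):
    """Rank terminology concepts by similarity to the mention embedding.

    concept_index maps concept IDs to precomputed name embeddings
    (in a real system, produced by a BERT encoder).
    """
    ranked = sorted(
        concept_index.items(),
        key=lambda kv: cosine(mention_vec, kv[1]),
        reverse=True,
    )
    return ranked[0][0], ranked

# Toy 3-d vectors standing in for BERT embeddings (hypothetical values).
index = {
    "D003924 (Type 2 diabetes)": [0.9, 0.1, 0.2],
    "D006973 (Hypertension)":    [0.1, 0.8, 0.3],
}
best, ranking = link_mention([0.85, 0.15, 0.25], index)
# 'best' is the top-ranked concept ID for the mention embedding.
```

In practice the encoder is fine-tuned so that synonymous surface forms (e.g. "high blood pressure" and "hypertension") land close together, which is what allows linking beyond string matching.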