Sub-word information in pre-trained biomedical word representations: evaluation and hyper-parameter optimization

Dieter Galea, Ivan Laponogov, Kirill Veselkov


Abstract
Word2vec embeddings are limited to computing vectors for in-vocabulary terms and do not take sub-word information into account. Character-based representations, such as fastText, mitigate these limitations. We optimize and compare both representations for the biomedical domain. fastText was found to consistently outperform word2vec in named entity recognition tasks for entities such as chemicals and genes, likely because it computes vectors for out-of-vocabulary terms and captures the word compositionality of such entities. In contrast, performance varied across intrinsic datasets. Optimal hyper-parameters depended on the intrinsic dataset, likely due to differences in term-type distributions. This indicates that embeddings should be chosen based on the task at hand. We therefore provide a number of optimized hyper-parameter sets and pre-trained word2vec and fastText models, available at https://github.com/dterg/bionlp-embed.
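
The abstract's central contrast, that fastText composes vectors for out-of-vocabulary terms from character n-grams while word2vec cannot, can be illustrated with a minimal gensim sketch. This is not the authors' evaluation code; the model file names below are hypothetical placeholders for pre-trained models such as those released in the linked repository.

    # Minimal sketch of the in-vocabulary vs. sub-word contrast,
    # assuming gensim-format models saved locally (paths are hypothetical).
    from gensim.models import Word2Vec, FastText

    w2v = Word2Vec.load("word2vec_biomedical.model")  # hypothetical path
    ft = FastText.load("fasttext_biomedical.model")   # hypothetical path

    term = "dehydroepiandrosterone"  # example biomedical term

    # word2vec only covers terms seen during training:
    # looking up an unseen term raises KeyError.
    try:
        vec = w2v.wv[term]
    except KeyError:
        print("word2vec: out-of-vocabulary, no vector available")

    # fastText composes a vector from the term's character n-grams,
    # so even unseen terms receive a representation.
    vec = ft.wv[term]
    print("fastText vector shape:", vec.shape)

Because biomedical entities such as chemical and gene names are often morphologically compositional, these n-gram-derived vectors are plausible rather than arbitrary, which is consistent with the NER gains reported above.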
Anthology ID:
W18-2307
Volume:
Proceedings of the BioNLP 2018 workshop
Month:
July
Year:
2018
Address:
Melbourne, Australia
Venue:
BioNLP
Publisher:
Association for Computational Linguistics
Pages:
56–66
URL:
https://aclanthology.org/W18-2307
DOI:
10.18653/v1/W18-2307
Cite (ACL):
Dieter Galea, Ivan Laponogov, and Kirill Veselkov. 2018. Sub-word information in pre-trained biomedical word representations: evaluation and hyper-parameter optimization. In Proceedings of the BioNLP 2018 workshop, pages 56–66, Melbourne, Australia. Association for Computational Linguistics.
Cite (Informal):
Sub-word information in pre-trained biomedical word representations: evaluation and hyper-parameter optimization (Galea et al., BioNLP 2018)
PDF:
https://preview.aclanthology.org/ingestion-script-update/W18-2307.pdf
Note:
 W18-2307.Notes.zip
Code:
 dterg/bionlp-embed