Elena Sergeeva
2019
Neural Token Representations and Negation and Speculation Scope Detection in Biomedical and General Domain Text
Elena Sergeeva | Henghui Zhu | Amir Tahmasebi | Peter Szolovits
Proceedings of the Tenth International Workshop on Health Text Mining and Information Analysis (LOUHI 2019)
Since the introduction of context-aware token representation techniques such as Embeddings from Language Models (ELMo) and Bidirectional Encoder Representations from Transformers (BERT), there have been numerous reports of improved performance on a variety of natural language tasks. Nevertheless, the degree to which the resulting context-aware representations encode information about the morpho-syntactic properties of a word/token in a sentence remains unclear. In this paper, we investigate the application and impact of state-of-the-art neural token representations for automatic cue-conditional speculation and negation scope detection, coupled with independently computed morpho-syntactic information. Through this work, we establish a new state of the art on the BioScope and NegPar corpora. More importantly, we provide a thorough analysis of the interactions between neural representations and additional features and of the cue representations used for conditioning, discuss model behavior on the different datasets, and address annotation-induced biases in the learned representations.
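The sketch below illustrates the general idea of cue-conditional scope detection with contextual token representations; it is not the paper's implementation. Each token's BERT vector is concatenated with a binary cue-indicator feature and classified as in-scope or out-of-scope. The encoder name, layer sizes, label set, and the example cue position are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the authors' system): BERT token vectors
# plus a cue-indicator feature, followed by a per-token scope classifier.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class CueConditionedScopeTagger(nn.Module):
    def __init__(self, encoder_name="bert-base-uncased", num_labels=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # +1 for the cue-indicator feature appended to each token vector
        self.classifier = nn.Linear(hidden + 1, num_labels)

    def forward(self, input_ids, attention_mask, cue_mask):
        # cue_mask: (batch, seq_len), 1.0 on cue tokens, 0.0 elsewhere
        outputs = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        token_reprs = outputs.last_hidden_state              # (batch, seq, hidden)
        features = torch.cat([token_reprs, cue_mask.unsqueeze(-1)], dim=-1)
        return self.classifier(features)                     # per-token scope logits

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = CueConditionedScopeTagger()
enc = tokenizer("The test was not conclusive .", return_tensors="pt")
cue_mask = torch.zeros_like(enc["input_ids"], dtype=torch.float)
cue_mask[0, 4] = 1.0   # hypothetical position of the negation cue "not"
logits = model(enc["input_ids"], enc["attention_mask"], cue_mask)
print(logits.shape)    # (1, seq_len, 2): in-scope vs. out-of-scope per token
```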
2018
MIT-MEDG at SemEval-2018 Task 7: Semantic Relation Classification via Convolution Neural Network
Di Jin | Franck Dernoncourt | Elena Sergeeva | Matthew McDermott | Geeticka Chauhan
Proceedings of the 12th International Workshop on Semantic Evaluation
SemEval 2018 Task 7 tasked participants with building a system to classify the relation between two entities within a sentence into one of six possible relation types. We tested three classes of models: linear classifiers, Long Short-Term Memory (LSTM) models, and Convolutional Neural Network (CNN) models. Ultimately, the CNN model class proved most performant, so we specialized to this model for our final submissions. We improved performance beyond a vanilla CNN by including a variant of negative sampling, using custom word embeddings learned over a corpus of ACL articles, training over the corpora of both subtasks 1.1 and 1.2, using a reversed feature, incorporating context words beyond the entity pair, and using ensemble methods to improve our final predictions. We also tested attention-based pooling, up-sampling, and data augmentation, but none improved performance. Our model achieved rank 6 out of 28 (macro-averaged F1-score: 72.7) in subtask 1.1 and rank 4 out of 20 (macro F1: 80.6) in subtask 1.2.
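A minimal sketch of the kind of CNN relation classifier the abstract describes is given below; it is not the submitted system. Word embeddings for the sentence fragment around an entity pair are convolved with several filter widths, max-pooled, and mapped to scores over the six relation types. The vocabulary size, embedding dimension, filter sizes, and class count are illustrative placeholders.

```python
# Minimal sketch (illustrative, not the MIT-MEDG submission) of a CNN
# relation classifier: embed tokens, convolve, max-pool, classify.
import torch
import torch.nn as nn

class CNNRelationClassifier(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=100,
                 num_filters=128, kernel_sizes=(2, 3, 4), num_classes=6):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, num_filters, k) for k in kernel_sizes
        )
        self.classifier = nn.Linear(num_filters * len(kernel_sizes), num_classes)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) word indices for the sentence fragment
        x = self.embedding(token_ids).transpose(1, 2)       # (batch, embed, seq)
        pooled = [conv(x).relu().max(dim=-1).values for conv in self.convs]
        return self.classifier(torch.cat(pooled, dim=-1))   # relation-type logits

model = CNNRelationClassifier()
batch = torch.randint(0, 20000, (4, 30))   # 4 fragments of 30 tokens each
print(model(batch).shape)                   # (4, 6): scores over relation types
```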