M. Janina Sarol
2021
UIUC_BioNLP at SemEval-2021 Task 11: A Cascade of Neural Models for Structuring Scholarly NLP Contributions
Haoyang Liu | M. Janina Sarol | Halil Kilicoglu
Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)
We propose a cascade of neural models that performs sentence classification, phrase recognition, and triple extraction to automatically structure the scholarly contributions of NLP publications. To identify the most important contribution sentences in a paper, we used a BERT-based classifier with positional features (Subtask 1). A BERT-CRF model was used to recognize and characterize relevant phrases in contribution sentences (Subtask 2). We categorized the triples into several types based on whether and how their elements were expressed in text, and addressed each type using separate BERT-based classifiers as well as rules (Subtask 3). Our system was officially ranked second in Phase 1 evaluation and first in both parts of Phase 2 evaluation. After fixing a submission error in Phase 1, our approach yields the best results overall. In this paper, in addition to a system description, we provide further analysis of our results, highlighting the strengths and limitations of our approach. We make our code publicly available at https://github.com/Liu-Hy/nlp-contrib-graph.
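As an illustration of the Subtask 1 setup described above, the sketch below shows one way a BERT sentence classifier can be combined with a positional feature. The model name, the choice of a normalized sentence index as the positional feature, and all hyperparameters are assumptions for illustration, not the authors' exact configuration (see their repository for the actual implementation).

```python
# Minimal sketch: BERT [CLS] embedding + a positional feature for
# contribution-sentence classification (assumed setup, not the paper's code).
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class ContributionSentenceClassifier(nn.Module):
    def __init__(self, model_name="bert-base-uncased", num_labels=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        # +1 for the positional feature (e.g., normalized sentence index)
        self.classifier = nn.Linear(hidden + 1, num_labels)

    def forward(self, input_ids, attention_mask, position_feature):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]  # [CLS] token embedding
        feats = torch.cat([cls, position_feature.unsqueeze(-1)], dim=-1)
        return self.classifier(feats)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = ContributionSentenceClassifier()
enc = tokenizer(["We propose a cascade of neural models ..."],
                return_tensors="pt", padding=True, truncation=True)
# Assumed feature: sentence position within the paper, normalized to [0, 1].
pos = torch.tensor([0.05])
logits = model(enc["input_ids"], enc["attention_mask"], pos)
```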
2020
An Empirical Methodology for Detecting and Prioritizing Needs during Crisis Events
M. Janina Sarol | Ly Dinh | Rezvaneh Rezapour | Chieh-Li Chin | Pingjing Yang | Jana Diesner
Findings of the Association for Computational Linguistics: EMNLP 2020
In times of crisis, identifying essential needs is crucial to providing appropriate resources and services to affected entities. Social media platforms such as Twitter contain a vast amount of information about the general public’s needs. However, the sparsity of information and the amount of noisy content present a challenge for practitioners to effectively identify relevant information on these platforms. This study proposes two novel methods for two needs detection tasks: 1) extracting a list of needed resources, such as masks and ventilators, and 2) detecting sentences that specify who-needs-what resources (e.g., we need testing). We evaluate our methods on a set of tweets about the COVID-19 crisis. For extracting a list of needs, we compare our results against two official lists of resources, achieving a precision of 0.64. For detecting who-needs-what sentences, we compare our results against a set of 1,000 annotated tweets, achieving a 0.68 F1-score.
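For concreteness, the toy pattern matcher below illustrates what a who-needs-what detection output looks like on a tweet. The regular expression and subject list are hypothetical baseline assumptions; they are not the paper's proposed methods, which are described in the publication itself.

```python
# Toy sketch of who-needs-what extraction from a tweet (illustrative baseline,
# not the authors' method).
import re

# Assumed pattern: a subject followed by a form of "need" and a resource phrase.
NEED_PATTERN = re.compile(
    r"\b(?P<who>we|i|they|hospitals?|nurses?|doctors?|patients?)\s+"
    r"(?:urgently\s+)?need(?:s|ed)?\s+(?P<what>[\w\s-]{3,60})",
    re.IGNORECASE,
)

def detect_needs(tweet: str):
    """Return (who, what) pairs found in a tweet, or an empty list."""
    return [(m.group("who"), m.group("what").strip())
            for m in NEED_PATTERN.finditer(tweet)]

print(detect_needs("We need testing kits and ventilators in our hospital."))
# [('We', 'testing kits and ventilators in our hospital')]
```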
Co-authors
- Haoyang Liu 1
- Halil Kilicoglu 1
- Ly Dinh 1
- Rezvaneh Rezapour 1
- Chieh-Li Chin 1