Rishita Anubhai


2022

Label Semantics for Few Shot Named Entity Recognition
Jie Ma | Miguel Ballesteros | Srikanth Doss | Rishita Anubhai | Sunil Mallya | Yaser Al-Onaizan | Dan Roth
Findings of the Association for Computational Linguistics: ACL 2022

We study the problem of few-shot learning for named entity recognition. Specifically, we leverage the semantic information in the names of the labels as a way of giving the model additional signal and enriched priors. We propose a neural architecture that consists of two BERT encoders, one to encode the document and its tokens and another to encode each of the labels in natural language format. Our model learns to match the representations of named entities computed by the first encoder with the label representations computed by the second encoder. The label semantics signal is shown to support improved state-of-the-art results on multiple few-shot NER benchmarks and on-par performance on standard benchmarks. Our model is especially effective in low-resource settings.
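A minimal sketch of the dual-encoder matching idea described above, assuming PyTorch and the Hugging Face transformers library; the model name, the natural-language label phrasings, and the dot-product scoring are illustrative assumptions, not the paper's exact configuration.

# Sketch: match token representations from a document encoder against
# representations of label names produced by a separate label encoder.
# Assumptions: bert-base-uncased for both encoders, dot-product scoring,
# and BIO label descriptions chosen for illustration only.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
doc_encoder = AutoModel.from_pretrained("bert-base-uncased")    # encodes the document and its tokens
label_encoder = AutoModel.from_pretrained("bert-base-uncased")  # encodes each label name

# Labels written out in natural language, so their names carry semantics.
label_names = ["other", "begin person", "inside person", "begin location", "inside location"]

def encode_labels(names):
    batch = tokenizer(names, return_tensors="pt", padding=True)
    return label_encoder(**batch).last_hidden_state[:, 0]       # one [CLS] vector per label

def token_label_scores(sentence):
    batch = tokenizer(sentence, return_tensors="pt")
    tokens = doc_encoder(**batch).last_hidden_state.squeeze(0)  # (seq_len, hidden)
    labels = encode_labels(label_names)                          # (num_labels, hidden)
    return tokens @ labels.T                                     # similarity of every token to every label

scores = token_label_scores("Rishita joined Amazon in Seattle .")
pred = scores.argmax(dim=-1)  # predicted label index per wordpiece (untrained here; fine-tuning required)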

2021

Multi-Task Learning and Adapted Knowledge Models for Emotion-Cause Extraction
Elsbeth Turcan | Shuai Wang | Rishita Anubhai | Kasturi Bhattacharjee | Yaser Al-Onaizan | Smaranda Muresan
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

2020

Severing the Edge Between Before and After: Neural Architectures for Temporal Ordering of Events
Miguel Ballesteros | Rishita Anubhai | Shuai Wang | Nima Pourdamghani | Yogarshi Vyas | Jie Ma | Parminder Bhatia | Kathleen McKeown | Yaser Al-Onaizan
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)

In this paper, we propose a neural architecture and a set of training methods for ordering events by predicting temporal relations. Our proposed models receive a pair of events within a span of text as input and identify the temporal relation (Before, After, Equal, Vague) between them. Given that a key challenge of this task is the scarcity of annotated data, our models rely on pretrained representations (i.e., RoBERTa, BERT, or ELMo), transfer and multi-task learning (by leveraging complementary datasets), and self-training techniques. Experiments on the MATRES dataset of English documents establish a new state-of-the-art on this task.
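A hedged sketch of the pair-classification setup the abstract describes, assuming a RoBERTa encoder from the transformers library; the bracket convention for marking the two events and the linear relation head are illustrative assumptions rather than the paper's exact architecture.

import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

LABELS = ["Before", "After", "Equal", "Vague"]

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
encoder = AutoModel.from_pretrained("roberta-base")
classifier = nn.Linear(encoder.config.hidden_size, len(LABELS))  # 4-way relation head

def predict_relation(text, event1, event2):
    # Mark the two event spans in the text; the bracket convention is an assumption.
    marked = text.replace(event1, f"[E1] {event1} [/E1]").replace(event2, f"[E2] {event2} [/E2]")
    batch = tokenizer(marked, return_tensors="pt")
    pooled = encoder(**batch).last_hidden_state[:, 0]  # sentence-level representation
    logits = classifier(pooled)                        # head is untrained in this sketch;
    return LABELS[logits.argmax(dim=-1).item()]        # it would be fine-tuned on MATRES-style data

print(predict_relation("She boarded the train after she bought a ticket.", "boarded", "bought"))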

To BERT or Not to BERT: Comparing Task-specific and Task-agnostic Semi-Supervised Approaches for Sequence Tagging
Kasturi Bhattacharjee | Miguel Ballesteros | Rishita Anubhai | Smaranda Muresan | Jie Ma | Faisal Ladhak | Yaser Al-Onaizan
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)

Leveraging large amounts of unlabeled data with Transformer-based architectures such as BERT has gained popularity in recent times, owing to their effectiveness in learning general representations that can then be fine-tuned for downstream tasks with considerable success. However, training these models can be costly from both an economic and an environmental standpoint. In this work, we investigate how to use unlabeled data effectively, by exploring the task-specific semi-supervised approach Cross-View Training (CVT) and comparing it with task-agnostic BERT in multiple settings that include domain- and task-relevant English data. CVT uses a much lighter model architecture, and we show that it achieves similar performance to BERT on a set of sequence tagging tasks, with a smaller financial and environmental impact.
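For readers unfamiliar with CVT (Clark et al., 2018), a rough sketch of its core idea is given below, assuming a BiLSTM tagger in PyTorch: on unlabeled data, auxiliary prediction modules that see only restricted views of the input are trained to agree with the full-view primary module. The module shapes and the KL-based agreement loss here are simplified assumptions, not this paper's exact setup.

import torch
from torch import nn
import torch.nn.functional as F

class CVTTagger(nn.Module):
    # Primary module sees the full BiLSTM output; auxiliary modules see only the
    # forward or backward direction (restricted views), per Cross-View Training.
    def __init__(self, vocab_size, emb_dim=64, hidden=64, num_tags=5):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.primary = nn.Linear(2 * hidden, num_tags)
        self.aux_fwd = nn.Linear(hidden, num_tags)
        self.aux_bwd = nn.Linear(hidden, num_tags)

    def forward(self, tokens):
        out, _ = self.bilstm(self.emb(tokens))       # (batch, seq, 2*hidden)
        fwd, bwd = out.chunk(2, dim=-1)              # forward-only and backward-only views
        return self.primary(out), self.aux_fwd(fwd), self.aux_bwd(bwd)

def unsupervised_cvt_loss(model, unlabeled_tokens):
    primary, aux_f, aux_b = model(unlabeled_tokens)
    target = primary.softmax(dim=-1).detach()        # primary predictions as soft targets
    loss = F.kl_div(aux_f.log_softmax(dim=-1), target, reduction="batchmean")
    loss += F.kl_div(aux_b.log_softmax(dim=-1), target, reduction="batchmean")
    return loss  # added to the usual supervised tagging loss on labeled batches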

Resource-Enhanced Neural Model for Event Argument Extraction
Jie Ma | Shuai Wang | Rishita Anubhai | Miguel Ballesteros | Yaser Al-Onaizan
Findings of the Association for Computational Linguistics: EMNLP 2020

Event argument extraction (EAE) aims to identify the arguments of an event and classify the roles that those arguments play. Despite great efforts made in prior work, there remain many challenges: (1) data scarcity; (2) capturing long-range dependencies, specifically the connection between an event trigger and a distant event argument; and (3) integrating event trigger information into candidate argument representations. For (1), we explore using unlabeled data. For (2), we use a Transformer whose attention mechanism is guided by dependency parses. For (3), we propose a trigger-aware sequence encoder with several types of trigger-dependent sequence representations. We also support argument extraction either from text annotated with gold entities or from plain text. Experiments on the English ACE 2005 benchmark show that our approach achieves a new state-of-the-art.
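A minimal sketch of the trigger-aware encoding idea mentioned in point (3), assuming a BERT encoder from the transformers library; concatenating each candidate-token representation with the trigger representation is one simple way to make argument representations trigger-dependent, used here purely for illustration, and the role inventory and trigger index are hypothetical.

import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

ROLES = ["None", "Attacker", "Target", "Instrument", "Place"]  # illustrative role set

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
role_head = nn.Linear(2 * encoder.config.hidden_size, len(ROLES))

def argument_role_logits(sentence, trigger_index):
    # Encode the sentence, then pair every token with the trigger representation
    # so each candidate-argument representation is conditioned on the trigger.
    batch = tokenizer(sentence, return_tensors="pt")
    hidden = encoder(**batch).last_hidden_state.squeeze(0)  # (seq_len, hidden)
    trigger = hidden[trigger_index].expand_as(hidden)       # broadcast the trigger vector
    trigger_aware = torch.cat([hidden, trigger], dim=-1)    # (seq_len, 2*hidden)
    return role_head(trigger_aware)                         # per-token role scores (head untrained here)

logits = argument_role_logits("The soldiers attacked the village at dawn .", trigger_index=3)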