Giovanni Paolini
2024
A Weak Supervision Approach for Few-Shot Aspect Based Sentiment Analysis
Robert Vacareanu | Siddharth Varia | Kishaloy Halder | Shuai Wang | Giovanni Paolini | Neha Anna John | Miguel Ballesteros | Smaranda Muresan
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
We explore how weak supervision on abundant unlabeled data can be leveraged to improve few-shot performance in aspect-based sentiment analysis (ABSA) tasks. We propose a pipeline approach to construct a noisy ABSA dataset, and we use it to adapt a pre-trained sequence-to-sequence model to the ABSA tasks. We test the resulting model on three widely used ABSA datasets, before and after fine-tuning. Our proposed method preserves the full fine-tuning performance while showing significant improvements (15.84 absolute F1) in the few-shot learning scenario for the harder tasks. In the zero-shot setting (i.e., without fine-tuning), our method outperforms the previous state of the art on the aspect extraction and sentiment classification (AESC) task and is, additionally, capable of performing the harder aspect sentiment triplet extraction (ASTE) task.
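The abstract sketches a two-stage recipe: weakly label raw text with a noisy heuristic, then use the noisy annotations as targets for a pre-trained sequence-to-sequence model. Below is a minimal sketch of that idea; the sentiment lexicon, the adjective-noun matching rule, and the linearized target format are illustrative assumptions, not the paper's actual pipeline.

```python
# A minimal sketch of the weak-supervision idea: label unannotated reviews
# with a noisy heuristic (sentiment lexicon + adjective-noun bigrams), then
# linearize the noisy (aspect, polarity) pairs into seq2seq training targets.
# Lexicon, pattern, and target format are illustrative assumptions.
import re

POSITIVE = {"great", "excellent", "tasty", "friendly"}
NEGATIVE = {"bad", "slow", "rude", "bland"}

def weak_label(sentence: str) -> list[tuple[str, str]]:
    """Return noisy (aspect, polarity) pairs from adjective-noun bigrams."""
    tokens = re.findall(r"[a-z']+", sentence.lower())
    pairs = []
    for adj, noun in zip(tokens, tokens[1:]):
        if adj in POSITIVE:
            pairs.append((noun, "positive"))
        elif adj in NEGATIVE:
            pairs.append((noun, "negative"))
    return pairs

def to_seq2seq_target(pairs: list[tuple[str, str]]) -> str:
    """Linearize pairs into a text target a seq2seq model can generate."""
    return "; ".join(f"{aspect} is {polarity}" for aspect, polarity in pairs)

sentence = "The friendly staff served tasty food but the service was slow."
pairs = weak_label(sentence)     # [('staff', 'positive'), ('food', 'positive')]
print(to_seq2seq_target(pairs))  # "staff is positive; food is positive"
```

Note that the heuristic misses "service is negative" in the example: such labels are deliberately imperfect, which is the kind of noisy-but-abundant supervision the abstract refers to exploiting at scale before fine-tuning on a few gold examples.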
2023
Taxonomy Expansion for Named Entity Recognition
Karthikeyan K | Yogarshi Vyas | Jie Ma | Giovanni Paolini | Neha John | Shuai Wang | Yassine Benajiba | Vittorio Castelli | Dan Roth | Miguel Ballesteros
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Training a Named Entity Recognition (NER) model often involves fixing a taxonomy of entity types. However, requirements evolve and we might need the NER model to recognize additional entity types. A simple approach is to re-annotate the entire dataset with both existing and additional entity types and then train the model on the re-annotated dataset. However, this is an extremely laborious task. To remedy this, we propose a novel approach called Partial Label Model (PLM) that uses only partially annotated datasets. We experiment with 6 diverse datasets and show that PLM consistently performs better than most other approaches (0.5-2.5 F1), including in novel settings for taxonomy expansion not considered in prior work. The gap between PLM and all other approaches is especially large in settings where there is limited data available for the additional entity types (as much as 11 F1), suggesting that PLM is a more cost-effective approach to taxonomy expansion.
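For readers unfamiliar with partial-label training, the sketch below shows one common way to learn from partially annotated sequences: a token whose tag is unknown under the old taxonomy contributes the marginal probability of all labels consistent with the partial annotation, rather than being forced to "O". This is a hedged illustration of the general technique under assumed inputs; the paper's exact PLM objective and architecture may differ.

```python
# A minimal sketch of training on partially annotated NER data: tokens tagged
# "O" in the old annotation might actually belong to an unannotated new type,
# so instead of standard cross-entropy we minimize the negative log marginal
# probability over each token's candidate-label set. Illustrative only.
import torch

def partial_label_loss(logits: torch.Tensor,
                       candidates: torch.Tensor) -> torch.Tensor:
    """
    logits:     (num_tokens, num_labels) per-token label scores.
    candidates: (num_tokens, num_labels) 0/1 mask of labels consistent with
                the partial annotation (a single 1 for annotated tokens; a 1
                on "O" plus every new type for unannotated tokens).
    Returns the mean negative log marginal probability of the candidate set.
    """
    log_probs = torch.log_softmax(logits, dim=-1)
    # log sum_{y in candidates} p(y | x), computed stably in log space
    masked = log_probs.masked_fill(candidates == 0, float("-inf"))
    log_marginal = torch.logsumexp(masked, dim=-1)
    return -log_marginal.mean()

# Toy label space: O, B-PER, B-LOC (existing) and B-DRUG (new type).
logits = torch.randn(4, 4)
candidates = torch.tensor([
    [0, 1, 0, 0],   # annotated as B-PER: single allowed label
    [1, 0, 0, 1],   # tagged O in old data: could be O or the new B-DRUG
    [0, 0, 1, 0],   # annotated as B-LOC
    [1, 0, 0, 1],   # another token unannotated under the new taxonomy
])
print(partial_label_loss(logits, candidates))
```

For fully annotated tokens the candidate set has one label and the loss reduces to ordinary cross-entropy, so this objective only changes how the ambiguous "O" tokens are treated.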