Exploring Robustness of Prefix Tuning in Noisy Data: A Case Study in Financial Sentiment Analysis

Sudhandar Balakrishnan, Yihao Fang, Xiaodan Zhu


Abstract
The invention of transformer-based models such as BERT, GPT, and RoBERTa has enabled researchers and financial companies to fine-tune these powerful models and use them in different downstream tasks to achieve state-of-the-art performance. Recently, a lightweight alternative to fine-tuning, known as prefix tuning, has been introduced; it trains only approximately 0.1% - 3% of the original model parameters. This method freezes the model parameters and updates only the prefix to achieve performance comparable to full fine-tuning, enabling researchers and financial practitioners to achieve similar results with far fewer trainable parameters. In this paper, we explore the robustness of prefix tuning when facing noisy data. Our experiments demonstrate that fine-tuning is more robust to noise than prefix tuning: the latter suffers a significant decrease in performance on most corrupted data sets as the noise level increases. Furthermore, prefix tuning exhibits higher variance in F1 scores than fine-tuning under many corruption methods. We strongly advocate that caution be taken when applying the state-of-the-art prefix tuning method to noisy data.
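
As a rough illustration of the parameter-efficient setup described in the abstract, the sketch below configures prefix tuning for a sentiment classifier. It is a minimal sketch only, assuming the Hugging Face PEFT library; the backbone name, number of labels, and number of virtual tokens are illustrative assumptions and are not taken from the paper.

# Minimal sketch of prefix tuning for sentiment classification.
# Assumptions (not from the paper): bert-base-uncased backbone, 3 sentiment
# labels, 20 virtual prefix tokens, and the Hugging Face PEFT library.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PrefixTuningConfig, TaskType, get_peft_model

model_name = "bert-base-uncased"  # hypothetical backbone choice
tokenizer = AutoTokenizer.from_pretrained(model_name)
base_model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)

# Freeze the backbone; only the small prefix (virtual tokens) is trained.
peft_config = PrefixTuningConfig(task_type=TaskType.SEQ_CLS, num_virtual_tokens=20)
model = get_peft_model(base_model, peft_config)
model.print_trainable_parameters()  # reports the small trainable fraction

Training then proceeds as usual (e.g., with the Transformers Trainer), with gradients flowing only into the prefix parameters while the backbone weights stay fixed.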
Anthology ID:
2022.finnlp-1.9
Volume:
Proceedings of the Fourth Workshop on Financial Technology and Natural Language Processing (FinNLP)
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates (Hybrid)
Editors:
Chung-Chi Chen, Hen-Hsen Huang, Hiroya Takamura, Hsin-Hsi Chen
Venue:
FinNLP
Publisher:
Association for Computational Linguistics
Pages:
78–88
URL:
https://aclanthology.org/2022.finnlp-1.9
DOI:
10.18653/v1/2022.finnlp-1.9
Cite (ACL):
Sudhandar Balakrishnan, Yihao Fang, and Xiaodan Zhu. 2022. Exploring Robustness of Prefix Tuning in Noisy Data: A Case Study in Financial Sentiment Analysis. In Proceedings of the Fourth Workshop on Financial Technology and Natural Language Processing (FinNLP), pages 78–88, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics.
Cite (Informal):
Exploring Robustness of Prefix Tuning in Noisy Data: A Case Study in Financial Sentiment Analysis (Balakrishnan et al., FinNLP 2022)
PDF:
https://preview.aclanthology.org/nschneid-patch-4/2022.finnlp-1.9.pdf
Video:
https://preview.aclanthology.org/nschneid-patch-4/2022.finnlp-1.9.mp4