Formulating Few-shot Fine-tuning Towards Language Model Pre-training: A Pilot Study on Named Entity Recognition

Zihan Wang, Kewen Zhao, Zilong Wang, Jingbo Shang


Abstract
Fine-tuning pre-trained language models is common practice when building NLP models for various tasks, including settings with limited supervision. We argue that, under the few-shot setting, formulating fine-tuning closer to the pre-training objective can unleash more of the benefits of pre-trained language models. In this work, we take few-shot named entity recognition (NER) as a pilot study, since existing NER fine-tuning strategies differ substantially from pre-training. We propose FFF-NER, a novel few-shot fine-tuning framework for NER. Specifically, we introduce three new types of tokens, “is-entity”, “which-type”, and “bracket”, so that NER fine-tuning can be formulated as (masked) token prediction or generation, depending on the choice of the pre-training objective. In our experiments, we apply FFF-NER to fine-tune both BERT and BART for few-shot NER on several benchmark datasets and observe significant improvements over existing fine-tuning strategies, including sequence labeling, prototype meta-learning, and prompt-based approaches. We further perform a series of ablation studies, showing that few-shot NER performance is strongly correlated with the similarity between fine-tuning and pre-training.
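To make the formulation concrete, below is a minimal sketch for a BERT-style masked LM. The template wording, the bracket characters, and the reuse of the stock vocabulary are illustrative assumptions, not the paper's implementation (the actual framework adds dedicated “is-entity”, “which-type”, and “bracket” tokens and fine-tunes them on the few-shot labels); the sketch only shows how a candidate span's entity decision can be posed as masked-token prediction, the same objective BERT was pre-trained on.

import torch
from transformers import BertForMaskedLM, BertTokenizer

# Illustrative sketch: pose span classification as masked-token
# prediction. Template and brackets are placeholders, not FFF-NER's
# actual tokens, which are new vocabulary items learned in fine-tuning.
tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
model = BertForMaskedLM.from_pretrained("bert-base-cased")
model.eval()

sentence = "Barack Obama visited Paris ."
span = "Paris"  # candidate span to score

# Bracket the candidate span and append two masked slots: one for
# the is-entity decision and one for the entity type.
bracketed = sentence.replace(span, f"( {span} )")
query = (
    f"{bracketed} {span} is {tokenizer.mask_token} entity "
    f"of type {tokenizer.mask_token} ."
)

inputs = tokenizer(query, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Inspect the model's top predictions at each masked position.
mask_positions = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
for pos in mask_positions:
    top_ids = logits[0, pos].topk(3).indices.tolist()
    print(tokenizer.convert_ids_to_tokens(top_ids))

In the full framework, the predictions at these masked slots would be supervised with the few-shot annotations, keeping the fine-tuning task aligned with masked-LM pre-training rather than replacing it with a sequence-labeling head.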
Anthology ID:
2022.findings-emnlp.232
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2022
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
3186–3199
URL:
https://aclanthology.org/2022.findings-emnlp.232
DOI:
10.18653/v1/2022.findings-emnlp.232
Cite (ACL):
Zihan Wang, Kewen Zhao, Zilong Wang, and Jingbo Shang. 2022. Formulating Few-shot Fine-tuning Towards Language Model Pre-training: A Pilot Study on Named Entity Recognition. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 3186–3199, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
Formulating Few-shot Fine-tuning Towards Language Model Pre-training: A Pilot Study on Named Entity Recognition (Wang et al., Findings 2022)
PDF:
https://aclanthology.org/2022.findings-emnlp.232.pdf