Abstract
Recently, fine-tuning a pre-trained language model (PrLM) on labeled sentiment datasets has demonstrated impressive performance. However, collecting labeled sentiment datasets is time-consuming, and fine-tuning the whole PrLM incurs a high computational cost. To this end, we focus on the multi-source unsupervised sentiment adaptation problem with pre-trained features, which is more practical and challenging. We first design a dynamic feature network to fully exploit the extracted pre-trained features for efficient domain adaptation. Meanwhile, unlike traditional source-target domain alignment methods, we propose a novel asymmetric mutual learning strategy, which can robustly estimate the pseudo-labels of the target domain with the knowledge from all the other source models. Experiments on multiple sentiment benchmarks show that our method outperforms recent state-of-the-art approaches, and we also conduct extensive ablation studies to verify the effectiveness of each proposed module.
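To make the pseudo-labeling idea in the abstract concrete, the sketch below shows one plausible reading of "estimating target-domain pseudo-labels with the knowledge from all the other source models": for the model being updated, the predictions of its peer source models are averaged and only confident agreements are kept. This is an illustrative assumption, not the authors' implementation; the function name, threshold, and data are hypothetical.

```python
# Minimal sketch (not the paper's code) of multi-source pseudo-label estimation,
# where each model is supervised only by its peers (hence "asymmetric").
import numpy as np

def pseudo_labels_from_other_sources(probs_per_source, current_idx, conf_threshold=0.8):
    """probs_per_source: list of (n_target, n_classes) softmax outputs,
    one array per source-domain model.
    current_idx: index of the model being updated; its own predictions are
    excluded, so its supervision comes only from the other source models.
    Returns (indices, labels) for target samples whose averaged peer
    prediction exceeds conf_threshold."""
    peer_probs = [p for i, p in enumerate(probs_per_source) if i != current_idx]
    mean_probs = np.mean(peer_probs, axis=0)   # average the peers' predictions
    labels = mean_probs.argmax(axis=1)         # candidate pseudo-labels
    confidence = mean_probs.max(axis=1)
    keep = confidence >= conf_threshold        # keep only confident samples
    return np.where(keep)[0], labels[keep]

# Toy usage: three source models, five target samples, binary sentiment.
rng = np.random.default_rng(0)
probs = [rng.dirichlet(np.ones(2), size=5) for _ in range(3)]
idx, labels = pseudo_labels_from_other_sources(probs, current_idx=0)
print(idx, labels)
```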
- Anthology ID: 2022.coling-1.604
- Volume: Proceedings of the 29th International Conference on Computational Linguistics
- Month: October
- Year: 2022
- Address: Gyeongju, Republic of Korea
- Venue: COLING
- Publisher: International Committee on Computational Linguistics
- Pages: 6934–6943
- URL: https://aclanthology.org/2022.coling-1.604
- Cite (ACL): Rui Li, Cheng Liu, and Dazhi Jiang. 2022. Asymmetric Mutual Learning for Multi-source Unsupervised Sentiment Adaptation with Dynamic Feature Network. In Proceedings of the 29th International Conference on Computational Linguistics, pages 6934–6943, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
- Cite (Informal): Asymmetric Mutual Learning for Multi-source Unsupervised Sentiment Adaptation with Dynamic Feature Network (Li et al., COLING 2022)
- PDF: https://preview.aclanthology.org/nodalida-main-page/2022.coling-1.604.pdf