Counterfactual Adversarial Training for Improving Robustness of Pre-trained Language Models

Hoai Linh Luu, Naoya Inoue


Anthology ID:
2023.paclic-1.88
Volume:
Proceedings of the 37th Pacific Asia Conference on Language, Information and Computation
Month:
December
Year:
2023
Address:
Hong Kong, China
Editors:
Chu-Ren Huang, Yasunari Harada, Jong-Bok Kim, Si Chen, Yu-Yin Hsu, Emmanuele Chersoni, Pranav A, Winnie Huiheng Zeng, Bo Peng, Yuxi Li, Junlin Li
Venue:
PACLIC
Publisher:
Association for Computational Linguistics
Pages:
881–888
URL:
https://aclanthology.org/2023.paclic-1.88
Cite (ACL):
Hoai Linh Luu and Naoya Inoue. 2023. Counterfactual Adversarial Training for Improving Robustness of Pre-trained Language Models. In Proceedings of the 37th Pacific Asia Conference on Language, Information and Computation, pages 881–888, Hong Kong, China. Association for Computational Linguistics.
Cite (Informal):
Counterfactual Adversarial Training for Improving Robustness of Pre-trained Language Models (Luu & Inoue, PACLIC 2023)
PDF:
https://aclanthology.org/2023.paclic-1.88.pdf