Jiarui Fang
2022
RoCBert: Robust Chinese Bert with Multimodal Contrastive Pretraining
Hui Su | Weiwei Shi | Xiaoyu Shen | Zhou Xiao | Tuo Ji | Jiarui Fang | Jie Zhou
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Large-scale pretrained language models have achieved SOTA results on NLP tasks. However, they have been shown to be vulnerable to adversarial attacks, especially for logographic languages like Chinese. In this work, we propose RoCBert: a pretrained Chinese Bert that is robust to various forms of adversarial attacks such as word perturbation, synonyms, and typos. It is pretrained with a contrastive learning objective which maximizes the label consistency under different synthesized adversarial examples. The model takes as input multimodal information including semantic, phonetic and visual features. We show all these features are important to the model's robustness since the attack can be performed in all three forms. Across 5 Chinese NLU tasks, RoCBert outperforms strong baselines under three black-box adversarial algorithms without sacrificing performance on the clean test set. It also performs best in the toxic content detection task under human-made attacks.
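The abstract describes a contrastive objective that pulls together representations of an input and its synthesized adversarial variants. Below is a minimal, hypothetical sketch of such an InfoNCE-style loss, assuming batched embeddings from an encoder that combines semantic, phonetic and visual features; the function and parameter names (`contrastive_loss`, `temperature`) are illustrative and not taken from the paper's implementation.

```python
# Hedged sketch: contrastive loss where clean_emb[i] and adv_emb[i] encode the
# same sentence (one clean, one adversarially perturbed, e.g. typo or phonetic
# substitution); other rows in the batch serve as negatives.
import torch
import torch.nn.functional as F

def contrastive_loss(clean_emb: torch.Tensor,
                     adv_emb: torch.Tensor,
                     temperature: float = 0.1) -> torch.Tensor:
    clean = F.normalize(clean_emb, dim=-1)    # (batch, dim)
    adv = F.normalize(adv_emb, dim=-1)        # (batch, dim)
    logits = clean @ adv.t() / temperature    # pairwise similarity matrix
    targets = torch.arange(clean.size(0), device=clean.device)
    # Maximizing label consistency: the matching adversarial view is the positive.
    return F.cross_entropy(logits, targets)

# Usage with stand-in embeddings (real ones would come from the multimodal encoder).
clean_emb = torch.randn(8, 768)
adv_emb = clean_emb + 0.05 * torch.randn(8, 768)
print(contrastive_loss(clean_emb, adv_emb).item())
```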