Hanlu Wu
2023
From Adversarial Arms Race to Model-centric Evaluation: Motivating a Unified Automatic Robustness Evaluation Framework
Yangyi Chen | Hongcheng Gao | Ganqu Cui | Lifan Yuan | Dehan Kong | Hanlu Wu | Ning Shi | Bo Yuan | Longtao Huang | Hui Xue | Zhiyuan Liu | Maosong Sun | Heng Ji
Findings of the Association for Computational Linguistics: ACL 2023
Textual adversarial attacks can discover models’ weaknesses by adding semantics-preserving but misleading perturbations to the inputs. The long-lasting adversarial attack-and-defense arms race in Natural Language Processing (NLP) is algorithm-centric, providing valuable techniques for automatic robustness evaluation. However, the existing practice of robustness evaluation may exhibit issues of incomprehensive evaluation, impractical evaluation protocols, and invalid adversarial samples. In this paper, we aim to set up a unified automatic robustness evaluation framework, shifting towards model-centric evaluation to further exploit the advantages of adversarial attacks. To address the above challenges, we first determine robustness evaluation dimensions based on model capabilities and specify a reasonable algorithm to generate adversarial samples for each dimension. Then we establish the evaluation protocol, including evaluation settings and metrics, under realistic demands. Finally, we use the perturbation degree of adversarial samples to control sample validity. We implement a toolkit, RobTest, that realizes our automatic robustness evaluation framework. In our experiments, we conduct a robustness evaluation of RoBERTa models to demonstrate the effectiveness of our evaluation framework, and further show the rationality of each component in the framework.
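The abstract mentions using the perturbation degree of adversarial samples to control their validity. The following is a minimal sketch of that idea, not the RobTest implementation: it measures a word-level perturbation degree and discards candidates that exceed a hypothetical `max_degree` budget.

```python
# Minimal sketch (assumption, not the RobTest code): filter adversarial
# candidates by the fraction of word positions that were perturbed.

def perturbation_degree(original: str, perturbed: str) -> float:
    """Fraction of word positions that differ between the two texts."""
    orig_tokens = original.split()
    pert_tokens = perturbed.split()
    length = max(len(orig_tokens), len(pert_tokens))
    if length == 0:
        return 0.0
    changed = sum(1 for o, p in zip(orig_tokens, pert_tokens) if o != p)
    changed += abs(len(orig_tokens) - len(pert_tokens))
    return changed / length


def filter_valid(original: str, candidates: list[str], max_degree: float = 0.15) -> list[str]:
    """Keep only candidates whose perturbation degree stays within the budget."""
    return [c for c in candidates if perturbation_degree(original, c) <= max_degree]


if __name__ == "__main__":
    src = "the movie was surprisingly good"
    cands = [
        "the film was surprisingly good",            # one word changed: within budget? 1/5 = 0.2
        "a totally different sentence altogether",   # far beyond any reasonable budget
    ]
    print(filter_valid(src, cands, max_degree=0.2))  # only the first candidate survives
```

The `max_degree` threshold here is illustrative; the paper's actual validity control may use a different perturbation measure and budget per evaluation dimension.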
2020
Unsupervised Reference-Free Summary Quality Evaluation via Contrastive Learning
Hanlu Wu | Tengfei Ma | Lingfei Wu | Tariro Manyumwa | Shouling Ji
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Evaluation of a document summarization system has been a critical factor in the success of the summarization task. Previous approaches, such as ROUGE, mainly consider the informativeness of the assessed summary and require human-generated references for each test summary. In this work, we propose to evaluate summary quality without reference summaries via unsupervised contrastive learning. Specifically, we design a new metric that covers both linguistic quality and semantic informativeness based on BERT. To learn the metric, for each summary we construct different types of negative samples with respect to different aspects of summary quality, and train our model with a ranking loss. Experiments on Newsroom and CNN/Daily Mail demonstrate that our new evaluation method outperforms other metrics even without reference summaries. Furthermore, we show that our method is general and transferable across datasets.
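The abstract describes training a BERT-based quality metric with a ranking loss over constructed negative samples. Below is a hedged sketch of that training signal, not the authors' released code: the model name, scoring head, margin, and the word-shuffling negative are all illustrative assumptions.

```python
# Hedged sketch: a BERT scorer trained with a margin ranking loss so that an
# intact summary scores higher than a corrupted negative sample.
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer


class SummaryScorer(nn.Module):
    def __init__(self, model_name: str = "bert-base-uncased"):  # assumed checkpoint
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.head = nn.Linear(self.encoder.config.hidden_size, 1)

    def forward(self, **inputs) -> torch.Tensor:
        # Score each summary from its [CLS] representation.
        cls = self.encoder(**inputs).last_hidden_state[:, 0]
        return self.head(cls).squeeze(-1)


tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
scorer = SummaryScorer()
loss_fn = nn.MarginRankingLoss(margin=0.1)  # margin is an illustrative choice

positive = ["The council approved the budget after a two-hour debate."]
negative = ["After the approved budget council a debate two-hour the."]  # shuffled word order

pos_inputs = tokenizer(positive, return_tensors="pt", padding=True, truncation=True)
neg_inputs = tokenizer(negative, return_tensors="pt", padding=True, truncation=True)

pos_score = scorer(**pos_inputs)
neg_score = scorer(**neg_inputs)
# Target +1: the intact summary should be ranked above its corrupted version.
loss = loss_fn(pos_score, neg_score, torch.ones_like(pos_score))
loss.backward()
```

In the paper, different negative-sample types target different quality aspects (e.g., fluency vs. informativeness); the single shuffled negative here only stands in for that construction.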