Data-driven pre-trained language models often engage in shortcut learning, relying on spurious correlations between the data and the ground truth. This reliance can undermine model robustness and generalization. Data augmentation emerges as a promising solution: by integrating anti-shortcut data into the training set, shortcut-induced biases can be mitigated. However, existing methods face three challenges: 1) manually defined shortcuts are tailored to particular datasets, restricting generalization; 2) the inherent confirmation bias during model training hampers the effectiveness of data augmentation; 3) insufficient exploration of the relationship between model performance and the quantity of augmented data may result in excessive data consumption. To tackle these challenges, we propose Smart Data Augmentation based on Large Language Models (SAug-LLM), which leverages LLMs to autonomously identify shortcuts and generate their anti-shortcut counterparts. In addition, dual validation is employed to mitigate confirmation bias during model retraining. Furthermore, the data augmentation process is optimized to effectively rectify model biases while minimizing data consumption. We validate the effectiveness and generalization of our method through extensive experiments across various natural language processing tasks, demonstrating an average performance improvement of 5.61%.
“Stories contain a wealth of social, physical, and other commonsense knowledge, and they also convey profound morals, making them important vehicles for knowledge dissemination, cultural transmission, and the shaping of values. Story understanding is an important task in NLP. In recent years, researchers have extensively evaluated and analyzed the language understanding abilities of large language models (LLMs). However, because most existing story understanding datasets consist of entity-type questions whose answers appear directly in the text, evaluation and analysis of LLMs' story understanding abilities remain very limited. To this end, this paper constructs CRMUS, a fable understanding dataset, and designs two tasks based on the cognitive process of human story understanding (first performing commonsense reasoning, then grasping the story's moral) to evaluate the corresponding model abilities. Using the CRMUS dataset, we evaluate several representative LLMs and find that LLMs can already understand and reason about commonsense in stories fairly well, but there is still substantial room for improvement in understanding story morals. In addition, we analyze the quality of the dataset using Item Response Theory (IRT), showing that it is of high quality and can effectively evaluate LLMs.”
“This paper provides a comprehensive review of the CCL24-Eval Task 8: Commonsense Reasoning and Moral Understanding in Children's Stories (CRMUS). The task comprises two sub-tasks, which aim to assess the commonsense reasoning and implicit meaning comprehension capabilities of Large Language Models (LLMs). We received registration forms from 33 teams, 15 of which submitted final results that exceeded the baseline score. We present the results of the top 5 teams and our analysis of these results.”
“The Chinese Gaokao reading comprehension adversarial robustness evaluation task aims to improve the robustness of machine reading comprehension models in complex, realistic adversarial environments. The task designed four adversarial attack strategies (keyword perturbation, reasoning-logic perturbation, spatiotemporal-attribute perturbation, and causal-relation perturbation) and constructed an adversarial robustness subset, GCRC advRobust. Given a passage and a question, systems must select the correct answer from four options. The evaluation attracted wide attention from both industry and academia, with 29 teams registering; however, owing to its difficulty, only 8 teams submitted results. All technical information about this task, including system submissions, official results, and links to supporting resources and software, is available on the task website.”