2025
What’s the most important value? INVP: INvestigating the Value Priorities of LLMs through Decision-making in Social Scenarios
Xuelin Liu | Pengyuan Liu | Dong Yu
Proceedings of the 31st International Conference on Computational Linguistics
As large language models (LLMs) demonstrate impressive performance on various tasks and are increasingly integrated into decision-making processes, ensuring they align with human values has become crucial. This paper highlights that value priorities, the relative importance of different values, play a pivotal role in decision-making. To explore value priorities in LLMs, this paper introduces INVP, a framework for INvestigating Value Priorities through decision-making in social scenarios. The framework encompasses binary decision-making in social scenarios, covering both individual and collective decision-making contexts, and builds on Schwartz’s value theory to construct value priorities. Using this framework, we construct a dataset containing 1,613 scenarios and 3,226 decisions across 283 topics. We evaluate seven popular LLMs, and the experimental results reveal commonalities in value priorities across different LLMs, such as an emphasis on Universalism and Benevolence, while Power and Hedonism are typically given lower priority. This study provides fresh insights into understanding and enhancing the moral and value alignment of LLMs when they make complex social decisions.
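The abstract leaves the aggregation step implicit; one plausible reading is a win-rate tally over Schwartz's ten basic values across binary decisions. A minimal sketch, with the helper function and toy data invented for illustration (not the authors' code):

```python
from collections import Counter

SCHWARTZ_VALUES = [
    "Self-Direction", "Stimulation", "Hedonism", "Achievement", "Power",
    "Security", "Conformity", "Tradition", "Benevolence", "Universalism",
]

def rank_value_priorities(decisions):
    """decisions: list of (chosen_value, rejected_value) pairs from binary scenarios."""
    wins, appearances = Counter(), Counter()
    for chosen, rejected in decisions:
        wins[chosen] += 1
        appearances[chosen] += 1
        appearances[rejected] += 1
    # Win rate normalizes for how often each value appears in the scenarios.
    win_rate = {v: wins[v] / appearances[v] for v in SCHWARTZ_VALUES if appearances[v]}
    return sorted(win_rate.items(), key=lambda kv: kv[1], reverse=True)

# Toy usage with hypothetical model decisions:
toy = [("Universalism", "Power"), ("Benevolence", "Hedonism"), ("Universalism", "Hedonism")]
for value, rate in rank_value_priorities(toy):
    print(f"{value}: {rate:.2f}")
```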
Investigating Value-Reasoning Reliability in Small Large Language Models
Xia Du | Shuhan Sun | Pengyuan Liu | Dong Yu
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Although small large language models (sLLMs) have been widely deployed in practical applications, little attention has been paid to their value-reasoning abilities, particularly their reasoning reliability. To address this gap, we propose a systematic evaluation framework for assessing the value-reasoning reliability of sLLMs. We define value-reasoning reliability as comprising: (1) output consistency under identical prompts, (2) output robustness under semantically equivalent prompts, (3) stable value reasoning in the face of attacks, and (4) consistency of value reasoning in open-ended value expression tasks. Our framework includes three core tasks: a repetition consistency task, an interaction stability task, and an open-ended expression consistency task. We further incorporate self-reported confidence scores to evaluate a model’s value-reasoning reliability from two perspectives: the model’s self-awareness of its values, and its value-based decision-making. Our findings show that models vary significantly in their stability when responding to value-related questions. Moreover, we observe considerable output randomness, which is not always correlated with self-reported confidence or expressed value preferences. This suggests that current models lack a reliable internal mechanism for stable value reasoning when addressing value-sensitive queries.
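As an illustration of the first two reliability notions, the following sketch scores repetition consistency and paraphrase robustness from repeated model answers; the function names and toy outputs are hypothetical, not the paper's implementation:

```python
from collections import Counter

def repetition_consistency(answers):
    """Fraction of repeated runs (identical prompt) that agree with the majority answer."""
    counts = Counter(answers)
    majority = counts.most_common(1)[0][1]
    return majority / len(answers)

def robustness(answer_sets):
    """Agreement of majority answers across semantically equivalent paraphrases.
    answer_sets: one list of sampled answers per paraphrase."""
    majorities = [Counter(a).most_common(1)[0][0] for a in answer_sets]
    counts = Counter(majorities)
    return counts.most_common(1)[0][1] / len(majorities)

# Toy usage with hypothetical model outputs:
print(repetition_consistency(["A", "A", "B", "A", "A"]))  # 0.8
print(robustness([["A", "A"], ["A", "B"], ["B", "B"]]))   # ~0.67
```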
Attribution and Application of Multiple Neurons in Multimodal Large Language Models
Feiyu Wang | Ziran Zhao | Dong Yu | Pengyuan Liu
Findings of the Association for Computational Linguistics: EMNLP 2025
Multimodal Large Language Models (MLLMs) have demonstrated exceptional performance across various tasks. However, the internal mechanisms by which they interpret and integrate cross-modal information remain insufficiently understood. To address the limitations of prior studies, which could only identify neurons corresponding to single tokens and relied on the LLM’s vocabulary, we propose a novel method to identify multimodal neurons in Transformer-based MLLMs. We then introduce fuzzy set theory to model the complex relationship between neurons and semantic concepts and to characterize how multiple neurons collaboratively contribute to semantic concepts. Through both theoretical analysis and empirical validation, we demonstrate the effectiveness of our method and present several meaningful findings. Furthermore, by modulating neuron activation values based on the constructed fuzzy sets, we improve performance on the Visual Question Answering (VQA) task, showing the practical value of our approach in downstream MLLM applications.
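The fuzzy-set view can be made concrete with a small sketch: treat each concept as a fuzzy set over neurons with membership degrees in [0, 1], then scale high-membership activations. The attribution normalization and scaling rule below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def fuzzy_membership(attribution_scores):
    """Normalize per-neuron attribution scores for a concept into [0, 1]."""
    s = np.asarray(attribution_scores, dtype=float)
    s = np.clip(s, 0.0, None)                  # keep only positive contributions
    return s / s.max() if s.max() > 0 else s   # membership degrees of the fuzzy set

def modulate_activations(activations, membership, alpha=1.5):
    """Scale each neuron in proportion to its membership in the concept's fuzzy set."""
    return activations * (1.0 + (alpha - 1.0) * membership)

# Toy usage: 6 neurons, hypothetical attributions for one semantic concept.
acts = np.array([0.2, 1.1, -0.3, 0.8, 0.05, 0.6])
mem = fuzzy_membership([0.1, 0.9, 0.0, 0.7, 0.0, 0.3])
print(modulate_activations(acts, mem))
```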
2024
文本样式和主题框架引导下的大模型辅助儿童新闻生成(Text Styles and Thematic Framework Guided Large Modeling to Aid Children’s News Generation)
Xiaomeng Du (杜晓蒙) | Dong Yu (于东) | Pengyuan Liu (刘鹏远)
Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 1: Main Conference)
Mainstream news content is written mainly for adults, making it hard for children to understand and failing to meet their reading needs. To address this, we propose a topic-based discourse-structure framework for children's news (TNC-LLM). The framework integrates two core modules: Text Style Definition (TSD) and Topic Category Definition (TCD). The TSD module applies several machine learning algorithms to analyze features such as text style and paragraph layout at different granularities; the TCD module performs content analysis for different topics to reveal the writing characteristics and content tendencies of children's news, ensuring that the content is educational and age-appropriate. Our experiments evaluate four models, including ChatGPT3.5, on converting adult news into child-oriented news. The results show that TNC-LLM yields significant improvements on key dimensions of children's news generation, including content accuracy, textual interestingness, and educational value. Moreover, the framework is general and can be applied to different types of large language models.
中西谚语多元价值观资源库建设及对比研究(The construction and comparative study of the resource library of Chinese and Western proverbs and multiple values)
Xia Du (杜霞) | Pengyuan Liu (刘鹏远) | Dong Yu (于东)
Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 1: Main Conference)
Chinese and Western proverbs are the crystallization of Chinese and Western cultures and embody the most fundamental values of each. However, there is currently no resource covering the values expressed in Chinese and Western proverbs, which makes comprehensive research, and especially quantitative comparative research, difficult. This paper therefore designs a multi-dimensional value system covering motivations and needs, shared and culture-specific values, value judgments, and usage scenarios. Based on this system, we construct a resource library of Chinese and Western proverbs annotated with multiple values and conduct an investigation and comparative analysis. We find that Chinese and Western proverbs are similar in value judgments, usage scenarios, and some values, while each is distinctive in how specific connotations are expressed.
大语言模型开放性生成文本中的职业性别偏见研究(Occupational Gender Bias in Open-ended Text Generated by Large Language Models)
Xu Zhang (张旭) | Mengqing Guo (郭梦清) | Shucheng Zhu (朱述承) | Dong Yu (于东) | Ying Liu (刘颖) | Pengyuan Liu (刘鹏远)
Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 1: Main Conference)
Since their advent, large language models have achieved remarkable performance on many natural language processing tasks, but their potential safety and fairness issues have also drawn attention; in particular, generated text may contain bias and discrimination against groups defined by occupation, gender, and other attributes. Using two forms of gender representation, we construct explicit and implicit "gender + occupation" prompts, prompt large language models to generate open-ended text, and analyze the bias of the generated text along three dimensions: sentiment polarity, lexical richness, and offensiveness. We evaluate and compare intersectional occupation-gender bias, both explicit and implicit, in traditional models and in large language models represented by ChatGPT. The results show that, compared with single-dimension occupation or gender identity information, more complex intersectional occupation-gender identity information reduces bias in ChatGPT's generations: sentiment polarity becomes more neutral and lexical richness increases. ChatGPT also shows differing attitudes toward different occupation-gender identities, with higher sentiment polarity for creative occupations such as investigative and artistic ones, and lower sentiment polarity for people-facing occupations such as clerical and managerial ones. In addition, compared with the earlier GPT-2, ChatGPT improves in both generation ability and bias mitigation: under various combined identity prompts its generations are more positive and diverse, with significantly less offensive content.
基于领域信息分解式学习的大语言模型修辞认知增强方法(Method for Enhancing Rhetorical Cognition of Large Language Models Based on Decomposed Learning of Field Information)
Wen Wang (王雯) | Dong Yu (于东) | Pengyuan Liu (刘鹏远)
Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 1: Main Conference)
Chinese rhetorical devices are diverse and conceptually heterogeneous, and large language models show deficits in recognizing some of them. To address this problem, this paper studies how to enhance the rhetorical cognition of large language models and explores its relationship with rhetorical recognition performance. We propose the QAKAG framework, which first introduces the idea of decomposed learning of information, detecting the rhetorical cognition deficits of large language models through question answering, and then explores the optimal information supplementation mechanism with four different knowledge combination strategies, thereby enhancing the models' rhetorical cognition. We construct MCRSD, a multi-category Chinese rhetorical sentence dataset, and MCRKB, a rhetorical knowledge base, and conduct experiments on six large language models including ChatGPT4, verifying the effectiveness of QAKAG in enhancing rhetorical cognition and the necessity of each of its stages. The results show that, enhanced by QAKAG, the six models' average F1 on multi-category rhetorical recognition improves by 22.1% over answering recognition questions directly, outperforming the Zero-shot-CoT, RAG-BaiKe, and Few-Shot5 prompting strategies.
Enhancing Free-Form Table Question Answering Models by Distilling Relevant-Cell-Based Rationales
Zhiyu Yang | Shuo Wang | Yukun Yan | Pengyuan Liu | Dong Yu
Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 1: Main Conference)
Free-form table question answering is a challenging task: tables contain structured contents compared to plain texts, so high-level reasoning abilities are required to identify the cells that are relevant to the question and to produce a correct and faithful answer based on their relations. Large language models (LLMs) have exhibited remarkable reasoning capabilities in numerous NLP applications. However, on some specific tasks, specially-trained small models can still outperform LLMs, and small models require far less computation. To leverage the strengths of both types of models, we propose a Relevant-Cell-based Knowledge Distillation with inference-time Teacher Guidance (RCKD-TG) method. This approach combines small free-form table question answering models' ability to learn from human annotations with large language models' ability to reason effectively over table contents, by applying relevant-cell-based rationales distilled from LLMs to the small models' training and inference stages. Our experiments demonstrate the superiority of our method over vanilla small models in correctness, faithfulness, adequacy and fluency, and over general LLMs in adhering to the style of human annotations. We achieve state-of-the-art performance on FeTaQA, a representative free-form table question answering benchmark. Our result of a 41.3 BLEU score demonstrates the feasibility of combining small models' task-specific abilities and LLMs' reasoning capabilities. Additionally, our method exhibits high computation and data efficiency, achieving better performance than strong baselines with significantly less training data.
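One way to picture rationale-based distillation is to splice the teacher's relevant-cell rationale into the student's input at both training and inference time. The sketch below is an assumption about the mechanics, with `build_input` and the toy table invented for illustration rather than taken from the released RCKD-TG code:

```python
def linearize_table(table):
    """table: list of dict rows -> flat string a small seq2seq model can consume."""
    header = " | ".join(table[0].keys())
    rows = [" | ".join(str(v) for v in row.values()) for row in table]
    return header + " ; " + " ; ".join(rows)

def build_input(question, table, rationale):
    """rationale: relevant cells plus short reasoning, distilled from a teacher LLM."""
    return (f"question: {question} "
            f"rationale: {rationale} "
            f"table: {linearize_table(table)}")

# Toy usage:
table = [{"player": "Ann", "goals": 3}, {"player": "Bo", "goals": 5}]
rationale = "relevant cells: (Bo, goals=5); Bo has the most goals."
print(build_input("Who scored the most goals?", table, rationale))
```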
Generate-then-Revise: An Effective Synthetic Training Data Generation Framework For Event Detection Retrieval
Huidong Du | Hao Sun | Pengyuan Liu | Dong Yu
Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 1: Main Conference)
Large language models (LLMs) struggle with event detection (ED) due to the structured output and the variable number of events. Existing supervised approaches rely on large amounts of manually annotated corpora and face challenges in practice when event types are diverse and annotated data is scarce. We propose Generate-then-Revise (GtR), a framework that leverages LLMs in the opposite direction to address these challenges in ED. GtR uses an LLM to generate high-quality training data in three stages, including a novel data revision step that minimizes noise in the synthetic data. The generated data is then used to train a smaller model for evaluation. Our approach demonstrates significant improvements on low-resource ED. We further analyze the generated data, highlighting the potential of synthetic data generation for enhancing ED performance.
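The three-stage pipeline can be sketched as follows; `llm` is a placeholder callable mapping a prompt string to a response string, and the prompts are invented for illustration, not the paper's:

```python
def generate_then_revise(llm, event_types, n_per_type=50):
    """llm: placeholder callable, prompt string -> response string."""
    examples = []
    for event_type in event_types:
        for _ in range(n_per_type):
            # Stage 1: generate a candidate passage mentioning this event type.
            passage = llm(f"Write one short news sentence describing a "
                          f"'{event_type}' event.")
            # Stage 2: annotate the trigger span in the passage.
            labeled = llm(f"Mark the trigger word for '{event_type}' in: {passage}")
            # Stage 3: revise -- check the annotation and fix it if noisy.
            revised = llm(f"Verify this trigger annotation and correct it "
                          f"if wrong: {labeled}")
            examples.append((event_type, revised))
    return examples  # synthetic training data for a smaller ED model
```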
人类思维指导下大小模型协同决策的中文修辞识别与理解方法 (Chinese Rhetoric Recognition and Understanding via Large-Small Model Collaborative Decision-Making Guided by Human Thinking)
Wen Wang (王雯) | Siyi Tang (汤思怡) | Dong Yu (于东) | Pengyuan Liu (刘鹏远)
Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 3: Evaluations)
CCL24-Eval Task 6 presents a multi-level, fine-grained task of rhetoric recognition and understanding in primary and secondary school essays. Tailored to the task, this paper proposes a Chinese rhetoric recognition and understanding method in which large and small models make decisions collaboratively under the guidance of human thinking. Following the way humans approach rhetoric recognition and understanding, the method redefines the order of the subtasks and selects a large or small language model for each step, so that every step reaches a local optimum and the overall task reaches its best result through these local optima. The results show that the proposed method recognizes and understands rhetoric effectively, improving over the baseline by 13.54, 4.03, and 57.11 on the three tracks, respectively.
System Report for CCL24-Eval Task 9: Bridging the Gap between Authentic and Answer-Guided Images for Chinese Vision-Language Understanding Enhancement
Feiyu Wang | Wenyu Guo | Dong Yu | Chen Kang | Pengyuan Liu
Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 3: Evaluations)
The objective of the Chinese Vision-Language Understanding Evaluation (CVLUE) is to comprehensively assess the performance of Chinese vision-language multimodal pre-trained models in multimodal modeling and understanding across four tasks: Image-Text Retrieval, Visual Question Answering, Visual Grounding, and Visual Dialog. To enhance the models' performance across these multimodal tasks, this paper proposes a multimodal information understanding enhancement method based on answer-guided images. Firstly, we propose task-specific methods for answer-guided image generation. Secondly, the authentic and answer-guided images are fed into the model for multimodal fine-tuning, respectively. Finally, training objectives are set for different tasks to minimize the gap between the answer-guided and authentic images, thereby using the answer-guided images to supervise the results produced from the authentic images. The experimental results demonstrate the effectiveness of the proposed method.
MatPlotAgent: Method and Evaluation for LLM-Based Agentic Scientific Data Visualization
Zhiyu Yang | Zihan Zhou | Shuo Wang | Xin Cong | Xu Han | Yukun Yan | Zhenghao Liu | Zhixing Tan | Pengyuan Liu | Dong Yu | Zhiyuan Liu | Xiaodong Shi | Maosong Sun
Findings of the Association for Computational Linguistics: ACL 2024
Scientific data visualization plays a crucial role in research by enabling the direct display of complex information and assisting researchers in identifying implicit patterns. Despite its importance, the use of Large Language Models (LLMs) for scientific data visualization remains rather unexplored. In this study, we introduce MatPlotAgent, an efficient model-agnostic LLM agent framework designed to automate scientific data visualization tasks. Leveraging the capabilities of both code LLMs and multi-modal LLMs, MatPlotAgent consists of three core modules: query understanding, code generation with iterative debugging, and a visual feedback mechanism for error correction. To address the lack of benchmarks in this field, we present MatPlotBench, a high-quality benchmark consisting of 100 human-verified test cases. Additionally, we introduce a scoring approach that utilizes GPT-4V for automatic evaluation. Experimental results demonstrate that MatPlotAgent can improve the performance of various LLMs, including both commercial and open-source models. Furthermore, the proposed evaluation method shows a strong correlation with human-annotated scores.
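The loop the abstract describes (query understanding, code generation with iterative debugging, visual feedback) translates naturally into a short agent skeleton. The sketch below is illustrative only; `code_llm`, `vision_llm`, and `run_and_render` are hypothetical callables, not MatPlotAgent's actual API:

```python
def matplot_agent(query, code_llm, vision_llm, run_and_render, max_rounds=3):
    """All callables are placeholders: code_llm/vision_llm map prompts to text;
    run_and_render executes plotting code and returns (image, error)."""
    # Query understanding: expand the request into a precise plotting plan.
    plan = code_llm(f"Expand this visualization request into precise steps: {query}")
    code = code_llm(f"Write matplotlib code for: {plan}")
    for _ in range(max_rounds):
        image, error = run_and_render(code)
        if error:  # iterative debugging on runtime errors
            code = code_llm(f"Fix this error:\n{error}\nCode:\n{code}")
            continue
        # Visual feedback: a multimodal LLM critiques the rendered figure.
        feedback = vision_llm(image, f"Does this plot satisfy the request: {query}?")
        if "looks correct" in feedback.lower():
            break
        code = code_llm(f"Revise the code per this feedback:\n{feedback}\nCode:\n{code}")
    return code
```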
Evaluating Moral Beliefs across LLMs through a Pluralistic Framework
Xuelin Liu | Yanfei Zhu | Shucheng Zhu | Pengyuan Liu | Ying Liu | Dong Yu
Findings of the Association for Computational Linguistics: EMNLP 2024
Proper moral beliefs are fundamental for language models, yet assessing these beliefs poses a significant challenge. This study introduces a novel three-module framework to evaluate the moral beliefs of four prominent large language models. Initially, we constructed a dataset containing 472 moral choice scenarios in Chinese, derived from moral words. The decision-making process of the models in these scenarios reveals their moral principle preferences. By ranking these moral choices, we discern the varying moral beliefs held by different language models. Additionally, through moral debates, we investigate how firmly these models hold to their moral choices. Our findings indicate that the English language models, namely ChatGPT and Gemini, closely mirror the moral decisions of a sample of Chinese university students, demonstrating strong adherence to their choices and a preference for individualistic moral beliefs. In contrast, Chinese models such as Ernie and ChatGLM lean towards collectivist moral beliefs, exhibiting ambiguity in their moral choices and debates. This study also uncovers gender bias embedded within the moral beliefs of all examined language models. Our methodology offers an innovative means to assess moral beliefs in both artificial and human intelligence, facilitating a comparison of moral values across different cultures.
2023
中国社会道德变化模型与发展动因探究——基于70年《人民日报》的计量与分析 (The Model of Moral Change and Motivation in Chinese Society ——The Vocabulary Analysis of the 70-year ”People’s Daily”)
Hongrui Wang (王弘睿) | Dong Yu (于东) | Pengyuan Liu (刘鹏远) | Liying Ceng (曾立英)
Proceedings of the 22nd Chinese National Conference on Computational Linguistics
The diachronic study of social morality is of great significance. Observing the diachronic connection between language use and moral change can help depict the trends and developmental patterns of social morality, track its dynamics, and advance moral development. There has so far been little systematic, comprehensive research on moral change that applies computational methods to large-scale diachronic corpora from a lexical perspective. This paper therefore proposes a diachronic quantitative model of moral topic words and uses quantitative indicators to analyze the People's Daily corpus over the 70 years from 1946 to 2015, observing the choices and changes in the use of moral topic words. The results reveal an interactive relationship between the diachronic use of moral vocabulary and social morality, reflecting the diachronic transformation and development of Chinese social morality over those 70 years.
动词视角下的汉语性别表征研究——基于多语体语料库与依存分析(Gendered Representation in Chinese via Verbal Analysis —Based on a Multi-register Corpus and Dependency Parsing)
Yingshi Chen (陈颖诗) | Dong Yu (于东) | Pengyuan Liu (刘鹏远)
Proceedings of the 22nd Chinese National Conference on Computational Linguistics
Actions are an important form of gender socialization; studying the gendered representation of verbs in Chinese can reveal the paths, i.e., the ways and forms, by which language constructs different gender identities. Using dependency syntactic relations, we extract from corpora of four registers the verbs that form dependency structures with gendered words, identify the verbs with significant gender differences, and conduct quantitative and qualitative analyses according to the syntactic role of the gendered word and the verb semantics. Overall, most Chinese verbs are gender-neutral and only a minority are gendered; as a language carrying Chinese wisdom and a deep cultural heritage, Chinese represents gender in a neutral and equal way, reflecting the concept of gender equality in China. Among the gendered verbs, two different paths of constructing male and female identities can be seen. Verbs that significantly represent women outnumber those that significantly represent men in all four registers, but the semantic distribution of the male-representing verbs is more balanced, reflecting a "male as default, female as specialized" pattern. Among judicial verbs, women often appear as victims of violent acts while the male perpetrators remain invisible, reflecting "male dominance, female subordination". Verbs in different registers play different roles in constructing gender: news shapes relatively traditional gender norms, while traditional and internet literature break established gender norms in different ways.
大规模语言模型增强的中文篇章多维度阅读体验量化研究(Quantitative Research on Multi-dimensional Reading Experience of Chinese Texts Enhanced by Large Language Model)
Jiadai Sun (孙嘉黛) | Siyi Tang (汤思怡) | Shike Wang (王诗可) | Dong Yu (于东) | Pengyuan Liu (刘鹏远)
Proceedings of the 22nd Chinese National Conference on Computational Linguistics
Existing research on graded reading mostly starts from text readability and recommends reading materials to readers in the form of discrete difficulty levels. There is still no framework for studying the multi-faceted, deep reading experiences that readers form during reading. We survey the reading experiences that readers have while reading Chinese texts and propose a quantification scheme for the multi-dimensional reading experience of Chinese texts. We summarize the continuous reading experiences that arise during reading into several categories, and on this basis construct a multi-dimensional reading experience dataset of Chinese texts. We also examine the ability of ChatGPT, built on a large language model, to quantify reading experience, and find that despite its strong information extraction and semantic understanding abilities, it performs poorly at this quantification. However, we find that the capabilities of large language models can assist the quantification of deep attributes through knowledge distillation. Based on this, we implement a large-language-model-enhanced model for quantifying the multi-dimensional reading experience of Chinese texts. The model achieves an average F1 of 0.72 across the reading-experience dimensions, higher than ChatGPT's few-shot result of 0.48.
Bridging the Gap between Synthetic and Authentic Images for Multimodal Machine Translation
Wenyu Guo | Qingkai Fang | Dong Yu | Yang Feng
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Multimodal machine translation (MMT) simultaneously takes the source sentence and a relevant image as input for translation. Since there is no paired image available for the input sentence in most cases, recent studies suggest utilizing powerful text-to-image generation models to provide image inputs. Nevertheless, synthetic images generated by these models often follow different distributions compared to authentic images. Consequently, using authentic images for training and synthetic images for inference can introduce a distribution shift, resulting in performance degradation during inference. To tackle this challenge, in this paper, we feed both synthetic and authentic images to the MMT model. Then we minimize the gap between the synthetic and authentic images by drawing together the input image representations at the Transformer encoder and the output distributions of the Transformer decoder. Therefore, we mitigate the distribution disparity introduced by the synthetic images during inference, thereby freeing the authentic images from the inference process. Experimental results show that our approach achieves state-of-the-art performance on the Multi30K En-De and En-Fr datasets, while remaining independent of authentic images during inference.
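A condensed sketch of the two-branch objective the abstract describes, under the assumption of a model that returns encoder image representations and decoder logits; the signature of `model` and the loss weights are invented for illustration:

```python
import torch
import torch.nn.functional as F

def mmt_gap_loss(model, src, tgt, img_real, img_synth, lam_enc=1.0, lam_dec=1.0):
    """model(src, img, tgt) -> (encoder image states, decoder logits); hypothetical API."""
    enc_real, logits_real = model(src, img_real, tgt)     # authentic-image branch
    enc_synth, logits_synth = model(src, img_synth, tgt)  # synthetic-image branch
    # Translation loss on the synthetic branch (the one available at inference).
    nll = F.cross_entropy(logits_synth.transpose(1, 2), tgt)
    # Draw the synthetic-image encoder states toward the authentic ones.
    l_enc = F.mse_loss(enc_synth, enc_real.detach())
    # Match the output distributions of the two decoder branches.
    l_dec = F.kl_div(F.log_softmax(logits_synth, -1),
                     F.softmax(logits_real.detach(), -1), reduction="batchmean")
    return nll + lam_enc * l_enc + lam_dec * l_dec
```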
2022
CoreValue:面向价值观计算的中文核心价值-行为体系及知识库(CoreValue: Chinese Core Value-Behavior Frame and Knowledge Base for Value Computing)
Pengyuan Liu (刘鹏远) | Sanle Zhang (张三乐) | Dong Yu (于东) | Lin Bo (薄琳)
Proceedings of the 21st Chinese National Conference on Computational Linguistics
Inferring an agent's values from its behavior is one prerequisite for artificial intelligence to understand and possess human values. In NLP and related fields, research has focused mainly on right-or-wrong judgments of values or morality in text; work that infers values from an agent's behavior is rare, and corresponding data resources are lacking. This paper first constructs a Chinese core value-behavior frame. Based on the Core Socialist Values, it consists of two parts: 1) a category frame comprising 8 classes of core values, further divided into 19 subclasses of bidirectional values corresponding to 38 classes of behavior; and 2) an element frame with 7 types of core and non-core elements. We then extract sentences containing agent behavior from corpora and annotate them according to this frame, building a fine-grained Chinese value-behavior knowledge base containing 6,994 behavior sentences with their fine-grained values and directions, and 34,965 elements. Finally, we propose value category classification, direction classification, and joint classification tasks and conduct experiments. The results show that methods based on pre-trained language models perform well on value direction classification, while there is considerable room for improvement on fine-grained value category classification and multi-label value category classification.
From Polarity to Intensity: Mining Morality from Semantic Space
Chunxu Zhao | Pengyuan Liu | Dong Yu
Proceedings of the 29th International Conference on Computational Linguistics
Most works on computational morality focus on moral polarity recognition, i.e., distinguishing right from wrong. However, a discrete polarity label is not informative enough to reflect morality as it does not contain any degree or intensity information. Existing approaches to compute moral intensity are limited to word-level measurement and heavily rely on human labelling. In this paper, we propose MoralScore, a weakly-supervised framework that can automatically measure moral intensity from text. It only needs moral polarity labels, which are more robust and easier to acquire. Besides, the framework can capture latent moral information not only from words but also from sentence-level semantics which can provide a more comprehensive measurement. To evaluate the performance of our method, we introduce a set of evaluation metrics and conduct extensive experiments. Results show that our method achieves good performance on both automatic and human evaluations.
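As a rough illustration of the weak-supervision idea (polarity labels in, graded scores out), one could reuse a polarity classifier's real-valued decision score as an intensity estimate; MoralScore's actual scoring over semantic space differs, and the texts and labels below are toy stand-ins:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["helping strangers in need", "stealing from a charity"]
polarity = [1, 0]  # 1 = morally right, 0 = morally wrong (toy labels)

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, polarity)

# Signed distance from the decision boundary acts as a graded intensity score,
# trained only from discrete polarity labels.
print(clf.decision_function(["donating blood", "minor littering"]))
```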
CLGC: A Corpus for Chinese Literary Grace Evaluation
Yi Li | Dong Yu | Pengyuan Liu
Proceedings of the Thirteenth Language Resources and Evaluation Conference
In this paper, we construct a Chinese literary grace corpus, CLGC, with 10,000 texts and more than 1.85 million tokens. Multi-level annotations are provided for each text in our corpus, including literary grace level, sentence category, and figure-of-speech type. Based on the corpus, we dig deep into the correlation between fine-grained features (semantic information, part-of-speech and figure-of-speech, etc.) and literary grace level. We also propose a new Literary Grace Evaluation (LGE) task, which aims at making a comprehensive assessment of the literary grace level of a text. In the end, we build classification models with machine learning algorithms (such as SVM and TextCNN) to demonstrate the effectiveness of our features and corpus for LGE. Our preliminary classification experiments achieve a weighted average F1-score of 79.71%.
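In the spirit of the paper's baseline experiments, a toy TF-IDF plus SVM classifier for literary grace could look as follows; the sentences and labels are invented stand-ins for CLGC entries, not samples from the corpus:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

texts = ["月光如流水一般,静静地泻在这一片叶子和花上。", "今天开会,内容如下。",
         "层层的叶子中间,零星地点缀着些白花。", "请各位准时参加。"]
labels = [1, 0, 1, 0]  # 1 = high literary grace, 0 = plain (toy labels)

# Character n-grams avoid the need for a word segmenter on Chinese text.
model = make_pipeline(TfidfVectorizer(analyzer="char", ngram_range=(1, 2)),
                      LinearSVC())
model.fit(texts, labels)
print(model.predict(["叶子底下是脉脉的流水。"]))
```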
2021
Importance-based Neuron Allocation for Multilingual Neural Machine Translation
Wanying Xie | Yang Feng | Shuhao Gu | Dong Yu
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)
Multilingual neural machine translation with a single model has drawn much attention due to its capability to deal with multiple languages. However, the current multilingual translation paradigm often makes the model tend to preserve the general knowledge, but ignore the language-specific knowledge. Some previous works try to solve this problem by adding various kinds of language-specific modules to the model, but they suffer from the parameter explosion problem and require specialized manual design. To solve these problems, we propose to divide the model neurons into general and language-specific parts based on their importance across languages. The general part is responsible for preserving the general knowledge and participating in the translation of all the languages, while the language-specific part is responsible for preserving the language-specific knowledge and participating in the translation of some specific languages. Experimental results on several language pairs, covering IWSLT and Europarl corpus datasets, demonstrate the effectiveness and universality of the proposed method.
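The allocation step can be pictured with a small sketch: score each neuron's importance per language, reserve the top-scoring neurons (averaged across languages) as the general part, and assign the remainder to the language where each matters most. The scoring and ratio below are illustrative, not the paper's exact procedure:

```python
import numpy as np

def allocate_neurons(importance, general_ratio=0.5):
    """importance: [n_langs, n_neurons] per-language importance scores
    (e.g., loss change when a neuron is masked)."""
    mean_imp = importance.mean(axis=0)
    n_general = int(len(mean_imp) * general_ratio)
    general = set(np.argsort(-mean_imp)[:n_general].tolist())  # shared neurons
    specific = {}  # language index -> its language-specific neurons
    for neuron in range(importance.shape[1]):
        if neuron not in general:
            lang = int(importance[:, neuron].argmax())
            specific.setdefault(lang, []).append(neuron)
    return general, specific

# Toy usage: 3 languages, 8 neurons with random importance scores.
imp = np.random.rand(3, 8)
general, specific = allocate_neurons(imp)
print(sorted(general), specific)
```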
字里行间的道德:中文文本道德句识别研究(Morality Between the Lines: Research on Identification of Chinese Moral Sentence)
Shiya Peng (彭诗雅) | Chang Liu (刘畅) | Yayue Deng (邓雅月) | Dong Yu (于东)
Proceedings of the 20th Chinese National Conference on Computational Linguistics
With the development of artificial intelligence, more and more research has turned to AI ethics. In NLP, automatic morality recognition, an important task for analyzing morality in text, has recently attracted researchers' attention. The task aims to identify moral segments in text and is significant for morality-related downstream tasks in natural language processing, such as bias detection and mitigation and identifying implicit discrimination in models. Compared with English, research on morality recognition for Chinese has progressed slowly, mainly because no sizable Chinese moral dataset has been available. To address this, we annotate moral sentences in a Chinese corpus and make an initial exploration of recognizing them. We first construct the first Chinese moral sentence dataset at the 100,000-sentence scale, and then apply several popular machine learning methods to the task of recognizing Chinese moral sentences. In addition, we explore methods assisted by external knowledge to investigate the task further.
TenTrans Large-Scale Multilingual Machine Translation System for WMT21
Wanying Xie | Bojie Hu | Han Yang | Dong Yu | Qi Ju
Proceedings of the Sixth Conference on Machine Translation
This paper describes the TenTrans large-scale multilingual machine translation system for WMT 2021. We participate in Small Track 2, covering five South East Asian languages plus English in thirty translation directions: Javanese, Indonesian, Malay, Tagalog, Tamil, and English. We mainly utilized forward/back-translation, in-domain data selection, knowledge distillation, and gradual fine-tuning from the pre-trained model FLORES-101. We find that forward/back-translation significantly improves the translation results, data selection and gradual fine-tuning are particularly effective for domain adaptation, while knowledge distillation brings a slight performance improvement. Model averaging is also used to further improve translation performance on top of these systems. Our final system achieves an average BLEU score of 28.89 across the thirty directions on the test set.
2020
面向人工智能伦理计算的中文道德词典构建方法研究(Construction of a Chinese Moral Dictionary for Artificial Intelligence Ethical Computing)
Hongrui Wang (王弘睿) | Chang Liu (刘畅) | Dong Yu (于东)
Proceedings of the 19th Chinese National Conference on Computational Linguistics
Building moral dictionary resources is a research focus of ethical computing for artificial intelligence. Because moral behavior is complex and diverse, the taxonomies of existing English moral dictionaries are incomplete, and no comparable Chinese dictionary resource yet exists; the theoretical framework and construction method remain to be explored. To address these problems, this paper proposes the task of constructing a Chinese moral dictionary for AI ethical computing, designs four classes of labels and four types, and produces a Chinese moral dictionary containing 25,012 words. Experimental results show that the dictionary not only enables machines to learn moral knowledge and to judge the moral label and type of a word, but also provides data support for sentence-level moral text analysis.
结合深度学习和语言难度特征的句子可读性计算方法(The method of calculating sentence readability combined with deep learning and language difficulty characteristics)
Yuling Tang (唐玉玲) | Dong Yu (于东)
Proceedings of the 19th Chinese National Conference on Computational Linguistics
This paper proposes an improved method for constructing readability corpora and uses it to build a larger Chinese sentence readability corpus. On the task of assessing absolute sentence difficulty, the corpus achieves an accuracy of 0.7869, more than 0.15 higher than previous work, demonstrating the effectiveness of the improved method. We apply deep learning methods to Chinese readability assessment, investigate the ability of different deep learning methods to capture difficulty features automatically, and further examine how incorporating linguistic difficulty features at different levels into the deep learning features affects overall model performance. Experimental results show that different deep learning models differ in their ability to capture difficulty features, and that linguistic difficulty features improve the difficulty representation of deep learning models to varying degrees.
Token-level Adaptive Training for Neural Machine Translation
Shuhao Gu | Jinchao Zhang | Fandong Meng | Yang Feng | Wanying Xie | Jie Zhou | Dong Yu
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
There exists a token imbalance phenomenon in natural language, as different tokens appear with different frequencies, which leads to different learning difficulties for tokens in Neural Machine Translation (NMT). The vanilla NMT model usually adopts a trivial equal-weighted objective for target tokens with different frequencies and tends to generate more high-frequency tokens and fewer low-frequency tokens compared with the golden token distribution. However, low-frequency tokens may carry critical semantic information that will affect the translation quality once they are neglected. In this paper, we explore target token-level adaptive objectives based on token frequencies to assign appropriate weights to each target token during training. We aim for meaningful but relatively low-frequency words to be assigned larger weights in the objective, encouraging the model to pay more attention to these tokens. Our method yields consistent improvements in translation quality on ZH-EN, EN-RO, and EN-DE translation tasks, especially on sentences that contain more low-frequency tokens, where we obtain BLEU increases of 1.68, 1.02, and 0.52 over the baseline, respectively. Further analyses show that our method can also improve the lexical diversity of translation.
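A small sketch of the mechanics of frequency-based token weighting in the NMT objective; the paper explores specific weighting functions, whereas the power-law weight and names below are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def adaptive_token_loss(logits, targets, token_freq, pad_id=0, alpha=0.5):
    """logits: [batch, len, vocab]; targets: [batch, len] token ids;
    token_freq: [vocab] tensor of corpus frequencies. Rarer tokens get larger weights."""
    weights = (1.0 / (token_freq.float() + 1.0)) ** alpha  # monotone in rarity
    weights = weights / weights.mean()                     # keep the loss scale stable
    nll = F.cross_entropy(logits.transpose(1, 2), targets,
                          reduction="none")                # per-token loss [batch, len]
    w = weights[targets] * (targets != pad_id).float()     # weight and mask padding
    return (nll * w).sum() / w.sum()
```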
SHIKEBLCU at SemEval-2020 Task 2: An External Knowledge-enhanced Matrix for Multilingual and Cross-Lingual Lexical Entailment
Shike Wang | Yuchen Fan | Xiangying Luo | Dong Yu
Proceedings of the Fourteenth Workshop on Semantic Evaluation
Lexical entailment recognition plays an important role in tasks like Question Answering and Machine Translation. As important branches of lexical entailment, predicting multilingual and cross-lingual lexical entailment (LE) are two subtasks of SemEval-2020 Task 2. In previous monolingual LE studies, researchers leverage external linguistic constraints to transform word embeddings for the LE relation. In our system, we expand the number of external constraints in multiple languages to obtain more specialised multilingual word embeddings. For the cross-lingual subtask, we apply a bilingual word embedding mapping method in the model. The mapping method takes specialised embeddings as inputs and retains the embeddings' LE features after mapping. Our results on the multilingual subtask are about 20% and 10% higher than the baseline on graded and binary prediction, respectively.
BLCU-NLP at SemEval-2020 Task 5: Data Augmentation for Efficient Counterfactual Detecting
Chang Liu | Dong Yu
Proceedings of the Fourteenth Workshop on Semantic Evaluation
Counterfactuals describe events counter to facts and hence naturally involve common sense, knowledge, and reasoning; SemEval 2020 Task 5 focuses on this field. We participate in subtask 1, using BERT as our system. Our innovations are feature extraction and data augmentation. We extract and summarize features of counterfactual statements, augment counterfactual examples in the training set with the help of these features, and experiment with two general data augmentation methods. We demonstrate the effectiveness of our approaches, achieving an F1 of 0.95 on subtask 1 while fine-tuning the BERT model on only a subset of the given training set; our official submission achieves an F1 of 0.802, ranking 16th in the competition.
2019
BLCU-NLP at COIN-Shared Task1: Stagewise Fine-tuning BERT for Commonsense Inference in Everyday Narrations
Chunhua Liu | Dong Yu
Proceedings of the First Workshop on Commonsense Inference in Natural Language Processing
This paper describes our system for COIN Shared Task 1: Commonsense Inference in Everyday Narrations. To inject more external knowledge to better reason over the narrative passage, question, and answer, the system adopts a stagewise fine-tuning method based on the pre-trained BERT model. More specifically, the first stage fine-tunes on an additional machine reading comprehension dataset to learn more commonsense knowledge. The second stage fine-tunes on the target task (MCScript2.0), assisted by the MCScript (2018) dataset. Experimental results show that our system achieves significant improvements over the baseline systems, with 84.2% accuracy on the official test dataset.
BLCU_NLP at SemEval-2019 Task 7: An Inference Chain-based GPT Model for Rumour Evaluation
Ruoyao Yang | Wanying Xie | Chunhua Liu | Dong Yu
Proceedings of the 13th International Workshop on Semantic Evaluation
Researchers have been paying increasing attention to rumour evaluation due to the rapid spread of unsubstantiated rumours on social media platforms, an interest reflected in SemEval 2019 Task 7. However, labelled data for learning rumour veracity is scarce, and labels in rumour stance data are highly disproportionate, making it challenging for a model to learn adequately under supervision. We propose an inference chain-based system, which fully utilizes conversation structure-based knowledge in the limited data and expands the training data in minority categories to alleviate class imbalance. Our approach obtains a 12.6% improvement over the baseline system on subtask A, ranking 1st among 21 systems in subtask A and 4th among 12 systems in subtask B.
BLCU_NLP at SemEval-2019 Task 8: A Contextual Knowledge-enhanced GPT Model for Fact Checking
Wanying Xie | Mengxi Que | Ruoyao Yang | Chunhua Liu | Dong Yu
Proceedings of the 13th International Workshop on Semantic Evaluation
As Community Question Answering resources grow and information sharing becomes universal, it is increasingly difficult for questioners to find factual information among massive numbers of messages; SemEval 2019 Task 8 focuses on these issues. We participate in the task and use the Generative Pre-trained Transformer (OpenAI GPT) as our system. Our innovations are data extension, feature extraction, and input transformation. For contextual knowledge enhancement, we extend the training set of subtask A, use several features to improve the results of our system, and adapt the input formats to be more suitable for this task. We demonstrate the effectiveness of our approaches, which achieve accuracy of 81.95% on subtask A and 61.08% on subtask B of SemEval 2019 Task 8.
2018
BLCU_NLP at SemEval-2018 Task 12: An Ensemble Model for Argument Reasoning Based on Hierarchical Attention
Meiqian Zhao | Chunhua Liu | Lu Liu | Yan Zhao | Dong Yu
Proceedings of the 12th International Workshop on Semantic Evaluation
To comprehend an argument and fill the gap between claims and reasons, it is vital to find the implicit supporting warrants behind them. In this paper, we propose a hierarchical attention model to identify the right warrant which explains why the reason stands for the claim. Our model focuses not only on the similarity between warrants and other information but also on the contradictory part between two opposing warrants. In addition, we ensemble different models. Our system achieves an accuracy of 61%, ranking second in this task. Experimental results demonstrate that our model is effective at making correct choices.
DEMN: Distilled-Exposition Enhanced Matching Network for Story Comprehension
Chunhua Liu | Haiou Zhang | Shan Jiang | Dong Yu
Proceedings of the 32nd Pacific Asia Conference on Language, Information and Computation
2017
Semantic Frame Labeling with Target-based Neural Model
Yukun Feng | Dong Yu | Jian Xu | Chunhua Liu
Proceedings of the 6th Joint Conference on Lexical and Computational Semantics (*SEM 2017)
This paper explores automatically learning distributed representations of a target’s context for semantic frame labeling with a target-based neural model. We take the whole sentence as the model’s input, without extracting features from it. This differs from many previous works, in which local feature extraction around the targets is widely used. This constraint makes the task harder, especially with long sentences, but also makes our model easily applicable to a range of resources and other similar tasks. We evaluate our model on several resources and obtain the state-of-the-art result on subtask 2 of SemEval 2015 Task 15. Finally, we extend the approach to the word-sense disambiguation task and also achieve a strong result compared to state-of-the-art work.
2016
An End-to-end Approach to Learning Semantic Frames with Feedforward Neural Network
Yukun Feng | Yipei Xu | Dong Yu
Proceedings of the NAACL Student Research Workshop
2015
BLCUNLP: Corpus Pattern Analysis for Verbs Based on Dependency Chain
Yukun Feng | Qiao Deng | Dong Yu
Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015)
2014
An Introduction to BLCU Personal Attributes Extraction System
Dong Yu | Cheng Yu | Qin Qu | Gongbo Tang | Chunhua Liu | Yue Tian | Jing Yi
Proceedings of the Third CIPS-SIGHAN Joint Conference on Chinese Language Processing