Dapeng Chen
2026
The Evolution of Philosophy: A Metaphorical Cognition Perspective
Rui Mao | Dapeng Chen | Zihao Huang | Xulang Zhang | Erik Cambria
Proceedings of the Fifteenth Language Resources and Evaluation Conference
We present a large-scale study of philosophical cognition through the lens of Conceptual Metaphor Theory. Using a computational metaphor processing system that extracts target concepts, source concepts, and concept mappings from a curated corpus of 50+ canonical texts (300k sentences) spanning ten schools from antiquity to the late twentieth century, we quantify how metaphor organizes philosophical argument. We model temporal dynamics with year-level cosine series, authorial neighborhoods with PCA projections, and school signatures with heatmaps of normalized frequencies. The study demonstrates that the history of philosophy is structured by stable cross-domain schemas that are selectively recombined to address new problems.
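The abstract mentions year-level cosine series and PCA projections of authorial neighborhoods over concept-mapping frequencies. The following is a minimal sketch of how such an analysis could be set up, not the paper's released code; the matrices `freq_by_year` and `freq_by_author` are hypothetical placeholders for normalized mapping-frequency profiles extracted by a metaphor processing system.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical inputs: rows are years (or authors), columns are normalized
# frequencies of concept mappings; random data stands in for real profiles.
freq_by_year = np.random.rand(25, 40)    # (num_years, num_mappings)
years = np.arange(1700, 1725)

# Year-level cosine series: similarity of each year's mapping profile
# to the immediately preceding year's profile.
cos_series = [
    cosine_similarity(freq_by_year[i - 1:i], freq_by_year[i:i + 1])[0, 0]
    for i in range(1, len(years))
]

# Authorial neighborhoods: project author-level frequency vectors to 2-D
# with PCA so that authors with similar metaphor profiles land nearby.
freq_by_author = np.random.rand(50, 40)  # (num_authors, num_mappings)
coords = PCA(n_components=2).fit_transform(freq_by_author)

print(len(cos_series), coords.shape)     # 24 similarity values, (50, 2) coordinates
```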
2024
Enhancing Semantic Consistency of Large Language Models through Model Editing: An Interpretability-Oriented Approach
Jingyuan Yang | Dapeng Chen | Yajing Sun | Rongjun Li | Zhiyong Feng | Wei Peng
Findings of the Association for Computational Linguistics: ACL 2024
A Large Language Model (LLM) tends to generate inconsistent, and sometimes contradictory, outputs when presented with a prompt that is semantically equivalent to, but phrased differently from, the original prompt. A key approach to achieving semantic consistency is to finetune the model on prompt-output pairs with semantically equivalent meanings. Despite its effectiveness, such data-driven finetuning incurs substantial computational costs in data preparation and model optimization, and it treats the LLM as a “black box”, restricting our ability to gain deeper insights into its internal mechanism. In this paper, we instead enhance the semantic consistency of LLMs through a more interpretable method, namely model editing. We first identify the model components (i.e., attention heads) that have a key impact on the semantic consistency of an LLM, and subsequently inject biases into the outputs of these components along the semantic-consistency activation direction. These modifications are cost-effective and do not require large-scale manipulation of the original model parameters. Through comprehensive experiments on constructed NLU and open-source NLG datasets, our method demonstrates significant improvements in both the semantic consistency and the task performance of LLMs, and it generalizes well to tasks beyond the primary ones.
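As a rough illustration of the editing idea described in the abstract, the sketch below adds a fixed bias along a "semantic-consistency" activation direction to the outputs of selected modules via forward hooks, leaving the original weights untouched. This is a hypothetical PyTorch sketch under assumed inputs, not the authors' implementation: `head_modules` (layer index to the module carrying head activations) and `directions` (unit vectors estimated from consistent vs. inconsistent prompt pairs) are assumptions.

```python
import torch

def add_consistency_bias(head_modules, directions, alpha=1.0):
    """Attach hooks that shift selected activations along a given direction.

    head_modules: dict mapping layer index -> nn.Module whose output holds
                  the attention-head activations to edit (assumed).
    directions:   dict mapping layer index -> 1-D tensor of shape (hidden_dim,)
                  giving the semantic-consistency direction (assumed).
    alpha:        scaling factor for the injected bias.
    """
    handles = []
    for layer_idx, module in head_modules.items():
        direction = directions[layer_idx]
        direction = direction / direction.norm()   # unit-normalize the direction

        def hook(mod, inputs, output, d=direction):
            # Returning a value from a forward hook replaces the module output:
            # here we add the bias along the consistency direction.
            return output + alpha * d.to(output.dtype).to(output.device)

        handles.append(module.register_forward_hook(hook))
    return handles  # call h.remove() on each handle to undo the edit
```

The appeal of this style of edit is that it touches only the forward pass of a handful of components, so it can be applied and removed cheaply without retraining or rewriting model weights.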