Yining Zheng


2025

How to Mitigate Overfitting in Weak-to-strong Generalization?
Junhao Shi | Qinyuan Cheng | Zhaoye Fei | Yining Zheng | Qipeng Guo | Xipeng Qiu
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Aligning powerful AI models on tasks that surpass human evaluation capabilities is the central problem of superalignment. To address this problem, weak-to-strong generalization aims to elicit the capabilities of strong models through weak supervisors and to ensure that the behavior of strong models aligns with the intentions of weak supervisors without unsafe behaviors such as deception. Although weak-to-strong generalization exhibits certain generalization capabilities, strong models overfit significantly in this setting: because of their strong fitting ability, erroneous labels from weak supervisors can lead strong models to overfit. In addition, simply filtering out incorrect labels may degrade question quality, resulting in weak generalization of strong models on hard questions. To mitigate overfitting in weak-to-strong generalization, we propose a two-stage framework that simultaneously improves the quality of supervision signals and the quality of input questions. Experimental results on three series of large language models and two mathematical benchmarks demonstrate that our framework significantly improves PGR (Performance Gap Recovered) compared to naive weak-to-strong generalization, even achieving up to 100% PGR on some models.
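For reference, PGR is conventionally defined as the fraction of the weak-to-strong performance gap that the weakly supervised strong model recovers. A minimal sketch of the computation is below; the accuracy values are hypothetical placeholders, not results from the paper.

    def performance_gap_recovered(weak_acc: float,
                                  weak_to_strong_acc: float,
                                  strong_ceiling_acc: float) -> float:
        """PGR = (weak-to-strong - weak) / (strong ceiling - weak).

        1.0 (100%) means the weakly supervised strong model matches the
        strong model trained on ground-truth labels; 0.0 means it is no
        better than the weak supervisor itself.
        """
        return (weak_to_strong_acc - weak_acc) / (strong_ceiling_acc - weak_acc)

    # Hypothetical accuracies on a math benchmark, for illustration only.
    print(performance_gap_recovered(weak_acc=0.42,
                                    weak_to_strong_acc=0.55,
                                    strong_ceiling_acc=0.68))  # -> 0.5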

Perceive the Passage of Time: A Systematic Evaluation of Large Language Model in Temporal Relativity
Shuang Chen | Yining Zheng | Shimin Li | Qinyuan Cheng | Xipeng Qiu
Proceedings of the 31st International Conference on Computational Linguistics

Temporal perception is crucial for Large Language Models (LLMs) to effectively understand the world. However, current benchmarks primarily focus on temporal reasoning and fall short in evaluating temporal perception, particularly the understanding of temporal relativity. In this paper, we introduce TempBench, a comprehensive benchmark designed to evaluate the temporal-relative ability of LLMs. TempBench encompasses 4 distinct scenarios: Physiology, Psychology, Cognition and Mixture. We conduct extensive experiments on GPT-4, a series of Llama models and other popular LLMs. The experimental results demonstrate a significant performance gap between LLMs and humans in temporal-relative capability. Furthermore, we categorize the error types in LLMs' temporal-relative ability to thoroughly analyze the impact of multiple aspects and highlight the associated challenges. We anticipate that TempBench will drive further advancements in enhancing the temporal-perceiving capabilities of LLMs.

2020

Heterogeneous Graph Neural Networks for Extractive Document Summarization
Danqing Wang | Pengfei Liu | Yining Zheng | Xipeng Qiu | Xuanjing Huang
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

As a crucial step in extractive document summarization, learning cross-sentence relations has been explored by a plethora of approaches. An intuitive way is to model them with a graph-based neural network, which has a more complex structure for capturing inter-sentence relationships. In this paper, we present a heterogeneous graph-based neural network for extractive summarization (HeterSumGraph), which contains semantic nodes of different granularity levels apart from sentences. These additional nodes act as intermediaries between sentences and enrich the cross-sentence relations. Moreover, our graph structure extends naturally from the single-document setting to the multi-document setting by introducing document nodes. To our knowledge, we are the first to introduce different types of nodes into graph-based neural networks for extractive document summarization and to perform a comprehensive qualitative analysis to investigate their benefits. The code will be released on GitHub.
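To illustrate the graph construction described above, here is a minimal sketch of a heterogeneous graph with word, sentence, and document nodes. The tokenization and edge representation are simplified assumptions for illustration, not the paper's actual implementation.

    from collections import defaultdict

    def build_hetero_graph(documents):
        """Build a toy heterogeneous graph over document, sentence, and word nodes.

        Word nodes act as intermediaries: two sentences are indirectly connected
        whenever they share a word node, and document nodes extend the same
        structure to the multi-document setting.
        """
        word2sents = defaultdict(set)  # word node -> sentence nodes containing it
        sent2doc = {}                  # sentence node -> its document node
        sent_id = 0
        for doc_id, doc in enumerate(documents):
            for sentence in doc:
                sent2doc[sent_id] = doc_id
                for word in set(sentence.lower().split()):
                    word2sents[word].add(sent_id)
                sent_id += 1
        # Bipartite word-sentence edges plus sentence-document edges.
        word_sent_edges = [(w, s) for w, sents in word2sents.items() for s in sents]
        sent_doc_edges = list(sent2doc.items())
        return word_sent_edges, sent_doc_edges

    docs = [["The cat sat on the mat", "A dog chased the cat"],
            ["Dogs and cats are pets"]]
    ws_edges, sd_edges = build_hetero_graph(docs)
    print(len(ws_edges), len(sd_edges))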