Sirui Huang


2025

StructFact: Reasoning Factual Knowledge from Structured Data with Large Language Models
Sirui Huang | Yanggan Gu | Zhonghao Li | Xuming Hu | Li Qing | Guandong Xu
Findings of the Association for Computational Linguistics: ACL 2025

Large language models (LLMs) have made significant strides in natural language processing by leveraging their ability to comprehend and reason with factual knowledge. However, a significant amount of factual knowledge is stored in structured data, which has unique characteristics not typically encountered in the unstructured texts used for pretraining LLMs. To evaluate the capability of LLMs in handling structurally stored facts, we introduce a benchmark called StructFact, which comprises meticulously annotated factual questions spanning five tasks that reflect the intrinsic properties of structured data. This benchmark aims to delineate the strengths and limitations of LLMs in reasoning with structured data for knowledge-intensive tasks in practical applications. Extensive experiments conducted on 10 common LLMs have yielded several insights, one notable finding being that these models struggle significantly with the heterogeneity of structured data during reasoning.

Capturing Nuanced Preferences: Preference-Aligned Distillation for Small Language Models
Yanggan Gu | Junzhuo Li | Sirui Huang | Xin Zou | Zhenghua Li | Xuming Hu
Findings of the Association for Computational Linguistics: ACL 2025

Aligning small language models (SLMs) with human values typically involves distilling preference knowledge from large language models (LLMs). However, existing distillation methods model preference knowledge in teacher LLMs by comparing pairwise responses, overlooking the extent of difference between responses. This limitation hinders student SLMs from capturing the nuanced preferences for multiple responses. In this paper, we propose a Preference-Aligned Distillation (PAD) framework, which models the teacher’s preference knowledge as a probability distribution over all potential preferences, thereby providing more nuanced supervisory signals. Our insight in developing PAD is rooted in the demonstration that language models can serve as reward functions, reflecting their intrinsic preferences. Based on this, PAD comprises three key steps: (1) sampling diverse responses at a high temperature; (2) computing rewards for both teacher and student to construct their intrinsic preferences; and (3) training the student’s intrinsic preference distribution to align with the teacher’s. Experiments on four mainstream alignment benchmarks demonstrate that PAD consistently and significantly outperforms existing approaches, achieving over 20% improvement on AlpacaEval 2 and Arena-Hard, indicating superior alignment with human preferences. Notably, on MT-Bench, using the Gemma model family, the student trained by PAD surpasses its teacher, further validating the effectiveness of PAD.
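
The three steps sketched in the abstract can be illustrated with a minimal, hypothetical training step; this is not the authors' implementation. The Gemma checkpoints, the temperature, the use of summed sequence log-probability as each model's intrinsic reward, and the KL objective for distribution alignment are all assumptions made for illustration only.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

def sequence_logprob(model, tokenizer, prompt, response, requires_grad=False):
    """Sum of token log-probabilities the model assigns to `response` given `prompt`
    (used here, as an assumption, as the model's intrinsic reward for that response)."""
    full = tokenizer(prompt + response, return_tensors="pt").input_ids
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    ctx = torch.enable_grad() if requires_grad else torch.no_grad()
    with ctx:
        logits = model(full).logits
    logprobs = F.log_softmax(logits[:, :-1], dim=-1)
    targets = full[:, 1:]
    token_lp = logprobs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    return token_lp[:, prompt_len - 1:].sum()

# Assumed teacher/student checkpoints from the same (Gemma) family.
teacher = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it")
student = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it")
tok = AutoTokenizer.from_pretrained("google/gemma-2b-it")
optimizer = torch.optim.AdamW(student.parameters(), lr=1e-5)

prompt = "Explain why the sky is blue."
inputs = tok(prompt, return_tensors="pt").input_ids

# (1) Sample diverse candidate responses at a high temperature.
samples = student.generate(inputs, do_sample=True, temperature=1.2,
                           num_return_sequences=4, max_new_tokens=128)
responses = [tok.decode(s[inputs.shape[1]:], skip_special_tokens=True) for s in samples]

# (2) Treat each language model as a reward function: score every candidate.
teacher_rewards = torch.stack([sequence_logprob(teacher, tok, prompt, r) for r in responses])
student_rewards = torch.stack([sequence_logprob(student, tok, prompt, r, requires_grad=True)
                               for r in responses])

# Turn rewards into preference distributions over the candidate set.
p_teacher = F.softmax(teacher_rewards, dim=-1)          # supervisory signal (no gradient)
log_p_student = F.log_softmax(student_rewards, dim=-1)  # student's preferences (with gradient)

# (3) Align the student's preference distribution with the teacher's.
loss = F.kl_div(log_p_student, p_teacher, reduction="batchmean")
loss.backward()
optimizer.step()
```

In this reading, the softmax over per-response rewards is what turns pairwise comparison into a full distribution over all sampled responses, which is the "more nuanced supervisory signal" the abstract refers to.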

2024

Refiner: Restructure Retrieved Content Efficiently to Advance Question-Answering Capabilities
Zhonghao Li | Xuming Hu | Aiwei Liu | Kening Zheng | Sirui Huang | Hui Xiong
Findings of the Association for Computational Linguistics: EMNLP 2024