Kunlun Liu
2025
CiteEval: Principle-Driven Citation Evaluation for Source Attribution
Yumo Xu | Peng Qi | Jifan Chen | Kunlun Liu | Rujun Han | Lan Liu | Bonan Min | Vittorio Castelli | Arshit Gupta | Zhiguo Wang
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Citation quality is crucial in information-seeking systems, directly influencing trust and the effectiveness of information access. Current evaluation frameworks, both human and automatic, mainly rely on Natural Language Inference (NLI) to assess binary or ternary supportiveness from cited sources, which we argue is a suboptimal proxy for citation evaluation. In this work, we introduce CiteEval, a principle-driven citation evaluation framework that focuses on fine-grained citation assessment within a broad context, encompassing not only the cited sources but also the full retrieval context, user query, and generated text. Guided by the proposed framework, we construct CiteBench, a multi-domain benchmark with high-quality human annotations of citation quality. To enable efficient evaluation, we further develop CiteEval-Auto, a suite of model-based metrics that exhibit strong correlation with human judgments. Experiments across diverse systems demonstrate CiteEval-Auto's superior ability to capture the multifaceted nature of citations compared to existing metrics, offering a principled and scalable approach to evaluating and improving model-generated citations.
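For context on the NLI-based scoring that the abstract argues is a suboptimal proxy, below is a minimal sketch of ternary supportiveness checking with an off-the-shelf MNLI classifier; the model choice and the supportiveness helper are illustrative assumptions, not part of CiteEval or CiteBench.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

# Illustrative model choice; any MNLI-trained classifier works similarly.
tok = AutoTokenizer.from_pretrained("roberta-large-mnli")
nli = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

def supportiveness(cited_source: str, claim: str) -> dict:
    """Ternary NLI-style check: does the cited source entail the claim?"""
    inputs = tok(cited_source, claim, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = nli(**inputs).logits.softmax(dim=-1).squeeze()
    # roberta-large-mnli label order: contradiction, neutral, entailment
    return {"contradiction": probs[0].item(),
            "neutral": probs[1].item(),
            "entailment": probs[2].item()}
```

Such a check scores each (source, claim) pair in isolation, which is exactly the narrow view the paper contrasts with evaluating citations against the full retrieval context, user query, and generated text.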
2023
Language Agnostic Multilingual Information Retrieval with Contrastive Learning
Xiyang Hu | Xinchi Chen | Peng Qi | Deguang Kong | Kunlun Liu | William Yang Wang | Zhiheng Huang
Findings of the Association for Computational Linguistics: ACL 2023
Multilingual information retrieval (IR) is challenging since annotated training data is costly to obtain in many languages. We present an effective method for training multilingual IR systems when only English IR training data and some parallel corpora between English and other languages are available. We leverage parallel and non-parallel corpora to improve the cross-lingual transfer ability of pretrained multilingual language models. We design a semantic contrastive loss to align representations of parallel sentences that share the same semantics across languages, and a new language contrastive loss that leverages parallel sentence pairs to remove language-specific information from the sentence representations of non-parallel corpora. When trained on English IR data with these losses and evaluated zero-shot on non-English data, our model demonstrates significant improvements over prior work in retrieval performance while requiring much less computational effort. We also demonstrate the value of our model in a practical setting where parallel corpora are available for only a few languages but remain scarce for many other low-resource languages. Our model works well even with a small number of parallel sentences and can be used as an add-on module with any backbone and for other tasks.
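Below is a minimal sketch of the kind of semantic contrastive objective described above, assuming a standard InfoNCE-style formulation over in-batch parallel sentence pairs; the function name, temperature value, and exact formulation are illustrative and not necessarily the paper's implementation.

```python
import torch
import torch.nn.functional as F

def semantic_contrastive_loss(src_emb: torch.Tensor,
                              tgt_emb: torch.Tensor,
                              temperature: float = 0.05) -> torch.Tensor:
    """InfoNCE-style alignment loss over a batch of parallel sentences.

    src_emb, tgt_emb: (batch, dim) embeddings of the same sentences in two
    languages. The i-th rows are translations of each other; all other
    in-batch pairs serve as negatives.
    """
    src = F.normalize(src_emb, dim=-1)
    tgt = F.normalize(tgt_emb, dim=-1)
    logits = src @ tgt.T / temperature                 # cosine similarity matrix
    labels = torch.arange(src.size(0), device=src.device)  # i-th src matches i-th tgt
    # Symmetric loss: source-to-target and target-to-source directions.
    return (F.cross_entropy(logits, labels) +
            F.cross_entropy(logits.T, labels)) / 2
```

The language contrastive loss mentioned in the abstract plays a complementary role, pushing representations of the same language apart so that language identity is removed from the embedding space; it is omitted here for brevity.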