Xingyu Song
2026
JiraiBench: A Bilingual Benchmark for Evaluating Large Language Models’ Detection of Human Risky Health Behavior Content in Jirai Community
Yunze Xiao | Tingyu He | Lionel Z. Wang | Yiming Ma | Xingyu Song | Xiaohang Xu | Mona T. Diab | Irene Li | Ka Chung Ng
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
In this paper, we present the first cross-lingual dataset that captures a transnational cultural phenomenon, focusing on the Chinese and Japanese "Jirai" subculture and its association with risky health behaviors. Our dataset of more than 15,000 annotated social media posts forms the core of JiraiBench, a benchmark designed to evaluate LLMs on culturally specific content. This resource allowed us to uncover an unexpected cross-cultural transfer effect, in which Japanese-language prompts handle Chinese content better than Chinese-language prompts do, indicating that shared cultural context can matter more than linguistic similarity. Further evidence suggests potential cross-lingual knowledge transfer in fine-tuned models. This work demonstrates the importance of culturally informed, cross-lingual datasets for building content moderation tools that can protect vulnerable communities across linguistic borders.
2025
ReAgent: Reversible Multi-Agent Reasoning for Knowledge-Enhanced Multi-Hop QA
Zhao Xinjie | Fan Gao | Xingyu Song | Yingjian Chen | Rui Yang | Yanran Fu | Yuyang Wang | Yusuke Iwasawa | Yutaka Matsuo | Irene Li
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Multi-hop question answering (QA) remains challenging, as solutions must reliably integrate and reconcile evidence from multiple sources without succumbing to error propagation. While large language models (LLMs) have achieved substantial improvements via chain-of-thought (CoT) prompting and retrieval-augmented generation, these methods typically adopt a forward-only workflow—early mistakes persist throughout inference, and contradictions discovered later cannot systematically trigger re-evaluation. To address this limitation, we present ReAgent, a reversible multi-agent reasoning framework. Specifically, ReAgent enables agents to backtrack to earlier valid states when conflicts arise, thereby isolating and rectifying flawed assumptions before they undermine subsequent reasoning. Our approach combines explicit local and global rollback protocols with modular role specialization, resulting in a flexible and error-tolerant pipeline. Empirical evaluation on three multi-hop QA benchmarks demonstrates consistent performance gains of approximately 6% over forward-only baselines, in addition to enhanced interpretability. These findings highlight the value of non-monotonic, backtracking-driven inference in complex QA scenarios and point to broader implications for multi-agent collaboration in knowledge-intensive tasks.
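The local and global rollback protocols described in the abstract can be illustrated with a minimal sketch. All names here are hypothetical illustrations of the general backtracking idea, not the authors' implementation: reasoning steps are recorded as checkpoints, a local rollback discards only the most recent (flawed) step, and a global rollback resets to the initial state.

```python
# Minimal sketch of reversible reasoning with rollback.
# Hypothetical illustration only; not the ReAgent authors' code.

class ReversibleReasoner:
    def __init__(self):
        self.history = []  # stack of (step_name, facts) checkpoints

    def step(self, name, facts):
        """Record a reasoning step as a checkpoint on the stack."""
        self.history.append((name, dict(facts)))

    def local_rollback(self):
        """Discard the most recent step; return the prior valid state."""
        self.history.pop()
        return dict(self.history[-1][1]) if self.history else {}

    def global_rollback(self):
        """Reset to the earliest recorded state."""
        self.history = self.history[:1]
        return dict(self.history[0][1]) if self.history else {}


r = ReversibleReasoner()
r.step("retrieve", {"capital": "Paris"})
r.step("infer", {"capital": "Paris", "river": "Thames"})  # contradiction detected
state = r.local_rollback()  # backtrack to the last valid state
# state == {"capital": "Paris"}
```

In a multi-agent setting, a verifier agent would trigger `local_rollback` when a newly derived fact conflicts with retrieved evidence, isolating the flawed assumption before it propagates into later hops.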