2025
Multi-Programming Language Sandbox for LLMs
Shihan Dou | Jiazheng Zhang | Jianxiang Zang | Yunbo Tao | Weikang Zhou | Haoxiang Jia | Shichun Liu | Yuming Yang | Shenxi Wu | Zhiheng Xi | Muling Wu | Rui Zheng | Changze Lv | Limao Xiong | Shaoqing Zhang | Lin Zhang | Wenyu Zhan | Rongxiang Weng | Jingang Wang | Xunliang Cai | Yueming Wu | Ming Wen | Yixin Cao | Tao Gui | Xipeng Qiu | Qi Zhang | Xuanjing Huang
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)
We introduce MPLSandbox, an out-of-the-box multi-programming-language sandbox designed to provide unified and comprehensive feedback from compilers and analysis tools for Large Language Models (LLMs). It automatically identifies the programming language of the code and compiles and executes it within an isolated sub-sandbox to ensure safety and stability. In addition, MPLSandbox integrates both traditional and LLM-based code analysis tools to provide a comprehensive analysis of generated code. It can also be effortlessly integrated into the training and deployment of LLMs to improve the quality and correctness of generated code, and it helps researchers streamline their workflows for a wide range of LLM-based code tasks, reducing development costs. To validate the effectiveness of MPLSandbox, we conduct extensive experiments, integrating it into several training and deployment scenarios and employing it to optimize workflows for a wide range of downstream code tasks. Our goal is to enhance researcher productivity on LLM-based code tasks by simplifying and automating workflows through delegation to MPLSandbox.
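The abstract describes a two-step workflow: detect the language of a code snippet, then compile/run it in isolation and return unified feedback. A minimal sketch of that flow is below; it is not MPLSandbox's actual API. The real system uses isolated sub-sandboxes for safety, whereas a subprocess with a timeout is only a stand-in here, and the detector heuristic and all names are illustrative assumptions.

```python
# Sketch of an MPLSandbox-style flow: detect language, execute in a
# restricted subprocess, return a unified feedback record. Illustrative only.
import os
import subprocess
import tempfile

RUNNERS = {
    # language -> (file suffix, command template)
    "python": (".py", ["python3", "{path}"]),
    "javascript": (".js", ["node", "{path}"]),
}

def detect_language(code: str) -> str:
    """Toy heuristic detector; the real classifier is far more robust."""
    if "console.log" in code or "function " in code:
        return "javascript"
    return "python"

def run_in_sandbox(code: str, timeout: float = 5.0) -> dict:
    lang = detect_language(code)
    suffix, cmd = RUNNERS[lang]
    with tempfile.NamedTemporaryFile("w", suffix=suffix, delete=False) as f:
        f.write(code)
        path = f.name
    try:
        proc = subprocess.run(
            [c.format(path=path) for c in cmd],
            capture_output=True, text=True, timeout=timeout,
        )
        return {"lang": lang, "stdout": proc.stdout,
                "stderr": proc.stderr, "returncode": proc.returncode}
    except subprocess.TimeoutExpired:
        return {"lang": lang, "stdout": "", "stderr": "timeout", "returncode": -1}
    finally:
        os.unlink(path)

if __name__ == "__main__":
    print(run_in_sandbox("print(sum(range(10)))"))  # unified feedback dict
```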
DocFusion: A Unified Framework for Document Parsing Tasks
Mingxu Chai | Ziyu Shen | Chong Zhang | Yue Zhang | Xiao Wang | Shihan Dou | Jihua Kang | Jiazheng Zhang | Qi Zhang
Findings of the Association for Computational Linguistics: ACL 2025
Document parsing involves layout element detection and recognition, which are essential for information extraction. However, existing methods often employ multiple models for these tasks, increasing system complexity and maintenance overhead. While some models attempt to unify detection and recognition, they often fail to address the intrinsic differences between the two data representations, limiting performance in document processing. Our research reveals that recognition relies on discrete tokens, whereas detection relies on continuous coordinates, which leads to challenges in gradient updates and optimization. To bridge this gap, we propose the Gaussian-Kernel Cross-Entropy Loss (GK-CEL), enabling generative frameworks to handle both tasks simultaneously. Building upon GK-CEL, we propose DocFusion, a unified document parsing model with only 0.28B parameters. Additionally, we construct the DocLatex-1.6M dataset to provide high-quality training support. Experimental results show that DocFusion, equipped with GK-CEL, performs competitively across four core document parsing tasks, validating the effectiveness of our unified approach.
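To make the discrete-vs-continuous tension concrete, here is a hedged sketch of a Gaussian-kernel cross-entropy in the spirit of GK-CEL: when a continuous coordinate is discretized into a token bin, the one-hot target is replaced with a Gaussian-weighted distribution over nearby bins, so a near-miss prediction costs less than a distant one. The paper's exact formulation may differ; the binning, sigma, and function names are assumptions.

```python
# Sketch: cross-entropy against a Gaussian-smoothed coordinate target,
# so small localization errors are penalized less than large ones.
import numpy as np

def log_softmax(x: np.ndarray) -> np.ndarray:
    """Numerically stable log-softmax."""
    m = x.max()
    return x - (m + np.log(np.exp(x - m).sum()))

def gaussian_soft_target(target_bin: int, num_bins: int, sigma: float = 2.0) -> np.ndarray:
    """Gaussian-weighted target over coordinate bins instead of a one-hot."""
    bins = np.arange(num_bins)
    w = np.exp(-0.5 * ((bins - target_bin) / sigma) ** 2)
    return w / w.sum()

def gk_cross_entropy(logits: np.ndarray, target_bin: int, sigma: float = 2.0) -> float:
    """Cross-entropy between predicted distribution and the soft target."""
    target = gaussian_soft_target(target_bin, logits.size, sigma)
    return float(-(target * log_softmax(logits)).sum())

if __name__ == "__main__":
    near = np.zeros(100); near[41] = 5.0  # mass one bin away from true bin 40
    far = np.zeros(100); far[90] = 5.0    # mass fifty bins away
    print(gk_cross_entropy(near, target_bin=40))  # noticeably smaller loss
    print(gk_cross_entropy(far, target_bin=40))
```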
Better Process Supervision with Bi-directional Rewarding Signals
Wenxiang Chen | Wei He | Zhiheng Xi | Honglin Guo | Boyang Hong | Jiazheng Zhang | Nijun Li | Tao Gui | Yun Li | Qi Zhang | Xuanjing Huang
Findings of the Association for Computational Linguistics: ACL 2025
Process supervision, i.e., evaluating each reasoning step, is critical for complex large language model (LLM) reasoning and for test-time search with increased inference compute. Existing approaches, represented by process reward models (PRMs), primarily provide rewarding signals up to the current step; they are one-directional and lack a mechanism to model the distance to the final target. To address this problem, we draw inspiration from the A* algorithm, whose key principle is that an effective supervisory signal should simultaneously consider the cost incurred so far and the estimated cost of reaching the target. Building on this insight, we introduce BiRM, a novel process supervision model that not only evaluates the correctness of previous steps but also models the probability of future success. We conduct extensive experiments on mathematical reasoning tasks and demonstrate that BiRM provides more precise evaluations of LLM reasoning steps, achieving an improvement of 3.1% on Gaokao2023 over PRM under Best-of-N sampling. Moreover, in search-based strategies, BiRM provides more comprehensive guidance, outperforming ORM by 5.0% and PRM by 3.8% on MATH-500.
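By analogy with A*'s f(n) = g(n) + h(n), a bi-directional step score combines a PRM-like reward for the steps taken so far (g) with an estimated probability of eventually reaching a correct answer (h). The sketch below illustrates only this combination idea for Best-of-N selection; the scoring heads are stubs, and the weighted-sum combination rule is an assumption, not the paper's trained model.

```python
# Sketch of A*-style bi-directional scoring for reasoning prefixes:
# score = (1 - alpha) * reward-so-far + alpha * estimated P(future success).
def birm_score(prefix_steps, correctness_head, success_head, alpha=0.5):
    g = correctness_head(prefix_steps)  # reward for steps taken so far
    h = success_head(prefix_steps)      # estimated probability of success
    return (1 - alpha) * g + alpha * h

def best_of_n(candidates, correctness_head, success_head):
    """Best-of-N selection: keep the candidate whose steps score highest."""
    return max(
        candidates,
        key=lambda steps: birm_score(steps, correctness_head, success_head),
    )

if __name__ == "__main__":
    # Stub heads: pretend longer, more detailed work looks more correct, and
    # a concluding step implies a higher chance of success.
    corr = lambda steps: min(1.0, sum(len(s) for s in steps) / 100.0)
    succ = lambda steps: 1.0 if steps[-1].endswith("answer") else 0.3
    cands = [
        ["expand the square", "simplify", "so the answer"],
        ["guess 42"],
    ]
    print(best_of_n(cands, corr, succ))
```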
2023
Lightweight Spatial Modeling for Combinatorial Information Extraction From Documents
Yanfei Dong | Lambert Deng | Jiazheng Zhang | Xiaodong Yu | Ting Lin | Francesco Gelli | Soujanya Poria | Wee Sun Lee
Findings of the Association for Computational Linguistics: EACL 2023
Documents that consist of diverse templates and exhibit complex spatial structures pose a challenge for document entity classification. We propose KNN-Former, which incorporates a new kind of spatial bias into attention calculation based on the K-nearest-neighbor (KNN) graph of document entities, limiting each entity's attention to a local radius defined by the KNN graph. We also use combinatorial matching to address the one-to-one mapping property present in many documents, where each field has exactly one corresponding entity. Moreover, our method is highly parameter-efficient compared to existing approaches in terms of the number of trainable parameters. Despite this, experiments across various datasets show that our method outperforms baselines on most entity types. Many real-world documents exhibit combinatorial properties that can be leveraged as inductive biases to improve extraction accuracy, but existing datasets do not cover such documents. To facilitate future research into these types of documents, we release a new ID document dataset covering diverse templates and languages, along with enhanced annotations for an existing dataset.
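The core mechanism, restricting each entity's attention to its K nearest neighbors, can be sketched as an attention mask built from bounding-box centers. This is a minimal illustration under assumed inputs; the paper additionally injects KNN-graph-based spatial biases into the attention scores, which this sketch omits.

```python
# Sketch: KNN-restricted attention over document entities. Each entity may
# attend only to its k nearest entities by bounding-box center distance.
import numpy as np

def knn_attention_mask(centers: np.ndarray, k: int) -> np.ndarray:
    """Boolean mask M where M[i, j] is True iff j is among i's k nearest
    entities (self included). centers: (n, 2) array of box centers."""
    diffs = centers[:, None, :] - centers[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)       # (n, n) pairwise distances
    nearest = np.argsort(dists, axis=-1)[:, :k]  # indices of k nearest
    mask = np.zeros_like(dists, dtype=bool)
    np.put_along_axis(mask, nearest, True, axis=-1)
    return mask

def masked_attention(scores: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Softmax over allowed (k-nearest) positions only."""
    scores = np.where(mask, scores, -1e9)
    scores = scores - scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    return weights / weights.sum(axis=-1, keepdims=True)

if __name__ == "__main__":
    centers = np.array([[0, 0], [1, 0], [5, 5], [6, 5]], dtype=float)
    mask = knn_attention_mask(centers, k=2)
    print(masked_attention(np.zeros((4, 4)), mask))  # attention stays local
```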