2025
A Middle Path for On-Premises LLM Deployment: Preserving Privacy Without Sacrificing Model Confidentiality
Hanbo Huang | Yihan Li | Bowen Jiang | Bo Jiang | Lin Liu | Zhuotao Liu | Ruoyu Sun | Shiyu Liang
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Privacy-sensitive users require deploying large language models (LLMs) within their own infrastructure (on-premises) to safeguard private data and enable customization. However, vulnerabilities in local environments can lead to unauthorized access and potential model theft. To address this, prior research on small models has explored securing only the output layer within hardware-secured devices to balance model confidentiality and customization. Yet this approach fails to protect LLMs effectively. In this paper, we discover that (1) query-based distillation attacks targeting the secured top layer can produce a functionally equivalent replica of the victim model; (2) when the same number of layers is secured, bottom layers before a transition layer provide stronger protection against distillation attacks than top layers, with comparable effects on customization performance; and (3) the number of secured layers creates a trade-off between protection and customization flexibility. Based on these insights, we propose SOLID, a novel deployment framework that secures a few bottom layers in a secure environment and introduces an efficient metric to optimize the trade-off by determining the ideal number of hidden layers. Extensive experiments on five models (1.3B to 70B parameters) demonstrate that SOLID outperforms baselines, achieving a better balance between protection and downstream customization.
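The split deployment described in the abstract can be pictured with a minimal, self-contained sketch: the bottom k transformer blocks run inside the trusted environment and only hidden activations cross the boundary, while the top blocks remain exposed for on-premises customization. This is a toy illustration under my own assumptions, not the authors' SOLID implementation; every module name and dimension below is invented for the example.

```python
# Toy sketch of a secured-bottom / exposed-top deployment split (not the paper's code).
import torch
from torch import nn

D_MODEL, N_LAYERS, N_SECURED, VOCAB = 256, 8, 2, 1000

def block():
    return nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=4, batch_first=True)

class SecuredBottom(nn.Module):
    """Embeddings plus the first N_SECURED blocks, kept inside the trusted environment."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D_MODEL)
        self.layers = nn.ModuleList([block() for _ in range(N_SECURED)])

    def forward(self, input_ids):
        h = self.embed(input_ids)
        for layer in self.layers:
            h = layer(h)
        return h  # only activations leave the enclave; the secured weights never do

class ExposedTop(nn.Module):
    """Remaining blocks plus the LM head, deployed openly and free to customize."""
    def __init__(self):
        super().__init__()
        self.layers = nn.ModuleList([block() for _ in range(N_LAYERS - N_SECURED)])
        self.lm_head = nn.Linear(D_MODEL, VOCAB)

    def forward(self, h):
        for layer in self.layers:
            h = layer(h)
        return self.lm_head(h)

bottom, top = SecuredBottom(), ExposedTop()
logits = top(bottom(torch.randint(0, VOCAB, (1, 16))))  # shape (1, 16, VOCAB)
```

The point the abstract argues is that hiding the bottom blocks behind this boundary resists query-based distillation better than hiding the same number of top layers, while leaving the exposed blocks available for downstream fine-tuning.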
OpenHuEval: Evaluating Large Language Model on Hungarian Specifics
Haote Yang | Xingjian Wei | Jiang Wu | Noémi Ligeti-Nagy | Jiaxing Sun | Yinfan Wang | Győző Zijian Yang | Junyuan Gao | Jingchao Wang | Bowen Jiang | Shasha Wang | Nanjun Yu | Zihao Zhang | Shixin Hong | Hongwei Liu | Wei Li | Songyang Zhang | Dahua Lin | Lijun Wu | Gábor Prószéky | Conghui He
Findings of the Association for Computational Linguistics: ACL 2025
We introduce OpenHuEval, the first benchmark for LLMs focusing on the Hungarian language and its specifics. OpenHuEval is constructed from a vast collection of Hungarian-specific materials sourced from multiple origins. In the construction, we incorporated the latest design principles for evaluating LLMs, such as using real user queries from the internet, emphasizing the assessment of LLMs’ generative capabilities, and employing LLM-as-judge to enhance the multidimensionality and accuracy of evaluations. Ultimately, OpenHuEval encompasses eight Hungarian-specific dimensions, featuring five tasks and 3953 questions. Consequently, OpenHuEval provides a comprehensive, in-depth, and scientifically accurate assessment of LLM performance in the context of the Hungarian language and its specifics. We evaluated current mainstream LLMs, including both traditional LLMs and recently developed Large Reasoning Models (LRMs). The results demonstrate the significant necessity for evaluation and model optimization tailored to the Hungarian language and its specifics. We also established a framework for analyzing the thinking processes of LRMs with OpenHuEval, revealing intrinsic patterns and mechanisms of these models in non-English languages, with Hungarian serving as a representative example. We will release OpenHuEval at https://github.com/opendatalab/OpenHuEval.
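As a rough illustration of the LLM-as-judge design principle mentioned above, the sketch below scores a candidate answer against a reference along several dimensions. It is an assumption about how such a harness might look, not the released OpenHuEval code; the prompt wording and dimension names are invented for the example.

```python
# Illustrative LLM-as-judge scoring sketch (not the OpenHuEval evaluation harness).
import json
from typing import Callable

DIMENSIONS = ["faithfulness", "fluency", "Hungarian-specific correctness"]

def judge(llm: Callable[[str], str], question: str, answer: str, reference: str) -> dict:
    """Ask a judge model for 1-5 scores per dimension and parse its JSON reply."""
    prompt = (
        "You are a strict evaluator. Rate the answer on each dimension from 1 to 5 "
        "and reply with a JSON object only.\n"
        f"Dimensions: {DIMENSIONS}\nQuestion: {question}\nReference: {reference}\nAnswer: {answer}"
    )
    return json.loads(llm(prompt))

# usage with a stub judge model standing in for a real LLM call
stub = lambda prompt: '{"faithfulness": 4, "fluency": 5, "Hungarian-specific correctness": 3}'
print(judge(stub, "Mi Magyarország fővárosa?", "Budapest.", "Budapest"))
```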
AdaptFlow: Adaptive Workflow Optimization via Meta-Learning
Runchuan Zhu | Bowen Jiang | Lingrui Mei | Fangkai Yang | Lu Wang | Haoxiang Gao | Fengshuo Bai | Pu Zhao | Qingwei Lin | Saravan Rajmohan | Dongmei Zhang
Findings of the Association for Computational Linguistics: EMNLP 2025
Recent advances in large language models (LLMs) have sparked growing interest in agentic workflows—structured sequences of LLM invocations designed to solve complex tasks. However, existing approaches often rely on static templates or manually designed workflows, which limit adaptability to diverse tasks and hinder scalability. We propose AdaptFlow, a natural language-based meta-learning framework inspired by model-agnostic meta-learning (MAML). AdaptFlow uses a bi-level optimization process: the inner loop performs task-specific adaptation via LLM-generated feedback, while the outer loop consolidates these refinements into a shared, generalizable initialization. Evaluated across question answering, code generation, and mathematical reasoning benchmarks, AdaptFlow consistently outperforms both manually crafted and automatically searched baselines, achieving state-of-the-art results with strong generalization across tasks and models.
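Read literally, the abstract describes a MAML-like bi-level loop over natural-language workflow descriptions rather than over model weights. The sketch below is one way that structure could look; it is my reading of the abstract, not the authors' code, and `llm` is assumed to be any prompt-to-text callable.

```python
# Sketch of a bi-level loop over natural-language workflows (my reading of the abstract).
from typing import Callable, Sequence

def inner_adapt(llm: Callable[[str], str], workflow: str, task: str) -> str:
    """Inner loop: refine the shared workflow for one task using LLM-generated feedback."""
    feedback = llm(
        f"Critique this workflow for the task below.\nTask: {task}\nWorkflow: {workflow}"
    )
    return llm(
        f"Rewrite the workflow to address the critique.\nCritique: {feedback}\nWorkflow: {workflow}"
    )

def outer_consolidate(llm: Callable[[str], str], workflow: str, adapted: Sequence[str]) -> str:
    """Outer loop: merge task-specific refinements back into a shared, generalizable initialization."""
    joined = "\n---\n".join(adapted)
    return llm(
        "Merge the task-specific workflows into one general initialization.\n"
        f"Current initialization: {workflow}\nAdapted variants:\n{joined}"
    )

def meta_train(llm, init_workflow: str, task_batches: Sequence[Sequence[str]]) -> str:
    workflow = init_workflow
    for batch in task_batches:  # one meta-iteration per batch of tasks
        adapted = [inner_adapt(llm, workflow, t) for t in batch]
        workflow = outer_consolidate(llm, workflow, adapted)
    return workflow

# usage with a stub model in place of a real LLM call
dummy_llm = lambda prompt: "refined workflow"
print(meta_train(dummy_llm, "draft workflow", [["solve a math word problem", "write a unit test"]]))
```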
ControlText: Unlocking Controllable Fonts in Multilingual Text Rendering without Font Annotations
Bowen Jiang | Yuan Yuan | Xinyi Bai | Zhuoqun Hao | Alyson Yin | Yaojie Hu | Wenyu Liao | Lyle Ungar | Camillo Jose Taylor
Findings of the Association for Computational Linguistics: EMNLP 2025
This work demonstrates that diffusion models can achieve font-controllable multilingual text rendering using just raw images without font label annotations. Visual text rendering remains a significant challenge. While recent methods condition diffusion on glyphs, it is impossible to retrieve exact font annotations from large-scale, real-world datasets, which prevents user-specified font control. To address this, we propose a data-driven solution that integrates the conditional diffusion model with a text segmentation model, utilizing segmentation masks to capture and represent fonts in pixel space in a self-supervised manner, thereby eliminating the need for any ground-truth labels and enabling users to customize text rendering with any multilingual font of their choice. Our experiments provide a proof of concept of the algorithm in zero-shot text and font editing across diverse fonts and languages, offering valuable insights for the community and industry toward achieving generalized visual text rendering.
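A schematic sketch of the conditioning idea follows, under the assumption that the text-segmentation mask is simply concatenated to the noisy latent as extra input channels. This is not the released ControlText architecture; all shapes and module names are illustrative.

```python
# Schematic mask-conditioned denoiser (an assumption about the pipeline, not ControlText itself).
import torch
from torch import nn

class MaskConditionedDenoiser(nn.Module):
    def __init__(self, img_channels=4, mask_channels=1, hidden=64):
        super().__init__()
        # the segmentation mask is concatenated to the noisy latent as extra channels
        self.net = nn.Sequential(
            nn.Conv2d(img_channels + mask_channels, hidden, 3, padding=1),
            nn.SiLU(),
            nn.Conv2d(hidden, img_channels, 3, padding=1),
        )

    def forward(self, noisy_latent, seg_mask):
        return self.net(torch.cat([noisy_latent, seg_mask], dim=1))

# self-supervised conditioning: the mask comes from a text segmentation model run on the
# training image itself, so no human font annotation is ever needed
denoiser = MaskConditionedDenoiser()
noisy = torch.randn(2, 4, 32, 32)   # noisy image latents
mask = torch.rand(2, 1, 32, 32)     # pixel-space segmentation of the rendered text
pred_noise = denoiser(noisy, mask)  # shape (2, 4, 32, 32)
```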
Towards Rationality in Language and Multimodal Agents: A Survey
Bowen Jiang | Yangxinyu Xie | Xiaomeng Wang | Yuan Yuan | Zhuoqun Hao | Xinyi Bai | Weijie J Su | Camillo Jose Taylor | Tanwi Mallick
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
This work discusses how to build more rational language and multimodal agents and what criteria define rationality in intelligent systems. Rationality is the quality of being guided by reason, characterized by decision-making that aligns with evidence and logical principles. It plays a crucial role in reliable problem-solving by ensuring well-grounded and consistent solutions. Despite their progress, large language models (LLMs) often fall short of rationality due to their bounded knowledge space and inconsistent outputs. In response, recent efforts have shifted toward developing multimodal and multi-agent systems, as well as integrating modules such as external tools, program code, symbolic reasoners, utility functions, and conformal risk controls rather than relying solely on a single LLM for decision-making. This paper surveys state-of-the-art advancements in language and multimodal agents, assesses their role in enhancing rationality, and outlines open challenges and future research directions. We maintain an open repository at https://github.com/bowen-upenn/Agent_Rationality.
2024
A Peek into Token Bias: Large Language Models Are Not Yet Genuine Reasoners
Bowen Jiang | Yangxinyu Xie | Zhuoqun Hao | Xiaomeng Wang | Tanwi Mallick | Weijie J Su | Camillo Jose Taylor | Dan Roth
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
This study introduces a hypothesis-testing framework to assess whether large language models (LLMs) possess genuine reasoning abilities or primarily depend on token bias. We go beyond evaluating LLMs on accuracy; rather, we aim to investigate their token bias in solving logical reasoning tasks. Specifically, we develop carefully controlled synthetic datasets, featuring conjunction fallacy and syllogistic problems. Our framework outlines a list of hypotheses where token biases are readily identifiable, with all null hypotheses assuming genuine reasoning capabilities of LLMs. The findings in this study suggest, with statistical guarantees, that most LLMs still struggle with logical reasoning. While they may perform well on classic problems, their success largely depends on recognizing superficial patterns with strong token bias, thereby raising concerns about their actual reasoning and generalization abilities.
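One concrete way to operationalize such a hypothesis test, sketched under my own assumptions rather than taken from the paper's released framework, is a paired comparison: answer each logical problem twice, once with its original surface tokens and once with the tokens perturbed while the logic is unchanged, then run an exact McNemar-style test on the discordant pairs. Under the null hypothesis of genuine reasoning, flips in either direction should be equally likely. The data below are made up purely for illustration.

```python
# Toy paired test for token bias (illustrative sketch, not the paper's framework or data).
from scipy.stats import binomtest

# per-problem correctness on (original, token-perturbed) versions of the same logic problem
paired_results = [
    (1, 0), (1, 0), (1, 1), (1, 0), (1, 0),
    (0, 0), (1, 0), (1, 0), (1, 1), (1, 0),
]

b = sum(1 for orig, pert in paired_results if orig == 1 and pert == 0)  # hurt by perturbation
c = sum(1 for orig, pert in paired_results if orig == 0 and pert == 1)  # helped by perturbation

# exact McNemar-style binomial test on the discordant pairs only
test = binomtest(b, n=b + c, p=0.5)
print(f"discordant pairs: {b + c}, p-value: {test.pvalue:.4f}")
if test.pvalue < 0.05:
    print("reject the null: correctness depends on surface tokens (token bias)")
else:
    print("cannot reject the null of genuine reasoning on this sample")
```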