2025
OS Agents: A Survey on MLLM-based Agents for Computer, Phone and Browser Use
Xueyu Hu | Tao Xiong | Biao Yi | Zishu Wei | Ruixuan Xiao | Yurun Chen | Jiasheng Ye | Meiling Tao | Xiangxin Zhou | Ziyu Zhao | Yuhuai Li | Shengze Xu | Shenzhi Wang | Xinchen Xu | Shuofei Qiao | Zhaokai Wang | Kun Kuang | Tieyong Zeng | Liang Wang | Jiwei Li | Yuchen Eleanor Jiang | Wangchunshu Zhou | Guoyin Wang | Keting Yin | Zhou Zhao | Hongxia Yang | Fan Wu | Shengyu Zhang | Fei Wu
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
The dream of creating AI assistants as capable and versatile as the fictional J.A.R.V.I.S. from Iron Man has long captivated imaginations. With the evolution of multi-modal large language models ((M)LLMs), this dream is closer to reality: (M)LLM-based agents that automate tasks on computers, mobile phones, and web browsers by operating within the environments and interfaces provided by operating systems (OS), such as the Graphical User Interface (GUI) and Command Line Interface (CLI), have advanced significantly. This paper presents a comprehensive survey of these advanced agents, designated as OS Agents. We begin by elucidating the fundamentals of OS Agents, exploring their key components and capabilities. We then examine methodologies for constructing OS Agents, focusing on domain-specific foundation models and agent frameworks. A detailed review of evaluation metrics and benchmarks highlights how OS Agents are assessed across diverse platforms and tasks. Finally, we discuss current challenges and identify promising directions for future research. An open-source GitHub repository is maintained as a dynamic resource to foster further innovation in this field.
PopAlign: Diversifying Contrasting Patterns for a More Comprehensive Alignment
Zekun Moore Wang | Shenzhi Wang | King Zhu | Jiaheng Liu | Ke Xu | Jie Fu | Wangchunshu Zhou | Wenhao Huang
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Alignment of large language models (LLMs) involves training models on preference-contrastive output pairs to adjust their responses according to human preferences. To obtain such contrastive pairs, traditional methods like RLHF and RLAIF rely on limited contrasting patterns, such as varying model variants or decoding temperatures. This narrow range of patterns leads to two issues: (1) alignment is not comprehensive, and (2) models therefore remain susceptible to harmful response tendencies. To address these issues, we investigate how to construct more comprehensive and diversified contrasting patterns to enhance preference data (RQ1) and verify the impact of diversifying contrasting patterns on model alignment (RQ2). For RQ1, we propose PopAlign, a framework that integrates diversified contrasting patterns across the prompt, model, and pipeline levels, introducing six contrasting strategies that do not require additional feedback labeling procedures. Regarding RQ2, we conduct thorough experiments demonstrating that PopAlign significantly outperforms existing methods, leading to more comprehensive alignment.
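As a rough illustration of what prompt-, model-, and pipeline-level contrasting patterns can look like, the sketch below builds preference pairs from three such patterns. The generate helper, the model names, and the prompt wrappers are placeholders for illustration; these are not PopAlign's actual six strategies.

# Illustrative only: three hypothetical contrasting patterns for preference pairs.
# `generate(model_name, prompt)` is an assumed helper returning a model response.
from typing import Callable, Dict, List

def build_contrast_pairs(prompt: str,
                         generate: Callable[[str, str], str]) -> List[Dict[str, str]]:
    pairs = []
    # Prompt-level contrast: a helpfulness/harmlessness-steering system prompt
    # versus the bare prompt, both answered by the same model.
    pairs.append({
        "prompt": prompt,
        "chosen": generate("aligned-model", "[Be helpful and harmless] " + prompt),
        "rejected": generate("aligned-model", prompt),
    })
    # Model-level contrast: a stronger aligned model versus a weaker base model.
    pairs.append({
        "prompt": prompt,
        "chosen": generate("aligned-model", prompt),
        "rejected": generate("base-model", prompt),
    })
    # Pipeline-level contrast: a self-refined answer versus its single-pass draft.
    draft = generate("aligned-model", prompt)
    refined = generate("aligned-model", "Improve this answer:\n" + draft)
    pairs.append({"prompt": prompt, "chosen": refined, "rejected": draft})
    return pairs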
Model Surgery: Modulating LLM’s Behavior Via Simple Parameter Editing
Huanqian Wang | Yang Yue | Rui Lu | Jingxin Shi | Andrew Zhao | Shenzhi Wang | Shiji Song | Gao Huang
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Large Language Models (LLMs) have demonstrated great potential as generalist assistants, showcasing powerful task understanding and problem-solving capabilities. To deploy LLMs as AI assistants, it is crucial that these models exhibit desirable behavioral traits, such as non-toxicity and resilience against jailbreak attempts. Current approaches for detoxification or preventing jailbreaking usually involve Supervised Fine-Tuning (SFT) or Reinforcement Learning from Human Feedback (RLHF), both of which require fine-tuning billions of parameters through gradient descent at substantial computational cost. Furthermore, models modified through SFT and RLHF may deviate from the pretrained models, potentially degrading foundational LLM capabilities. In this paper, we observe that, surprisingly, directly editing a small subset of parameters can effectively modulate specific behaviors of LLMs, such as detoxification and resistance to jailbreaking, with only inference-level computational resources. Experiments demonstrate that in the detoxification task, our approach achieves reductions of up to 90.0% in toxicity on the RealToxicityPrompts dataset and 49.2% on ToxiGen, while maintaining the LLM’s general capabilities in areas such as common sense, question answering, and mathematics.
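One way to picture "editing a small subset of parameters" is the hedged PyTorch sketch below, which shifts only the weight rows most aligned with a precomputed behavior direction (e.g., a toxicity direction obtained from a linear probe on hidden states). The layer choice, top_k, and alpha are assumptions made for illustration, not the paper's actual procedure.

# Illustrative only: nudge a small subset of weight rows away from a precomputed
# "behavior" direction; no gradient descent is involved.
import torch

@torch.no_grad()
def edit_weight(weight: torch.Tensor, behavior_dir: torch.Tensor,
                top_k: int = 64, alpha: float = 0.1) -> None:
    d = behavior_dir / behavior_dir.norm()      # unit behavior direction, shape (in_dim,)
    scores = weight @ d                         # alignment of each output row, shape (out_dim,)
    idx = scores.abs().topk(top_k).indices      # the small subset of rows to edit
    weight[idx] -= alpha * torch.outer(scores[idx], d)   # remove that component in place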
2024
PsychoGAT: A Novel Psychological Measurement Paradigm through Interactive Fiction Games with LLM Agents
Qisen Yang | Zekun Wang | Honghui Chen | Shenzhi Wang | Yifan Pu | Xin Gao | Wenhao Huang | Shiji Song | Gao Huang
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Psychological measurement is essential for mental health, self-understanding, and personal development. Traditional methods, such as self-report scales and psychologist interviews, often face challenges with engagement and accessibility. While game-based and LLM-based tools have been explored to improve user interest and automate assessment, they struggle to balance engagement with generalizability. In this work, we propose PsychoGAT (Psychological Game AgenTs) to achieve a generic gamification of psychological assessment. The main insight is that powerful LLMs can function both as adept psychologists and innovative game designers. By incorporating LLM agents into designated roles and carefully managing their interactions, PsychoGAT can transform any standardized scale into a personalized and engaging interactive fiction game. To validate the proposed method, we conduct psychometric evaluations to assess its effectiveness and employ human evaluators to examine the generated content across various psychological constructs, including depression, cognitive distortions, and personality traits. Results demonstrate that PsychoGAT serves as an effective assessment tool, achieving statistically significant results on psychometric metrics such as reliability, convergent validity, and discriminant validity. Moreover, human evaluations confirm PsychoGAT’s enhancements in content coherence, interactivity, interest, immersion, and satisfaction.
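The role-based agent interaction described above might be pictured with a sketch like the following; the role names, prompt wording, and llm helper are hypothetical placeholders rather than PsychoGAT's actual implementation.

# Hypothetical sketch of a role-based gamification loop for one scale item.
def llm(role: str, prompt: str) -> str:
    """Placeholder for a chat-model call conditioned on a role/system prompt."""
    raise NotImplementedError

def gamify_scale_item(scale_item: str, story_so_far: str) -> tuple[str, str]:
    # A "psychologist" agent states what construct the scale item measures.
    intent = llm("psychologist", f"What does this scale item measure? {scale_item}")
    # A "game designer" agent embeds that construct in an interactive-fiction scene.
    scene = llm("game designer",
                f"Continue this story:\n{story_so_far}\n"
                f"Add a choice for the player that reveals: {intent}")
    # A "critic" agent checks coherence and that the measurement intent survives.
    verdict = llm("critic", f"Scene:\n{scene}\nDoes it still measure: {intent}?")
    return scene, verdict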
Boosting LLM Agents with Recursive Contemplation for Effective Deception Handling
Shenzhi Wang | Chang Liu | Zilong Zheng | Siyuan Qi | Shuo Chen | Qisen Yang | Andrew Zhao | Chaofei Wang | Shiji Song | Gao Huang
Findings of the Association for Computational Linguistics: ACL 2024
Recent advances in large language models (LLMs) have led to significant success in using LLMs as agents. Nevertheless, the common assumption that LLMs always process honest information neglects the widespread deceptive or misleading content in human- and AI-generated material. This oversight might expose LLMs to malicious manipulation. To enhance LLMs’ ability to identify and counteract deceptive information, in this paper, inspired by humans’ recursive thinking and perspective-taking, we introduce a novel cognitive framework, Recursive Contemplation (ReCon). ReCon combines formulation and refinement contemplation processes: formulation contemplation produces initial thoughts and speech, while refinement contemplation further polishes them. Additionally, we incorporate first-order and second-order perspective transitions into these processes, respectively. Specifically, the first-order transition allows an LLM agent to infer others’ mental states, and the second-order involves understanding how others perceive the agent’s mental state. After integrating ReCon with various LLMs, extensive experimental results from the Avalon game and the BigTom benchmark indicate ReCon’s efficacy in helping LLMs discern and maneuver around deceptive information without extra fine-tuning or data. Finally, we demonstrate ReCon’s scaling trend with model parameters, and we explore the current limitations of LLMs in terms of safety and reasoning, potentially furnishing insights for subsequent research. Our project page can be found at https://shenzhi-wang.github.io/avalon_recon.
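A minimal sketch of the two-stage contemplation loop described above, assuming a generic llm(prompt) completion helper; the function names and prompt wording are illustrative, not the paper's exact prompts.

# Minimal sketch of formulation + refinement contemplation with first- and
# second-order perspective-taking.
def llm(prompt: str) -> str:
    """Placeholder for any chat/completion API call."""
    raise NotImplementedError

def recon_respond(observation: str) -> str:
    # Formulation contemplation: draft thoughts and speech, inferring the other
    # players' hidden intentions (first-order perspective-taking).
    draft = llm(
        "You are a player in a social deduction game.\n"
        f"Observation: {observation}\n"
        "Infer what each other player may be hiding, then draft your thoughts "
        "and what you would say."
    )
    # Refinement contemplation: polish the draft, considering how others will
    # perceive the statement (second-order perspective-taking).
    return llm(
        f"Your draft:\n{draft}\n"
        "Consider how the other players would interpret this statement, revise it "
        "to avoid leaking private information, and output only the final speech."
    )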