2025
LATTE: Learning to Think with Vision Specialists
Zixian Ma | Jianguo Zhang | Zhiwei Liu | Jieyu Zhang | Juntao Tan | Manli Shu | Juan Carlos Niebles | Shelby Heinecke | Huan Wang | Caiming Xiong | Ranjay Krishna | Silvio Savarese
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
While open-source vision-language models perform well on simple question-answering, they still struggle with complex questions that require both perceptual and reasoning capabilities. We propose LATTE, a family of vision-language models that have LeArned to Think wiTh vision spEcialists. By offloading perception to state-of-the-art vision models, our approach enables vision-language models to focus solely on reasoning over high-quality perceptual information. To train LATTE, we synthesize and filter a large dataset of 293K multi-modal reasoning traces over perceptual outputs of vision specialists. LATTE trained on this data achieves significant 4-5% gains over baselines across 6 benchmarks covering both perception and reasoning abilities. Ablation studies reveal that the effectiveness of multi-modal reasoning traces depends on the data sources, formats, and quality of thoughts.
ActionStudio: A Lightweight Framework for Data and Training of Large Action Models
Jianguo Zhang | Thai Quoc Hoang | Ming Zhu | Zuxin Liu | Shiyu Wang | Tulika Manoj Awalgaonkar | Akshara Prabhakar | Haolin Chen | Weiran Yao | Zhiwei Liu | Juntao Tan | Juan Carlos Niebles | Shelby Heinecke | Huan Wang | Silvio Savarese | Caiming Xiong
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Large action models are essential for enabling autonomous agents to perform complex tasks. However, training such models remains challenging due to the diversity of agent environments and the complexity of noisy agentic data. Existing infrastructure offers limited support for scalable, agent-specific fine-tuning and standardized agent data processing. We introduce ActionStudio, a lightweight and extensible data and training framework designed for large action models. ActionStudio unifies diverse agent trajectories using our proposed Unified Format 2.0, supports a range of training workflows with an optimized multi-node distributed setup, and integrates robust preprocessing and real-time verification tools. ActionStudio demonstrates up to 9× higher throughput compared to existing agentic training frameworks, and our trained models achieve top performance across public and realistic agent benchmarks. To support the broader research community, we open-source the ActionStudio framework and release actionstudio-98k, a curated dataset of 98k high-quality trajectories.
SlackAgents: Scalable Collaboration of AI Agents in Workspaces
Zhiwei Liu | Weiran Yao | Zuxin Liu | Juntao Tan | Jianguo Zhang | Frank Wang | Sukhandeep Nahal | Huan Wang | Shelby Heinecke | Silvio Savarese | Caiming Xiong
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: System Demonstrations
In today’s rapidly evolving business landscape, organizations are turning to AI agents to automate tasks, streamline business operations, and improve decision-making processes. However, despite the flexibility offered by existing libraries, the resulting agents often struggle to integrate into organizational workflows, which limits their use in day-to-day work. In this paper, we present SlackAgents, a multi-agent library for scalable management and collaboration of AI agents on Slack. As an agentic layer developed upon the Slack platform, the framework offers instant AI integration into organizational workflows and enables AI-powered automation of real daily tasks. Furthermore, SlackAgents facilitates scalable collaboration, allowing for effective communication and task orchestration. Our solution bridges existing gaps, offering a robust platform for developing, deploying, and managing AI agents in workplace environments.
PersonaBench: Evaluating AI Models on Understanding Personal Information through Accessing (Synthetic) Private User Data
Juntao Tan | Liangwei Yang | Zuxin Liu | Zhiwei Liu | Rithesh R N | Tulika Manoj Awalgaonkar | Jianguo Zhang | Weiran Yao | Ming Zhu | Shirley Kokane | Silvio Savarese | Huan Wang | Caiming Xiong | Shelby Heinecke
Findings of the Association for Computational Linguistics: ACL 2025
Personalization is essential for AI assistants, especially in private AI settings where models are expected to interpret users’ personal data (e.g., conversations, app usage) to understand their background, preferences, and social context. However, due to privacy concerns, existing academic research lacks direct access to such data, making benchmarking difficult. To fill this gap, we propose a synthetic data pipeline that generates realistic user profiles and private documents, enabling the creation of PersonaBench—a benchmark for evaluating models’ ability to understand personal information. Using this benchmark, we assess Retrieval-Augmented Generation (RAG) pipelines on personalized questions and find that current models struggle to accurately extract and answer questions even when provided with the full set of user documents, highlighting the need for improved personalization methods.
xLAM: A Family of Large Action Models to Empower AI Agent Systems
Jianguo Zhang | Tian Lan | Ming Zhu | Zuxin Liu | Thai Quoc Hoang | Shirley Kokane | Weiran Yao | Juntao Tan | Akshara Prabhakar | Haolin Chen | Zhiwei Liu | Yihao Feng | Tulika Manoj Awalgaonkar | Rithesh R N | Zeyuan Chen | Ran Xu | Juan Carlos Niebles | Shelby Heinecke | Huan Wang | Silvio Savarese | Caiming Xiong
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Autonomous agents powered by large language models (LLMs) have attracted significant research interest. However, the open-source community faces many challenges in developing specialized models for agent tasks, driven by the scarcity of high-quality agent datasets and the absence of standard protocols in this area. We introduce xLAM, a series of large action models designed for AI agent tasks. The xLAM series includes five models with both dense and mixture-of-expert architectures, ranging from 1B to 8x22B parameters, trained using a scalable, flexible pipeline that unifies, augments, and synthesizes diverse datasets to enhance AI agents’ generalizability and performance across varied environments. Our experimental results demonstrate that xLAM consistently delivers exceptional performance across multiple agent ability benchmarks, notably securing the 1st position on the Berkeley Function-Calling Leaderboard, outperforming GPT-4, Claude-3, and many other models in terms of tool use. By releasing the xLAM series, we aim to advance the performance of open-source LLMs for autonomous AI agents, potentially accelerating progress and democratizing access to high-performance models for agent tasks.
2024
PRACT: Optimizing Principled Reasoning and Acting of LLM Agent
Zhiwei Liu | Weiran Yao | Jianguo Zhang | Zuxin Liu | Liangwei Yang | Rithesh R N | Tian Lan | Ming Zhu | Juntao Tan | Shirley Kokane | Thai Quoc Hoang | Juan Carlos Niebles | Shelby Heinecke | Huan Wang | Silvio Savarese | Caiming Xiong
Proceedings of the 28th Conference on Computational Natural Language Learning
We introduce the Principled Reasoning and Acting (PRAct) framework, a novel method for learning and enforcing action principles from trajectory data. Central to our approach is the use of text gradients from a reflection and optimization engine to derive these action principles. To adapt action principles to specific task requirements, we propose a new optimization framework, Reflective Principle Optimization (RPO). After execution, RPO employs a reflector to critique current action principles and an optimizer to update them accordingly. We investigate the RPO framework under two scenarios: Reward-RPO, which uses environmental rewards for reflection, and Self-RPO, which conducts self-reflection without external rewards. Additionally, we develop two RPO methods, RPO-Traj and RPO-Batch, to adapt to different settings. Experimental results across four environments demonstrate that the PRAct agent, leveraging the RPO framework, can effectively learn and apply action principles to enhance performance.
2023
VIP5: Towards Multimodal Foundation Models for Recommendation
Shijie Geng | Juntao Tan | Shuchang Liu | Zuohui Fu | Yongfeng Zhang
Findings of the Association for Computational Linguistics: EMNLP 2023
Computer Vision (CV), Natural Language Processing (NLP), and Recommender Systems (RecSys) are three prominent AI applications that have traditionally developed independently, resulting in disparate modeling and engineering methodologies. This has impeded the ability of these fields to directly benefit from each other’s advancements. With the recent development of foundation models, large language models have emerged as a potential general-purpose interface for unifying different modalities and problem formulations. In light of this, we propose the development of a multimodal foundation model (MFM) considering visual, textual, and personalization modalities under the P5 recommendation paradigm, thus named VIP5 (Visual P5), to unify various modalities and recommendation tasks. This enables the processing of multiple modalities in a shared architecture for improved recommendations. To achieve this, we introduce multimodal personalized prompts to accommodate multiple modalities under a shared format. Additionally, we propose a parameter-efficient training method for foundation models, which involves freezing the P5 backbone and fine-tuning lightweight adapters, resulting in improved recommendation performance and increased efficiency in terms of training time and memory usage. Code and data of VIP5 are available at https://github.com/jeykigung/VIP5.