Heting Ying
2025
Unifying Language Agent Algorithms with Graph-based Orchestration Engine for Reproducible Agent Research
Qianqian Zhang | Jiajia Liao | Heting Ying | Yibo Ma | Haozhan Shen | Jingcheng Li | Peng Liu | Lu Zhang | Chunxin Fang | Kyusong Lee | Ruochen Xu | Tiancheng Zhao
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)
Language agents powered by large language models (LLMs) have demonstrated remarkable capabilities in understanding, reasoning, and executing complex tasks. However, developing robust agents presents significant challenges: substantial engineering overhead, lack of standardized components, and insufficient evaluation frameworks for fair comparison. We introduce Agent Graph-based Orchestration for Reasoning and Assessment (AGORA), a flexible and extensible framework that addresses these challenges through three key contributions: (1) a modular architecture with a graph-based workflow engine, efficient memory management, and clean component abstraction; (2) a comprehensive suite of reusable agent algorithms implementing state-of-the-art reasoning approaches; and (3) a rigorous evaluation framework enabling systematic comparison across multiple dimensions. Through extensive experiments on mathematical reasoning and multimodal tasks, we evaluate various agent algorithms across different LLMs, revealing important insights about their relative strengths and applicability. Our results demonstrate that while sophisticated reasoning approaches can enhance agent capabilities, simpler methods like Chain-of-Thought often exhibit robust performance with significantly lower computational overhead. AGORA not only simplifies language agent development but also establishes a foundation for reproducible agent research through standardized evaluation protocols. A demo video is available at https://www.youtube.com/watch?v=WRH-F1zegKI, an agent algorithm comparison leaderboard at https://huggingface.co/spaces/omlab/open-agent-leaderboard, and the source code of AGORA at https://github.com/om-ai-lab/OmAgent.
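To make the graph-based orchestration idea concrete, the sketch below shows a minimal workflow engine in which each node is an agent step over a shared state dictionary. All names here (Node, WorkflowGraph, run) are hypothetical illustrations of the concept, not AGORA's actual API; see the GitHub repository above for the real interfaces.

```python
# Minimal sketch of graph-based agent orchestration. All class and method
# names are illustrative assumptions, not AGORA's actual API.
from typing import Callable, Dict, List

class Node:
    """A single agent step: a named function over a shared state dict."""
    def __init__(self, name: str, fn: Callable[[dict], dict]):
        self.name = name
        self.fn = fn

class WorkflowGraph:
    """Directed graph of agent steps, executed in edge order."""
    def __init__(self):
        self.nodes: Dict[str, Node] = {}
        self.edges: Dict[str, List[str]] = {}

    def add_node(self, node: Node) -> None:
        self.nodes[node.name] = node
        self.edges.setdefault(node.name, [])

    def add_edge(self, src: str, dst: str) -> None:
        self.edges[src].append(dst)

    def run(self, start: str, state: dict) -> dict:
        # Simple breadth-first execution; a real engine would add
        # branching, loops, and memory management on top of this.
        queue = [start]
        while queue:
            name = queue.pop(0)
            state = self.nodes[name].fn(state)
            queue.extend(self.edges[name])
        return state

# Usage: a two-step Chain-of-Thought-style pipeline.
graph = WorkflowGraph()
graph.add_node(Node("reason", lambda s: {**s, "thought": f"Let's think about: {s['question']}"}))
graph.add_node(Node("answer", lambda s: {**s, "answer": "42"}))
graph.add_edge("reason", "answer")
print(graph.run("reason", {"question": "What is 6 x 7?"}))
```

In this framing, swapping one reasoning algorithm for another means swapping subgraphs while the evaluation harness stays fixed, which is what enables the standardized comparisons the abstract describes.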
2024
OmAgent: A Multi-modal Agent Framework for Complex Video Understanding with Task Divide-and-Conquer
Lu Zhang | Tiancheng Zhao | Heting Ying | Yibo Ma | Kyusong Lee
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Recent advancements in Large Language Models (LLMs) have expanded their capabilities to multimodal contexts, including comprehensive video understanding. However, processing extensive videos such as 24-hour CCTV footage or full-length films presents significant challenges due to the vast data and processing demands. Traditional methods, like extracting key frames or converting frames to text, often result in substantial information loss. To address these shortcomings, we develop OmAgent, which efficiently stores and retrieves relevant video frames for specific queries, preserving the detailed content of videos. Additionally, it features a Divide-and-Conquer Loop capable of autonomous reasoning, dynamically invoking APIs and tools to enhance query processing and accuracy. This approach ensures robust video understanding and significantly reduces information loss. Experimental results affirm OmAgent’s efficacy in handling various types of videos and complex tasks. Moreover, we have endowed it with greater autonomy and a robust tool-calling system, enabling it to accomplish even more intricate tasks.
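As an illustration of the Divide-and-Conquer Loop described above, the sketch below recursively splits a long video's frame range until each segment is small enough to answer directly, then merges the partial answers. The helpers (answer_segment, merge) and the MAX_SEGMENT threshold are hypothetical stand-ins for the multimodal LLM and tool calls OmAgent would actually make.

```python
# Hedged sketch of a divide-and-conquer loop over a long video's frames.
# Helper functions are hypothetical placeholders, not OmAgent's real API.
from typing import List, Tuple

MAX_SEGMENT = 1000  # assumed max frames one model call can handle

def answer_segment(frames: Tuple[int, int], query: str) -> str:
    # Placeholder for a multimodal LLM call over retrieved frames.
    return f"partial answer for frames {frames[0]}-{frames[1]}"

def merge(partials: List[str], query: str) -> str:
    # Placeholder for an LLM call that synthesizes sub-answers.
    return " | ".join(partials)

def divide_and_conquer(frames: Tuple[int, int], query: str) -> str:
    start, end = frames
    if end - start <= MAX_SEGMENT:
        return answer_segment(frames, query)
    mid = (start + end) // 2
    left = divide_and_conquer((start, mid), query)
    right = divide_and_conquer((mid + 1, end), query)
    return merge([left, right], query)

# E.g. 24 hours of CCTV footage at 1 fps is ~86,400 frames:
print(divide_and_conquer((0, 86400), "When does the delivery truck arrive?"))
```

Because only the segments relevant to the query need full-resolution processing, this kind of recursive decomposition avoids the information loss of one-shot key-frame extraction over the whole video.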