2025
Data Interpreter: An LLM Agent for Data Science
Sirui Hong | Yizhang Lin | Bang Liu | Bangbang Liu | Binhao Wu | Ceyao Zhang | Danyang Li | Jiaqi Chen | Jiayi Zhang | Jinlin Wang | Li Zhang | Lingyao Zhang | Min Yang | Mingchen Zhuge | Taicheng Guo | Tuo Zhou | Wei Tao | Robert Tang | Xiangtao Lu | Xiawu Zheng | Xinbing Liang | Yaying Fei | Yuheng Cheng | Yongxin Ni | Zhibin Gou | Zongze Xu | Yuyu Luo | Chenglin Wu
Findings of the Association for Computational Linguistics: ACL 2025
Large Language Model (LLM)-based agents have excelled in various domains but face significant challenges when applied to data science workflows due to their complex, multi-stage nature. Current LLM-based agents struggle with non-linear relationships, recursive dependencies, implicit data- and logic-dependent reasoning, and managing extensive context. In this paper, we introduce Data Interpreter, an LLM-based agent that addresses these challenges through hierarchical graph-based modeling to represent the complexity and a progressive strategy for step-by-step verification, refinement, and consistent context management. Extensive experiments confirm the effectiveness of Data Interpreter. On InfiAgent-DABench, it boosts performance by 25% (from 75.9% to 94.9%), and on machine learning and open-ended tasks, it lifts accuracy from 88% to 95% and from 60% to 97%, respectively. Moreover, our method surpasses state-of-the-art baselines by 26% on the MATH dataset. We will release the code upon publication.
2024
Analyzing the Role of Semantic Representations in the Era of Large Language Models
Zhijing Jin | Yuen Chen | Fernando Gonzalez Adauto | Jiarui Liu | Jiayi Zhang | Julian Michael | Bernhard Schölkopf | Mona Diab
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Traditionally, natural language processing (NLP) models often use a rich set of features created by linguistic expertise, such as semantic representations. However, in the era of large language models (LLMs), more and more tasks are turned into generic, end-to-end sequence generation problems. In this paper, we investigate the question: what is the role of semantic representations in the era of LLMs? Specifically, we investigate the effect of Abstract Meaning Representation (AMR) across five diverse NLP tasks. We propose an AMR-driven chain-of-thought prompting method, which we call AMRCOT, and find that it generally hurts performance more than it helps. To investigate what AMR may have to offer on these tasks, we conduct a series of analysis experiments. We find that it is difficult to predict which input examples AMR may help or hurt on, but errors tend to arise with multi-word expressions, named entities, and in the final inference step where the LLM must connect its reasoning over the AMR to its prediction. We recommend focusing on these areas for future work in semantic representations for LLMs. Our code: https://github.com/causalNLP/amr_llm
2022
C3KG: A Chinese Commonsense Conversation Knowledge Graph
Dawei Li | Yanran Li | Jiayi Zhang | Ke Li | Chen Wei | Jianwei Cui | Bin Wang
Findings of the Association for Computational Linguistics: ACL 2022
Existing commonsense knowledge bases often organize tuples in an isolated manner, which makes it difficult for commonsense conversational models to plan the next steps. To fill this gap, we curate a large-scale multi-turn human-written conversation corpus and create the first Chinese commonsense conversation knowledge graph, which incorporates both social commonsense knowledge and dialog flow information. To show the potential of our graph, we develop a graph-conversation matching approach and benchmark two graph-grounded conversational tasks. All the resources in this work will be released to foster future research.
2020
Focus-Constrained Attention Mechanism for CVAE-based Response Generation
Zhi Cui | Yanran Li | Jiayi Zhang | Jianwei Cui | Chen Wei | Bin Wang
Findings of the Association for Computational Linguistics: EMNLP 2020
To model diverse responses for a given post, one promising way is to introduce a latent variable into Seq2Seq models. The latent variable is supposed to capture discourse-level information and encourage the informativeness of target responses. However, such discourse-level information is often too coarse for the decoder to utilize. To address this, our idea is to transform the coarse-grained discourse-level information into fine-grained word-level information. Specifically, we first measure the semantic concentration of the corresponding target response on the post words by introducing a fine-grained focus signal. Then, we propose a focus-constrained attention mechanism that takes full advantage of the focus signal to better align the input with the target response. The experimental results demonstrate that by exploiting the fine-grained signal, our model can generate more diverse and informative responses than several state-of-the-art models.