2024
AUTOGEN STUDIO: A No-Code Developer Tool for Building and Debugging Multi-Agent Systems
Victor Dibia | Jingya Chen | Gagan Bansal | Suff Syed | Adam Fourney | Erkang Zhu | Chi Wang | Saleema Amershi
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: System Demonstrations
Multi-agent systems, where multiple agents (generative AI models + tools) collaborate, are emerging as an effective pattern for solving long-running, complex tasks in numerous domains. However, specifying their parameters (such as models, tools, and orchestration mechanisms) and debugging them remains challenging for most developers. To address this challenge, we present AUTOGEN STUDIO, a no-code developer tool for rapidly prototyping, debugging, and evaluating multi-agent workflows built upon the AUTOGEN framework. AUTOGEN STUDIO offers a web interface and a Python API for representing LLM-enabled agents using a declarative (JSON-based) specification. It provides an intuitive drag-and-drop UI for agent workflow specification, interactive evaluation and debugging of workflows, and a gallery of reusable agent components. We highlight four design principles for no-code multi-agent developer tools and contribute an open-source implementation. https://github.com/microsoft/autogen/tree/autogenstudio/samples/apps/autogen-studio
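As a rough illustration of what a declarative, JSON-based agent workflow specification can look like, the Python sketch below assembles one as a plain dictionary and serializes it. The field names (name, agents, llm_config, orchestration, and so on) are illustrative assumptions for this sketch, not the exact AUTOGEN STUDIO schema.

```python
import json

# Hypothetical sketch of a declarative multi-agent workflow specification.
# Field names are illustrative assumptions, not the exact AUTOGEN STUDIO schema.
workflow = {
    "name": "research_workflow",
    "description": "A user proxy paired with an LLM assistant that can use tools.",
    "agents": [
        {
            "type": "userproxy",
            "name": "user_proxy",
            "human_input_mode": "NEVER",   # run without asking for human input
            "code_execution": True,        # allow executing generated code
        },
        {
            "type": "assistant",
            "name": "research_assistant",
            "llm_config": {"model": "gpt-4", "temperature": 0.2},
            "system_message": "You are a helpful research assistant.",
            "tools": ["web_search", "summarize"],
        },
    ],
    "orchestration": "two_agent_chat",     # how turns are coordinated
    "max_turns": 10,
}

# Serializing to JSON makes the workflow easy to store, share, or edit in a UI.
print(json.dumps(workflow, indent=2))
```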
2023
Aligning Offline Metrics and Human Judgments of Value for Code Generation Models
Victor Dibia | Adam Fourney | Gagan Bansal | Forough Poursabzi-Sangdeh | Han Liu | Saleema Amershi
Findings of the Association for Computational Linguistics: ACL 2023
Large language models have demonstrated great potential to assist programmers in generating code. For such human-AI pair programming scenarios, we empirically demonstrate that while generated code is most often evaluated in terms of functional correctness (i.e., whether generations pass available unit tests), correctness does not fully capture (and may underestimate) the productivity gains these models may provide. Through a user study with N=49 experienced programmers, we show that while correctness captures high-value generations, programmers still rate code that fails unit tests as valuable if it reduces the overall effort needed to complete a coding task. Finally, we propose a hybrid metric that combines functional correctness and syntactic similarity and show that it achieves a 14% stronger correlation with value and can therefore better represent real-world gains when evaluating and comparing models.
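To make the idea of a hybrid metric concrete, the sketch below combines a pass/fail correctness signal with an edit-based similarity score as a simple weighted sum. The weighting, and the use of difflib's sequence matcher as the similarity measure, are assumptions for illustration only and are not the paper's exact formulation.

```python
import difflib

def syntactic_similarity(generated: str, reference: str) -> float:
    """Edit-based similarity in [0, 1] between generated and reference code.

    difflib's ratio is a stand-in here; the paper's similarity measure may differ.
    """
    return difflib.SequenceMatcher(None, generated, reference).ratio()

def hybrid_score(passes_tests: bool, generated: str, reference: str,
                 weight: float = 0.5) -> float:
    """Blend functional correctness with syntactic similarity (illustrative only).

    Correct code earns full credit from the correctness term, while code that
    fails tests can still earn partial credit if it is close to a working
    reference and therefore cheap for a programmer to repair.
    """
    correctness = 1.0 if passes_tests else 0.0
    similarity = syntactic_similarity(generated, reference)
    return weight * correctness + (1.0 - weight) * similarity

# A generation that fails its tests but is nearly correct still scores above zero.
print(hybrid_score(False, "def add(a, b): return a - b", "def add(a, b): return a + b"))
```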
2021
Do Explanations Help Users Detect Errors in Open-Domain QA? An Evaluation of Spoken vs. Visual Explanations
Ana Valeria González | Gagan Bansal | Angela Fan | Yashar Mehdad | Robin Jia | Srinivasan Iyer
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021
2014
Hierarchical Summarization: Scaling Up Multi-Document Summarization
Janara Christensen | Stephen Soderland | Gagan Bansal | Mausam
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)