Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)
Pushkar Mishra | Smaranda Muresan | Tao Yu
MapQaTor: An Extensible Framework for Efficient Annotation of Map-Based QA Datasets
Mahir Labib Dihan | Mohammed Eunus Ali | Md Rizwan Parvez
Mapping and navigation services like Google Maps, Apple Maps, and OpenStreetMap are essential for accessing various location-based data, yet they often struggle to handle natural language geospatial queries. Recent advancements in Large Language Models (LLMs) show promise in question answering (QA), but creating reliable geospatial QA datasets from map services remains challenging. We introduce MapQaTor, an extensible open-source framework that streamlines the creation of reproducible, traceable map-based QA datasets. MapQaTor enables seamless integration with any maps API, allowing users to gather and visualize data from diverse sources with minimal setup. By caching API responses, the platform ensures consistent ground truth, enhancing the reliability of the data even as real-world information evolves. MapQaTor centralizes data retrieval, annotation, and visualization within a single platform, offering a unique opportunity to evaluate the current state of LLM-based geospatial reasoning while advancing their capabilities for improved geospatial understanding. Evaluation metrics show that MapQaTor speeds up the annotation process by at least 30 times compared to manual methods, underscoring its potential for developing geospatial resources, such as complex map reasoning datasets. The website is live at https://mapqator.github.io/ and a demo video is available at https://youtu.be/bVv7-NYRsTw.
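The response-caching idea behind MapQaTor's reproducibility claim can be pictured in a few lines. Below is a minimal sketch, assuming a generic maps API call passed in as `fetch`; the names are illustrative, not MapQaTor's actual interface.

```python
# Minimal sketch of response caching for a stable ground truth: identical
# requests are served from a local store after the first live call, so
# annotations stay reproducible even as the real-world map data changes.
import hashlib
import json

_cache: dict[str, dict] = {}

def cached_maps_call(endpoint: str, params: dict, fetch) -> dict:
    """Serve identical (endpoint, params) requests from the cache."""
    key = hashlib.sha256(
        json.dumps({"endpoint": endpoint, "params": params}, sort_keys=True).encode()
    ).hexdigest()
    if key not in _cache:
        _cache[key] = fetch(endpoint, params)  # hits the live API only once
    return _cache[key]
```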
PEIRCE: Unifying Material and Formal Reasoning via LLM-Driven Neuro-Symbolic Refinement
Xin Quan | Marco Valentino | Danilo Carvalho | Dhairya Dalal | Andre Freitas
A persistent challenge in AI is the effective integration of material and formal inference - the former concerning the plausibility and contextual relevance of arguments, the latter their logical and structural validity. Large Language Models (LLMs), by virtue of their extensive pre-training on large textual corpora, exhibit strong capabilities in material inference. However, their reasoning often lacks formal rigour and verifiability. At the same time, LLMs’ linguistic competence positions them as a promising bridge between natural and formal languages, opening up new opportunities for combining these two modes of reasoning. In this paper, we introduce PEIRCE, a neuro-symbolic framework designed to unify material and formal inference through an iterative conjecture–criticism process. Within this framework, LLMs play the central role of generating candidate solutions in natural and formal languages, which are then evaluated and refined via interaction with external critique models. These critiques include symbolic provers, which assess formal validity, as well as soft evaluators that measure the quality of the generated arguments along linguistic and epistemic dimensions such as plausibility, coherence, and parsimony. While PEIRCE is a general-purpose framework, we demonstrate its capabilities in the domain of natural language explanation generation - a setting that inherently demands both material adequacy and formal correctness.
MERaLiON-AudioLLM: Advancing Speech and Language Understanding for Singapore
Yingxu He | Zhuohan Liu | Geyu Lin | Shuo Sun | Bin Wang | Wenyu Zhang | Xunlong Zou | Nancy F. Chen | AiTi Aw
We introduce MERaLiON-AudioLLM, the first general-purpose audio-based large language model designed for multitask learning, with a particular focus on Singlish understanding. Trained on 62 million multimodal instruction samples comprising a total of 260k hours of audio, it exhibits strong generalization across a diverse set of tasks, including—but not limited to—automatic speech recognition, spoken question answering, speech translation, and paralinguistic analysis. Our results show significant improvements in local speech recognition and task-specific understanding, making MERaLiON-AudioLLM a leading solution for region-specific AI applications. An interactive demo has been developed to enable user-friendly interactions, supported by a backend with customized caching and load-balancing mechanisms. We benchmark the model across a broad range of multilingual and multitask scenarios, where it demonstrates competitive performance compared to other open-source models. The demo page, model weights and videos are publicly accessible.
NameTag 3: A Tool and a Service for Multilingual/Multitagset NER
Jana Straková | Milan Straka
We introduce NameTag 3, an open-source tool and cloud-based web service for multilingual, multidataset, and multitagset named entity recognition (NER), supporting both flat and nested entities. NameTag 3 achieves state-of-the-art results on 21 test datasets in 15 languages and remains competitive on the rest, even against larger models. It is available as a command-line tool and as a cloud-based service, enabling use without local installation. NameTag 3 web service currently provides flat NER for 17 languages, trained on 21 corpora and three NE tagsets, all powered by a single 355M-parameter fine-tuned model; and nested NER for Czech, powered by a 126M fine-tuned model. The source code is licensed under open-source MPL 2.0, while the models are distributed under non-commercial CC BY-NC-SA 4.0. Documentation is available at https://ufal.mff.cuni.cz/nametag, source code at https://github.com/ufal/nametag3, and trained models via https://lindat.cz. The REST service and the web application can be found at https://lindat.mff.cuni.cz/services/nametag/. A demonstration video is available at https://www.youtube.com/watch?v=-gaGnP0IV8A.
Multi-Programming Language Sandbox for LLMs
Shihan Dou | Jiazheng Zhang | Jianxiang Zang | Yunbo Tao | Weikang Zhou | Haoxiang Jia | Shichun Liu | Yuming Yang | Shenxi Wu | Zhiheng Xi | Muling Wu | Rui Zheng | Changze Lv | Limao Xiong | Shaoqing Zhang | Lin Zhang | Wenyu Zhan | Rongxiang Weng | Jingang Wang | Xunliang Cai | Yueming Wu | Ming Wen | Yixin Cao | Tao Gui | Xipeng Qiu | Qi Zhang | Xuanjing Huang
We introduce MPLSandbox, an out-of-the-box multi-programming language sandbox designed to provide unified and comprehensive feedback from compilers and analysis tools to Large Language Models (LLMs). It automatically identifies the programming language of a piece of code and compiles and executes it within an isolated sub-sandbox to ensure safety and stability. In addition, MPLSandbox integrates both traditional and LLM-based code analysis tools, providing a comprehensive analysis of generated code. It can also be effortlessly integrated into the training and deployment of LLMs to improve the quality and correctness of generated code, and it helps researchers streamline their workflows for various LLM-based code-related tasks, reducing development costs. To validate the effectiveness of MPLSandbox, we conduct extensive experiments by integrating it into several training and deployment scenarios and employing it to optimize workflows for a wide range of downstream code tasks. Our goal is to enhance researcher productivity on LLM-based code tasks by simplifying and automating workflows through delegation to MPLSandbox.
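As a rough picture of the execute-and-report loop such a sandbox provides, here is a hedged sketch: it guesses a snippet's language, runs it in a subprocess with a timeout, and returns unified feedback. The `RUNNERS` mapping and the heuristic detector are assumptions, and a real sandbox would add genuine isolation (e.g., containers).

```python
# Sketch only: run a snippet and return unified feedback for an LLM.
# Not MPLSandbox's API, and a subprocess alone is not real isolation.
import subprocess
import tempfile

RUNNERS = {"python": ["python3"], "javascript": ["node"]}  # assumed mapping

def guess_language(code: str) -> str:
    # Placeholder heuristic; real detection is more sophisticated.
    return "python" if "def " in code or "import " in code else "javascript"

def run_sandboxed(code: str, timeout: float = 5.0) -> dict:
    lang = guess_language(code)
    with tempfile.NamedTemporaryFile("w", suffix=".src", delete=False) as f:
        f.write(code)
        path = f.name
    # Raises subprocess.TimeoutExpired if the snippet hangs past `timeout`.
    proc = subprocess.run(RUNNERS[lang] + [path], capture_output=True,
                          text=True, timeout=timeout)
    return {"language": lang, "exit_code": proc.returncode,
            "stdout": proc.stdout, "stderr": proc.stderr}
```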
FlagEvalMM: A Flexible Framework for Comprehensive Multimodal Model Evaluation
Zheqi He | Yesheng Liu | Jing-Shu Zheng | Xuejing Li | Jin-Ge Yao | Bowen Qin | Richeng Xuan | Xi Yang
We present FlagEvalMM, an open-source evaluation framework designed to comprehensively assess multimodal models across a diverse range of vision-language understanding and generation tasks, such as visual question answering, text-to-image/video generation, and image-text retrieval. We decouple model inference from evaluation through an independent evaluation service, thus enabling flexible resource allocation and seamless integration of new tasks and models. Moreover, FlagEvalMM utilizes advanced inference acceleration tools (e.g., vLLM, SGLang) and asynchronous data loading to significantly enhance evaluation efficiency. Extensive experiments show that FlagEvalMM offers accurate and efficient insights into model strengths and limitations, making it a valuable tool for advancing multimodal research. The framework is publicly accessible at https://github.com/flageval-baai/FlagEvalMM, with a demonstration video available at https://youtu.be/L7EtacjoM0k.
My Climate CoPilot: A Question Answering System for Climate Adaptation in Agriculture
Vincent Nguyen | Willow Hallgren | Ashley Harkin | Mahesh Prakash | Sarvnaz Karimi
Accurately answering climate science questions requires scientific literature and climate data. Interpreting climate literature and data, however, presents inherent challenges, such as determining relevant climate factors and drivers, interpreting uncertainties in the science and data, and dealing with the sheer volume of data. My Climate CoPilot is a platform that assists a range of potential users, such as farmer advisors, in mitigating and adapting to projected climate changes by providing answers to questions that are grounded in evidence. It emphasises transparency, user privacy, and low-resource use, and provides automatic evaluation. It also strives for scientific robustness and accountability. Fifty domain experts carefully evaluated every aspect of My Climate CoPilot, and based on their interactions and feedback, the system continues to evolve.
SPOT: Bridging Natural Language and Geospatial Search for Investigative Journalism
Lynn Khellaf | Ipek Baris Schlicht | Tilman Mirass | Julia Bayer | Tilman Wagner | Ruben Bouwmeester
OpenStreetMap (OSM) is a vital resource for investigative journalists performing geolocation verification. However, existing tools for querying OSM data, such as Overpass Turbo, require familiarity with complex query languages, creating barriers for non-technical users. We present SPOT, an open-source natural language interface that makes OSM’s rich, tag-based geographic data more accessible through intuitive scene descriptions. SPOT interprets user inputs as structured representations of geospatial object configurations using fine-tuned Large Language Models (LLMs), with results displayed in an interactive map interface. While more general geospatial search tasks are conceivable, SPOT is specifically designed for use in investigative journalism, addressing real-world challenges such as hallucinations in model output, inconsistencies in OSM tagging, and the noisy nature of user input. It combines a novel synthetic data pipeline with a semantic bundling system to enable robust, accurate query generation. To our knowledge, SPOT is the first system to achieve reliable natural language access to OSM data at this level of accuracy. By lowering the technical barrier to geolocation verification, SPOT contributes a practical tool to the broader efforts to support fact-checking and combat disinformation.
Textagon: Boosting Language Models with Theory-guided Parallel Representations
John P. Lalor | Ruiyang Qin | David Dobolyi | Ahmed Abbasi
Pretrained language models have significantly advanced the state of the art in generating distributed representations of text. However, they do not account for the wide variety of available expert-generated language resources and lexicons that explicitly encode linguistic/domain knowledge. Such lexicons can be paired with learned embeddings to further enhance NLP prediction and linguistic inquiry. In this work we present Textagon, a Python package for generating parallel representations of text based on predefined lexicons and for selecting the representations that provide the most information. We discuss the motivation behind the software and its implementation, as well as two case studies that demonstrate its operational utility.
GigaChat Family: Efficient Russian Language Modeling Through Mixture of Experts Architecture
Valentin Mamedov | Evgenii Kosarev | Gregory Leleytner | Ilya Shchuckin | Valeriy Berezovskiy | Daniil Smirnov | Dmitry Kozlov | Sergei Averkiev | Lukyanenko Ivan | Aleksandr Proshunin | Ainur Israfilova | Ivan Baskov | Artem Chervyakov | Emil Shakirov | Mikhail Kolesov | Daria Khomich | Daria Latortseva | Sergei Porkhun | Yury Fedorov | Oleg Kutuzov | Polina Kudriavtseva | Sofiia Soldatova | Kolodin Egor | Stanislav Pyatkin | Dzmitry Menshykh | Grafov Sergei IUrevich | Eldar Damirov | Vladimir Karlov | Ruslan Gaitukiev | Arkadiy Shatenov | Alena Fenogenova | Nikita Savushkin | Fedor Minkin
Generative large language models (LLMs) have become crucial for modern NLP research and applications across various languages. However, the development of foundational models specifically tailored to the Russian language has been limited, primarily due to the significant computational resources required. This paper introduces the GigaChat family of Russian LLMs, available in various sizes, including base models and instruction-tuned versions. We provide a detailed report on the model architecture, pre-training process, and experiments to guide design choices. In addition, we evaluate their performance on Russian and English benchmarks and compare GigaChat with multilingual analogs. The paper presents a system demonstration of the top-performing models accessible via an API, a Telegram bot, and a Web interface. Furthermore, we have released three GigaChat models as open source, aiming to expand NLP research opportunities and support the development of industrial solutions for the Russian language.
Unifying Language Agent Algorithms with Graph-based Orchestration Engine for Reproducible Agent Research
Qianqian Zhang | Jiajia Liao | Heting Ying | Yibo Ma | Haozhan Shen | Jingcheng Li | Peng Liu | Lu Zhang | Chunxin Fang | Kyusong Lee | Ruochen Xu | Tiancheng Zhao
Language agents powered by large language models (LLMs) have demonstrated remarkable capabilities in understanding, reasoning, and executing complex tasks. However, developing robust agents presents significant challenges: substantial engineering overhead, lack of standardized components, and insufficient evaluation frameworks for fair comparison. We introduce Agent Graph-based Orchestration for Reasoning and Assessment (AGORA), a flexible and extensible framework that addresses these challenges through three key contributions: (1) a modular architecture with a graph-based workflow engine, efficient memory management, and clean component abstraction; (2) a comprehensive suite of reusable agent algorithms implementing state-of-the-art reasoning approaches; and (3) a rigorous evaluation framework enabling systematic comparison across multiple dimensions. Through extensive experiments on mathematical reasoning and multimodal tasks, we evaluate various agent algorithms across different LLMs, revealing important insights about their relative strengths and applicability. Our results demonstrate that while sophisticated reasoning approaches can enhance agent capabilities, simpler methods like Chain-of-Thought often exhibit robust performance with significantly lower computational overhead. AGORA not only simplifies language agent development but also establishes a foundation for reproducible agent research through standardized evaluation protocols. We made a demo video at: https://www.youtube.com/watch?v=WRH-F1zegKI. A comparison of agent algorithms is also available at https://huggingface.co/spaces/omlab/open-agent-leaderboard. Source code of AGORA can be found at https://github.com/om-ai-lab/OmAgent.
Abacus-SQL: A Text-to-SQL System Empowering Cross-Domain and Open-Domain Database Retrieval
Keyan Xu | Dingzirui Wang | Xuanliang Zhang | Qingfu Zhu | Wanxiang Che
Existing text-to-SQL systems have made significant progress in SQL query generation, but they still face numerous challenges. They often lack retrieval capabilities for open-domain databases, requiring users to manually filter relevant databases. Additionally, their cross-domain transferability is limited, making it challenging to accommodate diverse query requirements. To address these issues, we propose Abacus-SQL. Abacus-SQL utilizes database retrieval technology to accurately locate the required databases in an open-domain database environment. It also enhances the system’s cross-domain transferability through data augmentation methods. Moreover, Abacus-SQL employs Pre-SQL and Self-debug methods, thereby enhancing the accuracy of SQL queries. Experimental results demonstrate that Abacus-SQL performs excellently on multi-turn text-to-SQL tasks, validating the effectiveness of the approach. Abacus-SQL is publicly accessible at https://huozi.8wss.com/abacus-sql/.
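To make the Self-debug idea concrete, here is a minimal sketch of a generic execute-and-revise loop of the kind the abstract describes; `generate_sql` stands in for an arbitrary LLM call and none of this is Abacus-SQL's actual code.

```python
# Hedged sketch of a "self-debug" loop: execute candidate SQL and, on
# failure, feed the error back to the model for a repaired attempt.
import sqlite3

def self_debug_sql(question: str, db_path: str, generate_sql, max_rounds: int = 3):
    feedback = ""
    sql = ""
    for _ in range(max_rounds):
        sql = generate_sql(question, feedback)  # LLM proposes (or repairs) SQL
        try:
            with sqlite3.connect(db_path) as conn:
                return sql, conn.execute(sql).fetchall()
        except sqlite3.Error as err:
            feedback = f"Previous SQL failed: {sql!r} -> {err}"
    return sql, None  # unresolved after max_rounds
```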
Tulun: Transparent and Adaptable Low-resource Machine Translation
Raphael Merx | Hanna Suominen | Lois Yinghui Hong | Nick Thieberger | Trevor Cohn | Ekaterina Vylomova
Machine translation (MT) systems that support low-resource languages often struggle on specialized domains. While researchers have proposed various techniques for domain adaptation, these approaches typically require model fine-tuning, making them impractical for non-technical users and small organizations. To address this gap, we propose Tulun, a versatile solution for terminology-aware translation, combining neural MT with large language model (LLM)-based post-editing guided by existing glossaries and translation memories. Our open-source web-based platform enables users to easily create, edit, and leverage terminology resources, fostering a collaborative human-machine translation process that respects and incorporates domain expertise while increasing MT accuracy. Evaluations show effectiveness in both real-world and benchmark scenarios: on medical and disaster relief translation tasks for Tetun and Bislama, our system achieves improvements of 16.90-22.41 ChrF++ points over baseline MT systems. Across six low-resource languages on the FLORES dataset, Tulun outperforms both standalone MT and LLM approaches, achieving an average improvement of 2.8 ChrF++ points over NLLB-54B. Tulun is publicly accessible at https://bislama-trans.rapha.dev.
ReasonGraph: Visualization of Reasoning Methods and Extended Inference Paths
Zongqian Li | Ehsan Shareghi | Nigel Collier
Large Language Model (LLM) reasoning processes are challenging to analyze due to their complexity and the lack of organized visualization tools. We present ReasonGraph, a web-based platform for visualizing and analyzing LLM reasoning processes. It supports both sequential and tree-based reasoning methods and extended inference outputs while integrating with major LLM providers and over fifty state-of-the-art models. ReasonGraph incorporates an intuitive UI with meta reasoning method selection, configurable visualization parameters, and a modular framework that facilitates efficient extension. Our evaluation shows high parsing reliability, efficient processing, and excellent usability across various downstream applications. By providing a unified visualization framework, ReasonGraph reduces cognitive load in analyzing complex reasoning paths, improves error identification in logical processes, and enables more effective development of LLM-based applications. The platform is open-source, facilitating accessibility and reproducibility in LLM reasoning analysis.
Dia-Lingle: A Gamified Interface for Dialectal Data Collection
Jiugeng Sun | Rita Sevastjanova | Sina Ahmadi | Rico Sennrich | Mennatallah El-Assady
Dialects suffer from a scarcity of computational textual resources, as they exist predominantly in spoken rather than written form and exhibit remarkable geographical diversity. Collecting dialect data and subsequently integrating it into current language technologies present significant obstacles. Gamification has been proven to facilitate remote data collection processes with great ease and on a substantially wider scale. This paper introduces Dia-Lingle, a gamified interface aimed at improving and facilitating dialectal data collection tasks such as corpus expansion and dialect labelling. The platform features two key components: the first challenges users to rewrite sentences in their dialect, identifies the dialect with a classifier, and solicits feedback; the second asks users to match sentences to their geographical locations. Dia-Lingle combines active learning with gamified difficulty levels, strategically encouraging prolonged user engagement while efficiently enriching the dialect corpus. Usability evaluation shows that our interface achieves high levels of user satisfaction. We provide the link to Dia-Lingle: https://dia-lingle.ivia.ch/, and a demo video: https://youtu.be/0QyJsB8ym64.
Token Level Routing Inference System for Edge Devices
Jianshu She | Wenhao Zheng | Zhengzhong Liu | Hongyi Wang | Eric P. Xing | Huaxiu Yao | Qirong Ho
The computational complexity of large language model (LLM) inference significantly constrains their deployment efficiency on edge devices. In contrast, small language models offer faster decoding and lower resource consumption but often suffer from degraded response quality and heightened susceptibility to hallucinations. To address this trade-off, collaborative decoding, in which a large model assists in generating critical tokens, has emerged as a promising solution. This paradigm leverages the strengths of both model types by enabling high-quality inference through selective intervention of the large model, while maintaining the speed and efficiency of the smaller model. In this work, we present a novel collaborative decoding inference system that allows small models to perform on-device inference while selectively consulting a cloud-based large model for critical token generation. Remarkably, the system achieves a 60% performance gain on CommonsenseQA using only a 0.5B model on an M1 MacBook, with under 7% of token generations uploaded to the large model in the cloud.
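The routing rule itself is compact. The sketch below shows confidence-based deferral from an on-device model to a cloud model; the `next_token` interface and the threshold are placeholder assumptions, not the paper's implementation.

```python
# Conceptual sketch of token-level routing: the small on-device model
# proposes each token and defers to the large cloud model only when its
# own confidence is low. Model objects implement an assumed interface
# next_token(text) -> (token, probability).
def route_decode(small_lm, large_lm, prompt: str,
                 max_tokens: int = 64, threshold: float = 0.5) -> str:
    text = prompt
    for _ in range(max_tokens):
        token, prob = small_lm.next_token(text)
        if prob < threshold:                      # low confidence: consult cloud
            token, _ = large_lm.next_token(text)  # "critical" token from large LM
        text += token
        if token == "<eos>":
            break
    return text
```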
The ShareLM Collection and Plugin: Contributing Human-Model Chats for the Benefit of the Community
Shachar Don-Yehiya | Leshem Choshen | Omri Abend
Human-model conversations provide a window into users’ real-world scenarios, behavior, and needs, and thus are a valuable resource for model development and research. While for-profit companies collect user data through the APIs of their models, using it internally to improve their own models, the open-source and research community lags behind. We introduce the ShareLM collection, a unified set of human conversations with large language models, and its accompanying plugin, a Web extension for voluntarily contributing user-model conversations. While few platforms share their chats, the ShareLM plugin adds this functionality, allowing users to share conversations from most platforms. The plugin allows the user to rate their conversations, both at the conversation and the response levels, and to delete conversations they prefer to keep private before they ever leave the user’s local storage.
OLMoTrace: Tracing Language Model Outputs Back to Trillions of Training Tokens
Jiacheng Liu | Taylor Blanton | Yanai Elazar | Sewon Min | Yen-Sung Chen | Arnavi Chheda-Kothary | Huy Tran | Byron Bischoff | Eric Marsh | Michael Schmitz | Cassidy Trier | Aaron Sarnat | Jenna James | Jon Borchardt | Bailey Kuehl | Evie Yu-Yen Cheng | Karen Farley | Taira Anderson | David Albright | Carissa Schoenick | Luca Soldaini | Dirk Groeneveld | Rock Yuren Pang | Pang Wei Koh | Noah A. Smith | Sophie Lebrecht | Yejin Choi | Hannaneh Hajishirzi | Ali Farhadi | Jesse Dodge
We present OLMoTrace, the first system that traces the outputs of language models back to their full, multi-trillion-token training data in real time. OLMoTrace finds and shows verbatim matches between segments of language model output and documents in the training text corpora. Powered by an extended version of infini-gram (Liu et al., 2024), our system returns tracing results within a few seconds. OLMoTrace can help users understand the behavior of language models through the lens of their training data. We showcase how it can be used to explore fact checking, hallucination, and the creativity of language models. OLMoTrace is publicly available and fully open-source.
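As a toy illustration of verbatim tracing (OLMoTrace itself relies on an extended infini-gram index over trillions of tokens, so this only conveys the idea at whiteboard scale):

```python
# Toy verbatim tracing: index corpus n-grams, then report output spans
# that occur verbatim in the corpus, with their source document ids.
from collections import defaultdict

def build_index(corpus_docs: list[str], n: int = 8) -> dict:
    index = defaultdict(set)
    for doc_id, doc in enumerate(corpus_docs):
        toks = doc.split()
        for i in range(len(toks) - n + 1):
            index[" ".join(toks[i:i + n])].add(doc_id)
    return index

def trace(output: str, index: dict, n: int = 8) -> list[tuple[str, list[int]]]:
    toks = output.split()
    spans = [" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)]
    return [(s, sorted(index[s])) for s in spans if s in index]
```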
AutoAlign: Get Your LLM Aligned with Minimal Annotations
Xinyu Lu | Dong Xu | Chunkang Zhang | Xinyan Guan | Junxiang Wang | Qingyu Zhang | Pengbo Wang | Yingzhi Mao | Hao Xiang | Xueru Wen | Zichao Li | Yaojie Lu | Hongyu Lin | Le Sun | Xianpei Han
Automated Alignment refers to a set of algorithms designed to align Large Language Models (LLMs) with human intentions and values while minimizing manual intervention. However, it faces challenges such as algorithmic diversity and excessively convoluted workflows. We present AutoAlign, an open-source toolkit that offers: (1) a unified framework integrating mainstream automated algorithms through a consistent interface, and (2) an accessible workflow supporting one-click execution for prompt synthesis, automatic alignment signal construction, and iterative model training. Our toolkit enables easy reproduction of existing results through extensive benchmarks and facilitates the development of novel approaches via modular components. It includes implementations for both highly efficient inference and training, as well as low-resource training. By standardizing automated alignment methodologies and providing accessible implementations, AutoAlign lowers the barriers to building customized aligned models and supports academic research.
Know-MRI: A Knowledge Mechanisms Revealer&Interpreter for Large Language Models
Jiaxiang Liu | Boxuan Xing | Chenhao Yuan | ChenxiangZhang ChenxiangZhang | Di Wu | Xiusheng Huang | Haida Yu | Chuhan Lang | Pengfei Cao | Jun Zhao | Kang Liu
As large language models (LLMs) continue to advance, there is a growing urgency to enhance the interpretability of their internal knowledge mechanisms. Consequently, many interpretation methods have emerged, aiming to unravel the knowledge mechanisms of LLMs from various perspectives. However, current interpretation methods differ in input data formats and interpretation outputs. The tools integrating these methods are only capable of supporting tasks with specific inputs, significantly constraining their practical applications. To address these challenges, we present an open-source **Know**ledge **M**echanisms **R**evealer&**I**nterpreter (**Know-MRI**) designed to systematically analyze the knowledge mechanisms within LLMs. Specifically, we have developed an extensible core module that can automatically match different input data with interpretation methods and consolidate the interpretation outputs. It enables users to freely choose appropriate interpretation methods based on the inputs, making it easier to comprehensively diagnose the model’s internal knowledge mechanisms from multiple perspectives. Our code is available at https://github.com/nlpkeg/Know-MRI. We also provide a demonstration video at https://youtu.be/NVWZABJ43Bs.
AI4Reading: Chinese Audiobook Interpretation System Based on Multi-Agent Collaboration
Minjiang Huang | Jipeng Qiang | Yi Zhu | Chaowei Zhang | Xiangyu Zhao | Kui Yu
Audiobook interpretations are attracting increasing attention, as they provide accessible and in-depth analyses of books that offer readers practical insights and intellectual inspiration. However, their manual creation process remains time-consuming and resource-intensive. To address this challenge, we propose AI4Reading, a multi-agent collaboration system leveraging large language models (LLMs) and speech synthesis technology to generate podcast-like audiobook interpretations. The system is designed to meet three key objectives: accurate content preservation, enhanced comprehensibility, and a logical narrative structure. To achieve these goals, we develop a framework composed of 11 specialized agents—including topic analysts, case analysts, editors, a narrator, and proofreaders—that work in concert to explore themes, extract real-world cases, refine content organization, and synthesize natural spoken language. Comparing expert interpretations with our system’s output shows that although AI4Reading still has a gap in speech generation quality, its generated interpretative scripts are simpler and more accurate. The code of AI4Reading is publicly accessible, with a demonstration video available.
DEEP: an automatic bidirectional translator leveraging an ASR for translation of Italian sign language
Nicolas Tagliabue | Elisa Colletti | Francesco Roberto Dani | Roberto Tedesco | Sonia Cenceschi | Alessandro Trivilini
DEEP is a bidirectional translation system for Italian Sign Language (LIS), tailored to two specific, common use cases: pharmacies and the registry office of the municipality, for which a custom corpus has been collected. Two independent pipelines create a chat-like interaction style, in which the deaf subject simply signs in front of a camera and sees a virtual LIS interpreter, while the hearing subject reads and writes messages in a chat UI. The LIS-to-Italian pipeline leverages, in a novel way, a customized Whisper model (a well-known speech recognition system) by means of “pseudo-spectrograms”. The Italian-to-LIS pipeline leverages a virtual avatar created with Viggle.ai. DEEP has been evaluated with LIS signers, obtaining very promising results.
VenusFactory: An Integrated System for Protein Engineering with Data Retrieval and Language Model Fine-Tuning
Yang Tan | Chen Liu | Jingyuan Gao | Wu Banghao | Mingchen Li | Ruilin Wang | Lingrong Zhang | Huiqun Yu | Guisheng Fan | Liang Hong | Bingxin Zhou
Natural language processing (NLP) has significantly influenced scientific domains beyond human language, including protein engineering, where pre-trained protein language models (PLMs) have demonstrated remarkable success. However, interdisciplinary adoption remains limited due to challenges in data collection, task benchmarking, and application. This work presents VenusFactory, a versatile engine that integrates biological data retrieval, standardized task benchmarking, and modular fine-tuning of PLMs. VenusFactory supports both the computer science and biology communities, offering both command-line execution and a Gradio-based no-code interface, and integrating 40+ protein-related datasets and 40+ popular PLMs. All implementations are open-sourced at https://github.com/ai4protein/VenusFactory. A video introduction is available at https://www.youtube.com/watch?v=MT6lPH5kgCc.
GenGO Ultra: an LLM-powered ACL Paper Explorer
Sotaro Takeshita | Tornike Tsereteli | Simone Paolo Ponzetto
The ever-growing number of papers in natural language processing (NLP) poses the challenge of finding relevant papers. In our previous paper, we introduced GenGO, which complements NLP papers with various information, such as aspect-based summaries, to enable efficient paper exploration. While it delivers a better literature search experience, it lacks an interactive interface that dynamically produces information tailored to the user’s needs. To this end, we present an extension to our previous system, dubbed GenGO Ultra, which exploits large language models (LLMs) to dynamically generate responses grounded by published papers. We also conduct multi-granularity experiments to evaluate six text encoders and five LLMs. Our system is designed for transparency – based only on open-weight models, visible system prompts, and an open-source code base – to foster further development and research on top of our system: https://gengo-ultra.sotaro.io/
SpatialWebAgent: Leveraging Large Language Models for Automated Spatial Information Extraction and Map Grounding
Shunfeng Zheng | Meng Fang | Ling Chen
Understanding and extracting spatial information from text is vital for a wide range of applications, including geographic information systems (GIS), smart cities, disaster prevention, and logistics planning. This capability empowers decision-makers to gain crucial insights into geographic distributions and trends. However, the inherent complexity of geographic expressions in natural language presents significant hurdles for traditional extraction methods. These challenges stem from variations in place names, vague directional cues, and implicit spatial relationships. To address these challenges, we introduce SpatialWebAgent, an automated agent system that leverages large language models (LLMs). SpatialWebAgent is designed to extract, standardize, and ground spatial information from natural language text directly onto maps. Our system excels at handling the diverse and often ambiguous nature of geographic expressions—from varying place names and vague directions to implicit spatial relationships that demand flexible combinations of localization functions—by tapping into the powerful geospatial reasoning capabilities of LLMs. SpatialWebAgent employs a series of specialized tools to convert this extracted information into precise coordinates, which are then visualized on interactive maps. A demonstration of SpatialWebAgent is available at https://sites.google.com/view/SpatialWebAgent.
DocSpiral: A Platform for Integrated Assistive Document Annotation through Human-in-the-Spiral
Qiang Sun | Sirui Li | Tingting Bi | Du Q. Huynh | Mark Reynolds | Yuanyi Luo | Wei Liu
Acquiring structured data from domain-specific, image-based documents—such as scanned reports—is crucial for many downstream tasks but remains challenging due to document variability. Many of these documents exist as images rather than as machine-readable text, which requires human annotation to train automated extraction systems. We present DocSpiral, the first Human-in-the-Spiral assistive document annotation platform, designed to address the challenge of extracting structured information from domain-specific, image-based document collections. Our spiral design establishes an iterative cycle in which human annotations train models that progressively require less manual intervention. DocSpiral integrates document format normalization, comprehensive annotation interfaces, an evaluation metrics dashboard, and API endpoints for the development of AI/ML models into a unified workflow. Experiments demonstrate that our framework reduces annotation time by at least 41% while showing consistent performance gains across three iterations of model training. By making this annotation platform freely accessible, we aim to lower the barriers to AI/ML model development in document processing, facilitating the adoption of large language models in image-based, document-intensive fields such as geoscience and healthcare. The system is freely available at https://app.ai4wa.com, and a demonstration video at https://app.ai4wa.com/docs/docspiral/demo.
ADEPT-SQL: A High-performance Text-to-SQL Application for Real-World Enterprise-Level Databases
Yongnan Chen | Zhuo Chang | Shijia Gu | Yuanhang Zong | Zhang Mei | Shiyu Wang | Hezixiang Hezixiang | Hongzhi Chen | Jin Wei | Bin Cui
This paper presents Adept-SQL, a domain-adapted Text2SQL system that addresses critical deployment challenges in professional fields. While modern LLM-based solutions excel on academic benchmarks, we identify three persistent limitations in industrial application: domain-specific knowledge barriers, the complexity of real-world schemas, and the prohibitive computational costs of large LLMs. Our framework introduces two key innovations: a three-stage grounding mechanism combining dynamic terminology expansion, focused schema alignment, and historical query retrieval; coupled with a hybrid prompting architecture that decomposes SQL generation into schema-aware hinting, term disambiguation, and few-shot example incorporation phases. This approach enables efficient execution using smaller open-source LLMs while maintaining semantic precision. Deployed in petroleum engineering domains, our system achieves 97% execution accuracy on real-world databases, a 49% absolute improvement over SOTA baselines. We release our implementation code to advance research on professional Text2SQL systems.
LCDS: A Logic-Controlled Discharge Summary Generation System Supporting Source Attribution and Expert Review
Cheng Yuan | Xinkai Rui | Yongqi Fan | Yawei Fan | Boyang Zhong | Jiacheng Wang | Weiyan Zhang | Tong Ruan
Despite the remarkable performance of Large Language Models (LLMs) in automated discharge summary generation, they still suffer from generating inaccurate content or fabricating information without valid sources. To address these issues, we propose LCDS, a tool for empowering LLMs with Logic-Controlled Discharge Summary generation. LCDS constructs a source mapping table by calculating the textual similarity between electronic medical records (EMRs) and discharge summaries, providing a structured reference for generation. Based on a comprehensive set of logical rules, LCDS identifies the structured writing logic of discharge summaries and integrates it with EMRs to generate silver discharge summaries. Furthermore, LCDS traces the provenance of generated content, allowing experts to review, provide feedback, and rectify errors to produce golden discharge summaries, which are subsequently recorded for incremental fine-tuning of LLMs. Our project and demo video are in the GitHub repository https://github.com/ycycyc02/LCDS.
Revealing the Unwritten: Visual Investigation of Beam Search Trees to Address Language Model Prompting Challenges
Thilo Spinner | Rita Sevastjanova | Rebecca Kehlbeck | Tobias Stähle | Daniel A. Keim | Oliver Deussen | Andreas Spitz | Mennatallah El-Assady
The present popularity of generative language models has amplified interest in interactive methods to guide model outputs. Among these methods, prompt refinement is considered one of the most effective means of influencing output. We identify several challenges associated with prompting large language models, categorized into data- and model-specific, linguistic, and socio-linguistic challenges. A comprehensive examination of model outputs, including runner-up candidates and their corresponding probabilities, is needed to address these issues. The beam search tree, produced by the prevalent algorithm for sampling model outputs, can inherently supply this information. Consequently, we leverage an interactive visual method for investigating the beam search tree, facilitating analysis of the decisions made by the model during generation. Our explorative approach validates existing results and offers additional insights.
Scholar Inbox: Personalized Paper Recommendations for Scientists
Markus Flicke | Glenn Angrabeit | Madhav Iyengar | Vitalii Protsenko | Illia Shakun | Jovan Cicvaric | Bora Kargi | Haoyu He | Lukas Schuler | Lewin Scholz | Kavyanjali Agnihotri | Yong Cao | Andreas Geiger
Scholar Inbox is a new open-access platform designed to address the challenges researchers face in staying current with the rapidly expanding volume of scientific literature. We provide personalized recommendations, continuous updates from open-access archives (arXiv, bioRxiv, etc.), visual paper summaries, semantic search, and a range of tools to streamline research workflows and promote open research access. The platform’s personalized recommendation system is trained on user ratings, ensuring that recommendations are tailored to individual researchers’ interests. To further enhance the user experience, Scholar Inbox also offers a map of science that provides an overview of research across domains, enabling users to easily explore specific topics. We use this map to address the cold-start problem common in recommender systems, alongside an active learning strategy that iteratively prompts users to rate a selection of papers, allowing the system to learn user preferences quickly. We evaluate the quality of our recommendation system on a novel dataset of 800k user ratings, which we make publicly available, as well as via an extensive user study.
The Open Argument Mining Framework
Debela Gemechu | Ramon Ruiz-Dolz | Kamila Górska | Somaye Moslemnejad | Eimear Maguire | Dimitra Zografistou | Yohan Jo | John Lawrence | Chris Reed
Despite extensive research in Argument Mining (AM), the field faces significant challenges: limited reproducibility, difficulty in comparing systems due to varying task combinations, and a lack of interoperability caused by the heterogeneous nature of argumentation theory. These challenges are further exacerbated by the absence of dedicated tools, with most advancements remaining isolated research outputs rather than reusable systems. The oAMF (Open Argument Mining Framework) addresses these issues by providing an open-source, modular, and scalable platform that unifies diverse AM methods. Initially released with seventeen integrated modules, the oAMF serves as a starting point for researchers and developers to build, experiment with, and deploy AM pipelines while ensuring interoperability and allowing multiple theories of argumentation to co-exist within the same framework. Its flexible design supports integration via Python APIs, drag-and-drop tools, and web interfaces, streamlining AM development in research and industry settings and facilitating method comparison and reproducibility.
Bel Esprit: Multi-Agent Framework for Building AI Model Pipelines
Yunsu Kim | Ahmedelmogtaba Abdelaziz | Thiago Castro Ferreira | Mohamed Al-Badrashiny | Hassan Sawaf
As the demand for artificial intelligence (AI) grows to address complex real-world tasks, single models are often insufficient, requiring the integration of multiple models into pipelines. This paper introduces Bel Esprit, a conversational agent designed to construct AI model pipelines based on user requirements. Bel Esprit uses a multi-agent framework where subagents collaborate to clarify requirements, build, validate, and populate pipelines with appropriate models. We demonstrate its effectiveness in generating pipelines from ambiguous user queries, using both human-curated and synthetic data. A detailed error analysis highlights ongoing challenges in pipeline building. Bel Esprit is available for a free trial at https://belesprit.aixplain.com.
ZeroSumEval: An Extensible Framework For Scaling LLM Evaluation with Inter-Model Competition
Hisham Abdullah Alyahya | Haidar Khan | Yazeed Alnumay | M Saiful Bari | Bulent Yener
We introduce ZeroSumEval, a dynamic, competition-based, and evolving evaluation framework for Large Language Models (LLMs) that leverages competitive games. ZeroSumEval encompasses a diverse suite of games, including security challenges (Capture the Flag), classic board games (chess), and knowledge tests (MathQuiz). These games are designed to evaluate a range of capabilities such as strategic reasoning, planning, knowledge application, safety, and adaptability. Building upon recent studies that highlight the effectiveness of game-based evaluations for LLMs, ZeroSumEval enhances these approaches by providing a standardized and extensible framework for easily implementing games and leverages DSPy to provide a better abstraction for LLM player strategies.
DECAF: A Dynamically Extensible Corpus Analysis Framework
Max Müller-Eberstein | Rob Van Der Goot | Anna Rogers
The study of generalization in Language Models (LMs) requires controlled experiments that can precisely measure complex linguistic variations between training and testing datasets. We introduce DECAF, a framework that enables the analysis and filtering of linguistically-annotated datasets down to the character level. Rather than creating new resources for each experiment, DECAF starts from datasets with existing linguistic annotations, and leverages them to analyze, filter, and generate highly controlled and reproducible experimental settings targeting specific research questions. We demonstrate DECAF’s functionality by adding 28 morphosyntactic annotation layers to the 115M-word BabyLM corpus and indexing the resulting 1.1B annotations to analyze its internal domain variance, and to create a controlled training data curriculum for a small-scale gender bias study. We release DECAF as an open-source Python library, along with the parsed and indexed version of BabyLM, as resources for future generalization research.
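A minimal sketch of annotation-based filtering in this spirit, with an invented data layout and predicate rather than DECAF's real API:

```python
# Sketch: keep only sentences whose annotation layers satisfy a predicate,
# e.g. no passive-voice verbs, to build a controlled training set.
def filter_corpus(sentences, predicate):
    """sentences: iterable of (text, annotations) pairs, where annotations
    is a list of per-token dicts such as {"upos": "VERB", "voice": "Pass"}."""
    return [text for text, anns in sentences if predicate(anns)]

def no_passives(anns):
    return all(tok.get("voice") != "Pass" for tok in anns)

corpus = [
    ("The cat chased the dog.",
     [{"upos": "DET"}, {"upos": "NOUN"}, {"upos": "VERB", "voice": "Act"},
      {"upos": "DET"}, {"upos": "NOUN"}]),
    ("The dog was chased.",
     [{"upos": "DET"}, {"upos": "NOUN"}, {"upos": "AUX"},
      {"upos": "VERB", "voice": "Pass"}]),
]
print(filter_corpus(corpus, no_passives))  # -> ['The cat chased the dog.']
```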
Dialz: A Python Toolkit for Steering Vectors
Zara Siddique | Liam Turner | Luis Espinosa-Anke
We introduce *Dialz*, a Python library for advancing research on steering vectors for open-source LMs. Steering vectors allow users to modify activations at inference time to amplify or weaken a ‘concept’, e.g. honesty or positivity, providing a more powerful alternative to prompting or fine-tuning. Dialz supports a diverse set of tasks, including creating contrastive pair datasets, computing and applying steering vectors, and visualizations. Unlike existing libraries, Dialz emphasizes modularity and usability, enabling both rapid prototyping and in-depth analysis. We demonstrate how Dialz can be used to reduce harmful outputs such as stereotypes, while also providing insights into model behaviour across different layers. We release Dialz with full documentation, tutorials, and support for popular open-source models to encourage further research in safe and controllable language generation. Dialz enables faster research cycles and facilitates insights into model interpretability, paving the way for safer, more transparent, and more reliable AI systems.
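The contrastive steering-vector recipe Dialz implements can be summarized schematically in numpy; collecting the activations requires a model hook, which is assumed here, and none of these names are Dialz's documented API.

```python
# Schematic sketch of steering vectors: average the activation difference
# over contrastive prompt pairs at one layer, then add the scaled vector
# to hidden states at inference time.
import numpy as np

def steering_vector(pos_acts: np.ndarray, neg_acts: np.ndarray) -> np.ndarray:
    """pos_acts/neg_acts: (n_pairs, hidden_dim) activations for, e.g.,
    honest vs. dishonest completions of the same prompts."""
    return (pos_acts - neg_acts).mean(axis=0)

def steer(hidden: np.ndarray, vec: np.ndarray, alpha: float = 4.0) -> np.ndarray:
    # Positive alpha amplifies the concept; negative alpha weakens it.
    return hidden + alpha * vec
```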
FORG3D: Flexible Object Rendering for Generating Vision-Language Spatial Reasoning Data from 3D Scenes
Oscar Pang | Freda Shi
We introduce FORG3D, a 3D rendering toolkit developed with Blender and Python, which synthesizes vision-language data for two primary purposes: (1) supporting human cognitive experiments that require fine-grained control over material, and (2) analyzing and improving the visual reasoning capabilities of large vision-language models. The toolkit provides flexible and precise control over object placement, orientation, inter-object distances, and camera configurations while automatically generating detailed spatial metadata. Additionally, it includes a built-in feature for integrating AI-generated backgrounds, enhancing the realism of synthetic scenes. FORG3D is publicly available at https://github.com/compling-wat/FORG3D, and a video demonstration is available at https://www.youtube.com/watch?v=QvIqib_PU8A.
RT-VC: Real-Time Zero-Shot Voice Conversion with Speech Articulatory Coding
Yisi Liu | Chenyang Wang | Hanjo Kim | Raniya Khan | Gopala Anumanchipalli
Voice conversion has emerged as a pivotal technology in numerous applications ranging from assistive communication to entertainment. In this paper, we present RT-VC, a zero-shot real-time voice conversion system that delivers ultra-low latency and high-quality performance. Our approach leverages an articulatory feature space to naturally disentangle content and speaker characteristics, facilitating more robust and interpretable voice transformations. Additionally, the integration of differentiable digital signal processing (DDSP) enables efficient vocoding directly from articulatory features, significantly reducing conversion latency. Experimental evaluations demonstrate that, while maintaining synthesis quality comparable to the current state-of-the-art (SOTA) method, RT-VC achieves a CPU latency of 61.4 ms, representing a 13.3% reduction in latency.
Live Football Commentary System Providing Background Information
Yuichiro Mori | Chikara Tanaka | Aru Maekawa | Satoshi Kosugi | Tatsuya Ishigaki | Kotaro Funakoshi | Hiroya Takamura | Manabu Okumura
Previous research on sports commentary generation has primarily focused on describing major events in the match. However, real-world commentary often includes comments beyond what is visible in the video content, e.g., “Florentina has acquired him for 7 million euros.” To enhance the viewing experience with such background information, we developed an audio commentary system for football matches that generates utterances with background information as well as play-by-play commentary. Our system first extracts visual information and determines whether it is an appropriate time to produce an utterance. It then decides which type of utterance to generate: play-by-play or background information. In the latter case, the system leverages external knowledge through retrieval-augmented generation.
GreaterPrompt: A Unified, Customizable, and High-Performing Open-Source Toolkit for Prompt Optimization
Wenliang Zheng | Sarkar Snigdha Sarathi Das | Yusen Zhang | Rui Zhang
LLMs have gained immense popularity among researchers and the general public for their impressive capabilities on a variety of tasks. Notably, the efficacy of LLMs remains significantly dependent on the quality and structure of the input prompts, making prompt design a critical factor in their performance. Recent advancements in automated prompt optimization have introduced diverse techniques that automatically enhance prompts to better align model outputs with user expectations. However, these methods often suffer from a lack of standardization and compatibility across techniques, limited flexibility in customization, and inconsistent performance across model scales, and they often rely exclusively on expensive proprietary LLM APIs. To fill this gap, we introduce GreaterPrompt, a novel framework that democratizes prompt optimization by bringing diverse methods together under a unified, customizable API while delivering highly effective prompts for different tasks. Our framework flexibly accommodates various model scales by leveraging both text feedback-based optimization for larger LLMs and internal gradient-based optimization for smaller models to achieve powerful and precise prompt improvements. Moreover, we provide a user-friendly Web UI that ensures accessibility for non-expert users, enabling broader adoption and enhanced performance across various user groups and application scenarios. GreaterPrompt is available at https://github.com/psunlpgroup/GreaterPrompt via GitHub, PyPI, and web user interfaces.
iPET: An Interactive Emotional Companion Dialogue System with LLM-Powered Virtual Pet World Simulation
Zheyong Xie | Shaosheng Cao | Zuozhu Liu | Zheyu Ye | Zihan Niu | Chonggang Lu | Tong Xu | Enhong Chen | Zhe Xu | Yao Hu | Wei Lu
The rapid advancement of large language models (LLMs) has unlocked transformative potential for role-playing emotional companion products, enabling systems that support emotional well-being, educational development, and therapeutic applications. However, existing approaches often lack sustained personalization and contextual adaptability, limiting their effectiveness in real-world settings. In this paper, we introduce iPET, an LLM-powered virtual pet agent designed to enhance user engagement through rich, dynamic pet behaviors and interactions tailored to individual preferences. iPET comprises three core components: a dialogue module that instantiates virtual pet agents for emotionally interactive conversations; a memory module that stores and synthesizes records of both agent and user experiences; and a world simulation module that generates diverse, preference-driven pet behaviors guided by high-level reflections. Deployed for over 200 days in a real-world, non-commercial product, iPET has served millions of users – providing emotional support to psychologically distressed individuals and demonstrating its effectiveness in practical applications.
LiDARR: Linking Document AMRs with Referents Resolvers
Jon Cai | Kristin Wright-Bettner | Zekun Zhao | Shafiuddin Rehan Ahmed | Abijith Trichur Ramachandran | Jeffrey Flanigan | Martha Palmer | James Martin
In this paper, we present LiDARR (**Li**nking **D**ocument **A**MRs with **R**eferents **R**esolvers), a web tool for semantic annotation at the document level using the formalism of Abstract Meaning Representation (AMR). LiDARR streamlines the creation of comprehensive knowledge graphs from natural language documents through semantic annotation. The tool features a visualization and interactive user interface, transforming document-level AMR annotation into a model-facilitated verification process. This is achieved through the integration of an AMR-to-surface alignment model and a coreference resolution model. Additionally, we incorporate PropBank rolesets into LiDARR to extend implicit roles in annotated AMRs, allowing implicit roles to be linked through coreference chains across AMRs.
SlimLM: An Efficient Small Language Model for On-Device Document Assistance
Thang M. Pham | Phat T. Nguyen | Seunghyun Yoon | Viet Dac Lai | Franck Dernoncourt | Trung Bui
While small language models (SLMs) show promise for mobile deployment, their real-world performance and applications on smartphones remain underexplored. We present SlimLM, a series of SLMs optimized for document assistance tasks on mobile devices. Through extensive experiments on a Samsung Galaxy S24, we identify the sweet spot between model size (ranging from 125M to 8B parameters), context length, and inference time for efficient on-device processing. SlimLM is pretrained on SlimPajama-627B and fine-tuned on DocAssist, our constructed dataset for summarization, question answering, and suggestion tasks. Our smallest model demonstrates efficient performance on the S24, while larger variants offer enhanced capabilities within mobile constraints. We evaluate SlimLM against existing SLMs, showing comparable or superior performance and offering a benchmark for future research in on-device language models. We provide an Android application allowing users to experience SlimLM’s document assistance capabilities, offering valuable insights for mobile developers, researchers, and companies seeking privacy-preserving, on-device alternatives to server-based language models.
VeriMinder: Mitigating Analytical Vulnerabilities in NL2SQL
Shubham Mohole | Sainyam Galhotra
Application systems using natural language interfaces to databases (NLIDBs) have democratized data analysis. This positive development has also brought forth an urgent challenge: helping users who may lack a background in statistical analysis to formulate bias-free analytical questions. Although significant research has focused on text-to-SQL generation accuracy, addressing cognitive biases in analytical questions remains underexplored. We present [VeriMinder](https://veriminder.ai), an interactive system for detecting and mitigating such analytical vulnerabilities. Our approach introduces three key innovations: (1) a contextual semantic mapping framework for biases relevant to specific analysis contexts; (2) an analytical framework that operationalizes the Hard-to-Vary principle and guides users in systematic data analysis; and (3) an optimized LLM-powered system that generates high-quality, task-specific prompts using a structured process involving multiple candidates, critic feedback, and self-reflection. User testing confirms the merits of our approach: in a direct user-experience evaluation, 82.5% of participants reported a positive impact on the quality of their analysis, and in a comparative evaluation, VeriMinder scored significantly higher than alternative approaches, at least 20% better on metrics of analysis concreteness, comprehensiveness, and accuracy. Our system, implemented as a web application, is set to help users avoid the “wrong question” vulnerability during data analysis. The VeriMinder [code base](https://reproducibility.link/veriminder), including prompts, is available as MIT-licensed open-source software to facilitate further research and adoption within the community.
DocAgent: A Multi-Agent System for Automated Code Documentation Generation
Dayu Yang | Antoine Simoulin | Xin Qian | Xiaoyi Liu | Yuwei Cao | Zhaopu Teng | Grey Yang
High-quality code documentation is crucial for software development, especially in the era of AI. However, generating it automatically with Large Language Models (LLMs) remains challenging, as existing approaches often produce incomplete, unhelpful, or factually incorrect outputs. We introduce DocAgent, a novel multi-agent collaborative system that uses topological code processing for incremental context building. Specialized agents (Reader, Searcher, Writer, Verifier, Orchestrator) then collaboratively generate the documentation. We also propose a multi-faceted evaluation framework assessing Completeness, Helpfulness, and Truthfulness. Comprehensive experiments show that DocAgent consistently and significantly outperforms baselines. Our ablation study confirms the vital role of the topological processing order. DocAgent offers a robust approach to reliable code documentation generation in complex and proprietary repositories.
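The topological-processing step has a natural reading: document callees before callers, so that each component's dependencies are already documented when its turn comes. Below is a small sketch using Python's standard graphlib, with the dependency graph assumed to be extracted beforehand (this is not DocAgent's code).

```python
# Sketch: order components so callees are documented before their callers,
# letting each step build on already-generated documentation.
from graphlib import TopologicalSorter

def documentation_order(depends_on: dict[str, set[str]]) -> list[str]:
    """depends_on maps each function/class to the names it calls."""
    return list(TopologicalSorter(depends_on).static_order())

# Example: `main` calls `parse`, and `parse` calls `lex`; document `lex` first.
print(documentation_order({"main": {"parse"}, "parse": {"lex"}, "lex": set()}))
# -> ['lex', 'parse', 'main']
```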
pdf
bib
abs
DISPUTool 3.0: Fallacy Detection and Repairing in Argumentative Political Debates
Pierpaolo Goffredo
|
Deborah Dore
|
Elena Cabrio
|
Serena Villata
This paper introduces and evaluates a novel web-based application designed to identify and repair fallacious arguments in political debates. DISPUTool 3.0 offers a comprehensive tool for the argumentation analysis of political debates, integrating state-of-the-art natural language processing techniques to mine and classify argument components and relations. DISPUTool 3.0 builds on the ElecDeb60to20 dataset, covering US presidential debates from 1960 to 2020. In this paper, we introduce a novel task, integrated as a new module in DISPUTool: the automatic detection and classification of fallacious arguments, and the automatic repair of such misleading arguments. The goal is to offer users a tool that not only identifies fallacies in political debates but also shows what an argument looks like once the veil of fallacy is lifted. An extensive evaluation of the module is conducted, employing both automated metrics and human assessments. With the inclusion of this module, DISPUTool 3.0 further advances users’ critical thinking in the face of the growing spread of such misleading content in political debates and beyond. The tool is publicly available here: https://3ia-demos.inria.fr/disputool/
pdf
bib
abs
MedDecXtract: A Clinician-Support System for Extracting, Visualizing, and Annotating Medical Decisions in Clinical Narratives
Mohamed Elgaar
|
Hadi Amiri
|
Mitra Mohtarami
|
Leo Anthony Celi
Clinical notes contain crucial information about medical decisions, including diagnosis, treatment choices, and follow-up plans. However, these decisions are embedded within unstructured text, making it challenging to systematically analyze decision-making patterns or support clinical workflows. We present MedDecXtract, an open-source interactive system that automatically extracts and visualizes medical decisions from clinical text. The system combines a RoBERTa-based model for identifying ten categories of medical decisions (e.g., diagnosis, treatment, follow-up) according to the DICTUM framework, with an intuitive interface for exploration, visualization, and annotation. The system enables various applications including clinical decision support, research on decision patterns, and creation of training data for improved medical language models. The system and its source code can be accessed at https://mohdelgaar-clinical-decisions.hf.space. A video demo is available at https://youtu.be/19j6-XtIE_s.
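For readers unfamiliar with the setup, the sketch below shows how a RoBERTa-based sentence classifier could be applied to clinical text via the Hugging Face pipeline API; the checkpoint and labels are placeholders, since the paper’s fine-tuned decision model is accessed through the system itself.

```python
# Placeholder sketch of decision-category tagging with a RoBERTa classifier;
# "roberta-base" stands in for the paper's fine-tuned decision model, and its
# untrained head will emit generic labels (LABEL_0, ...) here.
from transformers import pipeline

clf = pipeline("text-classification", model="roberta-base")

note = "Started metformin 500 mg daily; follow up in 3 months."
for sentence in note.split(";"):
    print(sentence.strip(), "->", clf(sentence)[0]["label"])
```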
pdf
bib
abs
CiteLab: Developing and Diagnosing LLM Citation Generation Workflows via the Human-LLM Interaction
Jiajun Shen
|
Tong Zhou
|
Yubo Chen
|
Kang Liu
|
Jun Zhao
The emerging paradigm of enabling Large Language Models (LLMs) to generate citations in Question-Answering (QA) tasks lacks a unified framework to standardize and fairly compare different citation generation methods, leading to difficulties in reproduction and innovation. Therefore, we introduce Citeflow, an open-source and modular framework fostering reproduction and the implementation of new designs. Citeflow is highly extensible, allowing users to combine four main modules and 14 components to construct a pipeline, evaluate an existing method, and understand how LLM-generated content is attributed. The framework is also paired with a visual interface, Citefix, which facilitates case studies and the modification of existing citation generation methods. Users can use this interface to conduct LLM-powered case studies across different scenarios. Citeflow and Citefix are integrated into the toolkit CiteLab, and we demonstrate the efficiency of our toolkit in implementing and modifying citation generation pipelines through an authentic, multi-round improvement process via the human-LLM interaction interface. CiteLab is released at https://github.com/SjJ1017/Citelab
pdf
bib
abs
CodeArena: A Collective Evaluation Platform for LLM Code Generation
Mingzhe Du
|
Anh Tuan Luu
|
Bin Ji
|
Xiaobao Wu
|
Yuhao Qing
|
Dong Huang
|
Terry Yue Zhuo
|
Qian Liu
|
See-Kiong Ng
Large Language Models (LLMs) have reshaped code generation by synergizing their exceptional comprehension of natural language and programming syntax, thereby substantially boosting developer productivity. These advancements have prompted numerous efforts to quantitatively evaluate their coding capabilities. However, persistent challenges, such as benchmark leakage, data dissipation, and limited system accessibility, continue to impede a timely and accurate assessment. To address these limitations, we introduce CodeArena, an online evaluation framework tailored for LLM code generation. Its key innovation is a collective evaluation mechanism, which dynamically recalibrates individual model scores based on the holistic performance of all participating models, mitigating score biases caused by widespread benchmark leakage. In addition, CodeArena ensures open access to all submitted solutions and test cases and provides automation-friendly APIs to streamline the code evaluation workflow. Our main contributions are: (1) a collective evaluation system for unbiased assessment, (2) a public repository of solutions and test cases, and (3) automation-ready APIs for seamless integration.
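One plausible reading of the collective recalibration idea, sketched below, down-weights problems the whole cohort solves (as leaked items tend to be); this is an illustrative scheme under that assumption, not CodeArena’s published formula.

```python
# Rescale each model's pass rate by cohort-level difficulty so that problems
# every model solves contribute less to the final score.
raw = {"model_a": [1, 1, 0, 1], "model_b": [1, 0, 0, 1], "model_c": [1, 1, 1, 0]}

n_models = len(raw)
n_items = len(next(iter(raw.values())))
# Difficulty weight: the harder a problem is for the cohort, the higher its weight.
solve_rate = [sum(scores[i] for scores in raw.values()) / n_models
              for i in range(n_items)]
weights = [1.0 - r + 1e-9 for r in solve_rate]

for model, scores in raw.items():
    collective = sum(w * s for w, s in zip(weights, scores)) / sum(weights)
    print(f"{model}: raw={sum(scores) / n_items:.2f} collective={collective:.2f}")
```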
pdf
bib
abs
Ai2 Scholar QA: Organized Literature Synthesis with Attribution
Amanpreet Singh
|
Joseph Chee Chang
|
Dany Haddad
|
Aakanksha Naik
|
Jena D. Hwang
|
Rodney Kinney
|
Daniel S Weld
|
Doug Downey
|
Sergey Feldman
Retrieval-augmented generation is increasingly effective in answering scientific questions from literature, but many state-of-the-art systems are expensive and closed-source. We introduce Ai2 Scholar QA, a free online scientific question answering application. To facilitate research, we make our entire pipeline public: as a customizable open-source Python package and interactive web app, along with paper indexes accessible through public APIs and downloadable datasets. We describe our system in detail and present experiments analyzing its key design decisions. In an evaluation on a recent scientific QA benchmark, we find that Ai2 Scholar QA outperforms competing systems.
pdf
bib
abs
gec-metrics: A Unified Library for Grammatical Error Correction Evaluation
Takumi Goto
|
Yusuke Sakai
|
Taro Watanabe
We introduce gec-metrics, a library for using and developing grammatical error correction (GEC) evaluation metrics through a unified interface. Our library enables fair system comparisons by ensuring that everyone conducts evaluations using a consistent implementation. Moreover, it is designed with a strong focus on API usage, making it highly extensible. It also includes meta-evaluation functionalities and provides analysis and visualization scripts, contributing to developing GEC evaluation metrics. Our code is released under the MIT license (GitHub: https://github.com/gotutiyan/gec-metrics) and is also distributed as an installable package (PyPI: https://pypi.org/project/gec-metrics/). A video is available on YouTube: https://youtu.be/cor6dkN6EfI
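To show what evaluation through a unified interface buys, here is a conceptual sketch of a common metric contract; the class and method names are illustrative assumptions, not gec-metrics’ actual API (see the GitHub repository for that).

```python
# Conceptual sketch: one contract, many interchangeable metrics.
from abc import ABC, abstractmethod

class GECMetric(ABC):
    """Common contract so every metric is called the same way."""
    @abstractmethod
    def score(self, sources: list, hypotheses: list, references: list) -> float: ...

class ExactMatch(GECMetric):
    """Trivial metric: fraction of hypotheses matching any reference."""
    def score(self, sources, hypotheses, references):
        hits = sum(h in refs for h, refs in zip(hypotheses, references))
        return hits / len(hypotheses)

metric: GECMetric = ExactMatch()
print(metric.score(["He go ."], ["He goes ."], [["He goes ."]]))
```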
pdf
bib
abs
AI2Agent: An End-to-End Framework for Deploying AI Projects as Autonomous Agents
Jiaxiang Chen
|
Jingwei Shi
|
Lei Gan
|
Jiale Zhang
|
Qingyu Zhang
|
Dongqian Zhang
|
Pang Xin
|
Zhucong Li
|
Xu Yinghui
As AI technology advances, it is driving innovation across industries, increasing the demand for scalable AI project deployment. However, deployment remains a critical challenge due to complex environment configurations, dependency conflicts, cross-platform adaptation, and debugging difficulties, which hinder automation and adoption. This paper introduces AI2Agent, an end-to-end framework that automates AI project deployment through guideline-driven execution, self-adaptive debugging, and case & solution accumulation. AI2Agent dynamically analyzes deployment challenges, learns from past cases, and iteratively refines its approach, significantly reducing human intervention. To evaluate its effectiveness, we conducted experiments on 30 AI deployment cases, covering TTS, text-to-image generation, image editing, and other AI applications. Results show that AI2Agent significantly reduces deployment time and improves success rates. The code and demo video are now publicly accessible.
pdf
bib
abs
Efficient Annotator Reliability Assessment with EffiARA
Owen Cook
|
Jake A Vasilakes
|
Ian Roberts
|
Xingyi Song
Data annotation is an essential component of the machine learning pipeline; it is also a costly and time-consuming process. With the introduction of transformer-based models, annotation at the document level is increasingly popular; however, there is no standard framework for structuring such tasks. The EffiARA annotation framework is, to our knowledge, the first project to support the whole annotation pipeline, from understanding the resources required for an annotation task to compiling the annotated dataset and gaining insights into the reliability of individual annotators as well as the dataset as a whole. The framework’s efficacy is supported by two previous studies: one improving classification performance through annotator-reliability-based soft-label aggregation and sample weighting, and the other increasing the overall agreement among annotators through identifying and replacing an unreliable annotator. This work introduces the EffiARA Python package and its accompanying webtool, which provides an accessible graphical user interface for the system. We open-source the EffiARA Python package at https://github.com/MiniEggz/EffiARA, and the webtool is publicly accessible at https://effiara.gate.ac.uk.
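The sketch below illustrates annotator-reliability-based soft-label aggregation in miniature, using agreement with the per-item majority as a stand-in reliability estimate; EffiARA’s actual weighting scheme may differ.

```python
# Soft labels from reliability-weighted votes: annotators who agree more with
# the majority get more say in the aggregated label distribution.
from collections import Counter

annotations = {  # annotator -> label per item
    "ann1": ["pos", "pos", "neg"],
    "ann2": ["pos", "neg", "neg"],
    "ann3": ["neg", "pos", "neg"],
}

n_items = 3
majority = [Counter(a[i] for a in annotations.values()).most_common(1)[0][0]
            for i in range(n_items)]
# Reliability = how often an annotator agrees with the per-item majority.
reliability = {ann: sum(l == m for l, m in zip(labels, majority)) / n_items
               for ann, labels in annotations.items()}

for i in range(n_items):
    votes = Counter()
    for ann, labels in annotations.items():
        votes[labels[i]] += reliability[ann]
    total = sum(votes.values())
    print({label: round(w / total, 2) for label, w in votes.items()})
```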
pdf
bib
abs
TRANSLATIONCORRECT: A Unified Framework for Machine Translation Post-Editing with Predictive Error Assistance
Syed Mekael Wasti
|
Shou-Yi Hung
|
Christopher Collins
|
En-Shiun Annie Lee
Machine translation (MT) post-editing and research data collection often rely on inefficient, disconnected workflows. We introduce **TranslationCorrect**, an integrated framework designed to streamline these tasks. **TranslationCorrect** combines MT generation using models like NLLB, automated error prediction using models like XCOMET or LLM APIs (providing detailed reasoning), and an intuitive post-editing interface within a single environment. It is built with human-computer interaction (HCI) principles in mind to minimize cognitive load, as confirmed by a user study. For translators, it enables efficient error correction and batch translation. For researchers, **TranslationCorrect** exports high-quality span-based annotations in the Error Span Annotation (ESA) format, using an error taxonomy inspired by Multidimensional Quality Metrics (MQM). These outputs are compatible with state-of-the-art error detection models and suitable for training MT or post-editing systems. Our user study confirms that **TranslationCorrect** significantly improves translation efficiency and user satisfaction over traditional annotation methods.
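To give a feel for span-based error annotations, here is a rough sketch of an ESA/MQM-style record; the field names are illustrative, not TranslationCorrect’s exact export schema.

```python
# One annotated error span over an MT output, serialized as JSON.
from dataclasses import dataclass, asdict
import json

@dataclass
class ErrorSpan:
    start: int     # character offset in the MT output
    end: int       # exclusive end offset
    severity: str  # e.g., "minor" or "major"
    category: str  # MQM-inspired category, e.g., "accuracy/mistranslation"

mt_output = "The cat sat on the math."
spans = [ErrorSpan(start=19, end=23, severity="major",
                   category="accuracy/mistranslation")]  # flags "math"
print(json.dumps({"mt": mt_output, "errors": [asdict(s) for s in spans]},
                 indent=2))
```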
pdf
bib
abs
First-AID: the first Annotation Interface for grounded Dialogues
Stefano Menini
|
Daniel Russo
|
Alessio Palmero Aprosio
|
Marco Guerini
The swift advancement of Large Language Models (LLMs) has led to their widespread use across various tasks and domains, demonstrating remarkable generalization capabilities. However, achieving optimal performance in specialized tasks often requires fine-tuning LLMs with task-specific resources. The creation of high-quality, human-annotated datasets for this purpose is challenging due to financial constraints and the limited availability of human experts. To address these limitations, we propose First-AID, a novel human-in-the-loop (HITL) data collection framework for the knowledge-driven generation of synthetic dialogues using LLM prompting. In particular, our framework implements three data collection strategies that require varying degrees of user intervention during dialogue generation, reducing post-editing effort and enhancing the quality of the generated dialogues. We also evaluated First-AID on the collection of misinformation-countering and hate-countering dialogues, demonstrating (1) its potential for efficient and high-quality data generation and (2) its adaptability to different practical constraints thanks to its three data collection strategies.
pdf
bib
abs
Mergenetic: a Simple Evolutionary Model Merging Library
Adrian Robert Minut
|
Tommaso Mencattini
|
Andrea Santilli
|
Donato Crisostomi
|
Emanuele Rodolà
Model merging allows combining the capabilities of existing models into a new one—post hoc, without additional training. This has made it increasingly popular thanks to its low cost and the availability of libraries that support merging on consumer GPUs. Recent work shows that pairing merging with evolutionary algorithms can boost performance, but no framework currently supports flexible experimentation with such strategies in language models. We introduce Mergenetic, an open-source library for evolutionary model merging. Mergenetic enables easy composition of merging methods and evolutionary algorithms, while incorporating lightweight fitness estimators to reduce evaluation costs. We describe its design and demonstrate that Mergenetic produces competitive results across tasks and languages using modest hardware. A video demo showcasing its main features is also provided.
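As a toy illustration of evolutionary model merging, the sketch below evolves a single interpolation coefficient between two checkpoints against a cheap fitness stand-in; Mergenetic itself composes real merge methods and evolutionary algorithms, so treat everything here as schematic.

```python
# Evolve the merge coefficient alpha for a linear interpolation of two
# state dicts; the fitness function is a placeholder for a lightweight
# downstream-performance estimate.
import random
import torch

def merge(a, b, alpha):
    return {k: alpha * a[k] + (1 - alpha) * b[k] for k in a}

def fitness(weights):  # stand-in for a cheap evaluation estimate
    return -float(sum(w.abs().mean() for w in weights.values()))

model_a = {"layer.w": torch.randn(4, 4)}
model_b = {"layer.w": torch.randn(4, 4)}

population = [random.random() for _ in range(8)]
for _ in range(10):  # simple (mu + lambda) evolution over alpha
    scored = sorted(population, reverse=True,
                    key=lambda a: fitness(merge(model_a, model_b, a)))
    parents = scored[:4]
    population = parents + [min(1.0, max(0.0, p + random.gauss(0, 0.1)))
                            for p in parents]

best = max(population, key=lambda a: fitness(merge(model_a, model_b, a)))
print(f"best alpha: {best:.2f}")
```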
pdf
bib
abs
FlagEval-Arena: A Side-by-Side Comparative Evaluation Platform for Large Language Models and Text-Driven AIGC
Jing-Shu Zheng
|
Richeng Xuan
|
Bowen Qin
|
Zheqi He
|
Tongshuai.ren Tongshuai.ren
|
Xuejing Li
|
Jin-Ge Yao
|
Xi Yang
We introduce FlagEval-Arena, an evaluation platform for side-by-side comparisons of large language models and text-driven AIGC systems. Compared with the well-known LM Arena (LMSYS Chatbot Arena), we reimplement our own framework with the flexibility to introduce new mechanisms or features. Our platform enables side-by-side evaluation not only for language models and vision-language models, but also for text-to-image and text-to-video synthesis. We specifically target a Chinese audience, with more focus on the Chinese language, more models developed by Chinese institutes, and more general usage beyond the technical community. As a result, we currently observe very interesting differences from the usual results presented by LM Arena. Our platform is available via this URL:
https://flageval.baai.org/#/arena.
pdf
bib
abs
IRIS: Interactive Research Ideation System for Accelerating Scientific Discovery
Aniketh Garikaparthi
|
Manasi Patwardhan
|
Lovekesh Vig
|
Arman Cohan
The rapid advancement in the capabilities of large language models (LLMs) raises a pivotal question: How can LLMs accelerate scientific discovery? This work tackles the crucial first stage of research: generating novel hypotheses. While recent work on automated hypothesis generation focuses on multi-agent frameworks and extending test-time compute, none of these approaches effectively incorporates transparency and steerability through a synergistic human-in-the-loop (HITL) approach. To address this gap, we introduce IRIS, an open-source platform for interactive hypothesis generation, designed for researchers to leverage LLM-assisted scientific ideation. IRIS incorporates innovative features to enhance ideation, including adaptive test-time compute expansion via Monte Carlo Tree Search (MCTS), a fine-grained feedback mechanism, and query-based literature synthesis, all designed to empower researchers with greater control and insight throughout the ideation process. We additionally conduct a user study with researchers across diverse disciplines, validating the effectiveness of our system in enhancing ideation. We open-source our code at https://github.com/Anikethh/IRIS-Interactive-Research-Ideation-System.
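For readers new to MCTS-driven test-time expansion, the following sketch shows the generic UCT selection rule that balances exploiting promising hypotheses against exploring untried ones; this is textbook MCTS bookkeeping, not IRIS’s code.

```python
# UCT: pick the child maximizing mean reward plus an exploration bonus.
import math

def uct_select(children, c=1.4):
    """children: list of dicts with 'visits' and 'value' (summed reward)."""
    total = sum(ch["visits"] for ch in children)
    def uct(ch):
        if ch["visits"] == 0:
            return float("inf")  # always try unvisited ideas first
        return (ch["value"] / ch["visits"]
                + c * math.sqrt(math.log(total) / ch["visits"]))
    return max(children, key=uct)

hypotheses = [{"visits": 3, "value": 2.1}, {"visits": 1, "value": 0.9},
              {"visits": 0, "value": 0.0}]
print(uct_select(hypotheses))  # selects the unvisited hypothesis
```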
pdf
bib
abs
ROGRAG: A Robustly Optimized GraphRAG Framework
Zhefan Wang
|
Huanjun Kong
|
Jie Ying
|
Wanli Ouyang
|
Nanqing Dong
Large language models (LLMs) commonly struggle with specialized or emerging topics that are rarely seen in the training corpus. Graph-based retrieval-augmented generation (GraphRAG) addresses this by structuring domain knowledge as a graph for dynamic retrieval. However, existing pipelines involve complex engineering workflows, making it difficult to isolate the impact of individual components. It is also challenging to evaluate retrieval effectiveness due to the overlap between pretraining and evaluation datasets. In this work, we introduce ROGRAG, a Robustly Optimized GraphRAG framework. Specifically, we propose a multi-stage retrieval mechanism that integrates dual-level and logic-form retrieval methods to improve retrieval robustness without increasing computational cost. To further refine the system, we incorporate various result verification methods and adopt an incremental database construction approach. Through extensive ablation experiments, we rigorously assess the effectiveness of each component. Our implementation includes comparative experiments on SeedBench, where Qwen2.5-7B-Instruct initially underperformed: ROGRAG significantly improves its score from 60.0% to 75.0% and outperforms mainstream methods. Experiments on domain-specific datasets reveal that dual-level retrieval enhances fuzzy matching, while logic-form retrieval improves structured reasoning, highlighting the importance of multi-stage retrieval. ROGRAG is released as an open-source resource (https://github.com/tpoisonooo/ROGRAG) and supports installation with pip.
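Schematically, the multi-stage mechanism can be pictured as a verified fallback from fuzzy dual-level retrieval to logic-form retrieval, as in the stub sketch below; the function bodies are placeholders, not ROGRAG’s actual stages.

```python
# Multi-stage retrieval: try dual-level retrieval first, verify the result,
# and fall back to logic-form retrieval when verification fails.
def dual_level_retrieve(query):       # fuzzy matching over graph levels
    return {"evidence": ["..."], "confidence": 0.4}

def logic_form_retrieve(query):       # structured reasoning over the graph
    return {"evidence": ["..."], "confidence": 0.9}

def verified(result, threshold=0.6):  # stand-in for result verification
    return result["confidence"] >= threshold

def answer(query):
    result = dual_level_retrieve(query)
    if not verified(result):
        result = logic_form_retrieve(query)
    return result

print(answer("Which gene regulates seed dormancy?"))
```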
pdf
bib
abs
LECTURE4ALL: A Lightweight Approach to Precise Timestamp Detection in Online Lecture Videos
Viktoria Wrobel
|
Simon Kazemi
|
Frank Hammerschmidt
|
Torben Hannemann
|
Gregor Stange
|
Seid Muhie Yimam
|
Robert Geislinger
This paper presents LECTURE4ALL, a web application developed to improve the search experience of educational video platforms. Lecture2Go provides a vast collection of recorded lectures, but locating specific content within videos can be time-consuming. LECTURE4ALL addresses this issue by leveraging a vector database and a streamlined user interface to enable direct retrieval of precise video timestamps. By enhancing search accuracy and efficiency, LECTURE4ALL significantly improves the accessibility and usability of educational video platforms.
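The core retrieval step can be sketched as embedding timestamped transcript segments and returning the timestamp of the segment nearest to a query; the example below uses sentence-transformers for brevity, which may differ from LECTURE4ALL’s own stack.

```python
# Embed transcript segments once, then answer queries by nearest neighbor
# and jump to the matching timestamp.
from sentence_transformers import SentenceTransformer, util

segments = [
    (12.0, "Welcome to the lecture on sorting algorithms."),
    (95.5, "Quicksort picks a pivot and partitions the array."),
    (240.0, "Merge sort guarantees O(n log n) in the worst case."),
]

model = SentenceTransformer("all-MiniLM-L6-v2")
seg_emb = model.encode([text for _, text in segments], convert_to_tensor=True)

query_emb = model.encode("When is quicksort explained?", convert_to_tensor=True)
best = util.cos_sim(query_emb, seg_emb).argmax().item()
print(f"Jump to {segments[best][0]}s: {segments[best][1]}")
```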
pdf
bib
abs
FlexRAG: A Flexible and Comprehensive Framework for Retrieval-Augmented Generation
Zhang Zhuocheng
|
Yang Feng
|
Min Zhang
Retrieval-Augmented Generation (RAG) plays a pivotal role in modern large language model applications, with numerous existing frameworks offering a wide range of functionalities to facilitate the development of RAG systems. However, we have identified several persistent challenges in these frameworks, including a lack of new techniques, difficulties in algorithm reproduction and sharing, and high system overhead. To address these limitations, we introduce **FlexRAG**, an open-source framework specifically designed for research and prototyping. FlexRAG supports text-based, multimodal, and network-based RAG, providing comprehensive lifecycle support alongside efficient asynchronous processing and persistent caching capabilities. By offering a robust and flexible solution, FlexRAG enables researchers to rapidly develop, deploy, and share advanced RAG systems. Our toolkit and resources are available at https://github.com/ictnlp/FlexRAG.
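The two system features highlighted in the abstract, asynchronous processing and persistent caching, can be sketched as follows; the names and cache layout are illustrative assumptions, not FlexRAG’s API.

```python
# Asynchronous retrieval fan-out with a persistent on-disk cache.
import asyncio, hashlib, json, pathlib

CACHE = pathlib.Path("rag_cache")
CACHE.mkdir(exist_ok=True)

async def retrieve(source: str, query: str):
    key = hashlib.sha256(f"{source}:{query}".encode()).hexdigest()
    path = CACHE / f"{key}.json"
    if path.exists():             # persistent cache hit across runs
        return json.loads(path.read_text())
    await asyncio.sleep(0.1)      # stand-in for real retrieval I/O
    docs = [f"{source} passage about {query}"]
    path.write_text(json.dumps(docs))
    return docs

async def main():
    # Query several retrievers concurrently.
    results = await asyncio.gather(retrieve("wiki", "RAG"),
                                   retrieve("web", "RAG"))
    print(results)

asyncio.run(main())
```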
pdf
bib
abs
ComfyUI-Copilot: An Intelligent Assistant for Automated Workflow Development
Zhenran Xu
|
Yangxue Yangxue
|
Yiyu Wang
|
Qingli Hu
|
Zijiao Wu
|
Baotian Hu
|
Longyue Wang
|
Weihua Luo
|
Kaifu Zhang
We introduce **ComfyUI-Copilot**, a large language model-powered plugin designed to enhance the usability and efficiency of ComfyUI, an open-source platform for AI-driven art creation. Despite its flexibility and user-friendly interface, ComfyUI can present challenges to newcomers, including limited documentation, model misconfigurations, and the complexity of workflow design. ComfyUI-Copilot addresses these challenges by offering intelligent node and model recommendations, along with automated one-click workflow construction. At its core, the system employs a hierarchical multi-agent framework comprising a central assistant agent for task delegation and specialized worker agents for different usages, supported by our curated ComfyUI knowledge bases to streamline debugging and deployment. We validate the effectiveness of ComfyUI-Copilot through both offline quantitative evaluations and online user feedback, showing that it accurately recommends nodes and accelerates workflow development. Additionally, use cases illustrate that ComfyUI-Copilot lowers entry barriers for beginners and enhances workflow efficiency for experienced users. The ComfyUI-Copilot installation package and a demo video are available at https://github.com/AIDC-AI/ComfyUI-Copilot.
pdf
bib
abs
PRAISE: Enhancing Product Descriptions with LLM-Driven Structured Insights
Adnan Qidwai
|
Srija Mukhopadhyay
|
Prerana Khatiwada
|
Dan Roth
|
Vivek Gupta
Accurate and complete product descriptions are crucial for e-commerce, yet seller-provided information often falls short. Customer reviews offer valuable details but are laborious to sift through manually. We present PRAISE: Product Review Attribute Insight Structuring Engine, a novel system that uses Large Language Models (LLMs) to automatically extract, compare, and structure insights from customer reviews and seller descriptions. PRAISE provides users with an intuitive interface to identify missing, contradictory, or partially matching details between these two sources, presenting the discrepancies in a clear, structured format alongside supporting evidence from reviews. This allows sellers to easily enhance their product listings for clarity and persuasiveness, and buyers to better assess product reliability. Our demonstration showcases PRAISE’s workflow, its effectiveness in generating actionable structured insights from unstructured reviews, and its potential to significantly improve the quality and trustworthiness of e-commerce product catalogs.
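The three-way comparison PRAISE surfaces can be sketched as a simple attribute diff, assuming the attribute values have already been extracted (which PRAISE does with LLMs); the data and labels below are illustrative only.

```python
# Classify each attribute as missing, contradictory, or matching between the
# seller listing and the review-derived values.
seller = {"battery_life": "10 hours", "color": "black"}
reviews = {"battery_life": "6 hours", "color": "black", "weight": "1.2 kg"}

for attr in sorted(set(seller) | set(reviews)):
    s, r = seller.get(attr), reviews.get(attr)
    if s is None or r is None:
        status = "missing from " + ("listing" if s is None else "reviews")
    elif s == r:
        status = "matching"
    else:
        status = f"contradictory (listing: {s}, reviews: {r})"
    print(f"{attr}: {status}")
```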
pdf
bib
abs
ATGen: A Framework for Active Text Generation
Akim Tsvigun
|
Daniil Vasilev
|
Ivan Tsvigun
|
Ivan Lysenko
|
Talgat Bektleuov
|
Aleksandr Medvedev
|
Uliana Vinogradova
|
Nikita Severin
|
Mikhail Mozikov
|
Andrey Savchenko
|
Ilya Makarov
|
Grigorev Rostislav
|
Ramil Kuleev
|
Fedor Zhdanov
|
Artem Shelmanov
Active learning (AL) has demonstrated remarkable potential in reducing the annotation effort required for training machine learning models. However, despite the surging popularity of natural language generation (NLG) tasks in recent years, the application of AL to NLG has been limited. In this paper, we introduce Active Text Generation (ATGen) - a comprehensive framework that bridges AL with text generation tasks, enabling the application of state-of-the-art AL strategies to NLG. Our framework simplifies AL-empowered annotation in NLG tasks using both human annotators and automatic annotation agents based on large language models (LLMs). The framework supports LLMs deployed as a service, such as ChatGPT and Claude, or operated on-premises. Furthermore, ATGen provides a unified platform for smooth implementation and benchmarking of novel AL strategies tailored to NLG tasks. Finally, we present experimental results across multiple text generation tasks where we compare the performance of state-of-the-art AL strategies in various settings. We demonstrate that ATGen can reduce both the effort of human annotators and costs for API calls to automatic annotation agents based on LLMs.
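A bare-bones active learning loop of the kind ATGen standardizes looks like the sketch below; the uncertainty function is a placeholder for a real acquisition strategy (e.g., token-level entropy), and the annotator could be a human or an LLM agent.

```python
# Uncertainty-sampling AL loop: label the examples the model is least sure
# about, then retrain, and repeat.
import random

def uncertainty(example: str) -> float:
    return random.random()  # stand-in for a model-based uncertainty score

unlabeled = [f"document {i}" for i in range(100)]
labeled = []

for _ in range(5):  # five AL rounds
    # Pick the batch the current model is least certain about.
    batch = sorted(unlabeled, key=uncertainty, reverse=True)[:10]
    for ex in batch:
        unlabeled.remove(ex)
        labeled.append((ex, f"annotation for {ex}"))  # human or LLM annotator
    # ... retrain / fine-tune the generator on `labeled` here ...

print(f"labeled {len(labeled)} examples, {len(unlabeled)} remain")
```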
pdf
bib
abs
Value Compass Benchmarks: A Comprehensive, Generative and Self-Evolving Platform for LLMs’ Value Evaluation
Jing Yao
|
Xiaoyuan Yi
|
Shitong Duan
|
Jindong Wang
|
Yuzhuo Bai
|
Muhua Huang
|
Yang Ou
|
Scarlett Li
|
Peng Zhang
|
Tun Lu
|
Zhicheng Dou
|
Maosong Sun
|
James Evans
|
Xing Xie
As large language models (LLMs) are gradually integrated into human daily life, assessing their underlying values becomes essential for understanding their risks and alignment with specific preferences. Despite growing efforts, current value evaluation methods face two key challenges. C1. Evaluation Validity: Static benchmarks fail to reflect intended values or yield informative results due to data contamination or a ceiling effect. C2. Result Interpretation: They typically reduce the pluralistic and often incommensurable values to one-dimensional scores, which hinders users from gaining meaningful insights and guidance. To address these challenges, we present Value Compass Benchmarks, the first dynamic, online and interactive platform specially devised for comprehensive value diagnosis of LLMs. It (1) grounds evaluations in multiple basic value systems from social science; (2) develops a generative evolving evaluation paradigm that automatically creates real-world test items co-evolving with ever-advancing LLMs; (3) offers multi-faceted result interpretation, including (i) fine-grained scores and case studies across 27 value dimensions for 33 leading LLMs, (ii) customized comparisons, and (iii) visualized analysis of LLMs’ alignment with cultural values. We hope Value Compass Benchmarks serves as a navigator for further enhancing LLMs’ safety and alignment, benefiting their responsible and adaptive development.