Thomas Berkane
2026
The AI Committee: A Multi-Agent Framework for Automated Validation and Remediation of Web-Sourced Data
Sunith Vallabhaneni | Thomas Berkane | Maimuna S. Majumder
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 3: System Demonstrations)
Many research areas rely on data from the web to gain insights and test their methods. However, collecting comprehensive research datasets often demands manually reviewing many web pages to identify and record relevant data points, which is labor-intensive and susceptible to error. While the emergence of large language model (LLM)-powered web agents has begun to automate parts of this process, these agents often struggle to ensure the validity of the data they collect. Indeed, they exhibit several recurring failure modes (including hallucinating or omitting values, misinterpreting page semantics, and failing to detect invalid information) that are subtle and difficult to detect and correct manually. To address this, we introduce the AI Committee, a novel model-agnostic multi-agent system that automates the process of validating and remediating web-sourced datasets. Each agent is specialized in a distinct task in the data quality assurance pipeline, from source scrutiny and fact-checking to data remediation and integrity validation. The AI Committee leverages various LLM capabilities (in-context learning for dataset adaptation, chain-of-thought reasoning for complex semantic validation, and a self-correction loop for data remediation), all without task-specific training. We demonstrate the effectiveness of our system by applying it to three real-world datasets, showing that it generalizes across LLMs and significantly outperforms baseline approaches, achieving data completeness up to 73.3% and precision up to 97.3%. We additionally conduct an ablation study demonstrating the contribution of each agent to the Committee's performance. This work is released as an open-source tool for the research community.
2025
LLM-Based Web Data Collection for Research Dataset Creation
Thomas Berkane | Marie-Laure Charpignon | Maimuna S. Majumder
Findings of the Association for Computational Linguistics: EMNLP 2025
Researchers across many fields rely on web data to gain new insights and validate methods. However, assembling accurate and comprehensive datasets typically requires manual review of numerous web pages to identify and record only those data points relevant to specific research objectives. The vast and scattered nature of online information makes this process time-consuming and prone to human error. To address these challenges, we present a human-in-the-loop framework that automates web-scale data collection end-to-end using large language models (LLMs). Given a textual description of a target dataset, our framework (1) automatically formulates search engine queries, (2) navigates the web to identify relevant web pages, (3) extracts the data points of interest, and (4) performs quality control to produce a structured, research-ready dataset. Importantly, users remain in the loop throughout the process and can inspect and adjust the framework’s decisions to ensure alignment with their needs. We introduce techniques to mitigate both search engine bias and LLM hallucinations during data extraction. Experiments across three diverse data collection tasks show that our framework greatly outperforms existing methods, while a user evaluation demonstrates its practical utility. We release our code at https://github.com/tberkane/web-data-collection to help other researchers create custom datasets more efficiently.