Emergent Convergence in Multi-Agent LLM Annotation

Angelina Parfenova, Alexander Denzler, Jürgen Pfeffer


Abstract
Large language models (LLMs) are increasingly deployed in collaborative settings, yet little is known about how they coordinate when treated as black-box agents. We simulate 7,500 multi-agent, multi-round discussions in an inductive coding task, generating over 125,000 utterances that capture both final annotations and their interactional histories. We introduce process-level metrics—code stability, semantic self-consistency, and lexical confidence—alongside sentiment and convergence measures, to track coordination dynamics. To probe deeper alignment signals, we analyze the evolving geometry of output embeddings, showing that intrinsic dimensionality declines over rounds, suggesting semantic compression. The results reveal that LLM groups converge lexically and semantically, develop asymmetric influence patterns, and exhibit negotiation-like behaviors despite the absence of explicit role prompting. This work demonstrates how black-box interaction analysis can surface emergent coordination strategies, offering a scalable complement to internal probe-based interpretability methods.
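The abstract's geometric claim can be made concrete: if agents' outputs semantically compress over rounds, the intrinsic dimensionality (ID) of their round-wise utterance embeddings should decline. Below is a minimal sketch of that kind of measurement, not the authors' pipeline; it assumes the TwoNN estimator of Facco et al. (2017) as the ID measure, and the 768-dimensional synthetic embeddings, point counts, and subspace sizes are illustrative stand-ins.

```python
# Minimal sketch (assumed estimator: TwoNN; data is synthetic) of
# tracking intrinsic dimensionality (ID) of utterance embeddings
# across discussion rounds.
import numpy as np

def twonn_intrinsic_dim(X: np.ndarray) -> float:
    """TwoNN maximum-likelihood ID estimate (Facco et al., 2017).

    X: (n_points, n_features) matrix of embeddings for one round.
    """
    # Squared Euclidean distances via the dot-product identity,
    # avoiding an (n, n, n_features) intermediate array.
    sq = np.sum(X * X, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    dist = np.sqrt(d2)
    np.fill_diagonal(dist, np.inf)  # exclude self-distances
    dist.sort(axis=1)
    mu = dist[:, 1] / dist[:, 0]  # 2nd- over 1st-nearest-neighbor distance
    # Under the TwoNN model, mu is Pareto-distributed with shape d,
    # so the maximum-likelihood estimate of the dimension is:
    return len(mu) / np.sum(np.log(mu))

# Toy illustration of "semantic compression": points confined to a
# shrinking k-dimensional subspace of a 768-dimensional embedding
# space should yield a declining ID estimate across rounds.
rng = np.random.default_rng(0)
for rnd, k in enumerate([40, 15, 5], start=1):
    Z = rng.normal(size=(300, k))   # latent round-level structure
    W = rng.normal(size=(k, 768))   # linear map into embedding space
    X = Z @ W                       # stand-in for one round's utterance embeddings
    print(f"round {rnd}: estimated ID ~ {twonn_intrinsic_dim(X):.1f}")
```

In the actual analysis, `X` would be the matrix of sentence embeddings for all utterances produced in a given round; a monotone decline of the estimate over rounds is the compression signal the abstract reports.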
Anthology ID: 2025.blackboxnlp-1.12
Volume: Proceedings of the 8th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP
Month: November
Year: 2025
Address: Suzhou, China
Editors: Yonatan Belinkov, Aaron Mueller, Najoung Kim, Hosein Mohebbi, Hanjie Chen, Dana Arad, Gabriele Sarti
Venues: BlackboxNLP | WS
Publisher: Association for Computational Linguistics
Pages: 206–225
URL: https://preview.aclanthology.org/ingest-emnlp/2025.blackboxnlp-1.12/
Cite (ACL): Angelina Parfenova, Alexander Denzler, and Jürgen Pfeffer. 2025. Emergent Convergence in Multi-Agent LLM Annotation. In Proceedings of the 8th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP, pages 206–225, Suzhou, China. Association for Computational Linguistics.
Cite (Informal): Emergent Convergence in Multi-Agent LLM Annotation (Parfenova et al., BlackboxNLP 2025)
PDF: https://preview.aclanthology.org/ingest-emnlp/2025.blackboxnlp-1.12.pdf