Awes, Laws, and Flaws From Today’s LLM Research

Adrian de Wynter


Abstract
We perform a critical examination of the scientific methodology behind contemporary large language model (LLM) research. For this, we assess over 2,000 research works released between 2020 and 2024 against criteria typical of what is considered good research (e.g., presence of statistical tests and reproducibility), and cross-validate them with arguments at the centre of controversy (e.g., claims of emergent behaviour). We find multiple trends, such as declines in ethics disclaimers, a rise of LLMs as evaluators, and an increase in claims of LLM reasoning abilities made without leveraging human evaluation. We note that conference checklists are effective at curtailing some of these issues, but balancing velocity and rigour in research cannot rely on them alone. We tie these observations to findings from recent meta-reviews and extend recommendations on how to address what does, does not, and should work in LLM research.
Anthology ID:
2025.findings-acl.664
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
12834–12854
URL:
https://preview.aclanthology.org/mtsummit-25-ingestion/2025.findings-acl.664/
DOI:
10.18653/v1/2025.findings-acl.664
Cite (ACL):
Adrian de Wynter. 2025. Awes, Laws, and Flaws From Today’s LLM Research. In Findings of the Association for Computational Linguistics: ACL 2025, pages 12834–12854, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Awes, Laws, and Flaws From Today’s LLM Research (de Wynter, Findings 2025)
PDF:
https://preview.aclanthology.org/mtsummit-25-ingestion/2025.findings-acl.664.pdf