QuaLLM: An LLM-based Framework to Extract Quantitative Insights from Online Forums

Varun Nagaraj Rao, Eesha Agarwal, Samantha Dalal, Dana Calacci, Andrés Monroy-Hernández


Abstract
Online discussion forums provide crucial data for understanding the concerns of a wide range of real-world communities. However, the qualitative and quantitative methods typically used to analyze such data, such as thematic analysis and topic modeling, either do not scale or require significant human effort to translate outputs into human-readable forms. This study introduces QuaLLM, a novel LLM-based framework to analyze and extract quantitative insights from text data on online forums. The framework consists of a novel prompting and human evaluation methodology. We applied this framework to analyze over one million comments from two of Reddit’s rideshare worker communities, making this the largest study of its type. We uncover significant worker concerns regarding AI and algorithmic platform decisions, responding to regulatory calls for worker insights. In short, our work sets a new precedent for AI-assisted quantitative data analysis to surface concerns from online forums.
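The abstract describes a prompting-based pipeline that turns free-text forum comments into quantitative insights. As a rough illustration only, the minimal Python sketch below shows what such a classify-then-aggregate loop could look like; the concern taxonomy, the prompt wording, and the `call_llm` helper are hypothetical placeholders, not QuaLLM's actual prompts or implementation (see the paper PDF for those).

```python
# Illustrative sketch of an LLM prompting + aggregation loop for forum comments.
# Assumptions: a hypothetical `call_llm` helper and an invented concern taxonomy.
from collections import Counter

# Hypothetical concern labels for rideshare-worker forums (assumed for illustration).
CONCERNS = ["algorithmic pay", "deactivation", "safety", "tips", "other"]


def call_llm(prompt: str) -> str:
    """Placeholder for any chat-completion API call; returns the model's text reply."""
    raise NotImplementedError("Wire this to your LLM provider of choice.")


def classify_comment(comment: str) -> str:
    """Prompt the LLM to assign exactly one concern label to a single comment."""
    prompt = (
        "Label the following rideshare-worker forum comment with exactly one "
        f"concern from this list: {', '.join(CONCERNS)}.\n"
        f"Comment: {comment}\n"
        "Answer with only the label."
    )
    label = call_llm(prompt).strip().lower()
    return label if label in CONCERNS else "other"


def quantify(comments: list[str]) -> Counter:
    """Aggregate per-comment labels into counts -- the 'quantitative insight' step."""
    return Counter(classify_comment(c) for c in comments)
```

In a setup like this, the per-comment labels would still need human evaluation (as the abstract notes the framework includes) before the aggregated counts are reported.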
Anthology ID:
2025.findings-naacl.74
Volume:
Findings of the Association for Computational Linguistics: NAACL 2025
Month:
April
Year:
2025
Address:
Albuquerque, New Mexico
Editors:
Luis Chiruzzo, Alan Ritter, Lu Wang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
1355–1369
URL:
https://preview.aclanthology.org/fix-sig-urls/2025.findings-naacl.74/
Cite (ACL):
Varun Nagaraj Rao, Eesha Agarwal, Samantha Dalal, Dana Calacci, and Andrés Monroy-Hernández. 2025. QuaLLM: An LLM-based Framework to Extract Quantitative Insights from Online Forums. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 1355–1369, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal):
QuaLLM: An LLM-based Framework to Extract Quantitative Insights from Online Forums (Nagaraj Rao et al., Findings 2025)
PDF:
https://preview.aclanthology.org/fix-sig-urls/2025.findings-naacl.74.pdf