Small Models, Big Results: Achieving Superior Intent Extraction through Decomposition
Danielle Cohen, Yoni Halpern, Noam Kahlon, Joel Oren, Omri Berkovitch, Sapir Caduri, Ido Dagan, Anatoly Efros
Abstract
Understanding user intents from UI interaction trajectories remains a challenging yet crucial frontier in intelligent agent development. While massive, datacenter-based multi-modal large language models (MLLMs) have the capacity to handle the complexities of such sequences, smaller models, which can run on-device to provide a privacy-preserving, low-cost, and low-latency user experience, struggle with accurate intent inference. We address these limitations with a novel decomposed approach: first, we perform structured interaction summarization, capturing the key information from each user action; second, we perform intent extraction using a fine-tuned model operating on the aggregated summaries. This method improves intent understanding in resource-constrained models, even surpassing the base performance of large MLLMs.
- Anthology ID: 2025.emnlp-main.949
- Volume: Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
- Month: November
- Year: 2025
- Address: Suzhou, China
- Editors: Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
- Venue: EMNLP
- Publisher: Association for Computational Linguistics
- Pages: 18791–18810
- URL: https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.949/
- Cite (ACL): Danielle Cohen, Yoni Halpern, Noam Kahlon, Joel Oren, Omri Berkovitch, Sapir Caduri, Ido Dagan, and Anatoly Efros. 2025. Small Models, Big Results: Achieving Superior Intent Extraction through Decomposition. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 18791–18810, Suzhou, China. Association for Computational Linguistics.
- Cite (Informal): Small Models, Big Results: Achieving Superior Intent Extraction through Decomposition (Cohen et al., EMNLP 2025)
- PDF: https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.949.pdf
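The two-stage pipeline described in the abstract (summarize each user action into a structured description, then infer the overall intent from the aggregated summaries) can be sketched as follows. This is a minimal illustration only: the `UIAction` schema and the rule-based placeholder functions are hypothetical stand-ins for the paper's on-device summarization and fine-tuned intent-extraction models.

```python
from dataclasses import dataclass

@dataclass
class UIAction:
    """One step in a UI interaction trajectory (hypothetical schema)."""
    action_type: str   # e.g. "tap", "type", "scroll"
    target: str        # UI element the action touched
    value: str = ""    # text entered, if any

def summarize_action(action: UIAction) -> str:
    """Stage 1: structured interaction summarization.

    A real system would run a small on-device model per action; this
    placeholder just renders the key fields as a short sentence.
    """
    if action.action_type == "type":
        return f'typed "{action.value}" into {action.target}'
    return f"{action.action_type} on {action.target}"

def extract_intent(summaries: list[str]) -> str:
    """Stage 2: intent extraction over the aggregated summaries.

    Stands in for the fine-tuned model described in the abstract: it
    simply joins the per-action summaries into the prompt-like input
    such a model would consume.
    """
    return "User intent inferred from: " + "; ".join(summaries)

if __name__ == "__main__":
    trajectory = [
        UIAction("tap", "search bar"),
        UIAction("type", "search bar", "vegan restaurants"),
        UIAction("tap", "first result"),
    ]
    summaries = [summarize_action(a) for a in trajectory]
    print(extract_intent(summaries))
```

The decomposition is what lets a small model cope: each summarization call sees only one action, and the final extraction step operates on short text summaries rather than the full multi-modal trajectory.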