Noam Kahlon


2025

Small Models, Big Results: Achieving Superior Intent Extraction through Decomposition
Danielle Cohen | Yoni Halpern | Noam Kahlon | Joel Oren | Omri Berkovitch | Sapir Caduri | Ido Dagan | Anatoly Efros
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing

Understanding user intents from UI interaction trajectories remains a challenging, yet crucial, frontier in intelligent agent development. While massive, datacenter-based, multi-modal large language models (MLLMs) possess greater capacity to handle the complexities of such sequences, smaller models, which can run on-device to provide a privacy-preserving, low-cost, and low-latency user experience, struggle with accurate intent inference. We address these limitations by introducing a novel decomposed approach: first, we perform structured interaction summarization, capturing key information from each user action. Second, we perform intent extraction using a fine-tuned model operating on the aggregated summaries. This method improves intent understanding in resource-constrained models, even surpassing the base performance of large MLLMs.
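
The two-stage decomposition described in the abstract could be sketched roughly as follows. This is a minimal illustration only: the interface names (summarize_action, extract_intent, the small_lm handle) and the prompt wording are hypothetical stand-ins, not the paper's actual pipeline or API.

```python
# Hypothetical sketch of the decomposed approach: per-action structured
# summarization (stage 1) followed by intent extraction over the aggregated
# summaries (stage 2). `small_lm` stands in for any on-device model with a
# .generate(prompt) -> str method; fields and prompts are illustrative.
from dataclasses import dataclass

@dataclass
class UIAction:
    action_type: str      # e.g. "click", "type", "scroll"
    target: str           # description of the UI element acted on
    screen_context: str   # text/accessibility snapshot of the screen

def summarize_action(small_lm, action: UIAction) -> str:
    """Stage 1: produce a structured one-line summary of a single user action."""
    prompt = (
        "Summarize this UI action in one structured sentence.\n"
        f"Action: {action.action_type} on '{action.target}'\n"
        f"Screen: {action.screen_context}\n"
    )
    return small_lm.generate(prompt)

def extract_intent(small_lm, trajectory: list[UIAction]) -> str:
    """Stage 2: infer the overall user intent from the aggregated summaries."""
    summaries = [summarize_action(small_lm, a) for a in trajectory]
    prompt = (
        "Given these step summaries of a user's UI interactions, "
        "state the user's overall intent:\n- " + "\n- ".join(summaries)
    )
    # In the paper's setup, this second stage uses a fine-tuned model.
    return small_lm.generate(prompt)
```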

2022

Dyna-bAbI: unlocking bAbI’s potential with dynamic synthetic benchmarking
Ronen Tamari | Kyle Richardson | Noam Kahlon | Aviad Sar-shalom | Nelson F. Liu | Reut Tsarfaty | Dafna Shahaf
Proceedings of the 11th Joint Conference on Lexical and Computational Semantics

While neural language models often perform surprisingly well on natural language understanding (NLU) tasks, their strengths and limitations remain poorly understood. Controlled synthetic tasks are thus an increasingly important resource for diagnosing model behavior. In this work we focus on story understanding, a core competency for NLU systems. However, the main synthetic resource for story understanding, the bAbI benchmark, lacks a systematic mechanism for controllable task generation. We develop Dyna-bAbI, a dynamic framework providing fine-grained control over task generation in bAbI. We demonstrate our ideas by constructing three new tasks requiring compositional generalization, an important evaluation setting absent from the original benchmark. We tested both special-purpose models developed for bAbI and state-of-the-art pre-trained methods, and found that while both approaches solve the original tasks (99% accuracy), neither succeeded in the compositional generalization setting, indicating the limitations of the original training data. We explored ways to augment the original data, and found that though diversifying training data was far more useful than simply increasing dataset size, it was still insufficient for driving robust compositional generalization (with 70% accuracy for complex compositions). Our results underscore the importance of highly controllable task generators for creating robust NLU systems through a virtuous cycle of model and data development.
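
To make the idea of a compositional-generalization split concrete, here is a toy sketch of a controllable task generator: training stories are composed from pairs of core concepts while test stories combine three, so test examples are novel compositions of concepts seen in training. The names (CORE_CONCEPTS, generate_story, compositional_split) are hypothetical and do not reflect Dyna-bAbI's actual interface or task inventory.

```python
# Illustrative sketch only: a toy configurable story-task generator with a
# compositional-generalization split, in the spirit of controllable synthetic
# benchmarking. All identifiers here are assumptions, not Dyna-bAbI's API.
import itertools
import random

CORE_CONCEPTS = ["move", "grab", "drop", "coreference", "negation"]

def generate_story(concepts: tuple[str, ...], rng: random.Random) -> dict:
    """Compose a synthetic story stub from the chosen set of concepts."""
    steps = [f"<{c} event {rng.randint(0, 9)}>" for c in concepts]
    return {"story": " ".join(steps), "concepts": concepts}

def compositional_split(train_size: int, seed: int = 0):
    """Train on pairs of concepts; hold out triples for testing, so test-time
    examples are unseen *compositions* of individually seen concepts."""
    rng = random.Random(seed)
    train_combos = list(itertools.combinations(CORE_CONCEPTS, 2))
    test_combos = list(itertools.combinations(CORE_CONCEPTS, 3))
    train = [generate_story(c, rng) for c in rng.choices(train_combos, k=train_size)]
    test = [generate_story(c, rng) for c in test_combos]
    return train, test
```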