Can an LLM Elicit Information from Users in Simple Optimization Modelling Dialogues?

Yelaman Abdullin, Diego Mollá, Bahadorreza Ofoghi, Vicky Mak-Hau, John Yearwood


Abstract
For a natural language dialogue system to engage in a goal-oriented conversation, it must elicit information from the user. Research on large language models (LLMs) often focuses on aligning them with user goals, and studies show these models can serve as chat assistants and answer users' questions. However, their information-elicitation abilities remain understudied. This work evaluates these abilities in goal-oriented dialogues for optimisation modelling. We compare two GPT-4-based settings that generate conversations between a modeller and a user over NL4Opt, a collection of simple optimisation problem descriptions, and analyse the modeller's information elicitation. In the first setting, the modeller LLM has access to the problem details and asks targeted questions, simulating an informed modeller. In the second, the LLM must infer the problem details through interaction: asking clarifying questions, interpreting responses, and gradually constructing an understanding of the task. This comparison assesses whether LLMs can elicit information and navigate problem discovery without prior knowledge of the problem. We compare modeller turns in both settings using human raters across criteria at the whole-dialogue and turn levels. Results show that a non-informed LLM can elicit information nearly as well as an informed one, producing high-quality dialogues: the success levels of both agents in the setting without modeller access to the problem details are comparable to those in the setting with full access. Dialogues rate well on coherence, and a post-annotation error analysis identified error types useful for improving dialogue quality. GPT-4's ability to elicit information in optimisation modelling dialogues suggests that newer LLMs may perform even better.
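The abstract's two settings differ only in whether the modeller agent is given the problem description. A minimal sketch of how such a two-agent simulation could be set up is below; all names (chat backend, prompts, the "FINAL MODEL" stop signal) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the two-setting dialogue simulation described in the
# abstract. `llm` stands in for any chat-completion API call; the prompts and
# stop signal are assumptions, not the paper's actual code.
from typing import Callable, Dict, List

Message = Dict[str, str]  # {"role": "system" | "user" | "assistant", "content": ...}

MODELLER_INFORMED = (
    "You are an optimisation modeller. You already know the full problem "
    "description below. Ask the user targeted questions to confirm the "
    "variables, objective, and constraints.\n\nProblem: {problem}"
)
MODELLER_BLIND = (
    "You are an optimisation modeller. You do NOT know the user's problem. "
    "Ask clarifying questions to elicit the variables, objective, and "
    "constraints, one step at a time."
)
USER_PROMPT = (
    "You are a user who needs help with the optimisation problem below. "
    "Answer the modeller's questions, revealing details only when asked.\n\n"
    "Problem: {problem}"
)

def simulate_dialogue(
    llm: Callable[[List[Message]], str],  # chat-completion backend
    problem: str,                          # one NL4Opt-style description
    informed: bool,                        # True = modeller sees the problem
    max_turns: int = 10,
) -> List[Message]:
    """Alternate modeller and user turns over a single problem."""
    modeller_sys = (MODELLER_INFORMED if informed else MODELLER_BLIND).format(
        problem=problem
    )
    user_sys = USER_PROMPT.format(problem=problem)
    dialogue: List[Message] = []
    for _ in range(max_turns):
        # Modeller sees the dialogue from its own perspective.
        m_view = [{"role": "system", "content": modeller_sys}] + dialogue
        question = llm(m_view)
        dialogue.append({"role": "assistant", "content": question})
        if "FINAL MODEL" in question:  # assumed termination marker
            break
        # User sees the same turns with roles flipped.
        u_view = [{"role": "system", "content": user_sys}] + [
            {"role": "user" if m["role"] == "assistant" else "assistant",
             "content": m["content"]}
            for m in dialogue
        ]
        dialogue.append({"role": "user", "content": llm(u_view)})
    return dialogue
```

Under this sketch, the informed and non-informed conditions differ only in the modeller's system prompt, mirroring the paper's comparison of elicitation with and without access to the problem details.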
Anthology ID:
2025.alta-main.5
Volume:
Proceedings of The 23rd Annual Workshop of the Australasian Language Technology Association
Month:
November
Year:
2025
Address:
Sydney, Australia
Editors:
Jonathan K. Kummerfeld, Aditya Joshi, Mark Dras
Venue:
ALTA
Publisher:
Association for Computational Linguistics
Pages:
64–75
URL:
https://preview.aclanthology.org/ingest-alta/2025.alta-main.5/
Cite (ACL):
Yelaman Abdullin, Diego Mollá, Bahadorreza Ofoghi, Vicky Mak-Hau, and John Yearwood. 2025. Can an LLM Elicit Information from Users in Simple Optimization Modelling Dialogues?. In Proceedings of The 23rd Annual Workshop of the Australasian Language Technology Association, pages 64–75, Sydney, Australia. Association for Computational Linguistics.
Cite (Informal):
Can an LLM Elicit Information from Users in Simple Optimization Modelling Dialogues? (Abdullin et al., ALTA 2025)
PDF:
https://preview.aclanthology.org/ingest-alta/2025.alta-main.5.pdf