Molly Q Feldman
2025
Substance Beats Style: Why Beginning Students Fail to Code with LLMs
Francesca Lucchetti | Zixuan Wu | Arjun Guha | Molly Q Feldman | Carolyn Jane Anderson
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Although LLMs are increasing the productivity of professional programmers, existing work shows that beginners struggle to prompt LLMs to solve text-to-code tasks (Nguyen et al., 2024; Prather et al., 2024b; Mordechai et al., 2024). Why is this the case? This paper explores two competing hypotheses about the cause of student-LLM miscommunication: (1) students simply lack the technical vocabulary needed to write good prompts, and (2) students do not understand the extent of information that LLMs need to solve code generation tasks. We study (1) with a causal intervention experiment on technical vocabulary and (2) by analyzing graphs that abstract how students edit prompts and the different failures that they encounter. We find that substance beats style: a poor grasp of technical vocabulary is merely correlated with prompt failure; that the information content of prompts predicts success; that students get stuck making trivial edits; and more. Our findings have implications for the use of LLMs in programming education, and for efforts to make computing more accessible with LLMs.
2024
StudentEval: A Benchmark of Student-Written Prompts for Large Language Models of Code
Hannah McLean Babe | Sydney Nguyen | Yangtian Zi | Arjun Guha | Molly Q Feldman | Carolyn Jane Anderson
Findings of the Association for Computational Linguistics: ACL 2024
Code LLMs have the potential to make it easier for non-experts to understand and write code. However, current Code LLM benchmarks rely on a single expert-written prompt per problem, making it hard to generalize their success to non-expert users. In this paper, we present a new natural-language-to-code benchmark of prompts written by a key population of non-experts: beginning programmers. StudentEval contains 1,749 prompts written by 80 students who have only completed one introductory Python course. StudentEval contains numerous non-expert prompts describing the same problem, enabling exploration of key factors in prompt success. We use StudentEval to evaluate 12 Code LLMs and find that StudentEval is a better discriminator of model performance than existing benchmarks. Our analysis of student prompting strategies reveals that nondeterministic LLM sampling can mislead students about the quality of their descriptions, a finding with key implications for Code LLMs in education.