2023
Aligning Offline Metrics and Human Judgments of Value for Code Generation Models
Victor Dibia | Adam Fourney | Gagan Bansal | Forough Poursabzi-Sangdeh | Han Liu | Saleema Amershi
Findings of the Association for Computational Linguistics: ACL 2023
Large language models have demonstrated great potential to assist programmers in generating code. For such human-AI pair programming scenarios, we empirically demonstrate that while generated code is most often evaluated in terms of its functional correctness (i.e., whether generations pass available unit tests), correctness does not fully capture (e.g., may underestimate) the productivity gains these models may provide. Through a user study with N=49 experienced programmers, we show that while correctness captures high-value generations, programmers still rate code that fails unit tests as valuable if it reduces the overall effort needed to complete a coding task. Finally, we propose a hybrid metric that combines functional correctness and syntactic similarity and show that it achieves a 14% stronger correlation with value and can therefore better represent real-world gains when evaluating and comparing models.
2017
Evaluating Visual Representations for Topic Understanding and Their Effects on Manually Generated Topic Labels
Alison Smith | Tak Yeon Lee | Forough Poursabzi-Sangdeh | Jordan Boyd-Graber | Niklas Elmqvist | Leah Findlater
Transactions of the Association for Computational Linguistics, Volume 5
Probabilistic topic models are important tools for indexing, summarizing, and analyzing large document collections by their themes. However, promoting end-user understanding of topics remains an open research problem. We compare labels generated by users given four topic visualization techniques—word lists, word lists with bars, word clouds, and network graphs—against each other and against automatically generated labels. Our basis of comparison is participant ratings of how well labels describe documents from the topic. Our study has two phases: a labeling phase where participants label visualized topics and a validation phase where different participants select which labels best describe the topics’ documents. Although all visualizations produce similar quality labels, simple visualizations such as word lists allow participants to quickly understand topics, while complex visualizations take longer but expose multi-word expressions that simpler visualizations obscure. Automatic labels lag behind user-created labels, but our dataset of manually labeled topics highlights linguistic patterns (e.g., hypernyms, phrases) that can be used to improve automatic topic labeling algorithms.
2016
ALTO: Active Learning with Topic Overviews for Speeding Label Induction and Document Labeling
Forough Poursabzi-Sangdeh | Jordan Boyd-Graber | Leah Findlater | Kevin Seppi
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
2015
Speeding Document Annotation with Topic Models
Forough Poursabzi-Sangdeh | Jordan Boyd-Graber
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop