Ashton Anderson
2025
ChatBench: From Static Benchmarks to Human-AI Evaluation
Serina Chang | Ashton Anderson | Jake M. Hofman
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
With the rapid adoption of LLM-based chatbots, there is a pressing need to evaluate what humans and LLMs can achieve together. However, standard benchmarks, such as MMLU, measure LLM capabilities in isolation (i.e., “AI-alone”). Here, we design and conduct a user study to convert MMLU questions into user-AI conversations, seeding the user with the question and having them carry out a conversation with the LLM to answer it. We release ChatBench, a new dataset with AI-alone, user-alone, and user-AI data for 396 questions and two LLMs, including 144K answers and 7,336 user-AI conversations. We find that AI-alone accuracy fails to predict user-AI accuracy, with significant differences across multiple subjects (math, physics, and moral reasoning), and we analyze the user-AI conversations to provide insight into how they diverge from AI-alone benchmarks. Finally, we show that fine-tuning a user simulator on a subset of ChatBench improves its ability to estimate user-AI accuracies, increasing correlation on held-out questions by more than 20 points and creating possibilities for scaling interactive evaluation.
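To make the two evaluation modes concrete, here is a minimal Python sketch of AI-alone versus user-AI accuracy as contrasted in the abstract. It is an illustration only, not the released ChatBench code: `ask_model`, `user_turn`, and the question schema are hypothetical stand-ins for an LLM API, a (human or simulated) user, and the dataset format.

```python
# Minimal sketch of the two evaluation modes in the abstract. Hypothetical
# stand-ins (not the ChatBench release): `ask_model` wraps an LLM API,
# `user_turn` produces the next user message (a human or a user simulator),
# and each question is a dict with "prompt" and "answer" (a letter A-D).
import re

def extract_choice(text):
    """Take the last standalone A-D letter in a response as the answer."""
    matches = re.findall(r"\b([A-D])\b", text)
    return matches[-1] if matches else None

def ai_alone_accuracy(questions, ask_model):
    """AI-alone: the model answers each benchmark question in one turn."""
    correct = 0
    for q in questions:
        reply = ask_model([{"role": "user", "content": q["prompt"]}])
        correct += int(extract_choice(reply) == q["answer"])
    return correct / len(questions)

def user_ai_accuracy(questions, ask_model, user_turn, max_turns=4):
    """User-AI: a user seeded with the question converses with the model;
    the answer is read off the final assistant message."""
    correct = 0
    for q in questions:
        messages = []
        for _ in range(max_turns):
            messages.append({"role": "user", "content": user_turn(q, messages)})
            messages.append({"role": "assistant", "content": ask_model(messages)})
        correct += int(extract_choice(messages[-1]["content"]) == q["answer"])
    return correct / len(questions)
```

Comparing the two accuracies question by question (e.g., their correlation across the 396 questions) is what surfaces the gap the paper reports.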
2024
SPIN: Sparsifying and Integrating Internal Neurons in Large Language Models for Text Classification
Difan Jiao | Yilun Liu | Zhenwei Tang | Daniel Matter | Jürgen Pfeffer | Ashton Anderson
Findings of the Association for Computational Linguistics: ACL 2024
Among the many tasks that Large Language Models (LLMs) have revolutionized is text classification. Current text classification paradigms, however, rely solely on the output of the final layer in the LLM, leaving the rich information contained in internal neurons largely untapped. In this study, we present SPIN: a model-agnostic framework that sparsifies and integrates internal neurons of intermediate LLM layers for text classification. Specifically, SPIN sparsifies internal neurons layer by layer via linear-probing-based salient neuron selection, avoiding noise from unrelated neurons and ensuring efficiency. The salient neurons are then integrated across layers to serve as multi-layered features for the classification head. Extensive experimental results show that SPIN significantly improves text classification accuracy, efficiency, and interpretability.
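A minimal sketch of the two SPIN steps described above, under assumptions about the interfaces (this is not the authors' code): a linear probe per layer ranks neurons by weight magnitude, the top-k per layer are kept, and the selected activations are concatenated across layers to feed a classification head.

```python
# Sketch of SPIN-style sparsify-then-integrate, under assumptions about the
# data layout (not the authors' implementation): acts_per_layer is a list of
# (n_examples, n_neurons) activation matrices, one per intermediate layer.
import numpy as np
from sklearn.linear_model import LogisticRegression

def select_salient_neurons(layer_acts, labels, k):
    """Fit a linear probe on one layer and keep the k neurons with the
    largest mean absolute probe weight. In practice the probe should be
    fit on a held-out split to avoid selection overfitting."""
    probe = LogisticRegression(max_iter=1000).fit(layer_acts, labels)
    salience = np.abs(probe.coef_).mean(axis=0)  # average over classes
    return np.argsort(salience)[-k:]

def spin_features(acts_per_layer, labels, k):
    """Integrate the top-k salient neurons from every layer into one
    multi-layered feature matrix for the classification head."""
    selected = [acts[:, select_salient_neurons(acts, labels, k)]
                for acts in acts_per_layer]
    return np.concatenate(selected, axis=1)

# Usage on synthetic activations: 3 layers, 200 examples, 64 neurons each,
# with a weak label signal injected into layer 1.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=200)
acts_per_layer = [rng.normal(size=(200, 64)) + labels[:, None] * (l == 1) * 0.5
                  for l in range(3)]
features = spin_features(acts_per_layer, labels, k=8)
head = LogisticRegression(max_iter=1000).fit(features, labels)
print("classification head accuracy:", head.score(features, labels))
```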
2012
Towards a Computational History of the ACL: 1980-2008
Ashton Anderson | Dan Jurafsky | Daniel A. McFarland
Proceedings of the ACL-2012 Special Workshop on Rediscovering 50 Years of Discoveries