Dany Haddad
2025
Ai2 Scholar QA: Organized Literature Synthesis with Attribution
Amanpreet Singh | Joseph Chee Chang | Dany Haddad | Aakanksha Naik | Jena D. Hwang | Rodney Kinney | Daniel S. Weld | Doug Downey | Sergey Feldman
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)
Retrieval-augmented generation is increasingly effective in answering scientific questions from literature, but many state-of-the-art systems are expensive and closed-source. We introduce Ai2 Scholar QA, a free online scientific question answering application. To facilitate research, we make our entire pipeline public: as a customizable open-source Python package and interactive web app, along with paper indexes accessible through public APIs and downloadable datasets. We describe our system in detail and present experiments analyzing its key design decisions. In an evaluation on a recent scientific QA benchmark, we find that Ai2 Scholar QA outperforms competing systems.
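The abstract describes a retrieve-then-synthesize pipeline that answers literature questions with attribution. Below is a minimal, self-contained sketch of that general pattern; every name in it (the Passage class, retrieve, build_prompt, the toy corpus) is an illustrative stand-in, not the actual Ai2 Scholar QA package API.

```python
# Sketch of retrieval-augmented generation with attribution.
# All identifiers are hypothetical; this is not the Ai2 Scholar QA API.
from dataclasses import dataclass

@dataclass
class Passage:
    paper_id: str
    text: str

def retrieve(query: str, corpus: list[Passage], k: int = 3) -> list[Passage]:
    """Toy lexical retriever: rank passages by query-term overlap."""
    terms = set(query.lower().split())
    ranked = sorted(corpus, key=lambda p: -len(terms & set(p.text.lower().split())))
    return ranked[:k]

def build_prompt(query: str, passages: list[Passage]) -> str:
    """Number each passage so the generated answer can cite it as [1], [2], ..."""
    context = "\n".join(f"[{i + 1}] ({p.paper_id}) {p.text}"
                        for i, p in enumerate(passages))
    return ("Answer the question using only the passages below, citing them "
            f"by bracketed number.\n\n{context}\n\nQuestion: {query}\nAnswer:")

corpus = [
    Passage("paperA", "Retrieval-augmented generation grounds answers in retrieved text."),
    Passage("paperB", "Scientific QA benchmarks also measure attribution quality."),
]
prompt = build_prompt("How does retrieval improve attribution?",
                      retrieve("retrieval attribution", corpus))
print(prompt)  # in a full pipeline this prompt would be sent to an LLM
```

The numbered-passage prompt is what makes attribution checkable: the answer's bracketed citations can be traced back to specific retrieved papers.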
OLMES: A Standard for Language Model Evaluations
Yuling Gu | Oyvind Tafjord | Bailey Kuehl | Dany Haddad | Jesse Dodge | Hannaneh Hajishirzi
Findings of the Association for Computational Linguistics: NAACL 2025
Progress in AI is often demonstrated by new models claiming improved performance on tasks measuring model capabilities. Evaluating language models can be particularly challenging, as choices of how a model is evaluated on a task can lead to large changes in measured performance. There is no common standard setup, so different models are evaluated on the same tasks in different ways, leading to claims about which models perform best not being reproducible. We propose OLMES, a completely documented, practical, open standard for reproducible LLM evaluations. In developing this standard, we identify and review the varying factors in evaluation practices adopted by the community, such as details of prompt formatting, choice of in-context examples, probability normalizations, and task formulation. In particular, OLMES supports meaningful comparisons of smaller base models that require the unnatural "cloze" formulation of multiple-choice questions against larger models that can utilize the original formulation. OLMES includes well-considered, documented recommendations guided by results from existing literature as well as new experiments resolving open questions.
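To make the cloze-vs-original distinction concrete, here is a minimal sketch contrasting the two multiple-choice formulations and one example of a probability normalization. The log-probabilities are toy numbers standing in for a model's scores; OLMES's actual recommendations are spelled out in the paper, not here.

```python
# Toy contrast of multiple-choice formulations; numbers are illustrative only.
choices = {"A": "Paris", "B": "Lyon", "C": "Marseille"}

# Original (MCF) formulation: show lettered options and compare the
# probability the model assigns to each answer letter.
letter_logprobs = {"A": -0.2, "B": -2.1, "C": -2.5}  # toy values
mcf_pick = max(letter_logprobs, key=letter_logprobs.get)

# Cloze formulation: score each answer string as a continuation of the
# question, summing its per-token log-probabilities.
continuation_logprobs = {
    "A": [-0.5, -0.3],          # toy tokenization of "Paris"
    "B": [-1.0, -1.2],
    "C": [-0.9, -1.1, -1.3],
}
raw = {k: sum(v) for k, v in continuation_logprobs.items()}
# Length normalization (one of the "probability normalizations" the standard
# reviews) divides by token count so longer answers are not penalized.
per_token = {k: sum(v) / len(v) for k, v in continuation_logprobs.items()}

cloze_pick_raw = max(raw, key=raw.get)
cloze_pick_norm = max(per_token, key=per_token.get)
print(mcf_pick, cloze_pick_raw, cloze_pick_norm)
```

Such seemingly small choices (letter scoring vs. continuation scoring, raw vs. length-normalized probabilities) are exactly the kind of evaluation detail the abstract says can shift measured performance.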