SciRIFF: A Resource to Enhance Language Model Instruction-Following over Scientific Literature
David Wadden | Kejian Shi | Jacob Morrison | Alan Li | Aakanksha Naik | Shruti Singh | Nitzan Barzilay | Kyle Lo | Tom Hope | Luca Soldaini | Shannon Zejiang Shen | Doug Downey | Hannaneh Hajishirzi | Arman Cohan
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
We present SciRIFF (Scientific Resource for Instruction-Following and Finetuning), a dataset of 137K instruction-following instances for training and evaluation, covering 54 tasks. These tasks span five core scientific literature understanding capabilities: information extraction, summarization, question answering, claim verification, and classification. SciRIFF is unique in being the only entirely expert-written, high-quality instruction-following dataset designed for extracting and synthesizing information from research literature across diverse scientific fields. It features complex instructions with long input contexts, detailed task descriptions, and structured outputs. To demonstrate its utility, we finetune a series of large language models (LLMs) using a mix of general-domain and SciRIFF instructions. On nine out-of-distribution held-out tasks (referred to as SciRIFF-Eval), LLMs finetuned on SciRIFF achieve a 70.6% average improvement over baselines trained only on general-domain instructions. SciRIFF facilitates the development and evaluation of LLMs to help researchers navigate the rapidly growing body of scientific literature.