MuLD: The Multitask Long Document Benchmark

George Hudson, Noura Al Moubayed


Abstract
The impressive progress in NLP techniques has been driven by the development of multi-task benchmarks such as GLUE and SuperGLUE. While these benchmarks focus on tasks with one or two input sentences, there has been exciting work in designing efficient techniques for processing much longer inputs. In this paper, we present MuLD: a new long document benchmark consisting only of documents over 10,000 tokens. By modifying existing NLP tasks, we create a diverse benchmark which requires models to successfully capture long-term dependencies in the text. We evaluate how existing models perform, and find that our benchmark tasks are much more challenging than their ‘short document’ equivalents. Furthermore, by evaluating both regular and efficient transformers, we show that models with increased context length are better able to solve the tasks presented, suggesting that future improvements in these models are vital for solving similar long document problems. We release the data and code for baselines to encourage further research on efficient NLP models.
Anthology ID:
2022.lrec-1.392
Volume:
Proceedings of the Thirteenth Language Resources and Evaluation Conference
Month:
June
Year:
2022
Address:
Marseille, France
Editors:
Nicoletta Calzolari, Frédéric Béchet, Philippe Blache, Khalid Choukri, Christopher Cieri, Thierry Declerck, Sara Goggi, Hitoshi Isahara, Bente Maegaard, Joseph Mariani, Hélène Mazo, Jan Odijk, Stelios Piperidis
Venue:
LREC
Publisher:
European Language Resources Association
Pages:
3675–3685
URL:
https://aclanthology.org/2022.lrec-1.392
Cite (ACL):
George Hudson and Noura Al Moubayed. 2022. MuLD: The Multitask Long Document Benchmark. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 3675–3685, Marseille, France. European Language Resources Association.
Cite (Informal):
MuLD: The Multitask Long Document Benchmark (Hudson & Al Moubayed, LREC 2022)
PDF:
https://preview.aclanthology.org/emnlp-22-attachments/2022.lrec-1.392.pdf
Code:
ghomashudson/muld
Data:
MuLD, NarrativeQA
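
A minimal usage sketch in Python, using the Hugging Face datasets library. The hub id "ghomashudson/muld", the "NarrativeQA" config name, and the "input"/"output" field names are assumptions inferred from the Code and Data links above, not confirmed by this page; check the linked repository for the exact identifiers.

# Minimal sketch: load one MuLD task and inspect an example.
# Hub id, config name, and field names are assumptions; see the linked repo.
from datasets import load_dataset

dataset = load_dataset("ghomashudson/muld", "NarrativeQA", split="validation")

example = dataset[0]
print(len(example["input"].split()))  # MuLD documents exceed 10,000 tokens
print(example["output"])              # reference output(s) for this example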