Dan Hendrycks
2023
MAUD: An Expert-Annotated Legal NLP Dataset for Merger Agreement Understanding
Steven Wang | Antoine Scardigli | Leonard Tang | Wei Chen | Dmitry Levkin | Anya Chen | Spencer Ball | Thomas Woodside | Oliver Zhang | Dan Hendrycks
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Reading comprehension of legal text can be a particularly challenging task due to the length and complexity of legal clauses and a shortage of expert-annotated datasets. To address this challenge, we introduce the Merger Agreement Understanding Dataset (MAUD), an expert-annotated reading comprehension dataset based on the American Bar Association’s 2021 Public Target Deal Points Study, with over 39,000 examples and over 47,000 total annotations. Our fine-tuned Transformer baselines show promising results, with models performing well above random on most questions. However, on a large subset of questions, there is still room for significant improvement. As the only expert-annotated merger agreement dataset, MAUD is valuable as a benchmark for both the legal profession and the NLP community.
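The fine-tuned Transformer baselines mentioned in the abstract treat each MAUD question as classification over a fixed set of expert-defined answers. The snippet below is a minimal sketch of that framing using the HuggingFace transformers API; the model name, clause excerpt, question text, and label count are illustrative placeholders, not the paper's exact configuration.

```python
# Minimal sketch (not the authors' code): one MAUD-style deal-point question
# framed as multiple-choice classification with a fine-tuned Transformer.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bert-base-uncased"  # any pretrained encoder; choice is illustrative
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)

clause = "..."  # a merger-agreement clause excerpt would go here
question = "What qualifies as a Material Adverse Effect carve-out?"  # hypothetical question

# Encode the (question, clause) pair, truncating long legal text to the model's limit.
inputs = tokenizer(question, clause, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_answer = logits.argmax(dim=-1).item()  # index into the question's answer set
```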
2020
Pretrained Transformers Improve Out-of-Distribution Robustness
Dan Hendrycks | Xiaoyuan Liu | Eric Wallace | Adam Dziedzic | Rishabh Krishnan | Dawn Song
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Although pretrained Transformers such as BERT achieve high accuracy on in-distribution examples, do they generalize to new distributions? We systematically measure out-of-distribution (OOD) generalization for seven NLP datasets by constructing a new robustness benchmark with realistic distribution shifts. We measure the generalization of previous models including bag-of-words models, ConvNets, and LSTMs, and we show that pretrained Transformers’ performance declines are substantially smaller. Pretrained Transformers are also more effective at detecting anomalous or OOD examples, while many previous models are frequently worse than chance. We examine which factors affect robustness, finding that larger models are not necessarily more robust, distillation can be harmful, and more diverse pretraining data can enhance robustness. Finally, we show where future work can improve OOD robustness.
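One standard scoring rule for the OOD detection the abstract describes is the maximum softmax probability (MSP): confident predictions are treated as in-distribution and low-confidence ones flagged as anomalous. The sketch below assumes a generic PyTorch classifier; the placeholder logits and threshold stand in for a real model and validation set, and this is not the paper's exact evaluation pipeline.

```python
# Minimal sketch (assumed setup): scoring inputs with the maximum softmax
# probability (MSP), a common OOD-detection baseline for classifiers like
# the fine-tuned Transformers evaluated in the paper.
import torch
import torch.nn.functional as F

def msp_score(logits: torch.Tensor) -> torch.Tensor:
    """Higher max-softmax probability suggests a more in-distribution input."""
    return F.softmax(logits, dim=-1).max(dim=-1).values

# Usage: score held-out in-distribution and OOD batches, then sweep a
# threshold (or compute AUROC) to measure detection quality.
id_scores = msp_score(torch.randn(8, 2))   # placeholder logits for ID examples
ood_scores = msp_score(torch.randn(8, 2))  # placeholder logits for OOD examples
threshold = 0.9  # illustrative; in practice chosen on a validation set
flagged = ood_scores < threshold  # low confidence -> flag as anomalous/OOD
```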