Leonard Tang
2023
MAUD: An Expert-Annotated Legal NLP Dataset for Merger Agreement Understanding
Steven Wang | Antoine Scardigli | Leonard Tang | Wei Chen | Dmitry Levkin | Anya Chen | Spencer Ball | Thomas Woodside | Oliver Zhang | Dan Hendrycks
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Reading comprehension of legal text can be a particularly challenging task due to the length and complexity of legal clauses and a shortage of expert-annotated datasets. To address this challenge, we introduce the Merger Agreement Understanding Dataset (MAUD), an expert-annotated reading comprehension dataset based on the American Bar Association’s 2021 Public Target Deal Points Study, with over 39,000 examples and over 47,000 total annotations. Our fine-tuned Transformer baselines show promising results, with models performing well above random on most questions. However, on a large subset of questions, there is still room for significant improvement. As the only expert-annotated merger agreement dataset, MAUD is valuable as a benchmark for both the legal profession and the NLP community.
2022
LILA: A Unified Benchmark for Mathematical Reasoning
Swaroop Mishra | Matthew Finlayson | Pan Lu | Leonard Tang | Sean Welleck | Chitta Baral | Tanmay Rajpurohit | Oyvind Tafjord | Ashish Sabharwal | Peter Clark | Ashwin Kalyan
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Mathematical reasoning skills are essential for general-purpose intelligent systems to perform tasks from grocery shopping to climate modeling. Towards evaluating and improving AI systems in this domain, we propose LILA, a unified mathematical reasoning benchmark consisting of 23 diverse tasks along four dimensions: (i) mathematical abilities, e.g., arithmetic, calculus; (ii) language format, e.g., question-answering, fill-in-the-blanks; (iii) language diversity, e.g., no language, simple language; (iv) external knowledge, e.g., commonsense, physics. We construct our benchmark by extending 20 datasets, collecting task instructions and solutions in the form of Python programs, thereby obtaining explainable solutions in addition to the correct answer. We additionally introduce two evaluation datasets to measure out-of-distribution performance and robustness to language perturbation. Finally, we introduce BHASKARA, a general-purpose mathematical reasoning model trained on LILA. Importantly, we find that multi-tasking leads to significant improvements (average relative improvement of 21.83% F1 score vs. single-task models), while the best performing model only obtains 60.40%, indicating the room for improvement in general mathematical reasoning and understanding.
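As the abstract notes, LILA pairs each question with a solution expressed as a Python program whose output is the answer. A minimal, hypothetical illustration of what such a program-form solution might look like (the question, function name, and values here are invented for illustration and are not taken from the dataset):

```python
# Hypothetical sketch of a LILA-style program-form solution.
# Question (invented): "A grocery order has 6 items at $4.50 each,
# with a 10% discount applied. What is the total cost?"

def solution():
    price_per_item = 4.50
    items = 6
    discount = 0.10
    # Compute the discounted total; the program's return value is the answer.
    total = price_per_item * items * (1 - discount)
    return round(total, 2)

if __name__ == "__main__":
    print(solution())  # prints 24.3
```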