Kevin D. Ashley

Also published as: Kevin Ashley


2024

Adding Argumentation into Human Evaluation of Long Document Abstractive Summarization: A Case Study on Legal Opinions
Mohamed Elaraby | Huihui Xu | Morgan Gray | Kevin Ashley | Diane Litman
Proceedings of the Fourth Workshop on Human Evaluation of NLP Systems (HumEval) @ LREC-COLING 2024

Human evaluation remains the gold standard for assessing abstractive summarization. However, current practices often prioritize constructing evaluation guidelines for fluency, coherence, and factual accuracy, overlooking other critical dimensions. In this paper, we investigate argument coverage in abstractive summarization by focusing on long legal opinions, where summaries must effectively encapsulate the document’s argumentative nature. We introduce a set of human-evaluation guidelines to evaluate generated summaries based on argumentative coverage. These guidelines enable us to assess three distinct summarization models, studying the influence of including argument roles in summarization. Furthermore, we utilize these evaluation scores to benchmark automatic summarization metrics against argument coverage, providing insights into the effectiveness of automated evaluation methods.

2021

Discovering Explanatory Sentences in Legal Case Decisions Using Pre-trained Language Models
Jaromir Savelka | Kevin Ashley
Findings of the Association for Computational Linguistics: EMNLP 2021

Legal texts routinely use concepts that are difficult to understand. Lawyers elaborate on the meaning of such concepts by, among other things, carefully investigating how they have been used in the past. Finding text snippets that mention a particular concept in a useful way is tedious, time-consuming, and hence expensive. We assembled a dataset of 26,959 sentences drawn from legal case decisions and labeled them in terms of their usefulness for explaining selected legal concepts. Using the dataset, we study the effectiveness of transformer models pre-trained on large language corpora in detecting which of the sentences are useful. In light of the models’ predictions, we analyze various linguistic properties of the explanatory sentences as well as their relationship to the legal concept that needs to be explained. We show that the transformer-based models are capable of learning surprisingly sophisticated features and outperform the prior approaches to the task.

2017

Proceedings of the 4th Workshop on Argument Mining
Ivan Habernal | Iryna Gurevych | Kevin Ashley | Claire Cardie | Nancy Green | Diane Litman | Georgios Petasis | Chris Reed | Noam Slonim | Vern Walker
Proceedings of the 4th Workshop on Argument Mining

Sentence Boundary Detection in Adjudicatory Decisions in the United States
Jaromir Savelka | Vern R. Walker | Matthias Grabmair | Kevin D. Ashley
Traitement Automatique des Langues, Volume 58, Numéro 2 : Traitement automatique de la langue juridique [Legal Natural Language Processing]

2016

Extracting Case Law Sentences for Argumentation about the Meaning of Statutory Terms
Jaromír Šavelka | Kevin D. Ashley
Proceedings of the Third Workshop on Argument Mining (ArgMining2016)

2014

Proceedings of the First Workshop on Argumentation Mining
Nancy Green | Kevin Ashley | Diane Litman | Chris Reed | Vern Walker
Proceedings of the First Workshop on Argumentation Mining

1986

Hypotheticals as Heuristic Device
Edwina L. Rissland | Kevin D. Ashley
Strategic Computing - Natural Language Workshop: Proceedings of a Workshop Held at Marina del Rey, California, May 1-2, 1986