Leslie Barrett


2025

LexTime: A Benchmark for Temporal Ordering of Legal Events
Claire Barale | Leslie Barrett | Vikram Sunil Bajaj | Michael Rovatsos
Findings of the Association for Computational Linguistics: EMNLP 2025

Understanding temporal relationships and accurately reconstructing the event timeline are important for case law analysis, compliance monitoring, and legal summarization. However, existing benchmarks lack evaluation on specialized legal language, leaving a gap in understanding how LLMs handle event ordering in legal contexts. We introduce LexTime, a dataset designed to evaluate LLMs’ event ordering capabilities in legal language, consisting of 512 instances from U.S. Federal Complaints with annotated event pairs and their temporal relations. Our findings show that (1) LLMs are more accurate on legal event ordering than on narrative texts (up to +10.5%); (2) longer input contexts and implicit events boost accuracy, reaching 80.8% for implicit-explicit event pairs; (3) legal linguistic complexities and nested clauses remain a challenge. While performance is promising, specific features of legal texts remain a bottleneck for legal temporal event reasoning, and we propose concrete modeling directions to better address them.
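As a rough illustration of the task, the Python sketch below builds a pairwise temporal-ordering query over two annotated events; the prompt wording, relation labels, example complaint text, and the build_ordering_prompt helper are assumptions for illustration only, not LexTime's actual annotation schema or evaluation prompts.

# Illustrative sketch: posing a pairwise temporal-ordering query to an LLM.
# The prompt template, label set, and example text are assumptions, not the
# LexTime schema or evaluation protocol.

RELATIONS = ["BEFORE", "AFTER"]  # assumed label set for a pairwise query

def build_ordering_prompt(context: str, event_a: str, event_b: str) -> str:
    """Build a zero-shot prompt asking an LLM to order two events."""
    return (
        "You are given an excerpt from a U.S. federal complaint.\n\n"
        f"Excerpt:\n{context}\n\n"
        f"Event A: {event_a}\n"
        f"Event B: {event_b}\n\n"
        f"Does Event A happen {RELATIONS[0]} or {RELATIONS[1]} Event B? "
        "Answer with a single word."
    )

if __name__ == "__main__":
    context = (
        "After the parties executed the agreement, Defendant failed to "
        "remit payment, and Plaintiff subsequently filed this action."
    )
    prompt = build_ordering_prompt(
        context,
        event_a="the parties executed the agreement",
        event_b="Plaintiff filed this action",
    )
    print(prompt)  # send to any chat-completion client and parse the answer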

Can LLMs Find a Needle in a Haystack? A Look at Anomaly Detection Language Modeling
Leslie Barrett | Vikram Sunil Bajaj | Robert John Kingan
Findings of the Association for Computational Linguistics: EMNLP 2025

Anomaly detection (AD), also known as outlier detection, is a longstanding problem in machine learning that has recently been applied to text data. In such datasets, a textual anomaly is a part of the text that does not fit its overall topic. Some recent approaches to textual AD have used transformer models, achieving positive results but with trade-offs in pre-training time and inflexibility with respect to new domains. Others have used linear models, which are fast and more flexible but not always competitive on certain datasets. We introduce a new approach based on large pre-trained language models in three modalities. Our findings indicate that LLMs beat baselines when AD is presented as an imbalanced classification problem, regardless of the concentration of anomalous samples. However, their performance is markedly worse on unsupervised AD, suggesting that the concept of “anomaly” may somehow elude the LLM reasoning process.

Can LLMs Be Efficient Predictors of Conversational Derailment?
Kaustubh Olpadkar | Vikram Sunil Bajaj | Leslie Barrett
Findings of the Association for Computational Linguistics: EMNLP 2025

Conversational derailment, when online discussions stray from their intended topics due to toxic or inappropriate remarks, is a common issue on online platforms. These derailments can have negative impacts on users and the online community. While previous work has focused on post hoc identification of toxic content, recent efforts emphasize proactive prediction of derailments before they occur, enabling early moderation. However, forecasting derailment is difficult due to the context-dependent emergence of toxicity and the need for timely alerts. We prompt pre-trained large language models (LLMs) to predict conversational derailment without task-specific fine-tuning. We compare a range of prompting strategies, including chain-of-thought (CoT) reasoning and few-shot exemplars, across small- and large-scale models, and evaluate their performance and inference-cost trade-offs on derailment benchmarks. Our experiments show that the best prompting configuration attains state-of-the-art performance and forecasts derailments earlier than existing approaches. These results demonstrate that LLMs, even without fine-tuning, can serve as an effective tool for proactive conversational moderation.
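The Python sketch below shows how a few-shot derailment-forecasting prompt over the opening turns of a conversation might be assembled; the exemplars, label names, truncation policy, and the build_derailment_prompt helper are illustrative assumptions rather than the paper's evaluated prompting configurations.

# Illustrative sketch: a few-shot prompt for forecasting conversational
# derailment from early turns. Exemplars, labels, and formatting are
# assumptions for illustration, not the paper's evaluated configurations.

from typing import List, Tuple

FEW_SHOT: List[Tuple[List[str], str]] = [
    (["A: I think the edit is fine.",
      "B: You clearly didn't read the policy."], "DERAIL"),
    (["A: Could you add a source?",
      "B: Sure, I'll cite the 2019 report."], "CIVIL"),
]

def build_derailment_prompt(turns: List[str], max_turns: int = 6) -> str:
    """Ask an LLM whether an ongoing conversation will later derail."""
    lines = ["Predict whether each conversation will derail into personal "
             "attacks. Answer DERAIL or CIVIL.\n"]
    for convo, label in FEW_SHOT:
        lines.append("Conversation:\n" + "\n".join(convo))
        lines.append(f"Answer: {label}\n")
    lines.append("Conversation:\n" + "\n".join(turns[:max_turns]))
    lines.append("Answer:")
    return "\n".join(lines)

if __name__ == "__main__":
    ongoing = ["A: This paragraph is misleading.",
               "B: Misleading how, exactly?"]
    print(build_derailment_prompt(ongoing))  # pass to any LLM client; parse the label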

Proceedings of the Natural Legal Language Processing Workshop 2025
Nikolaos Aletras | Ilias Chalkidis | Leslie Barrett | Cătălina Goanță | Daniel Preoțiuc-Pietro | Gerasimos Spanakis
Proceedings of the Natural Legal Language Processing Workshop 2025

2024

Proceedings of the Natural Legal Language Processing Workshop 2024
Nikolaos Aletras | Ilias Chalkidis | Leslie Barrett | Cătălina Goanță | Daniel Preoțiuc-Pietro | Gerasimos Spanakis
Proceedings of the Natural Legal Language Processing Workshop 2024

2023

Proceedings of the Natural Legal Language Processing Workshop 2023
Daniel Preoțiuc-Pietro | Catalina Goanta | Ilias Chalkidis | Leslie Barrett | Gerasimos Spanakis | Nikolaos Aletras
Proceedings of the Natural Legal Language Processing Workshop 2023

2022

Proceedings of the Natural Legal Language Processing Workshop 2022
Nikolaos Aletras | Ilias Chalkidis | Leslie Barrett | Cătălina Goanță | Daniel Preoțiuc-Pietro
Proceedings of the Natural Legal Language Processing Workshop 2022

A Lightweight Yet Robust Approach to Textual Anomaly Detection
Leslie Barrett | Robert Kingan | Alexandra Ortan | Madhavan Seshadri
Proceedings of the Third Workshop on Threat, Aggression and Cyberbullying (TRAC 2022)

Highly imbalanced textual datasets continue to pose a challenge for supervised learning models. However, viewing such imbalanced text data as an anomaly detection (AD) problem has advantages for certain tasks, such as detecting hate speech or inappropriate and/or offensive language in large social media feeds. There, the unwanted content tends to be both rare and non-uniform with respect to its thematic character, and it better fits the definition of an anomaly than of a class. Several recent approaches to textual AD use transformer models, achieving good results but with trade-offs in pre-training and inflexibility with respect to new domains. In this paper we compare two linear models within the NMF family, which also have a recent history in textual AD. We introduce a new approach based on an alternative regularization of the NMF objective. Our results surpass other linear AD models and are on par with deep models, performing comparably well even at very small outlier concentrations.
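For orientation, the Python sketch below scores documents by per-document NMF reconstruction error using scikit-learn's standard Frobenius objective; the paper's alternative regularization of the NMF objective is not reproduced, and the toy corpus, rank, and scoring rule are assumptions for illustration.

# Illustrative sketch: NMF-based textual anomaly scoring via per-document
# reconstruction error. Uses scikit-learn's standard Frobenius NMF; the
# paper's alternative regularization is NOT reproduced here.

import numpy as np
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "the court granted the motion to dismiss the complaint",
    "plaintiff filed a complaint alleging breach of contract",
    "the judge denied the motion for summary judgment",
    "buy cheap watches now limited offer click here",  # thematic outlier
]

# TF-IDF term-document matrix (dense, for simple per-row error computation)
X = TfidfVectorizer().fit_transform(docs).toarray()

# Low-rank nonnegative factorization X ~ W @ H
model = NMF(n_components=2, init="nndsvd", random_state=0, max_iter=500)
W = model.fit_transform(X)
H = model.components_

# Anomaly score: per-document reconstruction error; rows poorly explained
# by the shared topics score higher and are flagged as outliers
errors = np.linalg.norm(X - W @ H, axis=1)
for doc, err in zip(docs, errors):
    print(f"{err:.3f}  {doc}")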

2021

Proceedings of the Natural Legal Language Processing Workshop 2021
Nikolaos Aletras | Ion Androutsopoulos | Leslie Barrett | Catalina Goanta | Daniel Preotiuc-Pietro
Proceedings of the Natural Legal Language Processing Workshop 2021

2019

Proceedings of the Natural Legal Language Processing Workshop 2019
Nikolaos Aletras | Elliott Ash | Leslie Barrett | Daniel Chen | Adam Meyers | Daniel Preotiuc-Pietro | David Rosenberg | Amanda Stent
Proceedings of the Natural Legal Language Processing Workshop 2019

2005

Usability Considerations for a Cellular-based Text Translator
Leslie Barrett | Robert Levin
Proceedings of Machine Translation Summit X: Posters

This paper describes a cellular-telephone-based text-to-text translation system developed at Transclick, Inc. The application translates messages bi-directionally among English, French, German, Italian, Spanish, and Portuguese. We describe design features uniquely suited to handheld-device-based translation systems. In particular, we discuss some of the usability conditions unique to this type of application and present strategies for overcoming usability obstacles encountered in the design phase of the product.

2003

Considerations of methodology and human factors in rating a suite of translated sentences
Leslie Barrett
Workshop on Systemizing MT Evaluation

1998

Using NOMLEX to Produce Nominalization Patterns for Information Extraction
Adam Meyers | Catherine Macleod | Roman Yangarber | Ralph Grishman | Leslie Barrett | Ruth Reeves
The Computational Treatment of Nominals