Spencer Poff
2025
SwiLTra-Bench: The Swiss Legal Translation Benchmark
Joel Niklaus | Jakob Merane | Luka Nenadic | Sina Ahmadi | Yingqiang Gao | Cyrill A. H. Chevalley | Claude Humbel | Christophe Gösken | Lorenzo Tanzi | Thomas Lüthi | Stefan Palombo | Spencer Poff | Boling Yang | Nan Wu | Matthew Guillod | Robin Mamié | Daniel Brunner | Julio Pereyra | Niko Grupen
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
In Switzerland, legal translation is uniquely important due to the country's four official languages and requirements for multilingual legal documentation. However, this process traditionally relies on professionals who must be both legal experts and skilled translators, creating bottlenecks and impeding effective access to justice. To address this challenge, we introduce SwiLTra-Bench, a comprehensive multilingual benchmark of over 180K aligned Swiss legal translation pairs comprising laws, headnotes, and press releases across all Swiss languages along with English, designed to evaluate LLM-based translation systems. Our systematic evaluation reveals that frontier models achieve superior translation performance across all document types, while specialized translation systems excel specifically on laws but underperform on headnotes. Through rigorous testing and human expert validation, we demonstrate that while fine-tuning open SLMs significantly improves their translation quality, they still lag behind the best zero-shot prompted frontier models such as Claude-3.5-Sonnet. Additionally, we present SwiLTra-Judge, a specialized LLM evaluation system that aligns best with human expert assessments.
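The zero-shot prompting setup the abstract compares against can be illustrated with a short sketch. This is not the paper's actual evaluation harness; the prompt wording, language pair, and function name are assumptions made for illustration, using the public Anthropic Python client.

```python
# Minimal sketch of zero-shot legal translation with a frontier model.
# The prompt template and helper are illustrative assumptions, not the
# benchmark's actual harness.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def translate_law(text: str, src: str = "German", tgt: str = "French") -> str:
    """Ask the model for a faithful translation of one aligned legal segment."""
    prompt = (
        f"Translate the following Swiss legal text from {src} to {tgt}. "
        f"Preserve legal terminology and article structure.\n\n{text}"
    )
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # the frontier model family named in the abstract
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

print(translate_law("Art. 1 Jede Person hat das Recht auf rechtliches Gehör."))
```

In the zero-shot setting, a harness like this would be run over each aligned source/target pair and the output scored against the reference translation.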
2021
Retrieval Augmentation Reduces Hallucination in Conversation
Kurt Shuster | Spencer Poff | Moya Chen | Douwe Kiela | Jason Weston
Findings of the Association for Computational Linguistics: EMNLP 2021
Despite showing increasingly human-like conversational abilities, state-of-the-art dialogue models often suffer from factual incorrectness and hallucination of knowledge (Roller et al., 2020). In this work we explore the use of neural-retrieval-in-the-loop architectures, recently shown to be effective in open-domain QA (Lewis et al., 2020b; Izacard and Grave, 2020), for knowledge-grounded dialogue, a task that is arguably more challenging as it requires querying based on complex multi-turn dialogue context and generating conversationally coherent responses. We study various types of architectures with multiple components (retrievers, rankers, and encoder-decoders), with the goal of maximizing knowledgeability while retaining conversational ability. We demonstrate that our best models obtain state-of-the-art performance on two knowledge-grounded conversational tasks. The models exhibit open-domain conversational capabilities, generalize effectively to scenarios not within the training data, and, as verified by human evaluations, substantially reduce the well-known problem of knowledge hallucination in state-of-the-art chatbots.
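The retrieval-in-the-loop idea can be made concrete with a minimal sketch: retrieve passages relevant to the multi-turn dialogue context, then condition the generator on them. The TF-IDF retriever, toy knowledge store, and stubbed generator input below are simplifications of the neural retriever/ranker/encoder-decoder pipelines studied in the paper, not the paper's models.

```python
# Minimal sketch of one retrieval-in-the-loop dialogue step. TF-IDF
# stands in for the neural retriever; a real system would feed the
# grounded input to an encoder-decoder generator.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

knowledge = [
    "The Eiffel Tower was completed in 1889 and is 330 metres tall.",
    "Mount Everest is the highest mountain above sea level.",
    "The Great Barrier Reef is the world's largest coral reef system.",
]

vectorizer = TfidfVectorizer().fit(knowledge)
doc_vectors = vectorizer.transform(knowledge)

def retrieve(dialogue_context: str, k: int = 1) -> list[str]:
    """Score every passage against the full dialogue context; return the top k."""
    scores = cosine_similarity(
        vectorizer.transform([dialogue_context]), doc_vectors
    )[0]
    top = scores.argsort()[::-1][:k]
    return [knowledge[i] for i in top]

context = "User: How tall is the Eiffel Tower?"
passages = retrieve(context)
# The generator would consume context plus retrieved knowledge; here we
# just show the grounded input it would receive.
print("Generator input:", context, "| knowledge:", passages[0])
```

Grounding the generator on retrieved passages, rather than on the dialogue context alone, is what reduces hallucination: the model can copy facts from evidence instead of relying on parametric memory.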