Jiaen Liu


2025

ViDove: A Translation Agent System with Multimodal Context and Memory-Augmented Reasoning
Yichen Lu | Wei Dai | Jiaen Liu | Ching Wing Kwok | Zongheng Wu | Xudong Xiao | Ao Sun | Sheng Fu | Jianyuan Zhan | Yian Wang | Takatomo Saito | Sicheng Lai
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: System Demonstrations

LLM-based translation agents have achieved highly human-like translation results and are capable of handling longer and more complex contexts with greater efficiency. However, they are typically limited to text-only inputs. In this paper, we introduce ViDove, a translation agent system designed for multimodal input. Inspired by the workflow of human translators, ViDove leverages visual and contextual background information to enhance the translation process. Additionally, we integrate a multimodal memory system and long- and short-term memory modules enriched with domain-specific knowledge, enabling the agent to perform more accurately and adaptively in real-world scenarios. As a result, ViDove achieves significantly higher translation quality in both subtitle generation and general translation tasks, with a 28% improvement in BLEU scores and a 15% improvement in SubER compared to previous state-of-the-art baselines. Moreover, we introduce DoveBench, a new benchmark for long-form automatic video subtitling and translation, featuring 17 hours of high-quality, human-annotated data. Our demo is available here: https://vidove.willbe03.com/

RealHarm: A Collection of Real-World Language Model Application Failures
Pierre Le Jeune | Jiaen Liu | Luca Rossi | Matteo Dora
Proceedings of the First Workshop on LLM Security (LLMSEC)

Language model deployments in consumer-facing applications introduce numerous risks. While existing research on harms and hazards of such applications follows top-down approaches derived from regulatory frameworks and theoretical analyses, empirical evidence of real-world failure modes remains underexplored. In this work, we introduce RealHarm, a dataset of annotated problematic interactions with AI agents built from a systematic review of publicly reported incidents. Analyzing harms, causes, and hazards specifically from the deployer’s perspective, we find that reputational damage constitutes the predominant organizational harm, while misinformation emerges as the most common hazard category. We empirically evaluate state-of-the-art guardrails and content moderation systems to probe whether such systems would have prevented the incidents, revealing a significant gap in the protection of AI applications.