Tu Minh Phuong
2026
VIVID: A Culturally Grounded Benchmark Exposing the Figurative Language Gap in Vietnamese NLP
Tu Tran Do | Nhat Ngoc Nguyen | Tung Khanh Tran | Hoang D. Nguyen | Tu Minh Phuong | Long Hoang Dang
Proceedings of the Fifteenth Language Resources and Evaluation Conference
We present VIVID (Vietnamese Idioms for Validation and Interpretation Depth), the first systematic benchmark for evaluating culturally grounded figurative language understanding in Vietnamese. VIVID comprises 1,636 idioms and proverbs annotated with five complexity traits (literal expressions, pragmatic nuances, Sino-Vietnamese terms, uncommon vocabulary, folk knowledge) and seven semantic themes. We establish an evaluation framework combining generative and discriminative tasks, proposing an LLM-as-a-Judge approach with aspect-based prompting validated against human judgment (Cohen’s κ = 0.792). Evaluating eight state-of-the-art models reveals critical gaps: Vietnamese-specialized models drastically underperform multilingual systems (VinaLLaMA-7B: 0.13 vs. GPT-4o: 2.46), and even top models achieve less than 50% of maximum scores. Notably, few-shot prompting does not universally improve performance, with GPT-4o exhibiting degradation due to stylistic overfitting. Our analysis exposes systematic failures including literal over-interpretation, lexical gaps, and pragmatic flattening, demonstrating that current models lack cultural competence for nuanced figurative interpretation. VIVID provides an essential tool for advancing figurative language understanding in culturally rich contexts.
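The abstract validates its LLM-as-a-Judge setup against human judgment via Cohen's κ (reported as 0.792). As an illustration of that agreement statistic only — the judge's actual aspects, prompts, and rating scale are not reproduced here — a minimal stdlib sketch of Cohen's κ over paired judge/human labels:

```python
# Cohen's kappa: chance-corrected agreement between two raters.
# This is the statistic used to validate the LLM judge against
# human annotators; the label lists here are illustrative.
def cohens_kappa(rater_a, rater_b):
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    # Observed agreement: fraction of items both raters labeled identically.
    p_o = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    # Expected agreement if the raters labeled independently at their
    # own marginal label frequencies.
    p_e = sum((rater_a.count(l) / n) * (rater_b.count(l) / n)
              for l in labels)
    return (p_o - p_e) / (1 - p_e)
```

A κ near 0 means agreement is no better than chance; values approaching 1, like the 0.792 reported, indicate substantial agreement between the LLM judge and human raters.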
2020
Answering Legal Questions by Learning Neural Attentive Text Representation
Phi Manh Kien | Ha-Thanh Nguyen | Ngo Xuan Bach | Vu Tran | Minh Le Nguyen | Tu Minh Phuong
Proceedings of the 28th International Conference on Computational Linguistics
Text representation plays a vital role in retrieval-based question answering, especially in the legal domain, where documents are usually long and complicated. The better the question and the legal documents are represented, the more accurately they are matched. In this paper, we focus on the task of answering legal questions at the article level. Given a legal question, the goal is to retrieve all the correct and valid legal articles that can be used as the basis for answering the question. We present a retrieval-based model for the task by learning neural attentive text representation. Our text representation method first leverages convolutional neural networks to extract important information from a question and legal articles. Attention mechanisms are then used to represent the question and articles and to select appropriate information to align them in a matching process. Experimental results on an annotated corpus of 5,922 Vietnamese legal questions show that our model outperforms state-of-the-art retrieval-based methods for question answering by large margins in terms of both recall and NDCG.
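The matching pipeline sketched in the abstract — token-level features, attention-based alignment between question and article, then a similarity score — can be caricatured in a few lines of NumPy. This is a minimal sketch only: the mean pooling, dot-product attention, and cosine scoring below are illustrative assumptions, not the paper's actual CNN-based architecture.

```python
import numpy as np

def attentive_match(q, a):
    """Score a (question, article) pair via attention alignment.

    q: (Lq, d) question token features (e.g., CNN feature maps)
    a: (La, d) article token features
    Returns a cosine-similarity score in [-1, 1].
    """
    # Alignment scores between every question and article token.
    scores = q @ a.T                                   # (Lq, La)
    # Softmax over article tokens: attention weights per question token.
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    # Attended article representation aligned to each question token.
    aligned = w @ a                                    # (Lq, d)
    # Pool to fixed vectors and score by cosine similarity
    # (pooling/scoring choices are assumptions for illustration).
    qv, av = q.mean(axis=0), aligned.mean(axis=0)
    return float(qv @ av / (np.linalg.norm(qv) * np.linalg.norm(av) + 1e-9))
```

In a retrieval setting, a score like this would be computed for the question against every candidate article, with the top-ranked articles returned — which is why recall and NDCG are the natural evaluation metrics.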