Son Luu
Also published as: Son T. Luu, Son T.Luu
2026
ViGoEmotions: A Benchmark Dataset For Fine-grained Emotion Detection on Vietnamese Texts
Tran Quang Hung | Pham Tien Nam | Son T. Luu | Kiet Van Nguyen
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Emotion classification plays a significant role in emotion prediction and harmful content detection. Recent advances in NLP, particularly through large language models (LLMs), have greatly improved results in this field. This study introduces ViGoEmotions, a Vietnamese emotion corpus comprising 20,664 social media comments, each labeled using a taxonomy of 27 distinct fine-grained emotions. To evaluate the quality of the dataset and its impact on emotion classification, eight pre-trained Transformer-based models were evaluated under three preprocessing strategies: preserving original emojis with rule-based normalization, converting emojis into textual descriptions, and applying ViSoLex, a model-based lexical normalization system. Results show that converting emojis into text often improves the performance of several BERT-based baselines, while preserving emojis yields the best results for ViSoBERT and CafeBERT. In contrast, removing emojis generally leads to lower performance. ViSoBERT achieved the highest Macro F1-score of 61.50% and Weighted F1-score of 63.26%, with strong performance also observed from CafeBERT and PhoBERT. These findings highlight that while the proposed corpus can support diverse architectures effectively, preprocessing strategies and annotation quality remain key factors influencing downstream performance.
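The "convert emojis into textual descriptions" strategy from the abstract can be sketched as follows. This is a minimal illustration, not the dataset's actual normalization pipeline; the emoji-to-text mapping below is a hypothetical stand-in for a full emoji lexicon (e.g. the Unicode CLDR short names).

```python
# Minimal sketch of emoji-to-text preprocessing for social media comments.
# The mapping here is illustrative only; a real system would cover the
# full emoji inventory.
EMOJI_TO_TEXT = {
    "😂": " face_with_tears_of_joy ",
    "❤": " red_heart ",
    "😢": " crying_face ",
}

def convert_emojis(comment: str) -> str:
    """Replace each known emoji with a textual description token."""
    for emoji_char, description in EMOJI_TO_TEXT.items():
        comment = comment.replace(emoji_char, description)
    # Collapse any doubled spaces introduced by the replacements.
    return " ".join(comment.split())
```

The same function with an empty mapping corresponds to the "preserve emojis" setting, which makes it easy to compare strategies under one harness.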
2025
VLSP 2025 MLQA-TSR Challenge: Vietnamese Multimodal Legal Question Answering on Traffic Sign Regulation
Son T. Luu | Trung Vo | Hiep Nguyen | Khanh Quoc Tran | Kiet Van Nguyen | Vu Tran | Ngan Luu-Thuy Nguyen | Le-Minh Nguyen
Proceedings of the 11th International Workshop on Vietnamese Language and Speech Processing
DocIE@XLLM25: ZeroSemble - Robust and Efficient Zero-Shot Document Information Extraction with Heterogeneous Large Language Model Ensembles
Nguyen Pham Hoang Le | An Dinh Thien | Son T. Luu | Kiet Van Nguyen
Proceedings of the 1st Joint Workshop on Large Language Models and Structure Modeling (XLLM 2025)
The schematization of knowledge, including the extraction of entities and relations from documents, poses significant challenges to traditional approaches because of document ambiguity, heterogeneity, and the high cost of domain-specific training. Although Large Language Models (LLMs) allow extraction without prior training on the dataset, the need for fine-tuning, along with low precision, especially in relation extraction, remains an obstacle. In the absence of domain-specific training, we present a new zero-shot ensemble approach using DeepSeek-R1-Distill-Llama-70B, Llama-3.3-70B, and Qwen-2.5-32B. Our key innovation is a two-stage pipeline that first consolidates high-confidence entities through ensemble techniques, then leverages Qwen-2.5-32B with engineered prompts to generate precise semantic triples. This approach effectively resolves the low precision typically encountered in relation extraction. Experiments demonstrate significant gains in both accuracy and efficiency across diverse domains, with our method ranking in the top 2 on the official leaderboard of Shared Task-IV of The 1st Joint Workshop on Large Language Models and Structure Modeling. This competitive performance validates our approach as a compelling solution for practitioners seeking robust document-level information extraction without the burden of task-specific fine-tuning. Our code can be found at https://github.com/dinhthienan33/ZeroSemble.
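The first, entity-consolidation stage of such an ensemble pipeline can be sketched as below. This is a simplified assumption of how high-confidence entities might be consolidated by voting across the three models' outputs; the threshold and normalization are illustrative, not the paper's exact procedure.

```python
from collections import Counter

def consolidate_entities(model_outputs: list[list[str]],
                         min_votes: int = 2) -> set[str]:
    """Keep entities proposed by at least `min_votes` ensemble members.

    `model_outputs` holds one candidate-entity list per model; lowercasing
    and stripping lets surface variants vote for the same entity.
    """
    votes = Counter()
    for entities in model_outputs:
        # Deduplicate so each model votes at most once per entity.
        for entity in {e.strip().lower() for e in entities}:
            votes[entity] += 1
    return {entity for entity, count in votes.items() if count >= min_votes}
```

The consolidated entity set would then be passed to the second stage, where a single model generates semantic triples restricted to those entities.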
2024
VlogQA: Task, Dataset, and Baseline Models for Vietnamese Spoken-Based Machine Reading Comprehension
Thinh Ngo | Khoa Dang | Son Luu | Kiet Nguyen | Ngan Nguyen
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
This paper presents the development process of a Vietnamese spoken language corpus for machine reading comprehension (MRC) and provides insights into the challenges and opportunities of using real-world data for MRC tasks. Existing Vietnamese MRC corpora mainly focus on formal written documents such as Wikipedia articles, online newspapers, or textbooks. In contrast, VlogQA consists of 10,076 question-answer pairs based on 1,230 transcript documents sourced from YouTube, an extensive source of user-uploaded content, covering the topics of food and travel. By capturing the spoken language of native Vietnamese speakers in natural settings, an area largely overlooked in Vietnamese research, the corpus provides a valuable resource for future research on reading comprehension for the Vietnamese language. Regarding performance evaluation, our deep-learning models achieved the highest F1 score of 75.34% on the test set, indicating significant progress in machine reading comprehension for Vietnamese spoken language data. The highest EM score we achieved is 53.97%, which reflects the challenge of processing spoken content and highlights the need for further improvement.
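The F1 and EM figures quoted above are the standard metrics for extractive MRC. A minimal sketch of how they are typically computed per prediction is shown below; this simplifies away the multi-reference handling and language-specific normalization of official evaluation scripts.

```python
from collections import Counter

def exact_match(prediction: str, reference: str) -> float:
    """1.0 when prediction equals reference after simple normalization."""
    return float(prediction.strip().lower() == reference.strip().lower())

def token_f1(prediction: str, reference: str) -> float:
    """Token-level F1: harmonic mean of precision and recall
    over tokens shared between prediction and reference."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    common = Counter(pred_tokens) & Counter(ref_tokens)  # multiset overlap
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)
```

EM is strict, which is one reason it lags F1 so sharply on noisy spoken transcripts: a prediction can overlap a reference heavily yet still score 0 on EM.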
2022
UIT-ViCoV19QA: A Dataset for COVID-19 Community-based Question Answering on Vietnamese Language
Triet Thai | Ngan Chu Thao-Ha | Anh Vo | Son Luu
Proceedings of the 36th Pacific Asia Conference on Language, Information and Computation
2021
UIT-ISE-NLP at SemEval-2021 Task 5: Toxic Spans Detection with BiLSTM-CRF and ToxicBERT Comment Classification
Son T. Luu | Ngan Nguyen
Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)
We present our work on SemEval-2021 Task 5: Toxic Spans Detection. The task aims to build a model that identifies toxic words within whole posts. We use a BiLSTM-CRF model combined with ToxicBERT classification to train the detection model for identifying toxic words in posts. Our model achieves an F1-score of 62.23% on the Toxic Spans Detection task.
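The Toxic Spans Detection task scores systems on character offsets rather than tokens, so a sequence tagger's output needs a post-processing step. The sketch below shows one way to expand token-level toxic/non-toxic predictions, as a BiLSTM-CRF would emit, into character offsets; the two-label tagging scheme is a simplifying assumption, not necessarily the system's exact label set.

```python
def tags_to_char_offsets(token_spans: list[tuple[int, int]],
                         tags: list[str]) -> list[int]:
    """Expand token-level 'TOXIC' predictions into the task's
    character-offset format (one index per toxic character).

    `token_spans` holds (start, end) character offsets per token,
    aligned with `tags` ('TOXIC' or 'O').
    """
    offsets = []
    for (start, end), tag in zip(token_spans, tags):
        if tag == "TOXIC":
            offsets.extend(range(start, end))
    return offsets
```

Evaluation then compares the predicted offset set against the gold offset set with a character-level F1, which is why span boundaries matter as much as span detection.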
2020
Empirical Study of Text Augmentation on Social Media Text in Vietnamese
Son Luu | Kiet Nguyen | Ngan Nguyen
Proceedings of the 34th Pacific Asia Conference on Language, Information and Computation
BANANA at WNUT-2020 Task 2: Identifying COVID-19 Information on Twitter by Combining Deep Learning and Transfer Learning Models
Tin Huynh | Luan Thanh Luan | Son T. Luu
Proceedings of the Sixth Workshop on Noisy User-generated Text (W-NUT 2020)
The outbreak of the COVID-19 virus has significantly affected the health of people all over the world. It is therefore essential that everyone has access to constant and accurate information about the disease. This paper describes our prediction system for WNUT-2020 Task 2: Identification of Informative COVID-19 English Tweets. The dataset for this task contains 10,000 English tweets labeled by humans. An ensemble of our three transformer-based and deep learning models produces the final prediction. Experimental results indicate that our system achieved an F1-score of 88.81% for the INFORMATIVE label on the test set.
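One simple way to combine the three models' predictions is hard majority voting over per-tweet labels, sketched below. This is an assumption about the ensembling scheme; systems of this kind often average class probabilities instead, and only the task's label names are taken from the paper.

```python
from collections import Counter

def majority_vote(predictions: list[str]) -> str:
    """Return the label predicted by most models; ties resolve to the
    label seen first (Counter preserves insertion order)."""
    return Counter(predictions).most_common(1)[0][0]

def ensemble_predict(per_model_preds: list[list[str]]) -> list[str]:
    """Combine per-model label lists (one list per model, aligned by tweet)."""
    return [majority_vote(list(votes)) for votes in zip(*per_model_preds)]
```

With an odd number of models, as here, binary labels can never tie, which keeps the combination rule deterministic.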