Quan Nguyen
2026
Reason-to-Learn (R2L): Multi-Agent Knowledge Distillation for Lightweight LLMs in Sentiment Analysis
Le-Huy Tu | Quan Nguyen | Vincent NGUYEN | Johanna Bjorklund | Xuan-Son Vu
Proceedings of the Fifteenth Language Resources and Evaluation Conference
Large Language Models (LLMs) boast remarkable capabilities but face deployment challenges due to computational demands. We introduce Reason-to-Learn (R2L), a novel multi-agent collaborative knowledge distillation framework enabling small LLMs to learn from a distributed system of specialized agent models. Our architecture employs multiple autonomous teacher agents, each with distinct expertise and reasoning capabilities, coordinated by a meta-agent that orchestrates knowledge synthesis and conflict resolution. Unlike prior methods, our flexible four-phase process (Detection, Processing, Rationale Generation, Aggregation) leverages agent-based communication protocols and consensus mechanisms for cross-architecture knowledge transfer, demonstrated primarily on Vietnamese sentiment analysis. Experimental results are definitive: our lightweight R2L-Students (1-1.5B) consistently outperform the individual specialized agents (Qwen32B, Llama70B) and the GPT-4o meta-agent coordinator, especially on complex ABSA tasks. Ablation studies confirm that our multi-agent collaborative approach outperforms traditional fine-tuning and single-agent distillation. Furthermore, R2L enhances the generalizability of lightweight LLMs: our Vietnamese-trained student achieves strong zero-shot cross-lingual performance on Swedish ABSA (Svensk ABSAbank-Imm), with Krippendorff's Alpha scores competitive with the specialized agents. R2L offers an efficient path to compact, high-performing specialist models through coordinated multi-agent learning.
2010
Annotation of Human Gesture using 3D Skeleton Controls
Quan Nguyen | Michael Kipp
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)
The manual transcription of human gesture behavior from video for linguistic analysis is a work-intensive process that results in a rather coarse description of the original motion. We present a novel approach for transcribing gestural movements: by overlaying an articulated 3D skeleton onto the video frame(s), the human coder can replicate original motions on a pose-by-pose basis by manipulating the skeleton. Our tool is integrated into the ANVIL tool so that both symbolic interval data and 3D pose data can be entered in a single tool. Our method allows relatively quick annotation of human poses, which has been validated in a user study. The resulting data are precise enough to create animations that match the original speaker's motion, which can be verified with a realtime viewer. The tool can be applied to a variety of research topics in the areas of conversational analysis, gesture studies, and intelligent virtual agents.