Hung Le
Deakin University
2025
SimSMoE: Toward Efficient Training Mixture of Experts via Solving Representational Collapse
Giang Do | Hung Le | Truyen Tran
Findings of the Association for Computational Linguistics: NAACL 2025
Sparse mixtures of experts (SMoE) have emerged as an effective approach for scaling large language models while keeping computational cost constant. Despite several notable successes of SMoE, effectively training such architectures remains elusive due to the representation collapse problem, which harms model performance and causes parameter redundancy. In this work, we present Similarity-based Sparse Mixture of Experts (SimSMoE), a novel algorithm based on neural-network similarity that guarantees a solution to the representation collapse issue between experts given a fixed FLOPs budget. We conduct extensive empirical evaluations on three large language models, for both pre-training and fine-tuning tasks, to illustrate the efficacy, robustness, and scalability of our method. The results demonstrate that SimSMoE significantly enhances existing routing policies and outperforms other SMoE routing methods on these tasks. Our implementation is publicly available at https://github.com/giangdip2410/SimSMoE.
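The abstract does not name the similarity measure SimSMoE applies between experts; as one illustration only, linear centered kernel alignment (CKA) is a standard way to quantify how similar two experts' representations are, and a high mean pairwise score signals the kind of collapse the paper targets. A minimal sketch, with `collapse_penalty` a hypothetical helper name:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two activation matrices of shape (n_samples, dim).

    Returns 1.0 for identical (up to centering/scaling) representations,
    values near 0 for unrelated ones."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return hsic / (norm_x * norm_y)

def collapse_penalty(expert_outputs):
    """Mean pairwise CKA across experts: high values indicate that the
    experts encode near-identical representations (collapse)."""
    sims = [
        linear_cka(expert_outputs[i], expert_outputs[j])
        for i in range(len(expert_outputs))
        for j in range(i + 1, len(expert_outputs))
    ]
    return float(np.mean(sims))
```

Such a penalty could be monitored or minimized alongside the usual load-balancing loss; the paper's actual mechanism may differ.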
Dynamic Steering With Episodic Memory For Large Language Models
Van Dai Do | Quan Hung Tran | Svetha Venkatesh | Hung Le
Findings of the Association for Computational Linguistics: ACL 2025
Large Language Models (LLMs) exhibit emergent in-context learning (ICL) capabilities, allowing them to adapt to unseen tasks based on example demonstrations. Traditional ICL embeds examples within the prompt, while activation steering uses a vector derived from examples to guide the latent states of LLMs toward desired behaviors. However, traditional ICL is difficult to control quantitatively and consumes valuable context space. Existing activation steering methods apply a single sentence-level steering vector uniformly across all tokens, ignoring LLMs' token-wise, auto-regressive nature. This coarse control can lead to inconsistencies and suboptimal adjustments during generation. To address this problem, we introduce Dynamic Steering with Episodic Memory (DSEM), a novel training-free framework that aligns LLMs to given demonstrations by steering at the token level conditioned on the input query. DSEM employs a key-value memory to store associations between generated tokens and steering vectors. During inference, it uses a nearest-neighbor mechanism to dynamically compute steering vectors for each token chunk, enabling more precise and adaptive guidance. Our method surpasses strong baselines across diverse alignment tasks - including safety, style transfer, and role-playing - demonstrating improved alignment as demonstration size scales.
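The key-value memory and nearest-neighbor retrieval described in the abstract can be sketched in miniature. This is a toy illustration, not the paper's implementation: `SteeringMemory` and `steer` are hypothetical names, keys stand in for hidden states, and the retrieval is a plain cosine-similarity top-k average:

```python
import numpy as np

class SteeringMemory:
    """Toy key-value memory: keys are hidden-state vectors,
    values are the steering vectors associated with them."""

    def __init__(self):
        self.keys, self.values = [], []

    def write(self, key, steering_vector):
        self.keys.append(np.asarray(key, dtype=float))
        self.values.append(np.asarray(steering_vector, dtype=float))

    def read(self, query, k=2):
        """Average the steering vectors of the k nearest keys
        under cosine similarity."""
        K = np.stack(self.keys)
        q = np.asarray(query, dtype=float)
        sims = K @ q / (np.linalg.norm(K, axis=1) * np.linalg.norm(q) + 1e-8)
        top = np.argsort(sims)[-k:]
        return np.stack([self.values[i] for i in top]).mean(axis=0)

def steer(hidden_state, memory, alpha=1.0):
    """Add the retrieved steering vector to the current token's hidden state."""
    return hidden_state + alpha * memory.read(hidden_state)
```

In this toy version each generated chunk queries the memory with its own hidden state, so different tokens receive different steering directions, which is the token-level contrast with a single sentence-level vector.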
Sample Efficient Alignment Learning With Episodic Control
Van Dai Do | Quan Hung Tran | Ahmed Kirmani | Lu Zhang | Hung Le
Findings of the Association for Computational Linguistics: EMNLP 2025
Aligning large language models (LLMs) with specific task objectives is challenging, especially when access to feedback signals for guiding the model is limited. While existing parametric methods perform reasonably, they rely heavily on large datasets and frequent feedback, making them impractical in scenarios with limited human feedback. We introduce Alignment Learning with Episodic Control (ALEC), a non-parametric framework that aligns LLM outputs during inference without fine-tuning. ALEC employs a key-value memory to store the associations between generated text and its corresponding values. It leverages a novel confidence-based writing scheme to update these stored values, maximizing the use of available data. During inference, ALEC utilizes a nearest-neighbor mechanism to estimate the values of generated texts, enabling the selection of the optimal text for decoding. Our method outperforms state-of-the-art baselines on harmless, helpful, and summarization tasks, demonstrating improved alignment with minimal interactions with the true reward model.
SuperRAG: Beyond RAG with Layout-Aware Graph Modeling
Chening Yang | Duy-Khanh Vu | Minh-Tien Nguyen | Xuan-Quang Nguyen | Linh Nguyen | Hung Le
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 3: Industry Track)
This paper introduces layout-aware graph modeling for multimodal RAG. Unlike traditional RAG methods that deal only with flat text chunks, the proposed method accounts for the relationships among modalities by using a graph structure. To do that, a graph modeling structure is defined based on document layout parsing. The structure of an input document is retained through the connections among text chunks, tables, and figures. This representation allows the method to handle complex questions that require information from multiple modalities. To confirm the efficiency of the graph modeling, a flexible RAG pipeline is developed using robust components. Experimental results on four benchmark test sets confirm the contribution of layout-aware modeling to the performance of the RAG pipeline.
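The idea of retaining layout connections among text chunks, tables, and figures can be sketched as a small adjacency structure. This is an illustrative toy, not the paper's pipeline; `LayoutGraph` and `expand` are invented names:

```python
from collections import defaultdict

class LayoutGraph:
    """Toy layout-aware document graph: nodes are chunks (text, table, figure),
    edges follow the parsed layout (containment, reading order)."""

    def __init__(self):
        self.nodes = {}                # node_id -> {"type": ..., "content": ...}
        self.edges = defaultdict(set)  # node_id -> neighbor ids

    def add_chunk(self, node_id, chunk_type, content):
        self.nodes[node_id] = {"type": chunk_type, "content": content}

    def link(self, a, b):
        self.edges[a].add(b)
        self.edges[b].add(a)

    def expand(self, hits, hops=1):
        """Given chunk ids matched by plain text retrieval, pull in layout
        neighbors (e.g. a table attached to a matched paragraph)."""
        frontier, seen = set(hits), set(hits)
        for _ in range(hops):
            frontier = {n for u in frontier for n in self.edges[u]} - seen
            seen |= frontier
        return seen
```

The point of the expansion step is that a question answered by a table can be reached through the paragraph that references it, which flat chunking loses.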
2022
Make The Most of Prior Data: A Solution for Interactive Text Summarization with Preference Feedback
Duy-Hung Nguyen | Nguyen Viet Dung Nghiem | Bao-Sinh Nguyen | Dung Tien Tien Le | Shahab Sabahi | Minh-Tien Nguyen | Hung Le
Findings of the Association for Computational Linguistics: NAACL 2022
For summarization, human preferences are critical to tame the outputs of the summarizer in favor of human interests, as ground-truth summaries are scarce and ambiguous. Practical settings require dynamic exchanges between humans and AI agents, wherein feedback is provided in an online manner, a few examples at a time. In this paper, we introduce a new framework to train summarization models interactively with preference feedback. By properly leveraging offline data and a novel reward model, we improve performance in terms of ROUGE scores and sample efficiency. Our experiments on three datasets confirm the benefit of the proposed framework in active, few-shot, and online settings of preference learning.
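Reward models trained from pairwise preference feedback are commonly fit with a Bradley-Terry objective; the abstract does not specify this paper's reward model, so the following is only a generic sketch of that standard loss, with `preference_loss` a hypothetical name:

```python
import numpy as np

def preference_loss(reward_chosen, reward_rejected):
    """Bradley-Terry negative log-likelihood that the preferred summary wins:
    -log sigmoid(r_chosen - r_rejected), averaged over pairs."""
    margin = np.asarray(reward_chosen, dtype=float) - np.asarray(reward_rejected, dtype=float)
    return float(np.mean(np.log1p(np.exp(-margin))))
```

Minimizing this loss pushes the reward of the human-preferred summary above the rejected one, which is what lets a few online comparisons steer the summarizer.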