Binh Nguyen


2025

What You Read Isn’t What You Hear: Linguistic Sensitivity in Deepfake Speech Detection
Binh Nguyen | Shuju Shi | Ryan Ofman | Thai Le
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing

Recent advances in text-to-speech technology have enabled highly realistic voice generation, fueling audio-based deepfake attacks such as fraud and impersonation. While audio anti-spoofing systems are critical for detecting such threats, prior research has predominantly focused on acoustic-level perturbations, leaving the impact of linguistic variation largely unexplored. In this paper, we investigate the linguistic sensitivity of both open-source and commercial anti-spoofing detectors by introducing TAPAS (Transcript-to-Audio Perturbation Anti-Spoofing), a novel framework for transcript-level adversarial attacks. Our extensive evaluation shows that even minor linguistic perturbations can significantly degrade detection accuracy: attack success rates exceed 60% on several open-source detector–voice pairs, and the accuracy of one commercial detector drops from 100% on synthetic audio to just 32%. Through a comprehensive feature attribution analysis, we find that linguistic complexity and model-level audio embedding similarity are key factors contributing to detector vulnerabilities. To illustrate the real-world risks, we replicate a recent Brad Pitt audio deepfake scam and demonstrate that TAPAS can bypass commercial detectors. These findings underscore the need to move beyond purely acoustic defenses and incorporate linguistic variation into the design of robust anti-spoofing systems. Our source code is available at https://github.com/nqbinh17/audio_linguistic_adversarial.
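The abstract describes the attack loop but no code; below is a minimal, self-contained Python sketch of the general idea, not the paper's actual implementation. The `synthesize` and `detector_score` functions are hypothetical stand-ins for a voice-cloning TTS model and an anti-spoofing detector, and filler insertion is just one illustrative transcript-level edit.

```python
import random

# --- Hypothetical stand-ins (assumptions, not the paper's components) ---
def synthesize(text: str) -> list[float]:
    """Placeholder TTS: a real attack would call a voice-cloning model here."""
    random.seed(hash(text) % (2**32))
    return [random.uniform(-1, 1) for _ in range(16000)]

def detector_score(waveform: list[float]) -> float:
    """Placeholder anti-spoofing detector: returns P(spoof) in [0, 1]."""
    return sum(abs(x) for x in waveform) / len(waveform)

FILLERS = ["well,", "you know,", "actually,"]  # meaning-preserving insertions

def perturb(transcript: str, rng: random.Random) -> str:
    """One transcript-level edit: insert a filler word at a random position."""
    words = transcript.split()
    pos = rng.randrange(len(words) + 1)
    return " ".join(words[:pos] + [rng.choice(FILLERS)] + words[pos:])

def attack(transcript: str, threshold: float = 0.5, budget: int = 50) -> str | None:
    """Search for a perturbed transcript whose synthesized audio evades the detector."""
    rng = random.Random(0)
    for _ in range(budget):
        candidate = perturb(transcript, rng)
        if detector_score(synthesize(candidate)) < threshold:
            return candidate  # detector now labels the fake audio as genuine
    return None

print(attack("Please transfer the funds to my new account today."))
```

A real attack would swap in an actual TTS system and detector and a richer perturbation set (synonym swaps, punctuation changes, and the like), but the search structure stays the same: edit the transcript, re-synthesize, and test the detector.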

Task-driven Layerwise Additive Activation Intervention
Hieu Trung Nguyen | Bao Nguyen | Binh Nguyen | Viet Anh Nguyen
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)

Modern language models (LMs) have significantly advanced generative modeling in natural language processing (NLP). Despite their success, LMs often struggle to adapt to new contexts in real-time applications. A promising approach to task adaptation is activation intervention, which steers the LMs' generation process by identifying and manipulating their activations. However, existing interventions rely heavily on heuristic rules or require many prompt inputs to determine effective interventions. In this paper, we propose a layer-wise additive activation intervention framework that optimizes the intervention process, thereby enhancing sample efficiency. We evaluate our framework on various datasets, demonstrating accuracy gains over both pretrained LMs and competing intervention baselines.
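As a rough illustration of additive activation intervention, here is a hedged PyTorch sketch: a frozen toy backbone with one trainable additive vector per layer, optimized on a task loss. The linear stack and MSE objective are assumptions made for self-containedness; the paper's framework targets pretrained LMs with its own optimization procedure.

```python
import torch
import torch.nn as nn

# Toy transformer-like stack; a real use case would hook a pretrained LM's layers.
hidden, n_layers = 64, 4
layers = nn.ModuleList([nn.Linear(hidden, hidden) for _ in range(n_layers)])

# One trainable additive intervention vector per layer (the "layer-wise" part).
deltas = nn.ParameterList(
    [nn.Parameter(torch.zeros(hidden)) for _ in range(n_layers)]
)

def forward(x: torch.Tensor, intervene: bool = True) -> torch.Tensor:
    for layer, delta in zip(layers, deltas):
        x = torch.tanh(layer(x))
        if intervene:
            x = x + delta  # steer the activation, leaving base weights frozen
    return x

# Optimize only the deltas on a task loss, keeping the backbone frozen.
for p in layers.parameters():
    p.requires_grad_(False)
opt = torch.optim.Adam(deltas.parameters(), lr=1e-2)

x = torch.randn(8, hidden)          # stand-in prompt activations
target = torch.zeros(8, hidden)     # stand-in task signal
for _ in range(100):
    loss = nn.functional.mse_loss(forward(x), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f"final loss: {loss.item():.4f}")
```

Because only the small delta vectors are trained, few examples suffice to fit them, which is the sample-efficiency argument the abstract makes.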

2023

HyperRouter: Towards Efficient Training and Inference of Sparse Mixture of Experts
Truong Giang Do | Le Khiem | Quang Pham | TrungTin Nguyen | Thanh-Nam Doan | Binh Nguyen | Chenghao Liu | Savitha Ramasamy | Xiaoli Li | Steven Hoi
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing

By routing input tokens to only a few split experts, Sparse Mixture-of-Experts has enabled efficient training of large language models. Recent findings suggest that fixing the routers can achieve competitive performance by alleviating the collapsing problem, where all experts eventually learn similar representations. However, this strategy has two key limitations: (i) the policy derived from random routers might be sub-optimal, and (ii) it requires extensive resources during training and evaluation, leading to limited efficiency gains. This work introduces HyperRouter, which dynamically generates the router’s parameters through a fixed hypernetwork and trainable embeddings to achieve a balance between training the routers and freezing them to learn an improved routing policy. Extensive experiments across a wide range of tasks demonstrate the superior performance and efficiency gains of HyperRouter compared to existing routing methods. Our implementation is publicly available at https://github.com/giangdip2410/HyperRouter.
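A minimal PyTorch sketch of the core idea as the abstract states it: router weights are generated on the fly by a frozen hypernetwork from a small trainable embedding, so only the embedding is learned. The dimensions and the single-linear hypernetwork are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

d_model, n_experts, emb_dim = 32, 4, 16

# Fixed (frozen) hypernetwork that maps a layer embedding to router weights.
hypernet = nn.Linear(emb_dim, d_model * n_experts)
for p in hypernet.parameters():
    p.requires_grad_(False)

# Small trainable embedding: the only routing parameters that are learned.
layer_emb = nn.Parameter(torch.randn(emb_dim))

def route(tokens: torch.Tensor, top_k: int = 2):
    """Score tokens against dynamically generated router weights, pick top-k experts."""
    router_w = hypernet(layer_emb).view(n_experts, d_model)  # generated, not stored
    logits = tokens @ router_w.t()                           # (batch, n_experts)
    weights, experts = torch.topk(logits.softmax(dim=-1), top_k, dim=-1)
    return weights, experts

tokens = torch.randn(5, d_model)
weights, experts = route(tokens)
print(experts)  # which experts each token is dispatched to
```

The design sits between two extremes the abstract names: fully trainable routers (prone to collapse) and fully random frozen ones (potentially sub-optimal), since the frozen hypernetwork constrains how the learned embedding can change the routing policy.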

ViASR: A Novel Benchmark Dataset and Methods for Vietnamese Automatic Speech Recognition
Binh Nguyen | Son Huynh | Quoc Khanh Tran | An Le Tran-Hoai | Trong An Nguyen | Nguyen Tung Doan Tran | Thuy An Phan Thi | Le Thanh Nguyen | Hieu Nghia Nguyen | Dang Huynh
Proceedings of the 37th Pacific Asia Conference on Language, Information and Computation

2022

Multi-level Community-awareness Graph Neural Networks for Neural Machine Translation
Binh Nguyen | Long Nguyen | Dien Dinh
Proceedings of the 29th International Conference on Computational Linguistics

Neural Machine Translation (NMT) aims to translate from the source to the target language while preserving the original meaning. Linguistic information such as morphology, syntax, and semantics should be captured in token embeddings to produce a high-quality translation. Recent works have leveraged powerful Graph Neural Networks (GNNs) to encode such language knowledge into token embeddings. Specifically, they use a trained parser to construct semantic graphs from sentences and then apply GNNs. However, most semantic graphs are tree-shaped and too sparse for GNNs, which causes the over-smoothing problem. To alleviate this problem, we propose a novel Multi-level Community-awareness Graph Neural Network (MC-GNN) layer that jointly models local and global relationships between words and their linguistic roles across multiple communities. Intuitively, the MC-GNN layer replaces a self-attention layer on the encoder side of a transformer-based machine translation model. Extensive experiments on four language-pair datasets with common evaluation metrics show remarkable improvements from our method while reducing time complexity on very long sentences.
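To make the mechanism concrete, here is a hedged PyTorch sketch of a community-aware GNN layer in the spirit of MC-GNN: it aggregates messages both along a sparse semantic graph and within coarser communities, producing token representations that could stand in for a self-attention output. The chain-shaped graph, the fixed community assignment, and the mean-aggregation scheme are all illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

# Toy setup: token embeddings and a sparse, tree-shaped semantic graph (adjacency).
n_tokens, d = 6, 32
x = torch.randn(n_tokens, d)
adj = torch.zeros(n_tokens, n_tokens)
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]:  # chain = very sparse
    adj[i, j] = adj[j, i] = 1.0

# Hypothetical community assignment (e.g., from a graph-clustering step).
communities = torch.tensor([0, 0, 0, 1, 1, 1])

# Community-level edges let messages also flow within each community,
# densifying the sparse tree and countering over-smoothing on long chains.
comm_adj = (communities.unsqueeze(0) == communities.unsqueeze(1)).float()

class CommunityGNNLayer(nn.Module):
    def __init__(self, d: int):
        super().__init__()
        self.local = nn.Linear(d, d)    # messages along semantic-graph edges
        self.global_ = nn.Linear(d, d)  # messages within the wider community

    def forward(self, x, adj, comm_adj):
        def propagate(a, lin):
            deg = a.sum(-1, keepdim=True).clamp(min=1)  # mean aggregation
            return lin(a @ x / deg)
        return torch.relu(propagate(adj, self.local) + propagate(comm_adj, self.global_))

layer = CommunityGNNLayer(d)
print(layer(x, adj, comm_adj).shape)  # (6, 32): drop-in for a self-attention output
```

Unlike full self-attention, which scores all token pairs, this layer only propagates over the graph and community masks, which is where the time-complexity savings on very long sentences would come from.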