Abstract
Linguistic bias in Deep Neural Network (DNN) based Natural Language Processing (NLP) systems is a critical problem that needs attention. The problem further intensifies in the case of security systems, such as speaker verification, where fairness is essential. Speaker verification systems are intelligent systems that determine if two speech recordings belong to the same speaker. Such human-oriented security systems should be usable by diverse people speaking varied languages. Thus, a speaker verification system trained on speech in one language should generalize when tested for other languages. However, DNN-based models are often language-dependent. Previous works explore domain adaptation to fine-tune the pre-trained model for out-of-domain languages. Fine-tuning the model individually for each existing language is expensive. Hence, it limits the usability of the system. This paper proposes the cost-effective idea of integrating a lightweight embedding with existing speaker verification systems to mitigate linguistic bias without adaptation. This work is motivated by the theoretical hypothesis that attentive-frames could help generate language-agnostic embeddings. For scientific validation of this hypothesis, we propose two frame-attentive networks and investigate the effect of their integration with baselines for twelve languages. Empirical results suggest that frame-attentive embedding can cost-effectively reduce linguistic bias and enhance the usability of baselines.
- Anthology ID: 2022.findings-naacl.93
- Volume: Findings of the Association for Computational Linguistics: NAACL 2022
- Month: July
- Year: 2022
- Address: Seattle, United States
- Editors: Marine Carpuat, Marie-Catherine de Marneffe, Ivan Vladimir Meza Ruiz
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 1247–1258
- URL: https://aclanthology.org/2022.findings-naacl.93
- DOI: 10.18653/v1/2022.findings-naacl.93
- Cite (ACL): Divya Sharma and Arun Balaji Buduru. 2022. FAtNet: Cost-Effective Approach Towards Mitigating the Linguistic Bias in Speaker Verification Systems. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 1247–1258, Seattle, United States. Association for Computational Linguistics.
- Cite (Informal): FAtNet: Cost-Effective Approach Towards Mitigating the Linguistic Bias in Speaker Verification Systems (Sharma & Buduru, Findings 2022)
- PDF: https://aclanthology.org/2022.findings-naacl.93.pdf
- Code: vdivyas/fatnet
- Data: AISHELL-1, LibriSpeech, VoxCeleb1, VoxCeleb2