UnityAI-Guard: Pioneering Toxicity Detection Across Low-Resource Indian Languages
Himanshu Beniwal | Reddybathuni Venkat | Rohit Kumar | Birudugadda Srivibhav | Daksh Jain | Pavan Deekshith Doddi | Eshwar Dhande | Adithya Ananth | Kuldeep | Mayank Singh
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: System Demonstrations
This work introduces UnityAI-Guard, a framework for binary toxicity classification targeting low-resource Indian languages. While existing systems predominantly cater to high-resource languages, UnityAI-Guard addresses this critical gap by developing state-of-the-art models for identifying toxic content across diverse Brahmic/Indic scripts. Our approach achieves an average F1-score of 84.23% across seven languages, leveraging a dataset of 567k training instances and 30k manually verified test instances. To advance multilingual content moderation for linguistically diverse regions, UnityAI-Guard also provides public API access, fostering broader adoption and application.