2025
Role-Aware Language Models for Secure and Contextualized Access Control in Organizations
Saeed Almheiri | Yerulan Kongrat | Adrian Santosh | Ruslan Tasmukhanov | Josemaria Loza Vera | Muhammad Dehan Al Kautsar | Fajri Koto
Proceedings of the 14th International Joint Conference on Natural Language Processing and the 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics
As large language models (LLMs) are increasingly deployed in enterprise settings, controlling model behavior based on user roles becomes an essential requirement. Existing safety methods typically assume uniform access and focus on preventing harmful or toxic outputs, without addressing role-specific access constraints. In this work, we investigate whether LLMs can be fine-tuned to generate responses that reflect the access privileges associated with different organizational roles. We explore three modeling strategies: a BERT-based classifier, an LLM-based classifier, and role-conditioned generation. To evaluate these approaches, we construct two complementary datasets. The first is adapted from existing instruction-tuning corpora through clustering and role labeling, while the second is synthetically generated to reflect realistic, role-sensitive enterprise scenarios. We assess model performance across varying organizational structures and analyze robustness to prompt injection, role mismatch, and jailbreak attempts.
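The role-conditioned generation strategy described in the abstract can be pictured as prepending the requester's organizational role to the prompt so a fine-tuned model can condition its answer (or refusal) on it. Below is a minimal sketch under assumed details: the role taxonomy, access tiers, prompt template, and helper names are all hypothetical illustrations, not the paper's actual format.

```python
# Sketch of role-conditioned prompt construction (hypothetical role taxonomy
# and template; the paper's actual prompt format is not specified here).

from dataclasses import dataclass

# Hypothetical organizational roles and the resource tiers each may access.
ROLE_ACCESS = {
    "intern":     {"public"},
    "engineer":   {"public", "internal"},
    "hr_manager": {"public", "internal", "personnel"},
    "executive":  {"public", "internal", "personnel", "financial"},
}

@dataclass
class Request:
    role: str           # requester's organizational role
    query: str          # natural-language question
    resource_tier: str  # sensitivity tier the answer would draw on

def build_role_conditioned_prompt(req: Request) -> str:
    """Prepend role metadata so a role-aware model can condition on it."""
    return (
        f"[ROLE: {req.role}]\n"
        f"[QUESTION]\n{req.query}\n"
        "[INSTRUCTION] Answer only if this role is permitted to access the "
        "required information; otherwise refuse and explain why."
    )

def reference_decision(req: Request) -> str:
    """Rule-based oracle used here only to illustrate the expected behavior."""
    allowed = req.resource_tier in ROLE_ACCESS.get(req.role, set())
    return "ANSWER" if allowed else "REFUSE"

if __name__ == "__main__":
    req = Request(role="engineer",
                  query="What is the CEO's current salary?",
                  resource_tier="financial")
    print(build_role_conditioned_prompt(req))
    print("Expected behavior:", reference_decision(req))  # -> REFUSE
```

A rule-based oracle like `reference_decision` stands in for the ground-truth access policy; the paper's classifier and generation approaches would be evaluated against role-labeled data rather than such hand-written rules.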