Iman Barati


2025

Matina: A Culturally-Aligned Persian Language Model Using Multiple LoRA Experts
Sara Bourbour Hosseinbeigi | MohammadAli SeifKashani | Javad Seraj | Fatemeh Taherinezhad | Ali Nafisi | Fatemeh Nadi | Iman Barati | Hosein Hasani | Mostafa Amiri | Mostafa Masoudi
Findings of the Association for Computational Linguistics: ACL 2025

Large language models (LLMs) are powerful tools for a variety of applications, but to interact effectively with users, they must align with the cultural values and linguistic nuances of their audience. However, existing LLMs often fall short in adequately modeling underrepresented languages and cultures, such as Persian, limiting their applicability and acceptance. To address this, we construct diverse, high-quality datasets specifically tailored to Persian linguistic and cultural contexts, ensuring a more authentic and context-aware training process. Using these datasets, we develop Matina, a Persian-focused multi-expert model designed to embody Iranian cultural values and linguistic structures. Matina is trained by fine-tuning LLaMA3.1 8B-Instruct models across five domains: culinary, tourism, socio-culture, translation, and summarization. These experts are combined using a classifier to create a unified multi-expert system. By leveraging culturally aligned datasets, Matina outperforms baseline models in both task performance and user satisfaction, demonstrating the importance of data-driven cultural adaptation in LLM development.
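The abstract describes a routing architecture: five domain-specific LoRA experts over a shared LLaMA-3.1 8B-Instruct base, with a classifier selecting which expert handles each query. The sketch below is a minimal illustration of that pattern (not the authors' code), assuming Hugging Face transformers and peft; the adapter paths and the domain-classifier checkpoint are hypothetical placeholders.

```python
# Minimal sketch of a classifier-routed multi-LoRA setup, as described in the
# abstract. Adapter paths and the classifier checkpoint are hypothetical.
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from peft import PeftModel

BASE = "meta-llama/Llama-3.1-8B-Instruct"

# One hypothetical LoRA checkpoint per domain named in the abstract.
ADAPTERS = {
    "culinary": "path/to/lora-culinary",
    "tourism": "path/to/lora-tourism",
    "socio-culture": "path/to/lora-socio-culture",
    "translation": "path/to/lora-translation",
    "summarization": "path/to/lora-summarization",
}

tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE)

# Attach the first adapter, then register the rest under named slots.
names = list(ADAPTERS)
model = PeftModel.from_pretrained(model, ADAPTERS[names[0]], adapter_name=names[0])
for name in names[1:]:
    model.load_adapter(ADAPTERS[name], adapter_name=name)

# Hypothetical domain classifier whose labels match the adapter names.
router = pipeline("text-classification", model="path/to/domain-classifier")

def generate(prompt: str) -> str:
    domain = router(prompt)[0]["label"]  # classify the query's domain
    model.set_adapter(domain)            # activate the matching LoRA expert
    inputs = tokenizer(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=256)
    return tokenizer.decode(out[0], skip_special_tokens=True)
```

Because all experts share one frozen base model, swapping adapters via `set_adapter` is cheap at inference time; only the small LoRA weight deltas differ between domains.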