Mengxiao Zhu
Also published as: 孟笑 朱
2025
VEEF-Multi-LLM: Effective Vocabulary Expansion and Parameter Efficient Finetuning Towards Multilingual Large Language Models
Jiu Sha
|
Mengxiao Zhu
|
Chong Feng
|
Yuming Shang
Proceedings of the 31st International Conference on Computational Linguistics
Large Language Models (LLMs) have brought significant transformations to various aspects of human life and productivity. However, the heavy reliance on vast amounts of data in developing these models has resulted in a notable disadvantage for low-resource languages, such as Nuosu and others, which lack large datasets. Moreover, many LLMs exhibit significant performance discrepancies between high- and low-resource languages, thereby restricting equitable access to technological advances for all linguistic communities. To address these challenges, this paper proposes a low-resource multilingual large language model, termed VEEF-Multi-LLM, constructed through effective vocabulary expansion and parameter-efficient fine-tuning. We introduce a series of innovative methods to address challenges in low-resource languages. First, we adopt Byte-level Byte-Pair Encoding to expand the vocabulary for broader multilingual support. We separate input and output embedding weights to boost performance, and apply RoPE for long-context handling as well as RMSNorm for efficient training. To generate high-quality supervised fine-tuning (SFT) data, we use self-training and selective translation, and refine the resulting dataset with the assistance of native speakers to ensure cultural and linguistic accuracy. Our model, VEEF-Multi-LLM-8B, is trained on 600 billion tokens across 50 natural and 16 programming languages. Experimental results show that the model excels in multilingual instruction-following tasks, particularly in translation, outperforming competing models on benchmarks such as XCOPA and XStoryCloze. Although it lags slightly behind English-centric models in some tasks (e.g., m-MMLU), it prioritizes safety, reliability, and inclusivity, making it valuable for diverse linguistic communities. We open-source our models on GitHub and Hugging Face.
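As a rough illustration of the vocabulary-expansion and separated-embedding steps described in the abstract, the sketch below uses the Hugging Face transformers API. The base checkpoint, the added byte-level pieces, and the untying step are assumptions for illustration only, not the authors' released code; RoPE and RMSNorm need no extra code here because they are already built into LLaMA-style architectures.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "meta-llama/Llama-2-7b-hf"   # assumed base checkpoint, for illustration only
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Hypothetical byte-level BPE pieces learned on low-resource corpora
# (e.g. Nuosu, Tibetan); in practice they would come from training a
# tokenizer on target-language data and keeping only the pieces missing
# from the base vocabulary.
new_pieces = ["ꆈꌠ", "ꉙ", "བོད", "ཡིག"]
num_added = tokenizer.add_tokens(new_pieces)

# Grow the embedding (and output) matrices to cover the expanded vocabulary.
model.resize_token_embeddings(len(tokenizer))

# If the base checkpoint ties input and output embeddings, give the output
# head its own copy so the two matrices can be trained separately.
if model.config.tie_word_embeddings:
    model.config.tie_word_embeddings = False
    model.get_output_embeddings().weight = torch.nn.Parameter(
        model.get_input_embeddings().weight.detach().clone()
    )

print(f"added {num_added} tokens; vocabulary size is now {len(tokenizer)}")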
2024
面向心理健康咨询的藏语数据集及大语言模型构建(Construction of Tibetan Datasets and Large Language Models for Psychological Health Counseling)
Mengxiao Zhu (朱孟笑)
|
Jiu Sha (沙九)
|
Chong Feng (冯冲)
Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 1: Main Conference)
Anxiety and depression have become common psychological disorders, and appropriate counseling plays an important role in relieving people's mental and psychological stress. However, due to stigma and other reasons, many people do not receive timely counseling and treatment. With the development of artificial intelligence, the superior knowledge-integration and chain-of-thought capabilities of large language models (LLMs) make them an effective tool for psychological counseling. However, the few existing LLMs for mental health counseling typically target resource-rich languages such as English and Chinese, and the application of LLMs to counseling in low-resource languages remains largely unstudied. Taking Tibetan as a representative low-resource language, this paper studies how to construct a Tibetan psychological counseling dataset and a Tibetan mental health LLM. First, we collect existing high-quality Chinese counseling dialogue data and process it to generate a multi-turn mental health dialogue dataset. Second, we build a Chinese-Tibetan translation tool to translate the data into Tibetan multi-turn dialogues, and apply multiple selection and filtering mechanisms to obtain high-quality Tibetan mental health multi-turn dialogue data. Based on the constructed data, we perform instruction tuning on the existing general-purpose LLMs Baichuan2 and LLaMA2 to obtain Tibetan mental health LLMs, which we open-source for scientific research. Finally, experiments verify the effectiveness of the released Tibetan mental health multi-turn dialogue dataset and the Tibetan mental health counseling LLMs.
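As a rough sketch of the final instruction-tuning step, the snippet below fine-tunes a LLaMA2-style base model on translated Tibetan multi-turn dialogues using LoRA adapters (the abstract does not specify the exact tuning recipe, so LoRA is an assumption); the checkpoint name, data file, field names, and hyperparameters are illustrative, not the paper's configuration.

from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

base = "meta-llama/Llama-2-7b-hf"   # assumed base checkpoint, for illustration only
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Hypothetical JSONL file: one multi-turn counseling dialogue per line,
# with a "text" field that already concatenates the user/assistant turns
# in Tibetan.
dataset = load_dataset("json", data_files="tibetan_counseling_sft.jsonl")["train"]
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=1024),
    remove_columns=dataset.column_names,
)

# Low-rank adapters on the attention projections; the frozen base model
# keeps its original weights, so only a small fraction of parameters train.
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tibetan-mh-llm",
                           per_device_train_batch_size=2,
                           num_train_epochs=3,
                           learning_rate=2e-4,
                           logging_steps=50),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()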