Jie Pang


2024

Ko-LLaMA: A Korean Large Language Model Based on LLaMA (Ko-LLaMA:基于LLaMA的朝鲜语大语言模型)
Jie Pang (庞杰) | Xiaodong Yan (闫晓东) | Xiaobing Zhao (赵小兵)
Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 1: Main Conference)

Large language models have attracted very broad attention over the past two years; LLMs such as ChatGPT and GPT-4 have profoundly changed natural language processing research and taken exciting steps on the path toward artificial general intelligence (AGI). Although several large language models such as LLaMA have been open-sourced, these models focus mainly on English and Chinese corpora and have limited applicability to other languages. For minority languages such as Korean, the applicability of large language models is even more limited. In this paper, we extend LLaMA's existing vocabulary with an additional 20,000 Korean tokens, improving its ability to encode and semantically understand Korean. We then continue pretraining on Korean data and perform supervised fine-tuning (SFT) on a Korean instruction-tuning dataset, and we analyze how different data volumes affect the instruction-tuning results. After continued pretraining and instruction tuning, the model's ability to understand and follow Korean instructions improves significantly. Through this training, LLaMA's ability to understand and generate Korean text, as well as its ability to follow instructions, is greatly enhanced. Experimental results show that the proposed Ko-LLaMA model significantly improves the original LLaMA's ability to understand and generate Korean content. In addition, on the Korean text classification dataset YNAT, we compare Ko-LLaMA against CINO, a model specialized for minority languages, several CINO model combinations, the original LLaMA, and GPT-3.5. The results show that Ko-LLaMA's Korean text classification ability far exceeds that of CINO, the CINO combination models, and large language models such as LLaMA and GPT-3.5 that have not undergone Korean vocabulary extension and continued pretraining on Korean corpora.
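The vocabulary-extension step the abstract describes (adding 20,000 Korean tokens to LLaMA's tokenizer before continued pretraining) can be illustrated with a short sketch. This is a minimal illustration assuming a SentencePiece-based LLaMA tokenizer and the HuggingFace transformers API; the model path, the token file, and the token list are hypothetical placeholders, not artifacts released with the paper.

```python
# Minimal sketch: extend a LLaMA tokenizer's vocabulary and resize the
# model's embeddings to match. Paths and token sources are hypothetical.
from transformers import LlamaTokenizer, LlamaForCausalLM

tokenizer = LlamaTokenizer.from_pretrained("path/to/llama")  # original ~32k vocab
model = LlamaForCausalLM.from_pretrained("path/to/llama")

# Hypothetical file of Korean tokens, e.g. learned by training a separate
# SentencePiece model on a Korean corpus and keeping its new pieces.
with open("korean_tokens.txt", encoding="utf-8") as f:
    korean_tokens = [line.strip() for line in f if line.strip()]

# Register the new tokens; add_tokens returns how many were actually added
# (tokens already in the vocabulary are skipped).
num_added = tokenizer.add_tokens(korean_tokens)
print(f"Added {num_added} tokens; vocabulary size is now {len(tokenizer)}")

# Grow the embedding matrix (and tied LM head) so the new token ids have
# rows to train during the continued-pretraining stage.
model.resize_token_embeddings(len(tokenizer))
```

After resizing, the embedding rows for the new tokens are randomly initialized, which is why the continued pretraining on Korean data described in the abstract is needed before the extended vocabulary becomes useful.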