Xiaobing Zhao

Also published as: 小兵


2025

Enhancing Cross-Lingual Transfer through Reversible Transliteration: A Huffman-Based Approach for Low-Resource Languages
Wenhao Zhuang | Yuan Sun | Xiaobing Zhao
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

As large language models (LLMs) are trained on increasingly diverse and extensive multilingual corpora, they demonstrate cross-lingual transfer capabilities. However, these capabilities often fail to extend effectively to low-resource languages, particularly those using non-Latin scripts. While transliterating low-resource languages into Latin script is a natural solution, no comprehensive framework currently exists for integrating transliteration into LLM training and deployment. Taking a pragmatic approach, this paper combines character transliteration with Huffman coding to design a complete transliteration framework. The proposed framework offers the following advantages: 1) Compression: reduces storage requirements for low-resource language content, achieving up to 50% reduction in file size and 50-80% reduction in token count. 2) Accuracy: guarantees 100% lossless conversion from transliterated text back to the source language. 3) Efficiency: eliminates the need for vocabulary expansion for low-resource languages, improving training and inference efficiency. 4) Scalability: the framework extends to other low-resource languages. We validate the effectiveness of our framework across multiple downstream tasks, including text classification, machine reading comprehension, and machine translation. Experimental results demonstrate that our method significantly enhances the model's capability to process low-resource languages while maintaining performance on high-resource languages. Our data and code are publicly available at https://github.com/CMLI-NLP/HuffmanTranslit.
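
As a rough illustration of the core idea (not the paper's released implementation, which is at the URL above), the Python sketch below builds a Huffman code over source-script characters and maps each to a prefix-free string over two Latin letters, so frequent characters get short codes and decoding is guaranteed lossless:

```python
# Minimal sketch of Huffman-based reversible transliteration (illustrative,
# not the paper's implementation): each source character is mapped to a
# prefix-free string over Latin letters, so decoding is always unambiguous.
import heapq
from collections import Counter
from itertools import count

def build_huffman_code(text, symbols="ab"):
    """Map each character of `text` to a prefix-free string over `symbols`."""
    tiebreak = count()  # prevents comparing dicts when frequencies tie
    heap = [(f, next(tiebreak), {ch: ""}) for ch, f in Counter(text).items()]
    heapq.heapify(heap)
    if len(heap) == 1:  # degenerate single-character corpus
        _, _, table = heap[0]
        return {ch: symbols[0] for ch in table}
    while len(heap) > 1:
        f0, _, left = heapq.heappop(heap)
        f1, _, right = heapq.heappop(heap)
        merged = {ch: symbols[0] + c for ch, c in left.items()}
        merged.update({ch: symbols[1] + c for ch, c in right.items()})
        heapq.heappush(heap, (f0 + f1, next(tiebreak), merged))
    return heap[0][2]

def encode(text, code):
    return "".join(code[ch] for ch in text)

def decode(latin, code):
    """The prefix-free property makes greedy left-to-right decoding exact."""
    inverse = {v: k for k, v in code.items()}
    out, buf = [], ""
    for ch in latin:
        buf += ch
        if buf in inverse:
            out.append(inverse[buf])
            buf = ""
    assert not buf, "truncated input"
    return "".join(out)

corpus = "བོད་ཡིག་"  # tiny Tibetan example; frequent characters get shorter codes
code = build_huffman_code(corpus)
assert decode(encode(corpus, code), code) == corpus  # 100% lossless round trip
```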

Proceedings of the Eighth Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2025)
Atul Kr. Ojha | Chao-hong Liu | Ekaterina Vylomova | Flammie Pirinen | Jonathan Washington | Nathaniel Oco | Xiaobing Zhao
Proceedings of the Eighth Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2025)

2024

Ko-LLaMA:基于LLaMA的朝鲜语大语言模型(Ko-LLaMA: A Korean Large Language Model Based on LLaMA)
Jie Pang (庞杰) | Xiaodong Yan (闫晓东) | Xiaobing Zhao (赵小兵)
Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 1: Main Conference)

Large language models have attracted very broad attention over the past two years: LLMs such as ChatGPT and GPT-4 have profoundly changed natural language processing research and taken exciting steps toward artificial general intelligence (AGI). Although several large language models such as LLaMA have been open-sourced, they focus mainly on English and Chinese corpora and offer limited applicability to other languages; for minority languages such as Korean, the applicability is more limited still. In this paper, we extend LLaMA's vocabulary with 20,000 additional Korean tokens, improving its ability to encode and semantically understand Korean. We further run continued pre-training on Korean data, perform supervised fine-tuning (SFT) with a Korean instruction dataset, and analyze how the amount of data affects instruction tuning; after continued pre-training and instruction tuning, the model's ability to understand and follow Korean instructions, and to understand and generate Korean text, is greatly enhanced. Experimental results show that the proposed model, Ko-LLaMA, significantly improves on the original LLaMA in understanding and generating Korean content. In addition, on the Korean text classification dataset YNAT, we compare Ko-LLaMA with CINO (a model specialized in minority languages) and several CINO model combinations, as well as the original LLaMA and GPT-3.5. The results show that Ko-LLaMA far surpasses CINO, the CINO combinations, and large language models such as LLaMA and GPT-3.5 that lack Korean vocabulary expansion and continued pre-training.
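
The vocabulary-expansion step can be sketched with the Hugging Face API; a minimal, hedged sketch, where the model path and sample tokens are placeholders rather than the paper's artifacts:

```python
# Minimal sketch of vocabulary expansion for a LLaMA-style model, assuming a
# Hugging Face checkpoint; the model path and token list are placeholders,
# not the paper's released artifacts.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("path/to/llama")    # placeholder
model = AutoModelForCausalLM.from_pretrained("path/to/llama") # placeholder

# Illustrative sample; the paper adds ~20,000 Korean tokens.
korean_tokens = ["안녕", "하세요", "감사", "합니다"]
num_added = tokenizer.add_tokens(korean_tokens)

# Newly added embedding rows start randomly initialized and are learned
# during continued pre-training on Korean text, followed by SFT.
model.resize_token_embeddings(len(tokenizer))
print(f"added {num_added} tokens; vocabulary size is now {len(tokenizer)}")
```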

TiLamb:基于增量预训练的藏文大语言模型(TiLamb: A Tibetan Large Language Model Based on Incremental Pre-training)
Wenhao Zhuang (庄文浩) | Yuan Sun (孙媛) | Xiaobing Zhao (赵小兵)
Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 1: Main Conference)

Language models based on the "pre-train + fine-tune" paradigm have shown outstanding performance; as model size and training data grow, their ability to solve a wide variety of natural language processing tasks improves markedly. Current large language models mainly support mainstream languages such as English and Chinese, which limits research on low-resource languages such as Tibetan. To address the scarcity of Tibetan data, the unsatisfactory performance of existing Tibetan pre-trained models, and their poor extensibility to downstream tasks, this paper compiles and cleans 26.43 GB of Tibetan data, takes the open-source LLaMA2-7B as the base model, and expands LLaMA2's vocabulary with about 30,000 Tibetan tokens, improving its Tibetan encoding efficiency and semantic understanding of Tibetan; incremental pre-training then yields TiLamb, a Tibetan large language model base. Fine-tuning datasets ranging from several thousand to tens of thousands of examples are built for various Tibetan downstream tasks, and the fine-tuned TiLamb is validated on seven of them: Tibetan news classification, Tibetan entity-relation classification, Tibetan machine reading comprehension, Tibetan word segmentation, Tibetan summarization, Tibetan question answering, and Tibetan question generation, with large improvements on many metrics over traditional methods and other Tibetan pre-trained models. TiLamb and some of the resources are released for research use at https://github.com/NLP-Learning/TiLamb.
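
A common way to obtain the roughly 30,000 Tibetan tokens is to train a SentencePiece model on the cleaned corpus and merge its pieces into the base tokenizer (the resized embeddings are then trained as in the sketch above); the abstract does not spell out this exact recipe, so the sketch below is an assumption with placeholder paths:

```python
# Hedged sketch of one common token-mining recipe (an assumption, not the
# paper's documented procedure): train SentencePiece on the Tibetan corpus,
# then merge the new pieces into the LLaMA2 tokenizer.
import sentencepiece as spm
from transformers import AutoTokenizer

spm.SentencePieceTrainer.train(
    input="tibetan_corpus.txt",  # placeholder for the cleaned 26.43 GB corpus
    model_prefix="tibetan_sp",
    vocab_size=30000,
    character_coverage=1.0,      # keep all Tibetan characters
)

sp = spm.SentencePieceProcessor(model_file="tibetan_sp.model")
pieces = [sp.id_to_piece(i) for i in range(sp.get_piece_size())]

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")  # base model (access may be gated)
added = tokenizer.add_tokens([p for p in pieces if p not in tokenizer.get_vocab()])
print(f"merged {added} new Tibetan tokens")
```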

融合多元特征表示的藏文命名实体识别方法(Research on Tibetan Named Entity Recognition Using Multi-Feature Fusion Representation)
Cairang Ejian (俄见才让) | Maoke Zhou (周毛克) | Bo Chen (陈波) | Xiaobing Zhao (赵小兵)
Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 1: Main Conference)

To address the neglect of lexicon and syllable-component information in syllable-embedding-based Tibetan named entity recognition (TNER), this paper proposes MECT-TL, a cross-Transformer model that fuses multiple data features: Tibetan syllable information, lexicon information, and syllable-component information. MECT-TL combines Tibetan syllables with lexicon information through a flat network structure and integrates syllable-component information, effectively improving the accuracy of Tibetan entity recognition. Experimental results show that our model improves F1 by 5.14 percentage points over the mainstream TNER baseline BiLSTM-CRF, and by 4.18 percentage points over the Transformer-based TENER model. This indicates that fusing Tibetan lexicon and syllable-component information can significantly improve TNER performance.
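
As a rough illustration of multi-feature fusion (not MECT-TL's actual cross-Transformer or flat-lattice structure), the PyTorch sketch below concatenates syllable, lexicon, and syllable-component embeddings per position and feeds them to a Transformer encoder with a per-token tag classifier; all sizes are illustrative:

```python
# Minimal PyTorch sketch of the general multi-feature-fusion idea (not the
# paper's architecture): three embedding tables are concatenated per position,
# encoded with a Transformer, and classified into BIO-style entity tags.
import torch
import torch.nn as nn

class MultiFeatureTagger(nn.Module):
    def __init__(self, n_syl, n_lex, n_comp, n_tags, dim=64):
        super().__init__()
        self.syl = nn.Embedding(n_syl, dim)    # Tibetan syllable embeddings
        self.lex = nn.Embedding(n_lex, dim)    # matched lexicon-word embeddings
        self.comp = nn.Embedding(n_comp, dim)  # syllable-component embeddings
        layer = nn.TransformerEncoderLayer(d_model=3 * dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.tagger = nn.Linear(3 * dim, n_tags)

    def forward(self, syl_ids, lex_ids, comp_ids):
        x = torch.cat([self.syl(syl_ids), self.lex(lex_ids), self.comp(comp_ids)], dim=-1)
        return self.tagger(self.encoder(x))  # (batch, seq, n_tags) logits

model = MultiFeatureTagger(n_syl=5000, n_lex=20000, n_comp=50, n_tags=9)
syl = torch.randint(0, 5000, (2, 10))
lex = torch.randint(0, 20000, (2, 10))
comp = torch.randint(0, 50, (2, 10))
print(model(syl, lex, comp).shape)  # torch.Size([2, 10, 9])
```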

基于生成式语言模型的立场检测探究(Research on Stance Detection with Generative Language Model)
Yuanshuo Zhang (张袁硕) | Aohua Li (李澳华) | Zhaoning Yin (尹召宁) | Panyi Wang (王潘怡) | Bo Chen (陈波) | Xiaobing Zhao (赵小兵)
Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 1: Main Conference)

In recent years, stance detection has attracted increasing attention, but the available annotated data are limited in both scope and scale and cannot effectively support neural-network-based stance detection. To this end, this paper explores the capability of generative language models on stance detection in zero-shot and few-shot settings. First, we construct a new dataset for stance detection, covering 5 topics with 2,500 manually annotated examples in total. We then run a series of experiments on this dataset. The results show that: in the zero-shot setting, generative language models perform well with structured prompt learning; adding extra information significantly improves performance; in the few-shot setting, demonstrations with the same target clearly improve performance, while demonstrations with different targets have a negative effect; and chain-of-thought prompting significantly improves performance. Finally, inspired by prompt learning, fine-tuning pre-trained language models further confirms that providing extra information yields significant gains for stance detection.
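
The findings above map naturally onto prompt construction; the sketch below is a hedged illustration of what a structured stance prompt with background information, same-target demonstrations, and chain-of-thought could look like (the paper's exact templates are not given in the abstract):

```python
# Hedged illustration of a structured stance-detection prompt reflecting the
# abstract's findings; the templates here are illustrative, not the paper's.
def build_stance_prompt(text, target, demos=(), background="", use_cot=True):
    parts = []
    if background:  # "extra information" was found to help significantly
        parts.append(f"Background: {background}")
    for demo_text, demo_stance in demos:  # same-target demos help; mixed-target hurt
        parts.append(f"Text: {demo_text}\nTarget: {target}\nStance: {demo_stance}")
    parts.append(f"Text: {text}\nTarget: {target}")
    if use_cot:  # chain-of-thought significantly improved performance
        parts.append("Let's reason step by step, then answer with Favor, Against, or None.")
    else:
        parts.append("Answer with Favor, Against, or None.")
    return "\n\n".join(parts)

print(build_stance_prompt(
    "Nuclear plants emit far less CO2 than coal.", "nuclear energy",
    demos=[("Reactor waste stays toxic for millennia.", "Against")],
))
```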

Proceedings of the Seventh Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2024)
Atul Kr. Ojha | Chao-hong Liu | Ekaterina Vylomova | Flammie Pirinen | Jade Abbott | Jonathan Washington | Nathaniel Oco | Valentin Malykh | Varvara Logacheva | Xiaobing Zhao
Proceedings of the Seventh Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2024)

2023

Improving Low-resource Question Answering by Augmenting Question Information
Andong Chen | Yuan Sun | Xiaobing Zhao | Rosella Galindo Esparza | Kehai Chen | Yang Xiang | Tiejun Zhao | Min Zhang
Findings of the Association for Computational Linguistics: EMNLP 2023

In the era of large models, low-resource question-answering tasks lag behind, underscoring the importance of data augmentation, a key research avenue in natural language processing. The main challenges include leveraging the large model's internal knowledge for data augmentation, determining which QA data component (the question, passage, or answer) benefits most from augmentation, and keeping the augmented content consistent without inducing excessive noise. To tackle these, we introduce PQQ, an approach for question data augmentation consisting of Prompt Answer, Question Generation, and Question Filter. Our experiments reveal that ChatGPT underperforms on the experimental data, while our PQQ method outperforms existing augmentation strategies. Its broad applicability is further validated through successful tests on high-resource QA tasks such as SQuAD1.1 and TriviaQA.
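
A hedged sketch of the three PQQ stages named above, with `llm` standing in as a hypothetical text-completion callable and illustrative prompts (not the paper's):

```python
# Hedged sketch of the three PQQ stages named in the abstract; `llm` is a
# hypothetical callable and the prompts are illustrative, not the paper's.
def pqq_augment(passage, llm, n=3):
    # 1) Prompt Answer: elicit candidate answer spans from the passage.
    answers = [llm(f"Passage: {passage}\nList a short answer span:") for _ in range(n)]
    augmented = []
    for ans in answers:
        # 2) Question Generation: generate a question for the candidate answer.
        q = llm(f"Passage: {passage}\nAnswer: {ans}\nWrite a question for this answer:")
        # 3) Question Filter: keep the pair only if the question round-trips to
        #    (roughly) the same answer, limiting noise in the augmented data.
        back = llm(f"Passage: {passage}\nQuestion: {q}\nAnswer briefly:")
        if back.strip().lower() == ans.strip().lower():
            augmented.append({"question": q, "answer": ans, "context": passage})
    return augmented

if __name__ == "__main__":
    fake_llm = lambda prompt: "1879"  # offline stand-in so the sketch runs
    print(pqq_augment("Einstein was born in 1879.", fake_llm, n=1))
```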

Proceedings of the Sixth Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2023)
Atul Kr. Ojha | Chao-hong Liu | Ekaterina Vylomova | Flammie Pirinen | Jade Abbott | Jonathan Washington | Nathaniel Oco | Valentin Malykh | Varvara Logacheva | Xiaobing Zhao
Proceedings of the Sixth Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2023)

2022

机器音译研究综述(Survey on Machine Transliteration)
Zhuo Li (李卓) | Zhijuan Wang (王志娟) | Xiaobing Zhao (赵小兵)
Proceedings of the 21st Chinese National Conference on Computational Linguistics

Machine transliteration is the process of automatically converting text from one language to another based on phonetic similarity. It is a subtask of machine translation that focuses on translating phonetic information. Transliteration reveals how a source word is pronounced in another language, making the language easier to understand for those unfamiliar with the source language and helping to remove linguistic and orthographic barriers. Machine transliteration plays an important role in natural language applications such as multilingual text processing, corpus alignment, and information extraction. This paper describes the current challenges of machine transliteration, analyzes, categorizes, and organizes the main transliteration methods, compiles the available transliteration datasets, and lists the commonly used evaluation metrics. Finally, it discusses the open problems in the field and offers an outlook on the future of transliteration research. We hope this paper serves as a quick-start guide for newcomers to the field and as a reference for other researchers.

Question Generation Based on Grammar Knowledge and Fine-grained Classification
Yuan Sun | Sisi Liu | Zhengcuo Dan | Xiaobing Zhao
Proceedings of the 29th International Conference on Computational Linguistics

Question generation is the task of automatically generating questions from a given context and answer; a common problem is that the generated question type does not match the answer. In minority languages such as Tibetan, where grammar rules are complex and training data is scarce, research on question generation is still in its infancy. To address these problems, this paper constructs a question type classifier and a question generator. We divide question types at a fine granularity and integrate grammatical knowledge into the question type classifier to improve the accuracy of question typing. The types predicted by the classifier are then fed into the question generator. Our model improves the accuracy of interrogative words in the generated questions, reaching BLEU-4 scores of 17.52 on SQuAD, 19.31 on HotpotQA, and 25.58 on TibetanQA.
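
The two-stage interface can be sketched as follows; the type inventory, control-token format, and model callables are illustrative assumptions, not the paper's implementation:

```python
# Minimal sketch of the two-stage pipeline described above: a fine-grained
# question-type classifier whose prediction conditions the generator via a
# control token. Types, formats, and models are illustrative placeholders.
QUESTION_TYPES = ["who", "when", "where", "how-many", "why", "what"]

def classify_question_type(answer, context, classifier):
    """`classifier` is a hypothetical model; grammatical cues in the answer
    (person name, date, numeral, ...) drive the fine-grained type decision."""
    return classifier(f"answer: {answer} context: {context}")

def generate_question(answer, context, qtype, generator):
    # Conditioning on the predicted type steers the interrogative word,
    # which is what improves interrogative-word accuracy.
    return generator(f"<{qtype}> answer: {answer} context: {context}")

# Usage with any classifier/generator pair wrapped as callables:
clf = lambda x: "when"                     # stand-in prediction
gen = lambda x: "When was Einstein born?"  # stand-in generation
ctx, ans = "Einstein was born in 1879.", "1879"
print(generate_question(ans, ctx, classify_question_type(ans, ctx, clf), gen))
```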

Proceedings of the Fifth Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2022)
Atul Kr. Ojha | Chao-Hong Liu | Ekaterina Vylomova | Jade Abbott | Jonathan Washington | Nathaniel Oco | Tommi A Pirinen | Valentin Malykh | Varvara Logacheva | Xiaobing Zhao
Proceedings of the Fifth Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2022)

2021

基于枢轴语言系统融合的词汇混淆网络神经机器翻译(Neural Machine Translation for Vocabulary Confusion Network Based on Pivotal Language System Fusion)
Xiaobing Zhao (赵小兵) | Bo Jin (金波) | Yuan Sun (孙媛)
Proceedings of the 20th Chinese National Conference on Computational Linguistics

Neural machine translation struggles on low-resource language pairs, where translation is difficult and output quality is poor. For low-resource languages that have no bilingual parallel corpora with Chinese, this paper uses forward and backward pivot translation to generate parallel sentence pairs from three low-resource languages to Chinese, applies word-level system fusion to combine the target-language translations produced by a Transformer model and a dual-learning model, and then performs word selection through a confusion network to produce higher-quality target translations. Experiments show that the proposed multi-model fusion method outperforms each individual model on Estonian-Chinese, Latvian-Chinese, and Romanian-Chinese translation tasks, further improving translation quality for low-resource neural machine translation.
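
As a toy illustration of word-level confusion-network fusion (not the paper's implementation), the sketch below aligns each system's hypothesis to a skeleton hypothesis and fills each slot by majority vote; real systems use TER-style alignment, for which stdlib difflib stands in here:

```python
# Toy sketch of word-level confusion-network fusion: hypotheses from several
# systems (e.g. a Transformer and a dual-learning model) are aligned to a
# skeleton hypothesis and each slot is filled by majority vote.
from collections import Counter
from difflib import SequenceMatcher

def fuse(hypotheses):
    skeleton = hypotheses[0].split()  # first system's output as the skeleton
    slots = [Counter([w]) for w in skeleton]
    for hyp in hypotheses[1:]:
        words = hyp.split()
        m = SequenceMatcher(a=skeleton, b=words)
        for op, i1, i2, j1, j2 in m.get_opcodes():
            if op in ("equal", "replace"):
                for i, j in zip(range(i1, i2), range(j1, j2)):
                    slots[i][words[j]] += 1  # add the aligned word as a candidate
    return " ".join(slot.most_common(1)[0][0] for slot in slots)

print(fuse([
    "the cat sat on a mat",
    "the cat sits on the mat",
    "a cat sat on the mat",
]))  # -> "the cat sat on the mat"
```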

JCapsR: 一种联合胶囊神经网络的藏语知识图谱表示学习模型(JCapsR: A Joint Capsule Neural Network for Tibetan Knowledge Graph Representation Learning)
Yuan Sun (孙媛) | Jiaya Liang (梁家亚) | Andong Chen (陈安东) | Xiaobing Zhao (赵小兵)
Proceedings of the 20th Chinese National Conference on Computational Linguistics

Knowledge graph representation learning is a key technique in natural language processing. Existing research focuses mainly on languages such as English and Chinese, while representation learning for low-resource languages such as Tibetan is still at an exploratory stage. Building on a previously constructed Tibetan knowledge graph, this paper proposes JCapsR, a joint capsule neural network model for Tibetan knowledge graph representation learning. First, we use the TransR model to generate structured representations of the Tibetan knowledge graph. Second, a Transformer fusing multi-head attention and relation attention represents the textual descriptions of Tibetan entities. Finally, JCapsR further extracts the relations of triples in the semantic space of the knowledge graph, fusing the entity description representations with the structured representations to obtain the final Tibetan knowledge graph representation. Experimental results show that, compared with the baseline systems, the joint capsule network JCapsR improves Tibetan knowledge graph representation learning, and the work provides a useful reference for extending representation learning to other low-resource languages.
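
A capsule model such as JCapsR builds on the standard squash nonlinearity and routing-by-agreement; the PyTorch sketch below shows those primitives with illustrative shapes, not the paper's architecture:

```python
# Minimal PyTorch sketch of standard capsule-network primitives (squash and
# routing-by-agreement) that a model like JCapsR builds on; shapes are
# illustrative and this is not the paper's exact architecture.
import torch

def squash(s, dim=-1, eps=1e-8):
    """Scale vectors so their norm lies in (0, 1), preserving direction."""
    n2 = (s ** 2).sum(dim=dim, keepdim=True)
    return (n2 / (1.0 + n2)) * s / (n2.sqrt() + eps)

def route(u_hat, iters=3):
    """u_hat: (in_caps, out_caps, out_dim) prediction vectors."""
    b = torch.zeros(u_hat.shape[:2])        # routing logits
    for _ in range(iters):
        c = b.softmax(dim=1).unsqueeze(-1)  # coupling coefficients
        v = squash((c * u_hat).sum(dim=0))  # (out_caps, out_dim) outputs
        b = b + (u_hat * v).sum(dim=-1)     # agreement update
    return v

u_hat = torch.randn(8, 4, 16)  # 8 input capsules predicting 4 output capsules
print(route(u_hat).shape)      # torch.Size([4, 16])
```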

面向机器阅读理解的高质量藏语数据集构建(Construction of High-quality Tibetan Dataset for Machine Reading Comprehension)
Yuan Sun (孙媛) | Sisi Liu (刘思思) | Chaofan Chen (陈超凡) | Zhengcuo Dan (旦正错) | Xiaobing Zhao (赵小兵)
Proceedings of the 20th Chinese National Conference on Computational Linguistics

Machine reading comprehension tests how well machines understand natural language by having them answer questions about a given context, and dataset construction is a central task in this area. Current models have achieved remarkable results on most popular English datasets, even surpassing human performance; for low-resource languages, however, research is still in its infancy owing to the lack of datasets. Taking Tibetan as an example, we manually construct a Tibetan machine reading comprehension dataset (TibetanQA) containing 20,000 question-answer pairs and 1,513 articles. The articles are all taken from the Yunzang website (云藏) and cover knowledge from 12 domains, including nature, culture, and education; the questions are diverse in form and reasonably difficult. A strict pipeline is followed for article collection, question construction, answer verification, answer diversity, and reasoning ability to ensure data quality, and a validation method that ablates linguistic features from the input is used to demonstrate that quality. Finally, we make an initial exploration of three classic English reading comprehension models on TibetanQA; their results fall short of human performance, showing that Tibetan machine reading comprehension still requires further exploration.

Ti-Reader: 基于注意力机制的藏文机器阅读理解端到端网络模型(Ti-Reader: An End-to-End Network Model Based on Attention Mechanisms for Tibetan Machine Reading Comprehension)
Yuan Sun (孙媛) | Chaofan Chen (陈超凡) | Sisi Liu (刘思思) | Xiaobing Zhao (赵小兵)
Proceedings of the 20th Chinese National Conference on Computational Linguistics

Machine reading comprehension aims to teach machines to understand an article and answer questions about it. To address the poor performance of machine reading comprehension models on low-resource languages, this paper proposes Ti-Reader, an end-to-end Tibetan machine reading comprehension network based on attention mechanisms. First, to encode finer-grained Tibetan text information, we combine syllables and words for word representation. We then use word-level attention to focus on the key words in the text, a re-read mechanism to capture the semantic information between the passage and the question, and self-attention to match the latent representations of question and answer, providing more clues for answer prediction. Experimental results show that Ti-Reader improves the performance of Tibetan machine reading comprehension and also performs well on the English dataset SQuAD.
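
The attention pattern described above can be sketched compactly; the PyTorch snippet below is illustrative only (question-aware "re-read" attention followed by self-attention), not Ti-Reader's full architecture:

```python
# Illustrative sketch of question-aware attention plus self-attention (not
# Ti-Reader's full architecture); dimensions are arbitrary examples.
import torch
import torch.nn.functional as F

def attend(query, keys, values):
    """Scaled dot-product attention: (Lq,d) x (Lk,d) -> (Lq,d)."""
    scores = query @ keys.T / keys.shape[-1] ** 0.5
    return F.softmax(scores, dim=-1) @ values

passage = torch.randn(30, 64)   # 30 passage positions, syllable+word features
question = torch.randn(8, 64)   # 8 question positions

q_aware = attend(passage, question, question)  # re-read: fuse question semantics
fused = torch.cat([passage, q_aware], dim=-1)  # (30, 128)
self_matched = attend(fused, fused, fused)     # self-attention for answer clues
print(self_matched.shape)                      # torch.Size([30, 128])
```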

2020

Proceedings of the 3rd Workshop on Technologies for MT of Low Resource Languages
Alina Karakanta | Atul Kr. Ojha | Chao-Hong Liu | Jade Abbott | John Ortega | Jonathan Washington | Nathaniel Oco | Surafel Melaku Lakew | Tommi A Pirinen | Valentin Malykh | Varvara Logacheva | Xiaobing Zhao
Proceedings of the 3rd Workshop on Technologies for MT of Low Resource Languages

2019

Proceedings of the 2nd Workshop on Technologies for MT of Low Resource Languages
Alina Karakanta | Atul Kr. Ojha | Chao-Hong Liu | Jonathan Washington | Nathaniel Oco | Surafel Melaku Lakew | Valentin Malykh | Xiaobing Zhao
Proceedings of the 2nd Workshop on Technologies for MT of Low Resource Languages

2018

Tibetan-Chinese Neural Machine Translation based on Syllable Segmentation
Wen Lai | Xiaobing Zhao | Wei Bao
Proceedings of the AMTA 2018 Workshop on Technologies for MT of Low Resource Languages (LoResMT 2018)