Junyi Xiang

Also published as: 向军毅


2025

Exploiting Phonetics and Glyph Representation at Radical-level for Classical Chinese Understanding
Junyi Xiang | Maofu Liu
Findings of the Association for Computational Linguistics: ACL 2025

The diachronic gap between classical and modern Chinese arises from century-scale language evolution through cumulative changes in phonological, syntactic, and lexical systems, resulting in substantial semantic variation that poses significant challenges for the computational modeling of historical texts. Current methods typically enhance the classical Chinese understanding of pre-trained language models through corpus pre-training or semantic integration. However, they overlook the synergistic relationship between phonetic and glyph features within Chinese characters, which is a critical factor in deciphering characters’ semantics. In this paper, we propose RPGCM, a radical-level phonetics and glyph representation enhanced Chinese model with powerful fine-grained semantic modeling capabilities. Our model establishes robust contextualized representations through: (1) rule-based radical decomposition and byte pair encoding (BPE) based radical aggregation for structural pattern recognition, (2) phonetic-glyph semantic mapping, and (3) dynamic semantic fusion. Experimental results on the CCMRC, WYWEB, and C³Bench benchmarks demonstrate the RPGCM’s superiority and validate that explicit radical-level modeling effectively mitigates semantic variation.
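The abstract does not give implementation details, but the dynamic fusion step can be pictured with a minimal, hypothetical sketch: a gating layer that decides, per token, how much phonetic and glyph evidence to mix into the contextual representation. All class and parameter names below (RadicalPhoneticGlyphFusion, hidden_size, the gate design) are illustrative assumptions, not the paper's actual architecture.

import torch
import torch.nn as nn

class RadicalPhoneticGlyphFusion(nn.Module):
    """Illustrative sketch of gated fusion of contextual, phonetic, and glyph features."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.phonetic_proj = nn.Linear(hidden_size, hidden_size)
        self.glyph_proj = nn.Linear(hidden_size, hidden_size)
        # The gate produces two per-token weights: one for phonetic, one for glyph evidence.
        self.gate = nn.Linear(3 * hidden_size, 2)

    def forward(self, context: torch.Tensor, phonetic: torch.Tensor, glyph: torch.Tensor) -> torch.Tensor:
        # All inputs: (batch, seq_len, hidden_size)
        p = self.phonetic_proj(phonetic)
        g = self.glyph_proj(glyph)
        weights = torch.softmax(self.gate(torch.cat([context, p, g], dim=-1)), dim=-1)
        # Residual mix: contextual states plus gated phonetic/glyph contributions.
        return context + weights[..., 0:1] * p + weights[..., 1:2] * g

# Example usage with toy tensors (batch=2, seq_len=5, hidden=768):
# fusion = RadicalPhoneticGlyphFusion(768)
# fused = fusion(torch.randn(2, 5, 768), torch.randn(2, 5, 768), torch.randn(2, 5, 768))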

2020

基于BERTCA的新闻实体与正文语义相关度计算模型(Semantic Relevance Computing Model of News Entity and Text based on BERTCA)
Junyi Xiang (向军毅) | Huijun Hu (胡慧君) | Ruibin Mao (毛瑞彬) | Maofu Liu (刘茂福)
Proceedings of the 19th Chinese National Conference on Computational Linguistics

Current search engines still emphasize surface form over semantics and cannot achieve deep semantic understanding of search keywords and texts, making semantic retrieval an urgent problem for modern search engines. To improve the semantic understanding capability of search engines, this paper proposes a method for computing semantic relevance. We first annotate a corpus of 10,000 instances of semantic relevance between entities in financial news headlines and the corresponding news bodies, and then build the BERTCA (Bidirectional Encoder Representation from Transformers Co-Attention) model for computing the semantic relevance between news entities and body text. Using a pre-trained BERT model, BERTCA jointly considers the semantic information of fine-grained entities and coarse-grained body text, and then applies co-attention to achieve semantic matching between entity and body. The model not only computes the relevance between financial news entities and news bodies, but also assigns a relevance category based on a relevance threshold. Experiments show that the model achieves over 95% accuracy on the 10,000 annotated instances, outperforming current mainstream models; finally, concrete search examples illustrate the model's strong performance.
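As a rough illustration of the entity-body co-attention and threshold-based relevance decision described above, the sketch below assumes the entity and body text have already been encoded by BERT into hidden-state sequences; the class name CoAttentionRelevance, the pooling strategy, and the 0.5 threshold are hypothetical choices for illustration, not the paper's reported configuration.

import torch
import torch.nn as nn

class CoAttentionRelevance(nn.Module):
    """Illustrative sketch: co-attention between entity and body-text BERT states,
    pooled into a single relevance score."""

    def __init__(self, hidden_size: int = 768, num_heads: int = 8):
        super().__init__()
        # One attention module per direction: entity-to-text and text-to-entity.
        self.entity_to_text = nn.MultiheadAttention(hidden_size, num_heads, batch_first=True)
        self.text_to_entity = nn.MultiheadAttention(hidden_size, num_heads, batch_first=True)
        self.scorer = nn.Linear(2 * hidden_size, 1)

    def forward(self, entity_states: torch.Tensor, text_states: torch.Tensor) -> torch.Tensor:
        # entity_states: (batch, entity_len, hidden); text_states: (batch, text_len, hidden)
        e2t, _ = self.entity_to_text(entity_states, text_states, text_states)
        t2e, _ = self.text_to_entity(text_states, entity_states, entity_states)
        pooled = torch.cat([e2t.mean(dim=1), t2e.mean(dim=1)], dim=-1)
        return torch.sigmoid(self.scorer(pooled)).squeeze(-1)  # relevance score in [0, 1]

# Hypothetical usage: scores above a tuned threshold are labelled "relevant".
# model = CoAttentionRelevance()
# score = model(torch.randn(1, 8, 768), torch.randn(1, 128, 768))
# is_relevant = score > 0.5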