Zhiqiang Ma


2022

面向 Transformer 模型的蒙古语语音识别词特征编码方法(Researching of the Mongolian word encoding method based on Transformer Mongolian speech recognition)
Xiaoxu Zhang (张晓旭) | Zhiqiang Ma (马志强) | Zhiqiang Liu (刘志强) | Caijilahu Bao (宝财吉拉呼)
Proceedings of the 21st Chinese National Conference on Computational Linguistics

In Mongolian speech recognition, the Transformer model fails to learn the correspondence between speech and Mongolian words that contain control characters, leaving the model poorly adapted to Mongolian. To address this, we propose a Mongolian word encoding method for the Transformer model that mixes Mongolian letter features with word features; by incorporating letter-level information, the Transformer model can distinguish Mongolian words containing control characters and learn the correspondence between Mongolian words and speech. On the IMUT-MC dataset, we build a Transformer model and conduct ablation and comparison experiments on the word feature encoding method. The ablation results show that the word feature encoding method reduces HWER, WER, and SER by 23.4%, 6.9%, and 2.6%, respectively; the comparison results show that it outperforms all other methods, reaching an HWER of 11.8% and a WER of 19.8%.
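
The following is a minimal, PyTorch-style sketch of the kind of mixed letter-plus-word encoding described in the abstract; the vocabulary sizes, dimensions, class name (MixedWordLetterEncoding), and mean-pooling over letters are illustrative assumptions, not the paper's implementation.

# Hypothetical sketch: mixing word-level and letter-level embeddings so that
# words differing only in Mongolian control characters map to distinct inputs.
import torch
import torch.nn as nn

class MixedWordLetterEncoding(nn.Module):
    def __init__(self, word_vocab=10000, letter_vocab=64, d_model=256):
        super().__init__()
        self.word_emb = nn.Embedding(word_vocab, d_model)
        self.letter_emb = nn.Embedding(letter_vocab, d_model, padding_idx=0)
        self.proj = nn.Linear(2 * d_model, d_model)

    def forward(self, word_ids, letter_ids):
        # word_ids:   (batch, seq_len)               word index per token
        # letter_ids: (batch, seq_len, max_letters)  letter indices per token, 0-padded
        w = self.word_emb(word_ids)                  # (B, T, d)
        l = self.letter_emb(letter_ids).mean(dim=2)  # pool letters -> (B, T, d)
        return self.proj(torch.cat([w, l], dim=-1))  # mixed token representation

# The mixed representation would then feed a standard Transformer encoder, e.g.
# nn.TransformerEncoder(nn.TransformerEncoderLayer(d_model=256, nhead=4), num_layers=6).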

基于注意力的蒙古语说话人特征提取方法(Attention based Mongolian Speaker Feature Extraction)
Fangyuan Zhu (朱方圆) | Zhiqiang Ma (马志强) | Zhiqiang Liu (刘志强) | Caijilahu Bao (宝财吉拉呼) | Hongbin Wang (王洪彬)
Proceedings of the 21st Chinese National Conference on Computational Linguistics

Speaker features produced by existing speaker feature extraction models are poorly discriminative, so the Mongolian acoustic model cannot learn discriminative information and fails to adapt to different speakers. We propose an attention-based speaker adaptation method that introduces a Neural Turing Machine for adaptation: a memory module is added to store speaker features, an attention mechanism computes a similarity weight matrix between the speaker features in memory and the speaker feature of the current utterance, and the weight matrix is used to recombine them into a new speaker feature, the s-vector, thereby improving the discriminability between speaker features. On the IMUT-MCT dataset, we conduct ablation experiments on the speaker feature extraction method, model adaptation experiments, and a case study. The results show that, compared with the i-vector and d-vector speaker features, the s-vector reduces SER and WER by 4.96% and 1.08%, respectively; across different Mongolian acoustic models, the proposed method consistently improves over the baseline.
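
A rough sketch of the attention step described above, assuming a simple scaled dot-product similarity between the stored speaker features and the current utterance's feature; the shapes, the scaling, and the function name are illustrative assumptions, not the paper's code.

import torch
import torch.nn.functional as F

def s_vector(memory, current):
    # memory:  (num_speakers, dim)  speaker features stored in the memory module
    # current: (dim,)               speaker feature of the current utterance
    scores = memory @ current                              # similarity to each stored feature
    weights = F.softmax(scores / memory.shape[1] ** 0.5, dim=0)
    return weights @ memory                                # weighted recombination -> s-vector, shape (dim,)

mem = torch.randn(32, 128)        # e.g. 32 stored speaker features of dimension 128
cur = torch.randn(128)
print(s_vector(mem, cur).shape)   # torch.Size([128])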

ConvFinQA: Exploring the Chain of Numerical Reasoning in Conversational Finance Question Answering
Zhiyu Chen | Shiyang Li | Charese Smiley | Zhiqiang Ma | Sameena Shah | William Yang Wang
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing

With the recent advances in large pre-trained language models, researchers have achieved record performances in NLP tasks that mostly focus on language pattern matching. The community is experiencing a shift of the challenge from how to model language to how to imitate the complex reasoning abilities of human beings. In this work, we investigate the application domain of finance, which involves real-world, complex numerical reasoning. We propose a new large-scale dataset, ConvFinQA, aiming to study the chain of numerical reasoning in conversational question answering. Our dataset poses a great challenge to modeling long-range, complex numerical reasoning paths in real-world conversations. We conduct comprehensive experiments and analyses with both neural symbolic methods and prompting-based methods to provide insights into the reasoning mechanisms of these two divisions. We believe our new dataset should serve as a valuable resource to push forward the exploration of real-world, complex reasoning tasks as the next research focus. Our dataset and code are publicly available at https://github.com/czyssrs/ConvFinQA.
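
As a hedged illustration of what a chain of numerical reasoning over conversational finance questions can look like, the sketch below executes a small FinQA-style program with #k back-references to earlier steps; the operation names, the program syntax, and the example numbers are assumptions for illustration, so consult the linked repository for the dataset's actual format.

# Illustrative interpreter for a chained numerical reasoning program,
# e.g. "subtract(206588, 181001), divide(#0, 181001)" for a change ratio.
OPS = {
    "add": lambda a, b: a + b,
    "subtract": lambda a, b: a - b,
    "multiply": lambda a, b: a * b,
    "divide": lambda a, b: a / b,
}

def run_program(program):
    results = []
    for step in program.split("), "):
        name, args = step.rstrip(")").split("(")
        vals = [results[int(a[1:])] if a.startswith("#") else float(a)
                for a in (x.strip() for x in args.split(","))]
        results.append(OPS[name](*vals))   # each step can reference earlier results via "#k"
    return results[-1]

print(run_program("subtract(206588, 181001), divide(#0, 181001)"))  # ~0.1414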

2018

The USTC-NEL Speech Translation system at IWSLT 2018
Dan Liu | Junhua Liu | Wu Guo | Shifu Xiong | Zhiqiang Ma | Rui Song | Chongliang Wu | Quan Liu
Proceedings of the 15th International Conference on Spoken Language Translation

This paper describes the USTC-NEL (short for "National Engineering Laboratory for Speech and Language Information Processing, University of Science and Technology of China") system submitted to the speech translation task of the IWSLT 2018 Evaluation. The system is a conventional pipeline with three modules: speech recognition, post-processing, and machine translation. We train a group of hybrid HMM models for speech recognition, and for machine translation we train Transformer-based neural machine translation models that take text in the style of speech recognition output as input. Experiments conducted on the IWSLT 2018 task indicate that, compared to the baseline system from KIT, our system achieves a 14.9 BLEU improvement.
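
A minimal sketch of the cascaded structure described above (speech recognition, post-processing, machine translation); every component here is a placeholder stub, not the authors' system.

# Hypothetical outline of a cascaded speech translation pipeline: ASR -> post-processing -> NMT.
def recognize(audio):
    # stand-in for the hybrid HMM speech recognition system
    return "this is the asr hypothesis without punctuation"

def post_process(text):
    # stand-in for post-processing, e.g. casing and punctuation restoration
    return text.capitalize() + "."

def translate(text):
    # stand-in for the Transformer NMT model trained on ASR-style input text
    return "<translation of: %s>" % text

def speech_translate(audio):
    return translate(post_process(recognize(audio)))

print(speech_translate(b"raw-audio-bytes"))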