Chaowei Zhang
2023
Chinese Idiom Paraphrasing
Jipeng Qiang | Yang Li | Chaowei Zhang | Yun Li | Yi Zhu | Yunhao Yuan | Xindong Wu
Transactions of the Association for Computational Linguistics, Volume 11
Idioms are a kind of fixed expression in Chinese, most of which consist of four Chinese characters. Because they are non-compositional and carry metaphorical meanings, Chinese idioms are hard for children and non-native speakers to understand. This study proposes a novel task, denoted Chinese Idiom Paraphrasing (CIP). CIP aims to rephrase idiom-containing sentences into non-idiomatic ones while preserving the original sentence's meaning. Since sentences without idioms are easier for Chinese NLP systems to handle, CIP can be used to pre-process Chinese datasets, thereby facilitating and improving the performance of Chinese NLP tasks, e.g., machine translation systems, Chinese idiom cloze, and Chinese idiom embeddings. In this study, we treat the CIP task as a special paraphrase generation task. To circumvent difficulties in acquiring annotations, we first establish a large-scale CIP dataset based on human and machine collaboration, which consists of 115,529 sentence pairs. In addition to three sequence-to-sequence methods as baselines, we further propose a novel infill-based approach built on text infilling. The results show that the proposed method outperforms the baselines on the established CIP dataset.
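To make the infill-based idea concrete, here is a minimal sketch, not the paper's released code: the idiom span is replaced with a span-mask sentinel and a pretrained text-infilling model generates a literal phrasing in context. The checkpoint name, the idiom argument, and the helper function are assumptions for illustration only; the paper fine-tunes on its CIP dataset.

```python
# Hedged sketch of infill-based idiom paraphrasing (not the authors' implementation).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "google/mt5-base"  # placeholder checkpoint; a model fine-tuned on CIP data is assumed
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def paraphrase_idiom(sentence: str, idiom: str) -> str:
    """Mask the idiom with a T5-style sentinel and let the model infill a literal phrasing."""
    masked = sentence.replace(idiom, "<extra_id_0>", 1)
    inputs = tokenizer(masked, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=32, num_beams=4)
    infill = tokenizer.decode(outputs[0], skip_special_tokens=True)
    # Splice the generated non-idiomatic phrasing back into the sentence.
    return masked.replace("<extra_id_0>", infill.strip())

# Example; idiom detection itself would come from an idiom lexicon or tagger.
print(paraphrase_idiom("他做事总是一丝不苟。", "一丝不苟"))
```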
2020
Multi-Reward based Reinforcement Learning for Neural Machine Translation
Shuo Sun | Hongxu Hou | Nier Wu | Ziyue Guo | Chaowei Zhang
Proceedings of the 19th Chinese National Conference on Computational Linguistics
Reinforcement learning (RL) has made remarkable progress in neural machine translation (NMT). However, it suffers from uneven sampling distributions, sparse rewards, and high variance during training. We therefore propose a multi-reward reinforcement learning training strategy that decouples action selection from value estimation. Our method also incorporates language-model rewards to jointly optimize model parameters. In addition, we add Gumbel noise during sampling to obtain more effective semantic information. To verify the robustness of our method, we conduct experiments not only on large corpora but also on low-resource languages. Experimental results show that our approach outperforms the baselines on the WMT14 English-German, LDC2014 Chinese-English, and CWMT2018 Mongolian-Chinese tasks, which demonstrates its effectiveness.
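The following is a minimal sketch of the two mechanisms the abstract names, assuming a standard REINFORCE-style setup: Gumbel-perturbed sampling and a weighted combination of a translation-quality reward with a language-model reward. The function names, shapes, and weight alpha are illustrative assumptions, not the paper's actual training code.

```python
# Hedged sketch: multi-reward policy-gradient update for NMT (not the authors' code).
import torch

def sample_with_gumbel(logits: torch.Tensor) -> torch.Tensor:
    """Sample token ids by perturbing decoder logits with Gumbel noise."""
    gumbel = -torch.log(-torch.log(torch.rand_like(logits) + 1e-9) + 1e-9)
    return (logits + gumbel).argmax(dim=-1)

def multi_reward_loss(log_probs, sampled_ids, bleu_reward, lm_reward, alpha=0.5):
    """REINFORCE-style loss using a weighted sum of two sentence-level rewards.

    log_probs:   (batch, seq_len, vocab) decoder log-probabilities
    sampled_ids: (batch, seq_len) tokens sampled via Gumbel perturbation
    bleu_reward: (batch,) translation-quality reward (e.g., sentence BLEU)
    lm_reward:   (batch,) fluency reward from a target-side language model
    """
    reward = alpha * bleu_reward + (1.0 - alpha) * lm_reward          # combined reward
    token_logp = log_probs.gather(-1, sampled_ids.unsqueeze(-1)).squeeze(-1)
    seq_logp = token_logp.sum(dim=-1)                                 # log p(y | x)
    return -(reward.detach() * seq_logp).mean()                       # maximize expected reward
```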