2024
Overcoming Early Saturation on Low-Resource Languages in Multilingual Dependency Parsing
Jiannan Mao | Chenchen Ding | Hour Kaing | Hideki Tanaka | Masao Utiyama | Tadahiro Matsumoto
Proceedings of the Joint Workshop on Multiword Expressions and Universal Dependencies (MWE-UD) @ LREC-COLING 2024
UDify is a multilingual, multi-task parser fine-tuned on mBERT that achieves remarkable performance on high-resource languages. On low-resource languages, however, its performance saturates early and then degrades gradually as training proceeds. This work applies a data augmentation method and conducts experiments on seven few-shot and four zero-shot languages. The unlabeled attachment scores on the dependency parsing tasks for the zero-shot languages improved, with the average rising from 67.1% to 68.7%, while dependency parsing for the high-resource languages and the other tasks was hardly affected. The experimental results indicate that the data augmentation method is effective for low-resource languages in multilingual dependency parsing.
Robust Neural Machine Translation for Abugidas by Glyph Perturbation
Hour Kaing | Chenchen Ding | Hideki Tanaka | Masao Utiyama
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 2: Short Papers)
Neural machine translation (NMT) systems are vulnerable when trained on limited data, a common scenario for low-resource tasks in the real world. One way to increase robustness is to intentionally add realistic noise during training. Noise simulation using text perturbation has proven efficient for writing systems that use Latin letters. In this study, we further explore perturbation techniques for the more complex abugida writing systems, using the visual similarity of their complex glyphs to capture the essential nature of these scripts. Besides the generated noise, we propose a training strategy to improve robustness. We conducted experiments on six languages: Bengali, Hindi, Myanmar, Khmer, Lao, and Thai. By overcoming the introduced noise, we obtained non-degenerate NMT systems with improved robustness on low-resource tasks for abugida glyphs.
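As an illustration of the perturbation idea, a noising pass over training sentences might look like the sketch below; the visual-similarity table here is a hypothetical two-entry stand-in, while the paper builds such tables per language from the glyphs themselves.

```python
import random

# Hypothetical visual-similarity groups for Myanmar glyphs (illustrative only);
# the actual per-language tables are derived from glyph shapes.
SIMILAR = {
    "\u101d": ["\u1040"],  # letter WA vs. digit zero, near-identical shapes
    "\u1009": ["\u1025"],  # NYA vs. vowel U, similar shapes
}

def perturb(sentence: str, rate: float = 0.1) -> str:
    """Replace each perturbable glyph with a visually similar one with probability `rate`."""
    return "".join(
        random.choice(SIMILAR[ch]) if ch in SIMILAR and random.random() < rate else ch
        for ch in sentence
    )
```

Noised copies of the source side would then be mixed with clean data during training so that the model learns to translate both.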
2023
Improving Embedding Transfer for Low-Resource Machine Translation
Van Hien Tran | Chenchen Ding | Hideki Tanaka | Masao Utiyama
Proceedings of Machine Translation Summit XIX, Vol. 1: Research Track
Low-resource machine translation (LRMT) poses a substantial challenge due to the scarcity of parallel training data. This paper introduces a new method to improve the transfer of the embedding layer from the Parent model to the Child model in LRMT, utilizing the trained token embeddings of the Parent model's high-resource vocabulary. Our approach involves projecting all tokens into a shared semantic space and measuring the semantic similarity between tokens in the low-resource and high-resource languages. These measures are then used to initialize the token representations of the Child model's low-resource vocabulary. We evaluated our approach on three benchmark datasets of low-resource language pairs: Myanmar-English, Indonesian-English, and Turkish-English. The experimental results demonstrate that our method outperforms previous methods in terms of translation quality. Additionally, our approach is computationally efficient, leading to reduced training time compared with prior work.
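A minimal sketch of such an initialization, assuming both vocabularies have already been projected into a shared semantic space; the k-nearest-neighbor weighting below is an illustrative choice, not necessarily the paper's exact formulation.

```python
import numpy as np

def init_child_embeddings(child_shared, parent_shared, parent_emb, k=5):
    """Initialize each Child-vocabulary embedding as a similarity-weighted
    average of the k most similar Parent tokens' trained embeddings.

    child_shared:  (Vc, d) child tokens in the shared semantic space
    parent_shared: (Vp, d) parent tokens in the shared semantic space
    parent_emb:    (Vp, e) trained Parent embedding layer
    """
    c = child_shared / np.linalg.norm(child_shared, axis=1, keepdims=True)
    p = parent_shared / np.linalg.norm(parent_shared, axis=1, keepdims=True)
    sim = c @ p.T                              # cosine similarities, (Vc, Vp)

    top = np.argsort(-sim, axis=1)[:, :k]      # k most similar Parent tokens
    child_emb = np.empty((c.shape[0], parent_emb.shape[1]))
    for i, idx in enumerate(top):
        w = np.maximum(sim[i, idx], 0.0)
        w /= w.sum() + 1e-9                    # normalized similarity weights
        child_emb[i] = w @ parent_emb[idx]     # weighted average of embeddings
    return child_emb
```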
Improving Zero-Shot Dependency Parsing by Unsupervised Learning
Jiannan Mao | Chenchen Ding | Hour Kaing | Hideki Tanaka | Masao Utiyama | Tadahiro Matsumoto
Proceedings of the 37th Pacific Asia Conference on Language, Information and Computation
2022
FeatureBART: Feature Based Sequence-to-Sequence Pre-Training for Low-Resource NMT
Abhisek Chakrabarty | Raj Dabre | Chenchen Ding | Hideki Tanaka | Masao Utiyama | Eiichiro Sumita
Proceedings of the 29th International Conference on Computational Linguistics
In this paper we present FeatureBART, a linguistically motivated sequence-to-sequence monolingual pre-training strategy in which syntactic features such as lemma, part-of-speech, and dependency labels are incorporated into the span-prediction-based pre-training framework (BART). These automatically extracted features are incorporated via approaches such as concatenation and relevance mechanisms, of which the latter is known to be better. When used for low-resource NMT as a downstream task, we show that these feature-based models give large improvements in bilingual settings and modest ones in multilingual settings over their counterparts that do not use features.
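A rough sketch contrasting the two incorporation approaches; the tensor shapes and the sigmoid gating form are assumptions for illustration, not FeatureBART's exact architecture.

```python
import torch
import torch.nn as nn

class FeatureCombiner(nn.Module):
    """Combine a token embedding with syntactic feature embeddings
    (lemma, POS, dependency label) by concatenation or relevance gating."""

    def __init__(self, d_word: int, d_feat: int, n_feats: int, mode: str = "relevance"):
        super().__init__()
        self.mode = mode
        if mode == "relevance":
            # One scalar gate per feature, conditioned on the word embedding.
            self.gates = nn.ModuleList(
                nn.Linear(d_word + d_feat, 1) for _ in range(n_feats)
            )
        self.proj = nn.Linear(d_word + n_feats * d_feat, d_word)

    def forward(self, word, feats):
        # word: (B, T, d_word); feats: list of n_feats tensors (B, T, d_feat)
        if self.mode == "concat":
            return self.proj(torch.cat([word] + feats, dim=-1))
        gated = [
            torch.sigmoid(g(torch.cat([word, f], dim=-1))) * f
            for g, f in zip(self.gates, feats)
        ]
        return self.proj(torch.cat([word] + gated, dim=-1))
```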
2021
Multi-Source Cross-Lingual Constituency Parsing
Hour Kaing | Chenchen Ding | Katsuhito Sudoh | Masao Utiyama | Eiichiro Sumita | Satoshi Nakamura
Proceedings of the 18th International Conference on Natural Language Processing (ICON)
Pretrained multilingual language models have become a key component of cross-lingual transfer for many natural language processing tasks, even those without bilingual information. This work further investigates the cross-lingual transfer ability of these models for constituency parsing, focusing on multi-source transfer. To address the diversity of structures and label sets, we propose integrating typological features into the parsing model and normalizing the treebanks. We train the model on eight languages with diverse structures and use transfer parsing for six additional low-resource languages. The experimental results show that treebank normalization is essential for cross-lingual transfer performance and that the typological features bring further improvement. As a result, our approach improves the baseline F1 of multi-source transfer by 5 points on average.
Proceedings of the 8th Workshop on Asian Translation (WAT2021)
Toshiaki Nakazawa | Hideki Nakayama | Isao Goto | Hideya Mino | Chenchen Ding | Raj Dabre | Anoop Kunchukuttan | Shohei Higashiyama | Hiroshi Manabe | Win Pa Pa | Shantipriya Parida | Ondřej Bojar | Chenhui Chu | Akiko Eriguchi | Kaori Abe | Yusuke Oda | Katsuhito Sudoh | Sadao Kurohashi | Pushpak Bhattacharyya
Overview of the 8th Workshop on Asian Translation
Toshiaki Nakazawa | Hideki Nakayama | Chenchen Ding | Raj Dabre | Shohei Higashiyama | Hideya Mino | Isao Goto | Win Pa Pa | Anoop Kunchukuttan | Shantipriya Parida | Ondřej Bojar | Chenhui Chu | Akiko Eriguchi | Kaori Abe | Yusuke Oda | Sadao Kurohashi
Proceedings of the 8th Workshop on Asian Translation (WAT2021)
This paper presents the results of the shared tasks from the 8th Workshop on Asian Translation (WAT2021). For WAT2021, 28 teams participated in the shared tasks, and 24 teams submitted their translation results for human evaluation. We also accepted 5 research papers. About 2,100 translation results were submitted to the automatic evaluation server, and selected submissions were manually evaluated.
2020
A Three-Parameter Rank-Frequency Relation in Natural Languages
Chenchen Ding | Masao Utiyama | Eiichiro Sumita
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
We show that the rank-frequency relation in textual data follows f ∝ r^{-α}(r+γ)^{-β}, where f is the token frequency and r is the rank by frequency, with (α, β, γ) as parameters. The formulation is derived from the empirical observation that d²(x+y)/dx² is a typical impulse function, where (x, y) = (log r, log f). The formulation reduces to the power law when β = 0 and to the Zipf–Mandelbrot law when α = 0. From an investigation of multilingual corpora, we illustrate that α is related to the analytic features of syntax and β + γ to those of morphology in natural languages.
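As a worked example, the parameters can be estimated by least squares in log-log space; the rank-frequency data below are synthetic, purely to show how a fit of f ∝ r^{-α}(r+γ)^{-β} is set up.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_f(log_r, alpha, beta, gamma, c):
    """log f = c - alpha*log r - beta*log(r + gamma), i.e. f ∝ r^-alpha (r+gamma)^-beta."""
    r = np.exp(log_r)
    return c - alpha * log_r - beta * np.log(r + gamma)

# Synthetic data for illustration: alpha=0.5, beta=1.5, gamma=20, plus noise.
ranks = np.arange(1.0, 10001.0)
clean = log_f(np.log(ranks), 0.5, 1.5, 20.0, 12.0)
log_freqs = clean + np.random.normal(0.0, 0.05, ranks.size)

params, _ = curve_fit(log_f, np.log(ranks), log_freqs,
                      p0=[1.0, 1.0, 1.0, 10.0], bounds=(0.0, np.inf))
print("alpha=%.2f beta=%.2f gamma=%.2f" % tuple(params[:3]))
```

Setting β = 0 recovers a pure power-law fit, and α = 0 the Zipf–Mandelbrot form.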
Proceedings of the 7th Workshop on Asian Translation
Toshiaki Nakazawa | Hideki Nakayama | Chenchen Ding | Raj Dabre | Anoop Kunchukuttan | Win Pa Pa | Ondřej Bojar | Shantipriya Parida | Isao Goto | Hideya Mino | Hiroshi Manabe | Katsuhito Sudoh | Sadao Kurohashi | Pushpak Bhattacharyya
Overview of the 7th Workshop on Asian Translation
Toshiaki Nakazawa | Hideki Nakayama | Chenchen Ding | Raj Dabre | Shohei Higashiyama | Hideya Mino | Isao Goto | Win Pa Pa | Anoop Kunchukuttan | Shantipriya Parida | Ondřej Bojar | Sadao Kurohashi
Proceedings of the 7th Workshop on Asian Translation
This paper presents the results of the shared tasks from the 7th Workshop on Asian Translation (WAT2020). For WAT2020, 20 teams participated in the shared tasks, and 14 teams submitted their translation results for human evaluation. We also received 12 research paper submissions, of which 7 were accepted. About 500 translation results were submitted to the automatic evaluation server, and selected submissions were manually evaluated.
Improving Low-Resource NMT through Relevance Based Linguistic Features Incorporation
Abhisek Chakrabarty | Raj Dabre | Chenchen Ding | Masao Utiyama | Eiichiro Sumita
Proceedings of the 28th International Conference on Computational Linguistics
In this study, linguistic knowledge at different levels is incorporated into the neural machine translation (NMT) framework to improve translation quality for language pairs with extremely limited data. Integrating manually designed or automatically extracted features into the NMT framework is known to be beneficial. However, this study emphasizes that the relevance of the features is crucial to performance. Specifically, we propose two methods, 1) self relevance and 2) word-based relevance, to improve the representation of features for NMT. Experiments are conducted on translation tasks from English to eight Asian languages, with no more than twenty thousand sentences for training. The proposed methods improve translation quality for all tasks by up to 3.09 BLEU points. Discussions with visualization explain the proposed methods, showing that the relevance methods assign weights to features and thereby enhance their impact on low-resource machine translation.
A Myanmar (Burmese)-English Named Entity Transliteration Dictionary
Aye Myat Mon | Chenchen Ding | Hour Kaing | Khin Mar Soe | Masao Utiyama | Eiichiro Sumita
Proceedings of the Twelfth Language Resources and Evaluation Conference
Transliteration is generally a phonetically based transcription across different writing systems. It is a crucial task for various downstream natural language processing applications. For the Myanmar (Burmese) language, robust automatic transliteration of borrowed English words is challenging because of the complex Myanmar writing system and the lack of data. In this study, we constructed a Myanmar-English named entity dictionary containing more than eighty thousand transliteration instances. The data have been released under a CC BY-NC-SA license. Based on the prepared data, we evaluated automatic transliteration performance using statistical and neural network-based approaches. The neural network model significantly outperformed the statistical model in terms of BLEU score at the character level. Different units of the Myanmar script used for processing were also compared and discussed.
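Character-level BLEU, the metric used in this evaluation, simply treats each character as a token; below is a minimal sketch with NLTK on hypothetical transliteration outputs.

```python
from nltk.translate.bleu_score import corpus_bleu

def char_bleu(references, hypotheses):
    """BLEU with characters as tokens: split every string into a character list."""
    refs = [[list(r)] for r in references]   # one reference per hypothesis
    hyps = [list(h) for h in hypotheses]
    return corpus_bleu(refs, hyps)

# Hypothetical system outputs vs. references (Latin here for readability).
print(char_bleu(["london", "paris"], ["landon", "paris"]))
```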
2019
MY-AKKHARA: A Romanization-based Burmese (Myanmar) Input Method
Chenchen Ding | Masao Utiyama | Eiichiro Sumita
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System Demonstrations
MY-AKKHARA is a method for inputting Burmese text encoded in the Unicode standard, based on commonly accepted Latin transcription. With this method, arbitrary Burmese strings can be input accurately with the 26 lowercase Latin letters, while the 26 uppercase Latin letters serve as shortcuts for lowercase-letter sequences. The frequency of Burmese characters is taken into account in MY-AKKHARA to realize an efficient keystroke distribution on a QWERTY keyboard. Given that the Unicode standard has not been used extensively in the digitization of Burmese, we hope that MY-AKKHARA can contribute to the widespread use of Unicode in Myanmar and provide a platform for smart input methods for Burmese in the future. An implementation of MY-AKKHARA running in Windows is released at http://www2.nict.go.jp/astrec-att/member/ding/my-akkhara.html
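At its core, such an input method is a longest-match lookup from Latin key sequences to Burmese code points. The toy sketch below uses a four-entry table whose correspondences are illustrative assumptions; MY-AKKHARA's real tables cover the full script plus uppercase shortcuts.

```python
# Toy Latin-to-Burmese table (illustrative; not MY-AKKHARA's actual scheme).
TABLE = {
    "k": "\u1000",   # က KA
    "kh": "\u1001",  # ခ KHA
    "g": "\u1002",   # ဂ GA
    "m": "\u1019",   # မ MA
}
MAX_KEY = max(map(len, TABLE))

def convert(keys: str) -> str:
    """Greedy longest-match conversion of a lowercase key sequence."""
    out, i = [], 0
    while i < len(keys):
        for n in range(MAX_KEY, 0, -1):      # try the longest key first
            if keys[i:i + n] in TABLE:
                out.append(TABLE[keys[i:i + n]])
                i += n
                break
        else:
            i += 1                           # skip unmapped input
    return "".join(out)

print(convert("khm"))  # ခမ
```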
Proceedings of the 6th Workshop on Asian Translation
Toshiaki Nakazawa | Chenchen Ding | Raj Dabre | Anoop Kunchukuttan | Nobushige Doi | Yusuke Oda | Ondřej Bojar | Shantipriya Parida | Isao Goto | Hideya Mino
Overview of the 6th Workshop on Asian Translation
Toshiaki Nakazawa | Nobushige Doi | Shohei Higashiyama | Chenchen Ding | Raj Dabre | Hideya Mino | Isao Goto | Win Pa Pa | Anoop Kunchukuttan | Yusuke Oda | Shantipriya Parida | Ondřej Bojar | Sadao Kurohashi
Proceedings of the 6th Workshop on Asian Translation
This paper presents the results of the shared tasks from the 6th Workshop on Asian Translation (WAT2019), including the Ja↔En and Ja↔Zh scientific paper translation subtasks, the Ja↔En, Ja↔Ko, and Ja↔Zh patent translation subtasks, the Hi↔En, My↔En, Km↔En, and Ta↔En mixed domain subtasks, and the Ru↔Ja news commentary translation task. For WAT2019, 25 teams participated in the shared tasks. We also received 10 research paper submissions, of which 6 were accepted. About 400 translation results were submitted to the automatic evaluation server, and selected submissions were manually evaluated.
Supervised and Unsupervised Machine Translation for Myanmar-English and Khmer-English
Benjamin Marie | Hour Kaing | Aye Myat Mon | Chenchen Ding | Atsushi Fujita | Masao Utiyama | Eiichiro Sumita
Proceedings of the 6th Workshop on Asian Translation
This paper presents NICT's supervised and unsupervised machine translation systems for the WAT2019 Myanmar-English and Khmer-English translation tasks. For all the translation directions, we built state-of-the-art supervised neural (NMT) and statistical (SMT) machine translation systems, using cleaned and normalized monolingual data. Our combination of NMT and SMT was among the best systems for the four translation directions. We also investigated the feasibility of unsupervised machine translation for low-resource and distant language pairs and confirmed the observation of previous work that unsupervised MT is still largely unable to deal with them.
English-Myanmar Supervised and Unsupervised NMT: NICT’s Machine Translation Systems at WAT-2019
Rui Wang | Haipeng Sun | Kehai Chen | Chenchen Ding | Masao Utiyama | Eiichiro Sumita
Proceedings of the 6th Workshop on Asian Translation
This paper presents NICT's participation (team ID: NICT) in the 6th Workshop on Asian Translation (WAT-2019) shared translation task, specifically the Myanmar (Burmese)-English task in both translation directions. We built neural machine translation (NMT) systems for these tasks. Our NMT systems were trained with language model pretraining, and back-translation was adopted. According to BLEU score, our NMT systems ranked third in English-to-Myanmar and second in Myanmar-to-English.
2018
Overview of the 5th Workshop on Asian Translation
Toshiaki Nakazawa | Katsuhito Sudoh | Shohei Higashiyama | Chenchen Ding | Raj Dabre | Hideya Mino | Isao Goto | Win Pa Pa | Anoop Kunchukuttan | Sadao Kurohashi
Proceedings of the 32nd Pacific Asia Conference on Language, Information and Computation: 5th Workshop on Asian Translation
English-Myanmar NMT and SMT with Pre-ordering: NICT’s Machine Translation Systems at WAT-2018
Rui Wang | Chenchen Ding | Masao Utiyama | Eiichiro Sumita
Proceedings of the 32nd Pacific Asia Conference on Language, Information and Computation: 5th Workshop on Asian Translation
Simplified Abugidas
Chenchen Ding | Masao Utiyama | Eiichiro Sumita
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
An abugida is a writing system where the consonant letters represent syllables with a default vowel and other vowels are denoted by diacritics. We investigate the feasibility of recovering the original text written in an abugida after omitting subordinate diacritics and merging consonant letters with similar phonetic values. This is crucial for developing more efficient input methods by reducing the complexity in abugidas. Four abugidas in the southern Brahmic family, i.e., Thai, Burmese, Khmer, and Lao, were studied using a newswire 20,000-sentence dataset. We compared the recovery performance of a support vector machine and an LSTM-based recurrent neural network, finding that the abugida graphemes could be recovered with 94%–97% accuracy at the top-1 level and 98%–99% at the top-4 level, even after omitting most diacritics (10–30 types) and merging the remaining 30–50 characters into 21 graphemes.
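The lossy forward direction can be sketched as below; the merge classes are hypothetical examples of phonetically similar Thai consonants, and recovery would then be learned as a sequence-labeling task from simplified back to original graphemes.

```python
import unicodedata

# Hypothetical merge classes: Thai consonants with the same phonetic value
# collapse to one representative (illustrative, not the paper's exact tables).
MERGE = {"ฆ": "ค", "ฑ": "ท", "ธ": "ท", "ศ": "ส", "ษ": "ส"}

def simplify(text: str) -> str:
    """Omit subordinate diacritics (nonspacing marks) and merge similar consonants."""
    return "".join(
        MERGE.get(ch, ch)
        for ch in text
        if unicodedata.category(ch) != "Mn"  # drop vowel signs and tone marks
    )

print(simplify("ศึกษา"))  # สกสา: ศ and ษ merge into ส; the vowel sign is dropped
```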
2017
Overview of the 4th Workshop on Asian Translation
Toshiaki Nakazawa | Shohei Higashiyama | Chenchen Ding | Hideya Mino | Isao Goto | Hideto Kazawa | Yusuke Oda | Graham Neubig | Sadao Kurohashi
Proceedings of the 4th Workshop on Asian Translation (WAT2017)
This paper presents the results of the shared tasks from the 4th Workshop on Asian Translation (WAT2017), including the J↔E and J↔C scientific paper translation subtasks, the C↔J, K↔J, and E↔J patent translation subtasks, the H↔E mixed domain subtasks, the J↔E newswire subtasks, and the J↔E recipe subtasks. For WAT2017, 12 institutions participated in the shared tasks. About 300 translation results were submitted to the automatic evaluation server, and selected submissions were manually evaluated.
2016
Proceedings of the 3rd Workshop on Asian Translation (WAT2016)
Toshiaki Nakazawa | Hideya Mino | Chenchen Ding | Isao Goto | Graham Neubig | Sadao Kurohashi | Ir. Hammam Riza | Pushpak Bhattacharyya
Overview of the 3rd Workshop on Asian Translation
Toshiaki Nakazawa | Chenchen Ding | Hideya Mino | Isao Goto | Graham Neubig | Sadao Kurohashi
Proceedings of the 3rd Workshop on Asian Translation (WAT2016)
This paper presents the results of the shared tasks from the 3rd Workshop on Asian Translation (WAT2016), including the J↔E and J↔C scientific paper translation subtasks, the C↔J, K↔J, and E↔J patent translation subtasks, the I↔E newswire subtasks, and the H↔E and H↔J mixed domain subtasks. For WAT2016, 15 institutions participated in the shared tasks. About 500 translation results were submitted to the automatic evaluation server, and selected submissions were manually evaluated.
Similar Southeast Asian Languages: Corpus-Based Case Study on Thai-Laotian and Malay-Indonesian
Chenchen Ding | Masao Utiyama | Eiichiro Sumita
Proceedings of the 3rd Workshop on Asian Translation (WAT2016)
This paper illustrates the similarity between Thai and Laotian, and between Malay and Indonesian, based on an investigation of raw parallel data from the Asian Language Treebank. The cross-lingual similarity is investigated and demonstrated with metrics of token correspondence and token order, based on several standard statistical machine translation techniques. The similarity shown in this study suggests the possibility of harmonious annotation and processing of these language pairs in future development.
2015
NICT at WAT 2015
Chenchen Ding | Masao Utiyama | Eiichiro Sumita
Proceedings of the 2nd Workshop on Asian Translation (WAT2015)
Improving fast_align by Reordering
Chenchen Ding | Masao Utiyama | Eiichiro Sumita
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing
2014
Word Order Does NOT Differ Significantly Between Chinese and Japanese
Chenchen Ding | Masao Utiyama | Eiichiro Sumita | Mikio Yamamoto
Proceedings of the 1st Workshop on Asian Translation (WAT2014)
Document-level re-ranking with soft lexical and semantic features for statistical machine translation
Chenchen Ding | Masao Utiyama | Eiichiro Sumita
Proceedings of the 11th Conference of the Association for Machine Translation in the Americas: MT Researchers Track
We introduce two document-level features to polish baseline sentence-level translations generated by a state-of-the-art statistical machine translation (SMT) system. One feature uses the word-embedding technique to model the relation between a sentence and its context on the target side; the other is a crisp document-level token-type ratio of target-side translations for source-side words, modeling lexical consistency in translation. The weights of the introduced features are tuned to optimize the sentence- and document-level metrics simultaneously on the basis of Pareto optimality. Experimental results on two different schemes with different corpora illustrate that the proposed approach can efficiently and stably integrate document-level information into a sentence-level SMT system. The best improvements were approximately 0.5 BLEU on test sets with statistical significance.
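The lexical-consistency feature can be pictured as follows: pool each source word's translations over the whole document and take the type/token ratio, where a lower ratio means more consistent lexical choice. A minimal sketch, assuming word-aligned (source, target) pairs are extracted elsewhere.

```python
from collections import defaultdict

def doc_token_type_ratio(aligned_pairs):
    """aligned_pairs: (source_word, target_translation) pairs over one document.
    Returns the average per-source-word type/token ratio of translations."""
    translations = defaultdict(list)
    for src, tgt in aligned_pairs:
        translations[src].append(tgt)
    ratios = [len(set(ts)) / len(ts) for ts in translations.values()]
    return sum(ratios) / len(ratios)

pairs = [("bank", "banque"), ("bank", "banque"), ("bank", "rive"),
         ("money", "argent")]
print(doc_token_type_ratio(pairs))  # (2/3 + 1/1) / 2 ≈ 0.83
```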
Empirical dependency-based head finalization for statistical Chinese-, English-, and French-to-Myanmar (Burmese) machine translation
Chenchen Ding | Ye Kyaw Thu | Masao Utiyama | Andrew Finch | Eiichiro Sumita
Proceedings of the 11th International Workshop on Spoken Language Translation: Papers
We conduct dependency-based head finalization for statistical machine translation (SMT) for Myanmar (Burmese). Although Myanmar is an understudied language, linguistically it is a head-final language with syntax similar to Japanese and Korean, so applying the efficient techniques of Japanese and Korean processing to Myanmar is a natural idea. Our approach combines two methods: the first is head finalization for English-to-Japanese translation based on a head-driven phrase structure grammar (HPSG); the second is dependency-based pre-ordering originally designed for English-to-Korean translation. We experiment on Chinese-, English-, and French-to-Myanmar translation, using a statistical pre-ordering approach as a comparison method. Experimental results show that the dependency-based head finalization consistently improved a baseline SMT system across different source languages and different segmentation schemes for the Myanmar language.
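Head finalization itself is a simple tree transformation: every head is moved after all of its dependents. The sketch below reorders a dependency parse given as a head-index array, illustrating the idea rather than the full system.

```python
def head_finalize(heads):
    """heads[i] = 1-based index of token (i+1)'s head, 0 for the root.
    Returns a token order in which every head follows all its dependents."""
    children = {i: [] for i in range(len(heads) + 1)}
    for i, h in enumerate(heads, start=1):
        children[h].append(i)          # dependents kept in surface order

    order = []
    def visit(node):
        for child in children[node]:
            visit(child)
        if node != 0:
            order.append(node)         # emit the head after its dependents
    visit(0)
    return order

# "she ate an apple": heads of (she, ate, an, apple) = (2, 0, 4, 2)
print(head_finalize([2, 0, 4, 2]))    # [1, 3, 4, 2] -> "she an apple ate"
```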
Dependency Tree Abstraction for Long-Distance Reordering in Statistical Machine Translation
Chenchen Ding | Yuki Arase
Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics
2013
An Unsupervised Parameter Estimation Algorithm for a Generative Dependency N-gram Language Model
Chenchen Ding | Mikio Yamamoto
Proceedings of the Sixth International Joint Conference on Natural Language Processing
2011
Long-distance hierarchical structure transformation rules utilizing function words
Chenchen Ding | Takashi Inui | Mikio Yamamoto
Proceedings of the 8th International Workshop on Spoken Language Translation: Evaluation Campaign
In this paper, we propose structure transformation rules for statistical machine translation that are lexicalized only by function words. Although such rules can be extracted from an aligned parallel corpus simply as ordinary phrase pairs, their structure is hierarchical and can thus be used in a hierarchical translation system. In addition, structure transformation rules can account for long-distance reordering, allowing more than two phrases to be moved simultaneously. The rule set is used as a core module in our hierarchical model together with two other modules, namely a basic reordering module and an optional gap-phrase module. Our model is considerably more compact and produces slightly higher BLEU scores than the original hierarchical phrase-based model in Japanese-English translation on the parallel corpus of the NTCIR-7 patent translation task.