Raymond Hendy Susanto


2021

Rakuten’s Participation in WAT 2021: Examining the Effectiveness of Pre-trained Models for Multilingual and Multimodal Machine Translation
Raymond Hendy Susanto | Dongzhe Wang | Sunil Yadav | Mausam Jain | Ohnmar Htun
Proceedings of the 8th Workshop on Asian Translation (WAT2021)

This paper introduces our neural machine translation systems’ participation in the WAT 2021 shared translation tasks (team ID: sakura). We participated in the (i) NICT-SAP, (ii) Japanese-English multimodal translation, (iii) Multilingual Indic, and (iv) Myanmar-English translation tasks. Multilingual approaches such as mBART (Liu et al., 2020) are capable of pre-training a complete, multilingual sequence-to-sequence model through denoising objectives, making them a great starting point for building multilingual translation systems. Our main focus in this work is to investigate the effectiveness of multilingual fine-tuning of such a multilingual language model on various translation tasks, including low-resource, multimodal, and mixed-domain translation. We further explore a multimodal approach based on universal visual representation (Zhang et al., 2019) and compare its performance against a unimodal approach based on mBART alone.

2020

Lexically Constrained Neural Machine Translation with Levenshtein Transformer
Raymond Hendy Susanto | Shamil Chollampatt | Liling Tan
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

This paper proposes a simple and effective algorithm for incorporating lexical constraints in neural machine translation. Previous work either required re-training existing models with the lexical constraints or incorporating them during beam search decoding with significantly higher computational overheads. Leveraging the flexibility and speed of a recently proposed Levenshtein Transformer model (Gu et al., 2019), our method injects terminology constraints at inference time without any impact on decoding speed. Our method does not require any modification to the training procedure and can be easily applied at runtime with custom dictionaries. Experiments on English-German WMT datasets show that our approach improves an unconstrained baseline and previous approaches.

Can Automatic Post-Editing Improve NMT?
Shamil Chollampatt | Raymond Hendy Susanto | Liling Tan | Ewa Szymanska
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)

Automatic post-editing (APE) aims to improve machine translations, thereby reducing human post-editing effort. APE has had notable success when used with statistical machine translation (SMT) systems but has not been as successful over neural machine translation (NMT) systems. This has raised questions on the relevance of the APE task in the current scenario. However, the training of APE models has been heavily reliant on large-scale artificial corpora combined with only limited human post-edited data. We hypothesize that APE models have been underperforming in improving NMT translations due to the lack of adequate supervision. To ascertain our hypothesis, we compile a larger corpus of human post-edits of English to German NMT. We empirically show that a state-of-the-art neural APE model trained on this corpus can significantly improve a strong in-domain NMT system, challenging the current understanding in the field. We further investigate the effects of varying training data sizes, using artificial training data, and domain specificity for the APE task. We release this new corpus under a CC BY-NC-SA 4.0 license at https://github.com/shamilcm/pedra.

2019

Sarah’s Participation in WAT 2019
Raymond Hendy Susanto | Ohnmar Htun | Liling Tan
Proceedings of the 6th Workshop on Asian Translation

This paper describes our MT systems’ participation in WAT 2019. We participated in the (i) Patent, (ii) Timely Disclosure, (iii) Newswire and (iv) Mixed-domain tasks. Our main focus is to explore how similar Transformer models perform on various tasks. We observed that for tasks with smaller datasets, our best model setups are shallower models with fewer attention heads. We investigated practical issues in NMT that often appear in production settings, such as coping with multilinguality and simplifying the pre- and post-processing pipeline in deployment.

2017

Neural Architectures for Multilingual Semantic Parsing
Raymond Hendy Susanto | Wei Lu
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

In this paper, we address semantic parsing in a multilingual context. We train one multilingual model that is capable of parsing natural language sentences from multiple different languages into their corresponding formal semantic representations. We extend an existing sequence-to-tree model to a multi-task learning framework which shares the decoder for generating semantic representations. We report evaluation results on the multilingual GeoQuery corpus and introduce a new multilingual version of the ATIS corpus.

2016

Learning to Capitalize with Character-Level Recurrent Neural Networks: An Empirical Study
Raymond Hendy Susanto | Hai Leong Chieu | Wei Lu
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing

2014

System Combination for Grammatical Error Correction
Raymond Hendy Susanto | Peter Phandi | Hwee Tou Ng
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)

Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task
Hwee Tou Ng | Siew Mei Wu | Ted Briscoe | Christian Hadiwinoto | Raymond Hendy Susanto | Christopher Bryant
Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task

The CoNLL-2014 Shared Task on Grammatical Error Correction
Hwee Tou Ng | Siew Mei Wu | Ted Briscoe | Christian Hadiwinoto | Raymond Hendy Susanto | Christopher Bryant
Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task

2012

Rule-based Machine Translation between Indonesian and Malaysian
Raymond Hendy Susanto | Septina Dian Larasati | Francis M. Tyers
Proceedings of the 3rd Workshop on South and Southeast Asian Natural Language Processing