Ryuto Konno


2021

Pseudo Zero Pronoun Resolution Improves Zero Anaphora Resolution
Ryuto Konno | Shun Kiyono | Yuichiroh Matsubayashi | Hiroki Ouchi | Kentaro Inui
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

Masked language models (MLMs) have contributed to drastic performance improvements in zero anaphora resolution (ZAR). To further improve this approach, we make two proposals in this study. The first is a new pretraining task that trains MLMs on anaphoric relations with explicit supervision; the second is a new finetuning method that remedies a notorious issue, the pretrain-finetune discrepancy. Our experiments on Japanese ZAR demonstrate that the two proposals boost state-of-the-art performance, and our detailed analysis provides new insights into the remaining challenges.
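To give a rough feel for training an MLM on anaphoric relations, the sketch below masks a repeated mention and scores every context token as a candidate antecedent via dot products with the masked position's hidden state. This is a minimal illustration only: the encoder (bert-base-cased), the English example, and the dot-product scoring are assumptions made here, not the paper's actual Japanese setup or training objective.

```python
# Minimal sketch: score context tokens as antecedent candidates for a
# masked mention with a BERT-style encoder. Model name, English example,
# and scoring scheme are illustrative assumptions, not the paper's setup.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModel.from_pretrained("bert-base-cased")

# "teacher" is mentioned, then masked on repetition (a pseudo zero pronoun).
text = "The teacher praised the student because the [MASK] was proud."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    hidden = model(**inputs).last_hidden_state[0]  # (seq_len, dim)

mask_idx = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0, 0]
scores = hidden @ hidden[mask_idx]        # similarity to every position
scores[mask_idx] = float("-inf")          # a token cannot be its own antecedent
probs = torch.softmax(scores, dim=0)      # selection distribution over tokens

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
best = probs.argmax().item()
print(f"predicted antecedent: {tokens[best]} (p={probs[best].item():.2f})")
```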

2020

Tohoku-AIP-NTT at WMT 2020 News Translation Task
Shun Kiyono | Takumi Ito | Ryuto Konno | Makoto Morishita | Jun Suzuki
Proceedings of the Fifth Conference on Machine Translation

In this paper, we describe the submission of Tohoku-AIP-NTT to the WMT'20 news translation task. We participated in two language pairs and four language directions: English ↔ German and English ↔ Japanese. Our system builds on techniques that are already widely adopted in translation tasks, such as back-translation and fine-tuning. We also attempted to develop new methods for synthetic data filtering and reranking; however, these methods turned out to be ineffective, providing no significant improvement over the baseline. We analyze these negative results to provide insights for future studies.
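Back-translation, mentioned above, creates synthetic parallel data by translating target-language monolingual text back into the source language with a reverse-direction model. The sketch below is a minimal illustration using an off-the-shelf MarianMT model; the model choice (Helsinki-NLP/opus-mt-de-en) and the toy sentences are assumptions for illustration, not the system described in the paper.

```python
# Minimal back-translation sketch for an English->German system:
# translate German monolingual text with a reverse (de->en) model,
# then pair the synthetic English with the genuine German as extra
# training data. Model and sentences are illustrative assumptions.
from transformers import pipeline

de_to_en = pipeline("translation", model="Helsinki-NLP/opus-mt-de-en")

german_monolingual = [
    "Der Zug nach Berlin hat heute Verspätung.",
    "Die Konferenz findet nächste Woche statt.",
]

synthetic_pairs = []
for de_sentence in german_monolingual:
    en_sentence = de_to_en(de_sentence)[0]["translation_text"]
    # Synthetic source (English) paired with genuine target (German).
    synthetic_pairs.append((en_sentence, de_sentence))

for src, tgt in synthetic_pairs:
    print(f"SRC (synthetic): {src}\nTGT (genuine):   {tgt}\n")
```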

An Empirical Study of Contextual Data Augmentation for Japanese Zero Anaphora Resolution
Ryuto Konno | Yuichiroh Matsubayashi | Shun Kiyono | Hiroki Ouchi | Ryo Takahashi | Kentaro Inui
Proceedings of the 28th International Conference on Computational Linguistics

One critical issue in zero anaphora resolution (ZAR) is the scarcity of labeled data. This study explores how effectively this problem can be alleviated by data augmentation. We adopt a state-of-the-art data augmentation method, contextual data augmentation (CDA), which generates labeled training instances using a pretrained language model. CDA has been reported to work well for several other natural language processing tasks, including text classification and machine translation. This study addresses two underexplored issues of CDA: how to reduce the computational cost of data augmentation and how to ensure the quality of the generated data. We also propose two methods to adapt CDA to ZAR: [MASK]-based augmentation and linguistically-controlled masking. The experimental results on Japanese ZAR show that our methods contribute to both accuracy gains and computational cost reduction. Closer analysis reveals that the proposed methods improve the quality of the augmented training data compared with conventional CDA.
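As a rough picture of MLM-based contextual augmentation, the sketch below masks one token in a labeled sentence and takes the MLM's top fill-in candidates as new training variants that reuse the original annotation. The English model (bert-base-uncased), the example sentence, and the single-token masking are assumptions for illustration; the paper's method targets Japanese ZAR annotations with linguistically-controlled masking.

```python
# Minimal contextual-data-augmentation sketch: replace one token with
# [MASK], let a pretrained MLM propose in-context substitutes, and reuse
# the original sentence's annotation for each variant. Model, sentence,
# and masking choice are illustrative assumptions.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

original = "the chef tasted the soup before serving it"
# Mask one content word; the MLM proposes contextual substitutes.
masked = original.replace("soup", fill_mask.tokenizer.mask_token, 1)

for candidate in fill_mask(masked, top_k=5):
    # Each filled-in sentence becomes a new training instance that
    # keeps the original sentence's label unchanged.
    print(f"{candidate['score']:.3f}  {candidate['sequence']}")
```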

Instance-Based Learning of Span Representations: A Case Study through Named Entity Recognition
Hiroki Ouchi | Jun Suzuki | Sosuke Kobayashi | Sho Yokoi | Tatsuki Kuribayashi | Ryuto Konno | Kentaro Inui
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

Interpretable rationales for model predictions play a critical role in practical applications. In this study, we develop models with an interpretable inference process for structured prediction. Specifically, we present an instance-based learning method that learns similarities between spans. At inference time, each span is assigned a class label based on similar spans in the training set, so it is easy to understand how much each training instance contributes to a prediction. Through empirical analysis of named entity recognition, we demonstrate that our method makes it possible to build models with high interpretability without sacrificing performance.
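A rough picture of instance-based span classification: embed every span, then label a query span by its nearest training spans, whose identities double as the rationale for the prediction. The sketch below uses toy random embeddings and cosine similarity; both are assumptions for illustration, not the paper's actual span encoder or similarity function.

```python
# Minimal instance-based span classification sketch: a query span takes
# the majority label of its most similar training spans, and those
# neighbors serve as the interpretable rationale. Toy random embeddings
# and cosine similarity are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
dim = 8

# Toy training set: span embeddings with gold entity labels.
train_vecs = rng.normal(size=(6, dim))
train_labels = ["PER", "PER", "ORG", "ORG", "LOC", "LOC"]

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def classify(query_vec, k=3):
    sims = np.array([cosine(query_vec, v) for v in train_vecs])
    neighbors = sims.argsort()[::-1][:k]  # k most similar training spans
    votes = [train_labels[i] for i in neighbors]
    label = max(set(votes), key=votes.count)  # majority vote
    # The neighbors themselves are the rationale for the prediction.
    return label, [(int(i), float(sims[i])) for i in neighbors]

query = train_vecs[0] + 0.1 * rng.normal(size=dim)  # near a PER span
label, rationale = classify(query)
print("predicted:", label, "| supporting instances:", rationale)
```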