Koichi Takeda


2022

Automating Interlingual Homograph Recognition with Parallel Sentences
Yi Han | Ryohei Sasano | Koichi Takeda
Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022

Interlingual homographs are words that are spelled the same but have different meanings across languages. Distinguishing interlingual homographs from other form-identical words generally requires linguistic knowledge and extensive annotation work. In this paper, we propose an automatic interlingual homograph recognition method based on cross-lingual word embedding similarity and the co-occurrence of form-identical words in parallel sentences. We conduct experiments with various off-the-shelf language models combined with cross-lingual alignment operations and co-occurrence metrics on the Chinese-Japanese and English-Dutch language pairs. Experimental results demonstrate that our proposed method makes accurate and consistent predictions across languages.
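
As a rough illustration of the two signals described in the abstract, the sketch below (not from the paper) scores a form-identical word by cross-lingual embedding cosine similarity and by how often it co-occurs with itself in parallel sentence pairs. The function names, the weighting, and the assumption of pre-aligned embeddings are illustrative, not the authors' exact formulation.

```python
# Minimal sketch: score a form-identical word for "interlingual homograph vs. cognate"
# by combining (1) cross-lingual embedding similarity and (2) how often the word
# co-occurs with itself in aligned parallel sentences. Weighting and inputs are
# illustrative assumptions, not the paper's exact method.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def cooccurrence_rate(word, parallel_pairs):
    """Fraction of sentence pairs containing the word on the source side
    that also contain the same surface form on the target side."""
    src_hits = [(s, t) for s, t in parallel_pairs if word in s]
    if not src_hits:
        return 0.0
    return sum(1 for s, t in src_hits if word in t) / len(src_hits)

def homograph_score(word, emb_lang1, emb_lang2, parallel_pairs, alpha=0.5):
    """Lower score = more homograph-like (dissimilar meanings, rarely co-occurring)."""
    sim = cosine(emb_lang1[word], emb_lang2[word])   # cross-lingually aligned embeddings
    cooc = cooccurrence_rate(word, parallel_pairs)   # tokenized parallel sentence pairs
    return alpha * sim + (1 - alpha) * cooc
```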

Comparison and Combination of Sentence Embeddings Derived from Different Supervision Signals
Hayato Tsukagoshi | Ryohei Sasano | Koichi Takeda
Proceedings of the 11th Joint Conference on Lexical and Computational Semantics

There have been many successful applications of sentence embedding methods. However, it is not well understood what properties are captured in the resulting sentence embeddings depending on the supervision signals. In this paper, we focus on two types of sentence embedding methods with similar architectures and tasks: one fine-tunes pre-trained language models on the natural language inference task, and the other fine-tunes pre-trained language models on the task of predicting a word from its definition sentence, and we investigate their properties. Specifically, we compare their performance on semantic textual similarity (STS) tasks using STS datasets partitioned from two perspectives: 1) the source of the sentences and 2) the superficial similarity of the sentence pairs, and we also compare their performance on downstream and probing tasks. Furthermore, we combine the two methods and demonstrate that the combination yields substantially better performance than either method alone on unsupervised STS tasks and downstream tasks.
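
One simple way to realize a combination of two sentence embedding methods is to concatenate their L2-normalized vectors before computing STS cosine similarity. The sketch below assumes the two embeddings of each sentence are already computed; it mirrors the combination idea only at a high level and is not necessarily the paper's exact scheme.

```python
# Sketch: combine two kinds of sentence embeddings (e.g., NLI-supervised and
# definition-supervised) by normalizing and concatenating them, then score an
# STS pair by cosine similarity of the combined vectors.
import numpy as np

def normalize(v):
    return v / (np.linalg.norm(v) + 1e-12)

def combine(emb_a, emb_b):
    """Concatenate two normalized embeddings of the same sentence."""
    return np.concatenate([normalize(emb_a), normalize(emb_b)])

def sts_similarity(sent1_embs, sent2_embs):
    """sent*_embs: pair (embedding_from_method_A, embedding_from_method_B)."""
    u = normalize(combine(*sent1_embs))
    v = normalize(combine(*sent2_embs))
    return float(np.dot(u, v))
```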

Leveraging Three Types of Embeddings from Masked Language Models in Idiom Token Classification
Ryosuke Takahashi | Ryohei Sasano | Koichi Takeda
Proceedings of the 11th Joint Conference on Lexical and Computational Semantics

Many linguistic expressions have both idiomatic and literal interpretations, and the automatic distinction between these two interpretations has been studied for decades. Recent research has shown that contextualized word embeddings derived from masked language models (MLMs) give promising results for idiom token classification. This indicates that a contextualized word embedding alone contains information about whether the word is being used in a literal sense. However, we believe that more types of information can be derived from MLMs and that leveraging such information can improve idiom token classification. In this paper, we leverage three types of embeddings from MLMs: uncontextualized token embeddings and masked token embeddings in addition to the standard contextualized word embeddings, and we show that the newly added embeddings significantly improve idiom token classification on both English and Japanese datasets.
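
A minimal sketch of how the three embedding types could be extracted with Hugging Face transformers and a BERT model is shown below; the helper function and variable names are my own, and the paper's actual extraction and classifier setup may differ.

```python
# Sketch: extract three kinds of embeddings for a target word with a BERT MLM --
# (1) the contextualized embedding, (2) the uncontextualized (static input) token
# embedding, and (3) the masked-token embedding obtained by replacing the target
# word with [MASK]. Names here are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def three_embeddings(sentence_tokens, target_index):
    # Contextualized embedding of the target word in the original sentence.
    enc = tokenizer(sentence_tokens, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]
    pos = enc.word_ids(0).index(target_index)      # first subword of the target word
    contextualized = hidden[pos]

    # Uncontextualized (static) embedding from the input embedding matrix.
    target_id = enc["input_ids"][0][pos]
    uncontextualized = model.get_input_embeddings().weight[target_id]

    # Masked-token embedding: replace the target subword with [MASK] and re-encode.
    masked_ids = enc["input_ids"].clone()
    masked_ids[0][pos] = tokenizer.mask_token_id
    with torch.no_grad():
        masked_hidden = model(input_ids=masked_ids,
                              attention_mask=enc["attention_mask"]).last_hidden_state[0]
    return contextualized, uncontextualized, masked_hidden[pos]
```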

2021

Verb Sense Clustering using Contextualized Word Representations for Semantic Frame Induction
Kosuke Yamada | Ryohei Sasano | Koichi Takeda
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

Self-Guided Curriculum Learning for Neural Machine Translation
Lei Zhou | Liang Ding | Kevin Duh | Shinji Watanabe | Ryohei Sasano | Koichi Takeda
Proceedings of the 18th International Conference on Spoken Language Translation (IWSLT 2021)

In supervised learning, a well-trained model should be able to recover the ground truth accurately, i.e., the predicted labels are expected to resemble the ground-truth labels as closely as possible. Inspired by this, we formulate a difficulty criterion based on the recovery degrees of training examples. Motivated by the intuition that, after skimming through the training corpus, the neural machine translation (NMT) model “knows” how to schedule a suitable curriculum according to learning difficulty, we propose a self-guided curriculum learning strategy that encourages the NMT model to learn from easy to hard on the basis of recovery degrees. Specifically, we adopt the sentence-level BLEU score as the proxy for recovery degree. Experimental results on translation benchmarks including WMT14 English-German and WMT17 Chinese-English demonstrate that our proposed method considerably improves the recovery degree, thus consistently improving translation performance.
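
The sketch below illustrates only the difficulty criterion: sentence-level BLEU (via sacrebleu) between a baseline model's own translation and the reference serves as the recovery degree, and training pairs are ordered from easy to hard. The paper's actual curriculum schedule is more involved than a single sort.

```python
# Sketch: sentence-level BLEU as a "recovery degree" proxy, then an easy-to-hard
# ordering of the training corpus. Only the difficulty criterion is shown here.
import sacrebleu

def recovery_degree(hypothesis, reference):
    """Sentence-level BLEU of the model's own translation against the reference."""
    return sacrebleu.sentence_bleu(hypothesis, [reference]).score

def order_by_curriculum(training_pairs, baseline_translations):
    """training_pairs: list of (source, reference); baseline_translations: model outputs."""
    scored = [
        (recovery_degree(hyp, ref), src, ref)
        for (src, ref), hyp in zip(training_pairs, baseline_translations)
    ]
    scored.sort(key=lambda x: -x[0])   # well-recovered (easy) examples first
    return [(src, ref) for _, src, ref in scored]
```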

DefSent: Sentence Embeddings using Definition Sentences
Hayato Tsukagoshi | Ryohei Sasano | Koichi Takeda
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)

Sentence embedding methods using natural language inference (NLI) datasets have been successfully applied to various tasks. However, these methods are available only for limited languages because they rely heavily on large NLI datasets. In this paper, we propose DefSent, a sentence embedding method that uses definition sentences from a word dictionary. Since dictionaries are available for many languages, DefSent is more broadly applicable than methods using NLI datasets, without constructing additional datasets. We demonstrate that DefSent performs comparably on unsupervised semantic textual similarity (STS) tasks and slightly better on SentEval tasks than the methods using large NLI datasets. Our code is publicly available at https://github.com/hpprc/defsent.
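
A hedged sketch of a DefSent-style training step is given below: a definition sentence is encoded with an MLM, mean-pooled into a sentence embedding, and the MLM's word-prediction head is reused to predict the defined word. The pooling choice, the example data, and the training details here are simplifications rather than the exact published setup.

```python
# Sketch: encode a dictionary definition, mean-pool it into a sentence embedding,
# and reuse the MLM head to predict the defined word. Simplified illustration only.
import torch
from transformers import AutoTokenizer, BertForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

def defsent_loss(definition: str, defined_word: str):
    enc = tokenizer(definition, return_tensors="pt")
    hidden = model.bert(**enc).last_hidden_state            # contextual token vectors
    mask = enc["attention_mask"].unsqueeze(-1)
    pooled = (hidden * mask).sum(1) / mask.sum(1)            # mean-pooled sentence embedding
    logits = model.cls(pooled.unsqueeze(1)).squeeze(1)       # reuse the MLM prediction head
    target = tokenizer.convert_tokens_to_ids(defined_word)
    return torch.nn.functional.cross_entropy(logits, torch.tensor([target]))

# One optimization step on a single (word, definition) pair (hypothetical example).
loss = defsent_loss("a domesticated carnivorous mammal kept as a pet", "dog")
loss.backward()
```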

Semantic Frame Induction using Masked Word Embeddings and Two-Step Clustering
Kosuke Yamada | Ryohei Sasano | Koichi Takeda
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)

Recent studies on semantic frame induction show that relatively high performance can be achieved by clustering-based methods with contextualized word embeddings. However, these methods have two potential drawbacks: they focus too much on the superficial information of the frame-evoking verb, and they tend to divide the instances of the same verb into too many different frame clusters. To overcome these drawbacks, we propose a semantic frame induction method that uses masked word embeddings and two-step clustering. Through experiments on the English FrameNet data, we demonstrate that using masked word embeddings is effective for avoiding excessive reliance on the surface information of frame-evoking verbs and that two-step clustering yields a more appropriate number of frame clusters for the instances of the same verb.
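
The sketch below illustrates the two-step clustering idea with scikit-learn, taking masked-verb embeddings as input: instances are first clustered within each verb, and the resulting per-verb cluster centroids are then clustered across verbs into frame clusters. The clustering criteria and thresholds are illustrative assumptions, not the paper's settings.

```python
# Sketch: two-step clustering over masked-verb embeddings. Step 1 clusters the
# instances of each verb; step 2 clusters the per-verb centroids into frames.
import numpy as np
from collections import defaultdict
from sklearn.cluster import AgglomerativeClustering

def two_step_clustering(instances, intra_threshold=0.5, inter_threshold=0.5):
    """instances: list of (verb, embedding) pairs, e.g., embeddings of the masked verb."""
    by_verb = defaultdict(list)
    for verb, emb in instances:
        by_verb[verb].append(emb)

    # Step 1: cluster within each verb and keep one centroid per within-verb cluster.
    centroids, owners = [], []
    for verb, embs in by_verb.items():
        X = np.vstack(embs)
        if len(embs) == 1:
            labels = np.zeros(1, dtype=int)
        else:
            labels = AgglomerativeClustering(
                n_clusters=None, distance_threshold=intra_threshold).fit_predict(X)
        for c in np.unique(labels):
            centroids.append(X[labels == c].mean(axis=0))
            owners.append(verb)

    # Step 2: cluster the per-verb centroids across verbs into frame clusters.
    C = np.vstack(centroids)
    if len(C) == 1:
        return [(owners[0], 0)]
    frame_labels = AgglomerativeClustering(
        n_clusters=None, distance_threshold=inter_threshold).fit_predict(C)
    return list(zip(owners, frame_labels))
```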

Transformer-based Lexically Constrained Headline Generation
Kosuke Yamada | Yuta Hitomi | Hideaki Tamori | Ryohei Sasano | Naoaki Okazaki | Kentaro Inui | Koichi Takeda
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

This paper explores a variant of automatic headline generation in which the generated headline is required to include a given phrase, such as a company or product name. Previous Transformer-based methods generate a headline including a given phrase by providing the encoder with additional information corresponding to that phrase. However, these methods cannot always include the phrase in the generated headline. Inspired by previous RNN-based methods that generate token sequences backward and forward from the given phrase, we propose a simple Transformer-based method that is guaranteed to include the given phrase in a high-quality generated headline. We also consider a new headline generation strategy that takes advantage of the controllable generation order of the Transformer. Our experiments with the Japanese News Corpus demonstrate that our methods, which are guaranteed to include the phrase in the generated headline, achieve ROUGE scores comparable to previous Transformer-based methods. We also show that our generation strategy performs better than previous strategies.

2020

Zero-Shot Translation Quality Estimation with Explicit Cross-Lingual Patterns
Lei Zhou | Liang Ding | Koichi Takeda
Proceedings of the Fifth Conference on Machine Translation

This paper describes our submission to the WMT 2020 Shared Task on Sentence-Level Direct Assessment, Quality Estimation (QE). In this study, we empirically reveal a mismatching issue that arises when BERTScore (Zhang et al., 2020) is directly adopted for QE: many mismatching errors occur between the source sentence and the translated candidate sentence under token pairwise similarity. In response to this issue, we propose to expose explicit cross-lingual patterns, e.g., word alignments and generation scores, to our proposed zero-shot models. Experiments show that our proposed QE model with explicit cross-lingual patterns alleviates the mismatching issue, thereby improving performance. Encouragingly, our zero-shot QE method achieves performance comparable to supervised QE methods and even outperforms the supervised counterpart on 2 out of 6 directions. We expect our work to shed light on improving zero-shot QE models.
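
For context, the sketch below shows the BERTScore-style token pairwise similarity that the baseline relies on: each token is greedily matched to its most similar counterpart on the other side and the maxima are averaged into an F1-style score. Mismatched greedy pairs are exactly the mismatching errors discussed above; the proposed model with cross-lingual patterns is not reproduced here.

```python
# Sketch: BERTScore-style greedy token-pairwise similarity between source and
# hypothesis token embeddings (assumed precomputed), averaged into an F1 score.
import numpy as np

def greedy_similarity(src_embs, hyp_embs):
    """src_embs, hyp_embs: (n_tokens, dim) arrays of contextual token embeddings."""
    S = src_embs / np.linalg.norm(src_embs, axis=1, keepdims=True)
    H = hyp_embs / np.linalg.norm(hyp_embs, axis=1, keepdims=True)
    sim = S @ H.T                          # cosine similarity for every token pair
    recall = sim.max(axis=1).mean()        # each source token -> best hypothesis token
    precision = sim.max(axis=0).mean()     # each hypothesis token -> best source token
    return 2 * precision * recall / (precision + recall + 1e-12)
```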

Development of a Medical Incident Report Corpus with Intention and Factuality Annotation
Hongkuan Zhang | Ryohei Sasano | Koichi Takeda | Zoie Shui-Yee Wong
Proceedings of the Twelfth Language Resources and Evaluation Conference

Medical incident reports (MIRs) are documents that record what happened in a medical incident. A typical MIR consists of two sections: a structured categorical part and an unstructured text part. Most texts in MIRs describe what medication was intended to be given and what was actually given, because what happens in an incident is largely due to discrepancies between intended and actual medications. Recognizing clinicians' intentions and the factuality of medication is essential for understanding the causes of medical incidents and avoiding similar incidents in the future. Therefore, we are developing an MIR corpus annotated with intention and factuality as well as with medication entities and their relations. In this paper, we present our annotation scheme with respect to the definition of the medication entities that we take into account, the method for annotating the relations between entities, and the details of the intention and factuality annotation. We then report on the annotated corpus, which consists of 349 Japanese medical incident reports.

Sequential Span Classification with Neural Semi-Markov CRFs for Biomedical Abstracts
Kosuke Yamada | Tsutomu Hirao | Ryohei Sasano | Koichi Takeda | Masaaki Nagata
Findings of the Association for Computational Linguistics: EMNLP 2020

Dividing biomedical abstracts into segments with rhetorical roles is essential for supporting researchers’ information access in the biomedical domain. Conventional methods have regarded the task as a sequence labeling task based on sequential sentence classification, i.e., they assign a rhetorical label to each sentence by considering its context in the abstract. However, these methods have a critical problem: they are prone to mislabeling longer runs of consecutive sentences that share the same rhetorical label. To tackle this problem, we propose sequential span classification, which assigns a rhetorical label not to a single sentence but to a span consisting of consecutive sentences. Accordingly, we introduce neural semi-Markov Conditional Random Fields to assign labels to such spans by considering all possible spans of various lengths. Experimental results on the PubMed 20k RCT and NICTA-PIBOSO datasets demonstrate that our proposed method achieves the best micro sentence-F1 score as well as the best micro span-F1 score.
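
A minimal sketch of the semi-Markov (span-level) Viterbi decoding underlying such a model is shown below: given span scores and label-transition scores from some neural scorer (assumed as inputs), it finds the best segmentation of an abstract into labeled spans of consecutive sentences. It illustrates only the dynamic program over spans, not the full CRF training.

```python
# Sketch: semi-Markov Viterbi decoding. span_score(start, end, label) and
# trans_score(prev_label, label) are assumed to come from a neural scorer;
# prev_label is None at the start of the abstract.
def semi_markov_decode(n_sents, labels, span_score, trans_score, max_len=10):
    """best[(end, label)] = best score for sentences [0, end) ending in a span with `label`."""
    NEG = float("-inf")
    best = {(0, None): (0.0, None)}
    for end in range(1, n_sents + 1):
        for label in labels:
            top = (NEG, None)
            for start in range(max(0, end - max_len), end):
                for prev in ([None] if start == 0 else labels):
                    if (start, prev) not in best:
                        continue
                    s = (best[(start, prev)][0]
                         + span_score(start, end, label)
                         + trans_score(prev, label))
                    if s > top[0]:
                        top = (s, (start, prev))
            best[(end, label)] = top
    # Pick the best final label and follow backpointers to recover labeled spans.
    label = max(labels, key=lambda l: best[(n_sents, l)][0])
    spans, pos = [], n_sents
    while pos > 0:
        start, prev = best[(pos, label)][1]
        spans.append((start, pos, label))
        pos, label = start, prev
    return list(reversed(spans))
```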

2019

Incorporating Textual Information on User Behavior for Personality Prediction
Kosuke Yamada | Ryohei Sasano | Koichi Takeda
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop

Several recent studies have shown that the textual information of user posts and user behaviors, such as liking and sharing specific posts, is useful for predicting the personality of social media users. However, less attention has been paid to the textual information derived from user behaviors. In this paper, we investigate the effect of the textual information of user behaviors on personality prediction. Our experiments on the personality prediction of Twitter users show that the textual information of user behaviors is more useful than the co-occurrence information of the user behaviors. They also show that taking user behaviors into account is crucial for predicting the personality of users who do not post frequently.

2002

Sentence generation for pattern-based machine translation
Koichi Takeda
Proceedings of the 9th Conference on Theoretical and Methodological Issues in Machine Translation of Natural Languages: Papers

1998

A Method for Relating Multiple Newspaper Articles by Using Graphs, and Its Application to Webcasting
Naohiko Uramoto | Koichi Takeda
COLING 1998 Volume 2: The 17th International Conference on Computational Linguistics

A Pattern-based Machine Translation System Extended by Example-based Processing
Hideo Watanabe | Koichi Takeda
COLING 1998 Volume 2: The 17th International Conference on Computational Linguistics

A Method for Relating Multiple Newspaper Articles by Using Graphs, and Its Application to Webcasting
Naohiko Uramoto | Koichi Takeda
36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, Volume 2

A Pattern-based Machine Translation System Extended by Example-based Processing
Hideo Watanabe | Koichi Takeda
36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, Volume 2

1996

Pattern-Based Machine Translation
Koichi Takeda
COLING 1996 Volume 2: The 16th International Conference on Computational Linguistics

Pattern-Based Context-Free Grammars for Machine Translation
Koichi Takeda
34th Annual Meeting of the Association for Computational Linguistics

1994

Tricolor DAGs for Machine Translation
Koichi Takeda
32nd Annual Meeting of the Association for Computational Linguistics

Portable Knowledge Sources for Machine Translation
Koichi Takeda
COLING 1994 Volume 1: The 15th International Conference on Computational Linguistics

1993

An Object-Oriented Implementation of Machine Translation Systems
Koichi Takeda
Proceedings of the Fifth Conference on Theoretical and Methodological Issues in Machine Translation of Natural Languages

1992

Shalt2 - a Symmetric Machine Translation System with Conceptual Transfer
Koichi Takeda | Naohiko Uramoto | Tetsuya Nasukawa | Taijiro Tsutsumi
COLING 1992 Volume 3: The 14th International Conference on Computational Linguistics

1986

CRITAC - A Japanese Text Proofreading System
Koichi Takeda | Tetsunosuke Fujisaki | Emiko Suzuki
Coling 1986 Volume 1: The 11th International Conference on Computational Linguistics