Ryohei Sasano


2022

pdf
Cross-lingual Linking of Automatically Constructed Frames and FrameNet
Ryohei Sasano
Proceedings of the Thirteenth Language Resources and Evaluation Conference

A semantic frame is a conceptual structure describing an event, relation, or object along with its participants. Several semantic frame resources have been manually elaborated, and there has been much interest in the possibility of applying semantic frames designed for a particular language to other languages, which has led to the development of cross-lingual frame knowledge. However, manually developing such cross-lingual lexical resources is labor-intensive. To support the development of such resources, this paper presents an attempt at automatically linking automatically constructed frames to manually crafted frames across languages. Specifically, we link automatically constructed, example-based Japanese frames to English FrameNet using cross-lingual word embeddings and a two-stage model that first extracts candidate FrameNet frames for each Japanese frame by taking only the frame-evoking words into account, and then finds the best alignment of frames by also taking frame elements into account. Experiments using frame-annotated sentences in Japanese FrameNet indicate that our approach will facilitate the manual development of cross-lingual frame resources.
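As a rough illustration of the two-stage idea described in the abstract, the sketch below first ranks FrameNet frames for a Japanese frame using only the frame-evoking word, then re-ranks the candidates by also matching frame elements. The data structures (pre-computed cross-lingual embeddings as numpy vectors), the greedy element matching, and the weight `alpha` are hypothetical simplifications, not the paper's actual model.

```python
# Sketch of the two-stage linking idea (not the authors' code).
# Cross-lingual word embeddings are assumed to be given as numpy vectors;
# frame dictionaries and weights below are illustrative assumptions.
import numpy as np

def cos(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def stage1_candidates(evoking_vec, framenet, top_k=3):
    """Rank FrameNet frames by similarity of their lexical units to the
    Japanese frame-evoking word only."""
    scores = []
    for name, frame in framenet.items():
        lu_score = max(cos(evoking_vec, lu) for lu in frame["lexical_units"])
        scores.append((lu_score, name))
    return [name for _, name in sorted(scores, reverse=True)[:top_k]]

def stage2_align(ja_frame, candidates, framenet, alpha=0.5):
    """Re-rank the candidates by also matching frame elements (greedy best match)."""
    best_name, best_score = None, -1.0
    for name in candidates:
        frame = framenet[name]
        lu_score = max(cos(ja_frame["evoking"], lu) for lu in frame["lexical_units"])
        fe_scores = [max(cos(fe, fe_en) for fe_en in frame["elements"])
                     for fe in ja_frame["elements"]]
        fe_score = sum(fe_scores) / len(fe_scores) if fe_scores else 0.0
        score = alpha * lu_score + (1 - alpha) * fe_score
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score
```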

pdf
Automating Interlingual Homograph Recognition with Parallel Sentences
Yi Han | Ryohei Sasano | Koichi Takeda
Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022

Interlingual homographs are words that are spelled the same but have different meanings across languages. Recognizing interlingual homographs among form-identical words generally requires linguistic knowledge and massive annotation work. In this paper, we propose an automatic interlingual homograph recognition method based on cross-lingual word embedding similarity and the co-occurrence of form-identical words in parallel sentences. We conduct experiments with various off-the-shelf language models, combined with cross-lingual alignment operations and co-occurrence metrics, on the Chinese-Japanese and English-Dutch language pairs. Experimental results demonstrate that our proposed method makes accurate and consistent predictions across languages.
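The sketch below illustrates the kind of signal the abstract describes: a form-identical word whose cross-lingually aligned embeddings are dissimilar and which rarely appears on both sides of the same parallel sentence pair is flagged as an interlingual homograph. The PMI-style co-occurrence score and the thresholds are illustrative assumptions, not the paper's exact metrics.

```python
# Illustrative sketch of the scoring idea, not the paper's implementation.
# Embeddings are assumed to be pre-aligned cross-lingually; parallel_pairs is
# a list of (source_tokens, target_tokens) pairs of tokenized sentences.
import math
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def cooccurrence_score(word, parallel_pairs):
    """PMI-style score for a form-identical word appearing on both sides of
    the same parallel sentence pair."""
    n = len(parallel_pairs)
    src_hits = sum(word in s for s, _ in parallel_pairs)
    tgt_hits = sum(word in t for _, t in parallel_pairs)
    joint = sum(word in s and word in t for s, t in parallel_pairs)
    if joint == 0 or src_hits == 0 or tgt_hits == 0:
        return float("-inf")
    return math.log((joint / n) / ((src_hits / n) * (tgt_hits / n)))

def is_interlingual_homograph(vec_l1, vec_l2, word, parallel_pairs,
                              sim_th=0.4, pmi_th=0.0):
    """Low cross-lingual embedding similarity and low co-occurrence in
    parallel sentences suggest the two languages use the word differently."""
    return cosine(vec_l1, vec_l2) < sim_th and \
           cooccurrence_score(word, parallel_pairs) < pmi_th
```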

pdf
Comparison and Combination of Sentence Embeddings Derived from Different Supervision Signals
Hayato Tsukagoshi | Ryohei Sasano | Koichi Takeda
Proceedings of the 11th Joint Conference on Lexical and Computational Semantics

There have been many successful applications of sentence embedding methods. However, it is not well understood which properties are captured in the resulting sentence embeddings depending on the supervision signals. In this paper, we focus on two types of sentence embedding methods with similar architectures and tasks: one fine-tunes pre-trained language models on a natural language inference task, and the other fine-tunes pre-trained language models to predict a word from its definition sentence. We investigate their properties by comparing their performance on semantic textual similarity (STS) tasks, using STS datasets partitioned from two perspectives, 1) sentence source and 2) superficial similarity of the sentence pairs, and by comparing their performance on downstream and probing tasks. Furthermore, we combine the two methods and demonstrate that the combination yields substantially better performance than either method alone on unsupervised STS tasks and downstream tasks.
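A minimal sketch of the combination strategy: encode each sentence with an NLI-based model and a definition-based (DefSent-style) model, then concatenate the two vectors. The model names below are placeholders, not the checkpoints used in the paper, and concatenation is one simple way to combine the two embeddings.

```python
# Sketch only: combine two sentence embedding methods by concatenation.
import numpy as np
from sentence_transformers import SentenceTransformer

nli_model = SentenceTransformer("path-or-name-of-nli-finetuned-model")      # placeholder
defsent_model = SentenceTransformer("path-or-name-of-defsent-style-model")  # placeholder

def combined_embeddings(sentences):
    e_nli = nli_model.encode(sentences, convert_to_numpy=True)
    e_def = defsent_model.encode(sentences, convert_to_numpy=True)
    return np.concatenate([e_nli, e_def], axis=1)  # simple concatenation

# Usage: cosine similarity of a sentence pair under the combined embedding.
vecs = combined_embeddings(["A man is playing a guitar.", "Someone plays music."])
sim = float(np.dot(vecs[0], vecs[1]) /
            (np.linalg.norm(vecs[0]) * np.linalg.norm(vecs[1])))
print(sim)
```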

pdf
Leveraging Three Types of Embeddings from Masked Language Models in Idiom Token Classification
Ryosuke Takahashi | Ryohei Sasano | Koichi Takeda
Proceedings of the 11th Joint Conference on Lexical and Computational Semantics

Many linguistic expressions have both idiomatic and literal interpretations, and the automatic distinction between these two interpretations has been studied for decades. Recent research has shown that contextualized word embeddings derived from masked language models (MLMs) can give promising results for idiom token classification. This indicates that a contextualized word embedding alone contains information about whether the word is being used in a literal sense or not. However, we believe that more types of information can be derived from MLMs and that leveraging such information can improve idiom token classification. In this paper, we leverage three types of embeddings from MLMs: uncontextualized token embeddings and masked token embeddings, in addition to the standard contextualized word embeddings, and show that the newly added embeddings significantly improve idiom token classification on both English and Japanese datasets.
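The sketch below shows one way to extract the three embedding types with Hugging Face transformers for a single target token: the standard contextualized embedding, the uncontextualized (static) input embedding, and the hidden state obtained when the target position is masked. It is a simplified illustration (single-subword targets, no classifier), not the paper's exact pipeline.

```python
# Sketch: three embedding types from an MLM for one target token.
import torch
from transformers import AutoTokenizer, AutoModel

name = "bert-base-uncased"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)
model.eval()

def three_embeddings(sentence, target_word):
    enc = tok(sentence, return_tensors="pt")
    ids = enc["input_ids"][0].tolist()
    target_id = tok.convert_tokens_to_ids(target_word)
    pos = ids.index(target_id)  # assumes the target is a single subword token

    with torch.no_grad():
        # (1) standard contextualized embedding of the target token
        ctx = model(**enc).last_hidden_state[0, pos]
        # (2) uncontextualized (static) token embedding from the input matrix
        static = model.get_input_embeddings().weight[target_id]
        # (3) masked token embedding: hidden state at the [MASK]ed position
        masked_ids = enc["input_ids"].clone()
        masked_ids[0, pos] = tok.mask_token_id
        masked = model(input_ids=masked_ids,
                       attention_mask=enc["attention_mask"]).last_hidden_state[0, pos]
    return torch.cat([ctx, static, masked])  # feature vector for a downstream classifier

feat = three_embeddings("he finally kicked the bucket", "bucket")
print(feat.shape)
```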

pdf
Cross-Modal Similarity-Based Curriculum Learning for Image Captioning
Hongkuan Zhang | Saku Sugawara | Akiko Aizawa | Lei Zhou | Ryohei Sasano | Koichi Takeda
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing

Image captioning models require a high-level generalization ability to describe the contents of various images in words. Most existing approaches treat image–caption pairs equally during training, without considering differences in their learning difficulty. Several image captioning approaches introduce curriculum learning methods that present training data at increasing levels of difficulty. However, their difficulty measurements are based either on domain-specific features or on prior model training. In this paper, we propose a simple yet efficient difficulty measurement for image captioning: the cross-modal similarity calculated by a pretrained vision–language model. Experiments on the COCO and Flickr30k datasets show that our proposed approach achieves superior performance and convergence speed competitive with baselines, without requiring heuristics or incurring additional training costs. Moreover, the higher model performance on difficult examples and unseen data also demonstrates its generalization ability.
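The sketch below illustrates the difficulty measure: score each image–caption pair by cross-modal similarity from a pretrained vision–language model and present the most similar (easiest) pairs first. CLIP is used here as an illustrative choice of vision–language model; this is not the paper's training code.

```python
# Sketch: cross-modal similarity as a curriculum difficulty measure.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model.eval()

def similarity(image_path, caption):
    image = Image.open(image_path).convert("RGB")
    inputs = proc(text=[caption], images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        img = model.get_image_features(pixel_values=inputs["pixel_values"])
        txt = model.get_text_features(input_ids=inputs["input_ids"],
                                      attention_mask=inputs["attention_mask"])
    img = img / img.norm(dim=-1, keepdim=True)
    txt = txt / txt.norm(dim=-1, keepdim=True)
    return float((img * txt).sum())  # cosine similarity of the pair

def curriculum_order(pairs):
    """pairs: list of (image_path, caption); easy (high similarity) first."""
    return sorted(pairs, key=lambda p: similarity(*p), reverse=True)
```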

2021

pdf
Transformer-based Lexically Constrained Headline Generation
Kosuke Yamada | Yuta Hitomi | Hideaki Tamori | Ryohei Sasano | Naoaki Okazaki | Kentaro Inui | Koichi Takeda
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

This paper explores a variant of automatic headline generation in which the generated headline is required to include a given phrase, such as a company or product name. Previous methods using Transformer-based models generate a headline that includes a given phrase by providing the encoder with additional information corresponding to the phrase. However, these methods cannot always include the phrase in the generated headline. Inspired by previous RNN-based methods that generate token sequences in backward and forward directions from the given phrase, we propose a simple Transformer-based method that is guaranteed to include the given phrase in a high-quality generated headline. We also consider a new headline generation strategy that takes advantage of the controllable generation order of the Transformer. Our experiments with the Japanese News Corpus demonstrate that our methods, which are guaranteed to include the phrase in the generated headline, achieve ROUGE scores comparable to previous Transformer-based methods. We also show that our generation strategy performs better than previous strategies.

pdf
Self-Guided Curriculum Learning for Neural Machine Translation
Lei Zhou | Liang Ding | Kevin Duh | Shinji Watanabe | Ryohei Sasano | Koichi Takeda
Proceedings of the 18th International Conference on Spoken Language Translation (IWSLT 2021)

In supervised learning, a well-trained model should be able to recover the ground truth accurately, i.e., the predicted labels are expected to resemble the ground-truth labels as closely as possible. Inspired by this, we formulate a difficulty criterion based on the recovery degrees of training examples. Motivated by the intuition that, after skimming through the training corpus, the neural machine translation (NMT) model “knows” how to schedule a suitable curriculum according to learning difficulty, we propose a self-guided curriculum learning strategy that encourages the NMT model to learn from easy to hard on the basis of recovery degrees. Specifically, we adopt the sentence-level BLEU score as a proxy for recovery degree. Experimental results on translation benchmarks, including WMT14 English-German and WMT17 Chinese-English, demonstrate that our proposed method considerably improves the recovery degree, thus consistently improving translation performance.
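The sketch below shows the recovery-degree criterion in its simplest form: compute sentence-level BLEU between a baseline model's translation of each training source and its reference, then order the corpus from easy (well recovered) to hard. The `translate` function is a placeholder for an actual NMT model, and the ordering here is a plain sort rather than the paper's full curriculum schedule.

```python
# Sketch: sentence-level BLEU as the proxy for recovery degree.
import sacrebleu

def recovery_degree(translate, src, ref):
    hyp = translate(src)  # placeholder call to a baseline NMT model
    return sacrebleu.sentence_bleu(hyp, [ref]).score

def self_guided_curriculum(translate, parallel_corpus):
    """parallel_corpus: list of (source, reference) pairs; easiest first."""
    scored = [(recovery_degree(translate, s, r), s, r) for s, r in parallel_corpus]
    scored.sort(key=lambda x: x[0], reverse=True)
    return [(s, r) for _, s, r in scored]
```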

pdf
Verb Sense Clustering using Contextualized Word Representations for Semantic Frame Induction
Kosuke Yamada | Ryohei Sasano | Koichi Takeda
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

pdf
DefSent: Sentence Embeddings using Definition Sentences
Hayato Tsukagoshi | Ryohei Sasano | Koichi Takeda
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)

Sentence embedding methods using natural language inference (NLI) datasets have been successfully applied to various tasks. However, these methods are only available for a limited number of languages because they rely heavily on large NLI datasets. In this paper, we propose DefSent, a sentence embedding method that uses definition sentences from a word dictionary. Since dictionaries are available for many languages, DefSent is more broadly applicable than methods using NLI datasets, without the need to construct additional datasets. We demonstrate that DefSent performs comparably on unsupervised semantic textual similarity (STS) tasks and slightly better on SentEval tasks than methods using large NLI datasets. Our code is publicly available at https://github.com/hpprc/defsent.
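A minimal sketch of the DefSent idea: pool an MLM's hidden states over a definition sentence and score how well the pooled vector predicts the defined word through the model's output (word-prediction) layer. This is a simplified illustration (mean pooling, no fine-tuning, the MLM head's transform layer is skipped), not the released implementation linked above.

```python
# Sketch: predict the defined word from a pooled definition-sentence embedding.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

name = "bert-base-uncased"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForMaskedLM.from_pretrained(name)
model.eval()

def definition_embedding(definition):
    enc = tok(definition, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc, output_hidden_states=True).hidden_states[-1][0]
    return hidden.mean(dim=0)  # mean-pooled sentence embedding

def predict_word(definition, top_k=5):
    pooled = definition_embedding(definition)
    with torch.no_grad():
        logits = model.get_output_embeddings()(pooled)  # vocabulary-size scores
    return tok.convert_ids_to_tokens(logits.topk(top_k).indices.tolist())

print(predict_word("a large natural stream of water flowing to the sea"))
```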

pdf
Semantic Frame Induction using Masked Word Embeddings and Two-Step Clustering
Kosuke Yamada | Ryohei Sasano | Koichi Takeda
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)

Recent studies on semantic frame induction show that relatively high performance can be achieved by clustering-based methods with contextualized word embeddings. However, these methods have two potential drawbacks: one is that they focus too much on the superficial information of the frame-evoking verb, and the other is that they tend to divide instances of the same verb into too many different frame clusters. To overcome these drawbacks, we propose a semantic frame induction method using masked word embeddings and two-step clustering. Through experiments on the English FrameNet data, we demonstrate that using masked word embeddings is effective for avoiding excessive reliance on the surface information of frame-evoking verbs, and that two-step clustering yields a more appropriate number of frame clusters for instances of the same verb.
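The sketch below shows the two-step clustering structure: instances of the same verb are clustered first, and the centroids of those per-verb clusters are then clustered across verbs into frame clusters. The embeddings are assumed to be the masked or contextualized verb embeddings described above; the clustering algorithm and thresholds are illustrative choices, not the paper's tuned settings.

```python
# Sketch: two-step clustering for frame induction (illustrative parameters).
import numpy as np
from collections import defaultdict
from sklearn.cluster import AgglomerativeClustering

def cluster(vectors, threshold):
    if len(vectors) == 1:
        return np.array([0])
    model = AgglomerativeClustering(n_clusters=None, distance_threshold=threshold,
                                    metric="cosine", linkage="average")
    return model.fit_predict(np.vstack(vectors))

def two_step_frame_induction(instances, th1=0.3, th2=0.5):
    """instances: list of (verb, embedding). Returns a frame label per instance index."""
    by_verb = defaultdict(list)
    for i, (verb, vec) in enumerate(instances):
        by_verb[verb].append((i, vec))

    # Step 1: cluster instances verb by verb.
    centroids, members = [], []
    for verb, items in by_verb.items():
        labels = cluster([v for _, v in items], th1)
        for c in set(labels):
            idx = [items[j][0] for j in range(len(items)) if labels[j] == c]
            centroids.append(np.mean([items[j][1] for j in range(len(items))
                                      if labels[j] == c], axis=0))
            members.append(idx)

    # Step 2: cluster the per-verb clusters across verbs into frames.
    frame_labels = cluster(centroids, th2)
    result = {}
    for frame, idx_list in zip(frame_labels, members):
        for i in idx_list:
            result[i] = int(frame)
    return result
```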

2020

pdf
Sequential Span Classification with Neural Semi-Markov CRFs for Biomedical Abstracts
Kosuke Yamada | Tsutomu Hirao | Ryohei Sasano | Koichi Takeda | Masaaki Nagata
Findings of the Association for Computational Linguistics: EMNLP 2020

Dividing biomedical abstracts into segments with rhetorical roles is essential for supporting researchers’ information access in the biomedical domain. Conventional methods have treated the task as a sequence labeling task based on sequential sentence classification, i.e., they assign a rhetorical label to each sentence by considering its context in the abstract. However, these methods have a critical problem: they are prone to mislabeling long runs of consecutive sentences that share the same rhetorical label. To tackle this problem, we propose sequential span classification, which assigns a rhetorical label not to a single sentence but to a span of consecutive sentences. Accordingly, we introduce Neural Semi-Markov Conditional Random Fields to assign labels to such spans by considering all possible spans of various lengths. Experimental results on the PubMed 20k RCT and NICTA-PIBOSO datasets demonstrate that our proposed method achieves the best micro sentence-F1 score as well as the best micro span-F1 score.
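As a simplified illustration of the decoding step behind sequential span classification, the sketch below runs a semi-Markov Viterbi search: given a score for every candidate span of consecutive sentences and every rhetorical label, it finds the best segmentation of an abstract by dynamic programming. The span scorer is a stand-in for the neural scorer in the paper, and the maximum span length is an arbitrary assumption.

```python
# Sketch: semi-Markov Viterbi decoding over sentence spans (illustrative only).
def semi_markov_viterbi(n_sentences, labels, span_score, max_len=5):
    """span_score(start, end, label) scores sentences[start:end] as one labeled span."""
    best = [float("-inf")] * (n_sentences + 1)
    back = [None] * (n_sentences + 1)
    best[0] = 0.0
    for end in range(1, n_sentences + 1):
        for start in range(max(0, end - max_len), end):
            for label in labels:
                s = best[start] + span_score(start, end, label)
                if s > best[end]:
                    best[end] = s
                    back[end] = (start, label)
    # Recover labeled spans by following the back-pointers.
    spans, end = [], n_sentences
    while end > 0:
        start, label = back[end]
        spans.append((start, end, label))
        end = start
    return list(reversed(spans))
```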

pdf
Investigating Word-Class Distributions in Word Vector Spaces
Ryohei Sasano | Anna Korhonen
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

This paper presents an investigation of the distribution of word vectors belonging to a certain word class in a pre-trained word vector space. To this end, we made several assumptions about the distribution, modeled the distribution accordingly, and validated each assumption by comparing the goodness of each model. Specifically, we considered two types of word classes – the semantic class of direct objects of a verb and the semantic class in a thesaurus – and tried to build models that properly estimate how likely it is that a word in the vector space is a member of a given word class. Our results on selectional preference and WordNet datasets show that the centroid-based model fails to achieve sufficiently good performance, that the geometry of the distribution and the existence of subgroups have limited impact, and that negative instances need to be considered for adequate modeling of the distribution. We further investigated the relationship between the scores calculated by each model and the degree of membership, and found that discriminative learning-based models are best at finding the boundaries of a class, while models based on the offset between positive and negative instances perform best at determining the degree of membership.
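The sketch below contrasts two of the compared model families: a centroid-based scorer that uses only positive instances, and a discriminative (logistic-regression) scorer trained with negative instances as well. Word vectors are assumed to come from any pre-trained embedding space; the specific classifier is an illustrative stand-in.

```python
# Sketch: centroid-based vs. discriminative word-class membership scoring.
import numpy as np
from sklearn.linear_model import LogisticRegression

def centroid_score(class_vectors, query_vec):
    """Cosine similarity to the centroid of the positive instances only."""
    c = np.mean(class_vectors, axis=0)
    return float(np.dot(c, query_vec) /
                 (np.linalg.norm(c) * np.linalg.norm(query_vec)))

def discriminative_scorer(pos_vectors, neg_vectors):
    """Membership probability learned from both positive and negative instances."""
    X = np.vstack([pos_vectors, neg_vectors])
    y = np.array([1] * len(pos_vectors) + [0] * len(neg_vectors))
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    return lambda v: float(clf.predict_proba(v.reshape(1, -1))[0, 1])
```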

pdf
Development of a Medical Incident Report Corpus with Intention and Factuality Annotation
Hongkuan Zhang | Ryohei Sasano | Koichi Takeda | Zoie Shui-Yee Wong
Proceedings of the Twelfth Language Resources and Evaluation Conference

Medical incident reports (MIRs) are documents that record what happened in a medical incident. A typical MIR consists of two sections: a structured categorical part and an unstructured text part. Most texts in MIRs describe what medication was intended to be given and what was actually given, because what happens in an incident is largely due to discrepancies between intended and actual medications. Recognizing the intention of clinicians and the factuality of medication is essential for understanding the causes of medical incidents and avoiding similar incidents in the future. Therefore, we are developing an MIR corpus annotated with intention and factuality as well as with medication entities and their relations. In this paper, we present our annotation scheme with respect to the definition of the medication entities we take into account, the method for annotating the relations between entities, and the details of the intention and factuality annotation. We then report on the annotated corpus, which consists of 349 Japanese medical incident reports.

2019

pdf
Incorporating Textual Information on User Behavior for Personality Prediction
Kosuke Yamada | Ryohei Sasano | Koichi Takeda
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop

Several recent studies have shown that the textual information of user posts and user behaviors, such as liking and sharing specific posts, are useful for predicting the personality of social media users. However, less attention has been paid to the textual information derived from user behaviors. In this paper, we investigate the effect of the textual information of user behaviors on personality prediction. Our experiments on the personality prediction of Twitter users show that the textual information of user behaviors is more useful than the co-occurrence information of those behaviors. They also show that taking user behaviors into account is crucial for predicting the personality of users who do not post frequently.

2018

pdf
An Empirical Study on Fine-Grained Named Entity Recognition
Khai Mai | Thai-Hoang Pham | Minh Trung Nguyen | Tuan Duc Nguyen | Danushka Bollegala | Ryohei Sasano | Satoshi Sekine
Proceedings of the 27th International Conference on Computational Linguistics

Named entity recognition (NER) has attracted a substantial amount of research. Recently, several neural network-based models have been proposed and have achieved high performance. However, there is little research on fine-grained NER (FG-NER), in which hundreds of named entity categories must be recognized, especially for non-English languages. It is still an open question whether there is a model that is robust across various settings, or whether the proper model varies depending on the language, the number of named entity categories, and the size of the training dataset. This paper first presents an empirical comparison of FG-NER models for English and Japanese and demonstrates that LSTM+CNN+CRF (Ma and Hovy, 2016), one of the state-of-the-art methods for English NER, also works well for English FG-NER but does not work well for Japanese, a language with a large number of character types. To tackle this problem, we propose a method that improves neural network-based Japanese FG-NER performance by removing the CNN layer and utilizing dictionary and category embeddings. Experimental results show that the proposed method improves the Japanese FG-NER F-score from 66.76% to 75.18%.

2017

pdf
Extended Named Entity Recognition API and Its Applications in Language Education
Tuan Duc Nguyen | Khai Mai | Thai-Hoang Pham | Minh Trung Nguyen | Truc-Vien T. Nguyen | Takashi Eguchi | Ryohei Sasano | Satoshi Sekine
Proceedings of ACL 2017, System Demonstrations

pdf
Distinguishing Japanese Non-standard Usages from Standard Ones
Tatsuya Aoki | Ryohei Sasano | Hiroya Takamura | Manabu Okumura
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing

We focus on non-standard usages of common words on social media. In the context of social media, words sometimes have usages that are totally different from their original ones. In this study, we attempt to distinguish non-standard usages on social media from standard ones in an unsupervised manner. Our basic idea is that non-standardness can be measured by the inconsistency between the expected meaning of the target word and the given context. For this purpose, we use context embeddings derived from word embeddings. Our experimental results show that the model leveraging the context embedding outperforms other methods and provides findings on, for example, how to construct context embeddings and which corpus to use.
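The sketch below illustrates the core measure: build a context embedding by averaging the vectors of surrounding words, and treat a low similarity between it and the target word's vector as a signal of possible non-standard usage. The window size, the averaging scheme, and the use of a raw 1 − cosine score are illustrative simplifications rather than the paper's exact formulation.

```python
# Sketch: measuring non-standardness as target-vs-context inconsistency.
import numpy as np

def context_embedding(tokens, target_idx, word_vectors, window=5):
    """Average the vectors of in-window context words (word_vectors: dict of word -> vector)."""
    ctx = [word_vectors[w] for i, w in enumerate(tokens)
           if i != target_idx and abs(i - target_idx) <= window and w in word_vectors]
    return np.mean(ctx, axis=0) if ctx else None

def nonstandardness(tokens, target_idx, word_vectors, window=5):
    """Higher value = the target word fits its context less well."""
    target = word_vectors.get(tokens[target_idx])
    ctx = context_embedding(tokens, target_idx, word_vectors, window)
    if target is None or ctx is None:
        return 0.0
    cos = np.dot(target, ctx) / (np.linalg.norm(target) * np.linalg.norm(ctx))
    return 1.0 - float(cos)
```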

2016

pdf
Controlling Output Length in Neural Encoder-Decoders
Yuta Kikuchi | Graham Neubig | Ryohei Sasano | Hiroya Takamura | Manabu Okumura
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing

pdf
A Corpus-Based Analysis of Canonical Word Order of Japanese Double Object Constructions
Ryohei Sasano | Manabu Okumura
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

2015

pdf
Context-Dependent Automatic Response Generation Using Statistical Machine Translation Techniques
Andrew Shin | Ryohei Sasano | Hiroya Takamura | Manabu Okumura
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

2013

pdf
Automatic Knowledge Acquisition for Case Alternation between the Passive and Active Voices in Japanese
Ryohei Sasano | Daisuke Kawahara | Sadao Kurohashi | Manabu Okumura
Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing

pdf
A Simple Approach to Unknown Word Processing in Japanese Morphological Analysis
Ryohei Sasano | Sadao Kurohashi | Manabu Okumura
Proceedings of the Sixth International Joint Conference on Natural Language Processing

pdf
Subtree Extractive Summarization via Submodular Maximization
Hajime Morita | Ryohei Sasano | Hiroya Takamura | Manabu Okumura
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

2012

pdf
Generating “A for Alpha” When There Are Thousands of Characters
Hiroaki Kawasaki | Ryohei Sasano | Hiroya Takamura | Manabu Okumura
Proceedings of COLING 2012

2011

pdf
A Discriminative Approach to Japanese Zero Anaphora Resolution with Large-scale Lexicalized Case Frames
Ryohei Sasano | Sadao Kurohashi
Proceedings of 5th International Joint Conference on Natural Language Processing

2009

pdf
A Probabilistic Model for Associative Anaphora Resolution
Ryohei Sasano | Sadao Kurohashi
Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing

pdf
The Effect of Corpus Size on Case Frame Acquisition for Discourse Analysis
Ryohei Sasano | Daisuke Kawahara | Sadao Kurohashi
Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics

2008

pdf
A Fully-Lexicalized Probabilistic Model for Japanese Zero Anaphora Resolution
Ryohei Sasano | Daisuke Kawahara | Sadao Kurohashi
Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008)

pdf
Japanese Named Entity Recognition Using Structural Natural Language Processing
Ryohei Sasano | Sadao Kurohashi
Proceedings of the Third International Joint Conference on Natural Language Processing: Volume-II

2004

pdf
Toward Text Understanding: Integrating Relevance-tagged Corpus and Automatically Constructed Case Frames
Daisuke Kawahara | Ryohei Sasano | Sadao Kurohashi
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04)

This paper proposes a wide-range anaphora resolution system aimed at text understanding. The system resolves zero, direct, and indirect anaphora in Japanese texts by integrating two kinds of linguistic resources: a hand-annotated corpus with various relations and automatically constructed case frames. The corpus has relevance tags, which consist of predicate-argument relations, relations between nouns, and coreferences, and is used for learning the parameters of the system and for testing it. The case frames are indispensable knowledge both for detecting zero/indirect anaphors and for estimating appropriate antecedents. Our preliminary experiments showed promising results.

pdf
Automatic Construction of Nominal Case Frames and its Application to Indirect Anaphora Resolution
Ryohei Sasano | Daisuke Kawahara | Sadao Kurohashi
COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics