Ryosuke Takahashi
2025
Understanding the Side Effects of Rank-One Knowledge Editing
Ryosuke Takahashi | Go Kamoda | Benjamin Heinzerling | Keisuke Sakaguchi | Kentaro Inui
Proceedings of the 8th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP
This study conducts a detailed analysis of the side effects of rank-one knowledge editing using language models with controlled knowledge. The analysis focuses on each element of knowledge triples (subject, relation, object) and examines two aspects: “knowledge that causes large side effects when edited” and “knowledge that is affected by the side effects.” Our findings suggest that editing knowledge with subjects that have relationships with numerous objects or are robustly embedded within the LM may trigger extensive side effects. Furthermore, we demonstrate that the similarity between relation vectors, the density of object vectors, and the distortion of knowledge representations are closely related to how susceptible knowledge is to editing influences. The findings of this research provide new insights into the mechanisms of side effects in LM knowledge editing and indicate specific directions for developing more effective and reliable knowledge editing methods.
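The core mechanism the abstract refers to can be illustrated with a minimal sketch of a rank-one weight edit. This is not the paper's implementation; it is a generic ROME-style update on a toy linear layer, with all variable names (`W`, `k`, `v_new`) chosen for illustration. It shows both why the edit hits its target and where side effects come from: any other key with a component along the edited key is also perturbed.

```python
import numpy as np

# Hedged sketch: rank-one edit of a linear layer W so that a chosen
# subject key k maps to a new object value v_new. Illustrative only.

rng = np.random.default_rng(0)
d = 8
W = rng.standard_normal((d, d))    # original weight matrix
k = rng.standard_normal(d)         # key vector for the edited subject
v_new = rng.standard_normal(d)     # desired new output for that key

# Rank-one update: W' = W + (v_new - W k) k^T / (k^T k)
delta = np.outer(v_new - W @ k, k) / (k @ k)
W_edited = W + delta

# The edit hits the target exactly...
assert np.allclose(W_edited @ k, v_new)

# ...but any other key with a component along k is also perturbed,
# which is the kind of side effect the paper analyzes.
k_other = rng.standard_normal(d)
print(np.linalg.norm(W_edited @ k_other - W @ k_other))
```

Because the update is rank-one, its effect on any input is proportional to that input's overlap with `k`; subjects whose keys are far from orthogonal to the edited key are the ones most exposed to side effects.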
2022
Leveraging Three Types of Embeddings from Masked Language Models in Idiom Token Classification
Ryosuke Takahashi | Ryohei Sasano | Koichi Takeda
Proceedings of the 11th Joint Conference on Lexical and Computational Semantics
Many linguistic expressions have idiomatic and literal interpretations, and the automatic distinction of these two interpretations has been studied for decades. Recent research has shown that contextualized word embeddings derived from masked language models (MLMs) can give promising results for idiom token classification. This indicates that a contextualized word embedding alone contains information about whether the word is being used in a literal sense or not. However, we believe that more types of information can be derived from MLMs and that leveraging such information can improve idiom token classification. In this paper, we leverage three types of embeddings from MLMs: uncontextualized token embeddings and masked token embeddings, in addition to the standard contextualized word embeddings, and show that the newly added embeddings significantly improve idiom token classification for both English and Japanese datasets.
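The feature construction described above can be sketched as follows. This is an assumption-laden stand-in, not the paper's code: random vectors substitute for real MLM outputs, and the hidden size of 768 is assumed from typical BERT-base models. The point is only the structure: three views of the same target token are concatenated into one classifier input.

```python
import numpy as np

# Hedged sketch of the three-embedding feature construction for a target
# token: (1) its contextualized embedding from the MLM encoder,
# (2) its uncontextualized (static input) token embedding, and
# (3) the encoder output at the same position when the token is
# replaced by [MASK]. Random vectors stand in for real MLM outputs.

rng = np.random.default_rng(1)
d = 768  # assumed hidden size (BERT-base)

ctx_emb = rng.standard_normal(d)     # contextualized embedding
static_emb = rng.standard_normal(d)  # uncontextualized token embedding
masked_emb = rng.standard_normal(d)  # embedding at the [MASK]ed position

# Concatenate the three views into one feature vector for a downstream
# idiom-vs-literal token classifier.
features = np.concatenate([ctx_emb, static_emb, masked_emb])
print(features.shape)  # (2304,)
```

Intuitively, the static embedding carries the token's context-free meaning and the masked embedding carries what the context alone predicts, so their contrast with the contextualized embedding gives the classifier extra signal about literal versus idiomatic use.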