Ayu Teramen


2025

Text Normalization for Japanese Sentiment Analysis
Risa Kondo | Ayu Teramen | Reon Kajikawa | Koki Horiguchi | Tomoyuki Kajiwara | Takashi Ninomiya | Hideaki Hayashi | Yuta Nakashima | Hajime Nagahara
Proceedings of the Tenth Workshop on Noisy and User-generated Text

We manually normalize noisy Japanese expressions on social networking services (SNS) to improve the performance of sentiment polarity classification. Despite advances in pre-trained language models, informal expressions found in social media still plague natural language processing. In this study, we analyzed 6,000 posts from a sentiment analysis corpus of Japanese SNS text and constructed a text normalization taxonomy consisting of 33 types of editing operations. Text normalization according to our taxonomy significantly improved the performance of BERT-based sentiment analysis in Japanese. Detailed analysis reveals that most types of editing operations each contribute to improving the performance of sentiment analysis.
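As a rough illustration of the evaluation setup described in the abstract (not the authors' code), the sketch below compares classifier predictions on a noisy SNS-style post and its manually normalized counterpart. The model name, example sentences, and the use of an off-the-shelf pipeline are all assumptions for illustration; in practice a checkpoint fine-tuned on the sentiment corpus would be used.

```python
# Minimal sketch, assuming a Japanese BERT checkpoint and illustrative example posts.
# Not the authors' method or data; the base checkpoint is a placeholder for a model
# fine-tuned on the sentiment polarity corpus.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="cl-tohoku/bert-base-japanese",  # assumed base model; fine-tuning step omitted
)

raw_post = "この映画まじさいこーーーw"          # noisy SNS-style expression (illustrative)
normalized_post = "この映画は本当に最高です"     # manually normalized counterpart (illustrative)

# The comparison of interest: does normalization change the classifier's output?
for text in (raw_post, normalized_post):
    print(text, classifier(text))
```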

2024

English-to-Japanese Multimodal Machine Translation Based on Image-Text Matching of Lecture Videos
Ayu Teramen | Takumi Ohtsuka | Risa Kondo | Tomoyuki Kajiwara | Takashi Ninomiya
Proceedings of the 3rd Workshop on Advances in Language and Vision Research (ALVR)

We work on multimodal machine translation of the audio contained in English lecture videos to generate Japanese subtitles. Image-guided multimodal machine translation is promising for correcting speech recognition errors and for disambiguating the text. In our setting, lecture videos provide a variety of images. Images of presentation materials can complement information not available from the audio and may help improve translation quality, whereas images of speakers or audiences would not directly affect it. We construct a multimodal parallel corpus with automatic speech recognition text and multiple images for a transcribed parallel corpus of lecture videos, and propose a method that uses the speech text to select the most relevant of the multiple images, improving the performance of image-guided multimodal machine translation. Experimental results on translating automatic speech recognition or transcribed English text into Japanese show the effectiveness of our method for selecting a relevant image.
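The abstract does not specify how relevance between the speech text and candidate images is scored. The sketch below is a hedged stand-in: it uses a CLIP-style image-text matching model to rank candidate frames against the ASR text and keep the highest-scoring one. The model name, file paths, and example sentence are assumptions, not details from the paper.

```python
# Minimal sketch (illustrative stand-in, not the paper's exact selection method):
# score each candidate frame against the ASR text with an image-text matching model
# and keep the most relevant image as input to image-guided multimodal MT.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")      # assumed matching model
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

asr_text = "Today we discuss the gradient descent update rule."        # ASR output (illustrative)
frame_paths = ["slide.png", "speaker.png", "audience.png"]             # hypothetical frame files
frames = [Image.open(p) for p in frame_paths]

inputs = processor(text=[asr_text], images=frames, return_tensors="pt", padding=True)
with torch.no_grad():
    scores = model(**inputs).logits_per_text[0]  # similarity of the text to each candidate image

best_index = int(scores.argmax())
print("Selected image:", frame_paths[best_index])
```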