Yoshimi Suzuki


2024

pdf
Enhanced Coherence-Aware Network with Hierarchical Disentanglement for Aspect-Category Sentiment Analysis
Jin Cui | Fumiyo Fukumoto | Xinfeng Wang | Yoshimi Suzuki | Jiyi Li | Noriko Tomuro | Wanzeng Kong
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

Aspect-category-based sentiment analysis (ACSA), which aims to identify aspect categories and predict their sentiments, has been intensively studied due to its wide range of NLP applications. Most approaches mainly utilize intra-sentential features. However, a review often includes multiple different aspect categories, and some of them do not explicitly appear in the review. Even within a single sentence, more than one aspect category and its sentiment may appear, and they are entangled with one another, which prevents a model from discriminately preserving all sentiment characteristics. In this paper, we propose an enhanced coherence-aware network with hierarchical disentanglement (ECAN) for ACSA tasks. Specifically, we explore coherence modeling to capture the context across the whole review and to aid implicit aspect and sentiment identification. To address the entanglement of multiple aspect categories and sentiments, we propose a hierarchical disentanglement module that extracts distinct category and sentiment features. Extensive experimental and visualization results show that our ECAN effectively decouples the multiple categories and sentiments entangled in the coherence representations and achieves state-of-the-art (SOTA) performance. Our codes and data are available online: https://github.com/cuijin-23/ECAN.
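
A minimal sketch of the disentanglement idea described above, not the authors' released code (see the repository for that): a shared, review-level coherence representation is projected into separate category and sentiment subspaces, with an assumed orthogonality penalty to keep the two feature sets distinct. Layer sizes, the number of categories, and the penalty form are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalDisentangler(nn.Module):
    """Split a pooled coherence representation into category and sentiment features."""
    def __init__(self, hidden=768, n_categories=12, n_polarities=3):
        super().__init__()
        self.category_proj = nn.Linear(hidden, hidden)   # category subspace
        self.sentiment_proj = nn.Linear(hidden, hidden)  # sentiment subspace
        self.category_head = nn.Linear(hidden, n_categories)
        self.sentiment_head = nn.Linear(hidden, n_polarities)

    def forward(self, coherence_repr):
        # coherence_repr: (batch, hidden), pooled from a review-level encoder.
        cat_z = torch.tanh(self.category_proj(coherence_repr))
        sen_z = torch.tanh(self.sentiment_proj(coherence_repr))
        # Assumed auxiliary penalty: push the two subspaces toward carrying distinct information.
        ortho_loss = (F.normalize(cat_z, dim=-1) * F.normalize(sen_z, dim=-1)).sum(-1).pow(2).mean()
        return self.category_head(cat_z), self.sentiment_head(sen_z), ortho_loss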

2023

pdf
Aspect-Category Enhanced Learning with a Neural Coherence Model for Implicit Sentiment Analysis
Jin Cui | Fumiyo Fukumoto | Xinfeng Wang | Yoshimi Suzuki | Jiyi Li | Wanzeng Kong
Findings of the Association for Computational Linguistics: EMNLP 2023

Aspect-based sentiment analysis (ABSA) has been widely studied since the explosive growth of social networking services. However, the recognition of implicit sentiments, which contain no obvious opinion words, remains less explored. In this paper, we propose aspect-category enhanced learning with a neural coherence model (ELCoM). It captures document-level coherence using contrastive learning and sentence-level coherence with a hypergraph, mining opinions from explicit sentences to aid implicit sentiment classification. To address the issue of sentences with different sentiment polarities in the same category, we perform cross-category enhancement to offset the impact of anomalous nodes in the hypergraph and obtain aspect-category-enhanced sentence representations. Extensive experiments on benchmark datasets show that ELCoM achieves state-of-the-art performance. Our source codes and data are released at https://github.com/cuijin-23/ELCoM.
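
An illustrative sketch of a document-level contrastive coherence objective of the kind mentioned above, not the released ELCoM code: the anchor document representation is pulled toward a coherent (original sentence order) view and pushed away from an incoherent (shuffled) view. The temperature and the shuffling scheme are assumptions.

import torch
import torch.nn.functional as F

def coherence_contrastive_loss(doc_repr, coherent_repr, shuffled_repr, temperature=0.1):
    """All inputs are (batch, hidden) pooled document vectors."""
    anchor = F.normalize(doc_repr, dim=-1)
    pos = F.normalize(coherent_repr, dim=-1)    # original sentence order
    neg = F.normalize(shuffled_repr, dim=-1)    # shuffled sentence order
    pos_sim = (anchor * pos).sum(-1) / temperature
    neg_sim = (anchor * neg).sum(-1) / temperature
    logits = torch.stack([pos_sim, neg_sim], dim=-1)   # (batch, 2)
    labels = torch.zeros(anchor.size(0), dtype=torch.long, device=anchor.device)  # positive is index 0
    return F.cross_entropy(logits, labels)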

pdf
Intermediate-Task Transfer Learning for Peer Review Score Prediction
Panitan Muangkammuen | Fumiyo Fukumoto | Jiyi Li | Yoshimi Suzuki
Proceedings of the 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics: Student Research Workshop

pdf
Speech Synthesis Model Based on Face Landmarks
Chenji Jin | Yoshimi Suzuki | Fei Lin
Proceedings of the 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics: Student Research Workshop

pdf
Learning Disentangled Meaning and Style Representations for Positive Text Reframing
Xu Sheng | Fumiyo Fukumoto | Jiyi Li | Go Kentaro | Yoshimi Suzuki
Proceedings of the 16th International Natural Language Generation Conference

The positive text reframing (PTR) task, which generates text that gives a positive perspective while preserving the sense of the input, has attracted considerable attention as an NLP application. Owing to the strong representation capability of pre-trained language models (PLMs), a solid baseline can be obtained simply by fine-tuning a PLM. However, how to interpret a diversity of contexts so as to give a positive perspective remains an open problem, and it becomes more serious when the size of the training data is limited. In this paper, we present a PTR framework that learns representations in which the meaning and style of text are structurally disentangled. The method utilizes pseudo-positive reframing datasets generated with two augmentation strategies. A simple but effective multi-task learning-based model is trained to fuse the generation capabilities learned from these datasets. Experimental results on the Positive Psychology Frames (PPF) dataset show that our approach outperforms the BART baseline on five evaluation metrics and the T5 baseline on six. Our source codes and data are available online.
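
A minimal multi-task fine-tuning sketch under assumed names, not the paper's implementation: a single seq2seq model (e.g., BART or T5 via the Hugging Face API, which returns a loss when labels are supplied) is optimized jointly on the gold PPF pairs and two pseudo-positive-reframing sets. The loss weight is an illustrative assumption.

def multitask_step(model, batch_ppf, batch_pseudo_a, batch_pseudo_b, w_pseudo=0.5):
    """One training step mixing gold PPF pairs with two pseudo datasets.

    `model` is any Hugging Face-style seq2seq model; each batch dict holds
    input_ids, attention_mask, and labels. The 0.5 weight is an assumption.
    """
    loss_gold = model(**batch_ppf).loss        # supervised reframing loss
    loss_a = model(**batch_pseudo_a).loss      # pseudo dataset from strategy A
    loss_b = model(**batch_pseudo_b).loss      # pseudo dataset from strategy B
    return loss_gold + w_pseudo * (loss_a + loss_b)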

2022

pdf
Exploiting Labeled and Unlabeled Data via Transformer Fine-tuning for Peer-Review Score Prediction
Panitan Muangkammuen | Fumiyo Fukumoto | Jiyi Li | Yoshimi Suzuki
Findings of the Association for Computational Linguistics: EMNLP 2022

Automatic Peer-review Aspect Score Prediction (PASP) for academic papers can be a helpful assistant tool for both reviewers and authors. Most existing work on PASP uses supervised learning techniques. However, the limited amount of peer-review data degrades the performance of PASP. This paper presents a novel semi-supervised learning (SSL) method that incorporates Transformer fine-tuning into the Γ-model, a variant of the Ladder network, to leverage contextual features from unlabeled data. Backpropagation simultaneously minimizes the sum of the supervised and unsupervised cost functions, avoiding the need for layer-wise pre-training. The experimental results show that our model outperforms the supervised and naive semi-supervised learning baselines. Our source codes are available online.
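
A sketch of the Γ-model-style joint cost described above, under assumptions rather than the paper's exact implementation: a supervised squared error on labeled review scores plus an unsupervised denoising cost between clean and noise-corrupted encoder features on unlabeled reviews, minimized together by backpropagation.

import torch.nn.functional as F

def gamma_model_loss(pred_scores_l, gold_scores_l, clean_feat_u, noisy_feat_u,
                     denoise_fn, unsup_weight=1.0):
    # Supervised cost on labeled peer-review data (score regression assumed here).
    sup = F.mse_loss(pred_scores_l, gold_scores_l)
    # Unsupervised cost: reconstruct the clean representation from its noisy version.
    denoised = denoise_fn(noisy_feat_u)
    unsup = F.mse_loss(denoised, clean_feat_u.detach())
    # Both terms are minimized jointly by backpropagation; the weight is an assumption.
    return sup + unsup_weight * unsup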

2020

pdf
Semi-Automatic Construction and Refinement of an Annotated Corpus for a Deep Learning Framework for Emotion Classification
Jiajun Xu | Kyosuke Masuda | Hiromitsu Nishizaki | Fumiyo Fukumoto | Yoshimi Suzuki
Proceedings of the Twelfth Language Resources and Evaluation Conference

When a deep learning (machine learning) framework is used for emotion classification, one significant difficulty is the need for a large emotion corpus in which each sentence is assigned emotion labels, and constructing such a corpus is costly in terms of both time and money. Therefore, this paper proposes a method of creating a semi-automatically constructed emotion corpus. For this study, sentences were mined from Twitter using emotional seed words selected from a dictionary in which the emotion words are well defined. Tweets were retrieved with one emotional seed word at a time, and the retrieved sentences were assigned emotion labels based on the emotion category of the seed word. The findings show that a deep learning-based emotion classification model could not achieve high accuracy because the semi-automatically constructed corpus contained many emotion-labeling errors. This paper therefore proposes and tests an approach that improves the quality of the emotion labels by automatically correcting labeling errors. The experimental results show that the proposed method works well, improving classification accuracy from 44.9% to 55.1% on the Twitter emotion classification task.
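
A toy sketch of the semi-automatic labeling step described above: each tweet retrieved by an emotional seed word inherits the seed word's emotion category. The seed dictionary below is a made-up English example, not the dictionary used in the paper.

# Hypothetical seed-word-to-emotion mapping (illustrative only).
SEED_EMOTIONS = {
    "delighted": "joy",
    "furious": "anger",
    "terrified": "fear",
}

def label_tweets(tweets_by_seed):
    """tweets_by_seed: dict mapping a seed word to the tweets retrieved with it."""
    corpus = []
    for seed, tweets in tweets_by_seed.items():
        label = SEED_EMOTIONS[seed]              # label comes from the seed's category
        for text in tweets:
            corpus.append({"text": text, "emotion": label})
    return corpus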

2015

pdf
Learning Timeline Difference for Text Categorization
Fumiyo Fukumoto | Yoshimi Suzuki
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing

2014

pdf
The Effect of Temporal-based Term Selection for Text Classification
Fumiyo Fukumoto | Shougo Ushiyama | Yoshimi Suzuki | Suguru Matsuyoshi
Proceedings of the Australasian Language Technology Association Workshop 2014

pdf
Detection of Topic and its Extrinsic Evaluation Through Multi-Document Summarization
Yoshimi Suzuki | Fumiyo Fukumoto
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

2013

pdf
Text Classification from Positive and Unlabeled Data using Misclassified Data Correction
Fumiyo Fukumoto | Yoshimi Suzuki | Suguru Matsuyoshi
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

2012

pdf
Classifying Hotel Reviews into Criteria for Review Summarization
Yoshimi Suzuki
Proceedings of the 2nd Workshop on Sentiment Analysis where AI meets Psychology

2011

pdf
Identification of Domain-Specific Senses in a Machine-Readable Dictionary
Fumiyo Fukumoto | Yoshimi Suzuki
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies

pdf
Cluster Labelling based on Concepts in a Machine-Readable Dictionary
Fumiyo Fukumoto | Yoshimi Suzuki
Proceedings of 5th International Joint Conference on Natural Language Processing

2010

pdf
Eliminating Redundancy by Spectral Relaxation for Multi-Document Summarization
Fumiyo Fukumoto | Akina Sakai | Yoshimi Suzuki
Proceedings of TextGraphs-5 - 2010 Workshop on Graph-based Methods for Natural Language Processing

2009

pdf
Classifying Japanese Polysemous Verbs based on Fuzzy C-means Clustering
Yoshimi Suzuki | Fumiyo Fukumoto
Proceedings of the 2009 Workshop on Graph-based Methods for Natural Language Processing (TextGraphs-4)

2008

pdf
Retrieving Bilingual Verb-Noun Collocations by Integrating Cross-Language Category Hierarchies
Fumiyo Fukumoto | Yoshimi Suzuki | Kazuyuki Yamashita
Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008)

2006

pdf
Using Bilingual Comparable Corpora and Semi-supervised Clustering for Topic Tracking
Fumiyo Fukumoto | Yoshimi Suzuki
Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions

2004

pdf
Correcting Category Errors in Text Classification
Fumiyo Fukumoto | Yoshimi Suzuki
COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics

pdf
A Comparison of Manual and Automatic Constructions of Category Hierarchy for Classifying Large Corpora
Fumiyo Fukumoto | Yoshimi Suzuki
Proceedings of the Eighth Conference on Computational Natural Language Learning (CoNLL-2004) at HLT-NAACL 2004

2002

pdf
Manipulating Large Corpora for Text Classification
Fumiyo Fukumoto | Yoshimi Suzuki
Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP 2002)

pdf
Detecting Shifts in News Stories for Paragraph Extraction
Fumiyo Fukumoto | Yoshimi Suzuki
COLING 2002: The 19th International Conference on Computational Linguistics

pdf
Topic Tracking using Subject Templates and Clustering Positive Training Instances
Yoshimi Suzuki | Fumiyo Fukumoto | Yoshihiro Sekiguchi
COLING 2002: The 19th International Conference on Computational Linguistics: Project Notes

2000

pdf
Extracting Key Paragraph based on Topic and Event Detection Towards Multi-Document Summarization
Fumiyo Fukumoto | Yoshimi Suzuki
NAACL-ANLP 2000 Workshop: Automatic Summarization

1999

pdf
Word Sense Disambiguation in Untagged Text based on Term Weight Learning
Fumiyo Fukumoto | Yoshimi Suzuki
Ninth Conference of the European Chapter of the Association for Computational Linguistics

1998

pdf
Keyword Extraction using Term-Domain Interdependence for Dictation of Radio News
Yoshimi Suzuki | Fumiyo Fukumoto | Yoshihiro Sekiguchi
36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, Volume 2

pdf
An Empirical Approach to Text Categorization Based on Term Weight Learning
Fumiyo Fukumoto | Yoshimi Suzuki
Proceedings of the Third Conference on Empirical Methods for Natural Language Processing

1997

pdf
An Automatic Extraction of Key Paragraphs Based on Context Dependency
Fumiyo Fukumoto | Yoshimi Suzuki | Jun’ichi Fukumoto
Fifth Conference on Applied Natural Language Processing

1996

pdf
An Automatic Clustering of Articles Using Dictionary Definitions
Fumiyo Fukumoto | Yoshimi Suzuki
COLING 1996 Volume 1: The 16th International Conference on Computational Linguistics