Kazuki Akiyama
2022
A Japanese Dataset for Subjective and Objective Sentiment Polarity Classification in Micro Blog Domain
Haruya Suzuki | Yuto Miyauchi | Kazuki Akiyama | Tomoyuki Kajiwara | Takashi Ninomiya | Noriko Takemura | Yuta Nakashima | Hajime Nagahara
Proceedings of the Thirteenth Language Resources and Evaluation Conference
We annotate 35,000 SNS posts with both the writer’s subjective sentiment polarity labels and the reader’s objective ones to construct a Japanese sentiment analysis dataset. Our dataset includes intensity labels (none, weak, medium, and strong) for each of Plutchik’s eight basic emotions (joy, sadness, anticipation, surprise, anger, fear, disgust, and trust) as well as sentiment polarity labels (strong positive, positive, neutral, negative, and strong negative). Previous studies on emotion analysis have treated basic emotions and sentiment polarity independently; as a result, few corpora are annotated with both. Our dataset is the first large-scale corpus annotated with both types of emotion labels, and from both the writer’s and the reader’s perspectives. In this paper, we analyze the relationship between basic emotion intensity and sentiment polarity on our dataset and report the results of benchmarking sentiment polarity classification.
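The label inventory described in the abstract can be pictured as a simple record type. The sketch below is only an illustration of that annotation scheme, assuming hypothetical field names (`writer_emotions`, `reader_polarity`, etc.); the actual schema of the released corpus may differ.

```python
# A minimal sketch of one record in such a dataset; field names are hypothetical.
from dataclasses import dataclass
from typing import Dict

# Plutchik's eight basic emotions, each annotated with an intensity level.
BASIC_EMOTIONS = ("joy", "sadness", "anticipation", "surprise",
                  "anger", "fear", "disgust", "trust")
INTENSITIES = ("none", "weak", "medium", "strong")
POLARITIES = ("strong positive", "positive", "neutral",
              "negative", "strong negative")

@dataclass
class AnnotatedPost:
    text: str
    # Writer's (subjective) and reader's (objective) labels are kept separately.
    writer_emotions: Dict[str, str]   # emotion -> intensity
    reader_emotions: Dict[str, str]
    writer_polarity: str              # one of POLARITIES
    reader_polarity: str

    def __post_init__(self) -> None:
        # Validate against the label inventories described in the abstract.
        assert set(self.writer_emotions) == set(BASIC_EMOTIONS)
        assert set(self.reader_emotions) == set(BASIC_EMOTIONS)
        assert all(v in INTENSITIES for v in self.writer_emotions.values())
        assert all(v in INTENSITIES for v in self.reader_emotions.values())
        assert self.writer_polarity in POLARITIES
        assert self.reader_polarity in POLARITIES
```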
2021
Hie-BART: Document Summarization with Hierarchical BART
Kazuki Akiyama | Akihiro Tamura | Takashi Ninomiya
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop
This paper proposes a new abstractive document summarization model, hierarchical BART (Hie-BART), which captures the hierarchical structure of a document (i.e., sentence-word structure) in the BART model. Although the existing BART model has achieved state-of-the-art performance on document summarization tasks, it does not capture interactions between sentence-level and word-level information. In machine translation, the performance of neural machine translation models has been improved by incorporating multi-granularity self-attention (MG-SA), which captures the relationships between words and phrases. Inspired by that work, the proposed Hie-BART model incorporates MG-SA into the encoder of the BART model to capture sentence-word structures. Evaluations on the CNN/Daily Mail dataset show that Hie-BART outperforms some strong baselines and improves the performance of a non-hierarchical BART model (+0.23 ROUGE-L).
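For context, the non-hierarchical BART baseline that Hie-BART improves upon can be run with the Hugging Face Transformers library. The sketch below uses the publicly available `facebook/bart-large-cnn` checkpoint fine-tuned on CNN/Daily Mail; it is only the baseline setup, not the authors' Hie-BART implementation, and the generation hyperparameters are illustrative assumptions.

```python
# A minimal sketch of abstractive summarization with a plain (non-hierarchical)
# BART model via Hugging Face Transformers; this is NOT the Hie-BART model.
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

article = "..."  # a CNN/Daily Mail style news article goes here

inputs = tokenizer(article, max_length=1024, truncation=True, return_tensors="pt")
summary_ids = model.generate(
    inputs["input_ids"],
    num_beams=4,        # beam search, standard for this task (illustrative value)
    max_length=142,
    early_stopping=True,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```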
Co-authors
- Takashi Ninomiya 2
- Akihiro Tamura 1
- Haruya Suzuki 1
- Yuto Miyauchi 1
- Tomoyuki Kajiwara 1
- Noriko Takemura 1
- Yuta Nakashima 1
- Hajime Nagahara 1