Ichiro Sakata


2023

Dynamic Structured Neural Topic Model with Self-Attention Mechanism
Nozomu Miyamoto | Masaru Isonuma | Sho Takase | Junichiro Mori | Ichiro Sakata
Findings of the Association for Computational Linguistics: ACL 2023

This study presents a dynamic structured neural topic model, which can handle the time-series development of topics while capturing their dependencies. Our model captures the topic branching and merging processes by modeling topic dependencies based on a self-attention mechanism. Additionally, we introduce citation regularization, which induces attention weights to represent citation relations by modeling text and citations jointly. Our model outperforms a prior dynamic embedded topic model in terms of perplexity and coherence, while maintaining sufficient diversity across topics. Furthermore, we confirm that our model can potentially predict emerging topics from academic literature.
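
As a rough sketch of the core mechanism (not the authors' released implementation; the dimensions, projections, and residual combination are illustrative assumptions), the snippet below shows how topics at one time step could attend over the topics of the previous step, with the attention matrix modeling branching and merging and a citation matrix regularizing those weights:

```python
import torch
import torch.nn.functional as F

K, d = 16, 64                      # hypothetical: K topics, d-dim embeddings
prev_topics = torch.randn(K, d)    # topic embeddings at step t-1
curr_topics = torch.randn(K, d)    # topic embeddings at step t

W_q = torch.nn.Linear(d, d, bias=False)  # illustrative query projection
W_k = torch.nn.Linear(d, d, bias=False)  # illustrative key projection

# Each current topic attends over all previous topics; the K x K attention
# matrix expresses which earlier topics a new topic branches from or merges.
scores = W_q(curr_topics) @ W_k(prev_topics).T / d ** 0.5
attn = F.softmax(scores, dim=-1)

# One plausible update: evolve each topic as a residual plus the
# attention-weighted mixture of the previous step's topics.
evolved = curr_topics + attn @ prev_topics

# Citation regularization (per the abstract) could penalize the gap between
# the attention weights and a row-normalized citation matrix C.
C = torch.rand(K, K)
C = C / C.sum(dim=-1, keepdim=True)
reg_loss = F.mse_loss(attn, C)
```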

SciReviewGen: A Large-scale Dataset for Automatic Literature Review Generation
Tetsu Kasanishi | Masaru Isonuma | Junichiro Mori | Ichiro Sakata
Findings of the Association for Computational Linguistics: ACL 2023

Automatic literature review generation is one of the most challenging tasks in natural language processing. Although large language models have tackled literature review generation, the absence of large-scale datasets has been a stumbling block to progress. We release SciReviewGen, consisting of over 10,000 literature reviews and 690,000 papers cited in the reviews. Based on the dataset, we evaluate recent transformer-based summarization models on the literature review generation task, including Fusion-in-Decoder extended for literature review generation. Human evaluation results show that some machine-generated summaries are comparable to human-written reviews, while revealing challenges of automatic literature review generation such as hallucinations and a lack of detailed information. Our dataset and code are available at https://github.com/tetsu9923/SciReviewGen.
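
As one hedged illustration of the Fusion-in-Decoder idea applied to this task (a sketch under assumed inputs, not the released code; the backbone name, query string, and paper texts are placeholders), each cited paper is encoded independently and the encoder outputs are concatenated so a decoder can attend over all papers jointly:

```python
import torch
from transformers import AutoTokenizer, T5EncoderModel

tok = AutoTokenizer.from_pretrained("t5-small")        # placeholder backbone
encoder = T5EncoderModel.from_pretrained("t5-small")

query = "literature review chapter: neural text summarization"  # placeholder
cited_papers = ["Abstract of paper A ...", "Abstract of paper B ..."]

# Encode each (query, paper) pair separately, as in Fusion-in-Decoder.
encoded = []
for paper in cited_papers:
    inputs = tok(query + " context: " + paper,
                 return_tensors="pt", truncation=True, max_length=512)
    encoded.append(encoder(**inputs).last_hidden_state)

# Fuse by concatenating along the token axis; a decoder would cross-attend
# over this joint representation to generate the review text.
fused = torch.cat(encoded, dim=1)   # shape: (1, total_tokens, hidden_size)
```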

Differentiable Instruction Optimization for Cross-Task Generalization
Masaru Isonuma | Junichiro Mori | Ichiro Sakata
Findings of the Association for Computational Linguistics: ACL 2023

Instruction tuning has attracted much attention as a way to achieve generalization across a wide variety of tasks. Although various types of instructions have been manually created for instruction tuning, it is still unclear what kind of instruction is optimal for obtaining cross-task generalization ability. This work presents instruction optimization, which optimizes training instructions with respect to generalization ability. Rather than manually tuning instructions, we introduce learnable instructions and optimize them with gradient descent by leveraging bilevel optimization. Experimental results show that the learned instructions enhance the diversity of instructions and improve generalization compared to using only manually created instructions.
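
The following toy sketch illustrates the bilevel idea in the abstract (illustrative only; the linear model, losses, and hyperparameters are assumptions, not the paper's setup): an inner gradient step adapts model weights conditioned on a learnable instruction, and an outer step backpropagates a held-out loss through that inner update into the instruction itself:

```python
import torch

d, n = 8, 32
instruction = torch.randn(d, requires_grad=True)  # learnable instruction
w = torch.randn(d, requires_grad=True)            # toy model weights

x_train, y_train = torch.randn(n, d), torch.randn(n)
x_val, y_val = torch.randn(n, d), torch.randn(n)

inner_lr = 0.1
outer_opt = torch.optim.Adam([instruction], lr=0.05)

for step in range(100):
    # Inner step: one gradient update of the model on the training task,
    # conditioned on the instruction (here simply added to the inputs).
    train_loss = (((x_train + instruction) @ w - y_train) ** 2).mean()
    grad_w = torch.autograd.grad(train_loss, w, create_graph=True)[0]
    w_adapted = w - inner_lr * grad_w

    # Outer step: evaluate the adapted model on held-out data and
    # backpropagate through the inner update into the instruction.
    val_loss = (((x_val + instruction) @ w_adapted - y_val) ** 2).mean()
    outer_opt.zero_grad()
    val_loss.backward()
    outer_opt.step()
```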

2022

Lexical Entailment with Hierarchy Representations by Deep Metric Learning
Naomi Sato | Masaru Isonuma | Kimitaka Asatani | Shoya Ishizuka | Aori Shimizu | Ichiro Sakata
Findings of the Association for Computational Linguistics: EMNLP 2022

In this paper, we introduce a novel method for lexical entailment tasks, which detects a hyponym-hypernym relation between words. Existing lexical entailment studies lack generalization performance, as they cannot be applied to words that are not included in the training dataset. Moreover, existing work evaluates performance on datasets that contain words used for training. This study proposes a method that learns a mapping from word embeddings to hierarchical embeddings in order to predict the hypernymy relations of any input words. To validate generalization performance, we conduct experiments using a training dataset that does not overlap with the evaluation dataset. As a result, our method achieves state-of-the-art performance and shows robustness to unknown words.
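
A minimal sketch of the mapping idea (the architecture and a standard triplet loss stand in for the paper's actual metric-learning objective; all names are assumptions): a small network projects generic word embeddings into a hierarchy-aware space, trained so that hyponym-hypernym pairs lie closer together than random pairs, which lets the model score words never seen during training:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

d_word, d_hier = 300, 64
mapper = nn.Sequential(nn.Linear(d_word, 128), nn.ReLU(),
                       nn.Linear(128, d_hier))

def triplet_loss(hypo, hyper, negative, margin=1.0):
    """Pull (hyponym, hypernym) pairs together, push random words away."""
    pos = F.pairwise_distance(mapper(hypo), mapper(hyper))
    neg = F.pairwise_distance(mapper(hypo), mapper(negative))
    return F.relu(pos - neg + margin).mean()

# Toy batch of pretrained embeddings (e.g., word2vec or fastText vectors).
hypo, hyper, neg = (torch.randn(32, d_word) for _ in range(3))
opt = torch.optim.Adam(mapper.parameters(), lr=1e-3)
loss = triplet_loss(hypo, hyper, neg)
opt.zero_grad(); loss.backward(); opt.step()
```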

2021

Unsupervised Abstractive Opinion Summarization by Generating Sentences with Tree-Structured Topic Guidance
Masaru Isonuma | Junichiro Mori | Danushka Bollegala | Ichiro Sakata
Transactions of the Association for Computational Linguistics, Volume 9

This paper presents a novel unsupervised abstractive summarization method for opinionated texts. While basic variational autoencoder-based models assume a unimodal Gaussian prior for the latent code of sentences, we replace it with a recursive Gaussian mixture, where each mixture component corresponds to the latent code of a topic sentence and is mixed by a tree-structured topic distribution. By decoding each Gaussian component, we generate sentences with tree-structured topic guidance, where the root sentence conveys generic content and the leaf sentences describe specific topics. Experimental results demonstrate that the generated topic sentences are appropriate as a summary of opinionated texts: they are more informative and cover more of the input content than those generated by a recent unsupervised summarization model (Bražinskas et al., 2020). Furthermore, we demonstrate that the variance of the latent Gaussians represents the granularity of sentences, analogous to Gaussian word embeddings (Vilnis and McCallum, 2015).
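
As a worked illustration of the prior described above (a fixed two-level tree with hand-set weights and variances; the actual model infers the tree and its parameters), each node's Gaussian mean is derived recursively from its parent's, and a sentence latent is drawn from the mixture under tree-structured topic weights:

```python
import torch

d = 16
root_mu = torch.zeros(d)
# Child means perturb the parent mean (one plausible recursion).
child_mus = [root_mu + 0.5 * torch.randn(d) for _ in range(3)]
mus = [root_mu] + child_mus
sigmas = [1.0, 0.5, 0.5, 0.5]  # narrower leaves: more specific topics

pi = torch.tensor([0.4, 0.2, 0.2, 0.2])  # tree-structured topic weights

# Sample a latent code: pick a node by pi, then sample its Gaussian.
node = torch.multinomial(pi, 1).item()
z = mus[node] + sigmas[node] * torch.randn(d)
# Decoding z through a sentence decoder would yield a root (generic)
# or leaf (specific) topic sentence, as described in the abstract.
```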

2020

Tree-Structured Neural Topic Model
Masaru Isonuma | Junichiro Mori | Danushka Bollegala | Ichiro Sakata
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

This paper presents a tree-structured neural topic model, which has a topic distribution over a tree with an infinite number of branches. Our model parameterizes an unbounded ancestral and fraternal topic distribution by applying doubly recurrent neural networks. With the help of autoencoding variational Bayes, our model improves data scalability and achieves competitive performance when inducing latent topics and tree structures, compared to a prior tree-structured topic model (Blei et al., 2010). This work extends the tree-structured topic model so that it can be combined with neural models for downstream tasks.
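
The sketch below illustrates the doubly recurrent construction at the heart of the model (the GRU cells and the additive combination are assumptions for illustration): each topic's embedding is produced from an ancestral recurrence over its parent and a fraternal recurrence over its elder sibling, so new branches and depths can be added without fixing the tree size in advance:

```python
import torch
import torch.nn as nn

d = 32
ancestral = nn.GRUCell(d, d)   # parent-to-child recurrence
fraternal = nn.GRUCell(d, d)   # sibling-to-sibling recurrence

def topic_embedding(parent_h, sibling_h):
    """Combine ancestral and fraternal states into a topic embedding."""
    x = torch.zeros(1, d)              # dummy input for the cells
    h_anc = ancestral(x, parent_h)
    h_fra = fraternal(x, sibling_h)
    return h_anc + h_fra               # one plausible combination

root = torch.zeros(1, d)
# First child of the root: no elder sibling yet.
child1 = topic_embedding(parent_h=root, sibling_h=torch.zeros(1, d))
# Second child: conditioned on the root (ancestral) and child1 (fraternal).
child2 = topic_embedding(parent_h=root, sibling_h=child1)
```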

2019

Unsupervised Neural Single-Document Summarization of Reviews via Learning Latent Discourse Structure and its Ranking
Masaru Isonuma | Junichiro Mori | Ichiro Sakata
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

This paper focuses on the end-to-end abstractive summarization of a single product review without supervision. We assume that a review can be described as a discourse tree, in which the summary is the root and the child sentences explain their parent in detail. By recursively estimating a parent from its children, our model learns the latent discourse tree without an external parser and generates a concise summary. We also introduce an architecture that ranks the importance of each sentence on the tree to support summary generation that focuses on the main review point. The experimental results demonstrate that our model is competitive with or outperforms other unsupervised approaches. In particular, for relatively long reviews, it achieves performance competitive with or better than supervised models. The induced tree shows that the child sentences provide additional information about their parent, and the generated summary abstracts the entire review.
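
A rough sketch of the recursive parent estimation (all names are illustrative; the actual model couples this with learned sentence encoders and a decoder): a parent representation is predicted as the attention-weighted combination of its children, and the attention weights double as soft importance scores used for ranking:

```python
import torch
import torch.nn.functional as F

d, n_children = 64, 4
children = torch.randn(n_children, d)   # encoded child sentences
scorer = torch.nn.Linear(d, 1)          # illustrative importance scorer

weights = F.softmax(scorer(children).squeeze(-1), dim=0)  # importance
parent = weights @ children             # estimated parent representation

# Repeating this bottom-up yields the root representation, from which a
# summary sentence is decoded; the weights rank which sentences carry
# the main review point.
```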

2017

Extractive Summarization Using Multi-Task Learning with Document Classification
Masaru Isonuma | Toru Fujino | Junichiro Mori | Yutaka Matsuo | Ichiro Sakata
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing

The need for automatic document summarization that can be used in practical applications is increasing rapidly. In this paper, we propose a general framework for summarization that extracts sentences from a document using externally related information. Our work is aimed at single-document summarization using small amounts of reference summaries. In particular, we address document summarization in the framework of multi-task learning, using curriculum learning for sentence extraction and document classification. The proposed framework enables us to obtain better feature representations for extracting sentences from documents. We evaluate the proposed summarization method on two datasets: a financial report corpus and a news corpus. Experimental results demonstrate that our summarizers achieve performance comparable to state-of-the-art systems.
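
A minimal sketch of the multi-task setup (the architecture, losses, and labels are illustrative assumptions, not the paper's exact configuration): a shared sentence encoder feeds both a per-sentence extraction head and a document classification head, so the classification signal shapes the features used to score sentences:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

d, n_sents, n_classes = 128, 10, 5
shared_encoder = nn.GRU(d, d, batch_first=True)   # shared sentence encoder
extract_head = nn.Linear(d, 1)                    # per-sentence keep/drop score
classify_head = nn.Linear(d, n_classes)           # document-level label

sents = torch.randn(1, n_sents, d)        # one document of sentence vectors
hidden, _ = shared_encoder(sents)         # (1, n_sents, d)

extract_logits = extract_head(hidden).squeeze(-1)   # (1, n_sents)
doc_logits = classify_head(hidden.mean(dim=1))      # (1, n_classes)

# Toy targets; a curriculum (as in the abstract) could weight the
# classification loss early in training and shift toward extraction later.
gold_extract = torch.zeros(1, n_sents)
gold_class = torch.tensor([2])
loss = (F.binary_cross_entropy_with_logits(extract_logits, gold_extract)
        + F.cross_entropy(doc_logits, gold_class))
```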