Nobuyuki Shimizu


2023

A Challenging Multimodal Video Summary: Simultaneously Extracting and Generating Keyframe-Caption Pairs from Video
Keito Kudo | Haruki Nagasawa | Jun Suzuki | Nobuyuki Shimizu
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing

This paper proposes a practical multimodal video summarization task setting and a dataset for training and evaluating the task. The target task involves summarizing a given video into a predefined number of keyframe-caption pairs and displaying them in a listable format so that the video content can be grasped quickly. The task aims to extract crucial scenes from the video in the form of images (keyframes) and to generate captions explaining each keyframe’s situation. It is useful as a practical application and presents a highly challenging problem worthy of study. Specifically, simultaneously optimizing keyframe selection performance and caption quality requires careful consideration of the mutual dependence between each keyframe-caption pair and the pairs that precede and follow it. To facilitate subsequent research in this field, we also construct a dataset by expanding upon existing datasets and propose an evaluation framework. Furthermore, we develop two baseline systems and report their respective performance.
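
To make the task setting concrete, here is a minimal Python sketch of a keyframe-caption pipeline, not the paper’s baselines: it samples candidate frames, picks K of them naively, and captions each pick with an off-the-shelf image captioner. The model names and the uniform selection heuristic are illustrative assumptions; the paper’s point is precisely that selection and captioning should be optimized jointly.

import cv2  # pip install opencv-python
from PIL import Image
from transformers import VisionEncoderDecoderModel, ViTImageProcessor, AutoTokenizer

def sample_frames(video_path: str, every_n: int = 30) -> list[Image.Image]:
    """Decode the video and keep one frame every `every_n` frames."""
    cap, frames, i = cv2.VideoCapture(video_path), [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % every_n == 0:
            frames.append(Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)))
        i += 1
    cap.release()
    return frames

# An off-the-shelf image captioner stands in for the caption generator.
model = VisionEncoderDecoderModel.from_pretrained("nlpconnect/vit-gpt2-image-captioning")
processor = ViTImageProcessor.from_pretrained("nlpconnect/vit-gpt2-image-captioning")
tokenizer = AutoTokenizer.from_pretrained("nlpconnect/vit-gpt2-image-captioning")

def summarize(video_path: str, k: int = 5) -> list[tuple[Image.Image, str]]:
    frames = sample_frames(video_path)
    # Naive keyframe selection: spread K picks uniformly over the video.
    # This placeholder ignores the selection-captioning interaction that the
    # paper identifies as the core difficulty of the task.
    picks = [frames[round(j * (len(frames) - 1) / max(k - 1, 1))] for j in range(k)]
    pairs = []
    for img in picks:
        pixels = processor(images=img, return_tensors="pt").pixel_values
        ids = model.generate(pixels, max_new_tokens=32)
        pairs.append((img, tokenizer.decode(ids[0], skip_special_tokens=True)))
    return pairs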

2022

How do people talk about images? A study on open-domain conversations with images.
Yi-Pei Chen | Nobuyuki Shimizu | Takashi Miyazaki | Hideki Nakayama
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop

This paper explores how humans conduct conversations with images by investigating ImageChat, an open-domain image conversation dataset. We examined the conversations from the perspectives of image relevancy and image information. We found that utterances and conversations are not always related to the given image, and that conversation topics diverge from it within three turns about half of the time. Besides image objects, more comprehensive non-object image information is also indispensable. After inspecting the causes, we suggest that understanding the overall scenario of the image and connecting objects based on their high-level attributes can help generate more engaging open-domain conversations when an image is presented. Based on our analysis, we propose enriching the image information with an image caption and object tags. With the proposed image+ features, we improved automatic metrics including BLEU and BERTScore, and increased the diversity and image relevancy of the generated responses over a strong baseline. These results verify that our analysis provides valuable insights and could facilitate future research on open-domain conversations with images.
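
As a rough illustration of the proposed image+ features, the sketch below shows one plausible way to prepend an automatic caption and object tags to the dialogue history before feeding it to a text-only response generator; the field names and separator are assumptions, not the paper’s exact input format.

def build_image_plus_input(caption: str, object_tags: list[str],
                           history: list[str], sep: str = " </s> ") -> str:
    """Prefix the dialogue history with caption and tag features.

    The caption would come from an off-the-shelf captioner and the tags from
    an object detector; both stand in for the paper's image+ features.
    """
    context = f"caption: {caption} tags: {', '.join(object_tags)}"
    return sep.join([context] + history)

prompt = build_image_plus_input(
    caption="two people hiking on a mountain trail",
    object_tags=["person", "backpack", "mountain"],
    history=["Wow, what a view!", "Have you ever been hiking there?"],
)
# `prompt` is then fed to a standard seq2seq response generator.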

RNSum: A Large-Scale Dataset for Automatic Release Note Generation via Commit Logs Summarization
Hisashi Kamezawa | Noriki Nishida | Nobuyuki Shimizu | Takashi Miyazaki | Hideki Nakayama
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

A release note is a technical document that describes the latest changes to a software product and is crucial in open source software development. However, it remains challenging to generate release notes automatically. In this paper, we present a new dataset called RNSum, which contains approximately 82,000 English release notes and the associated commit messages derived from online repositories on GitHub. We then propose classwise extractive-then-abstractive and purely abstractive summarization approaches to this task, which can employ a modern transformer-based seq2seq network such as BART and can be applied to various repositories without repository-specific constraints. Experimental results on the RNSum dataset show that the proposed methods generate less noisy release notes at higher coverage than the baselines. We also observe a significant gap in the coverage of essential information compared to human references. Our dataset and code are publicly available.
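
For orientation, the following sketch shows the abstractive half of such an approach using a stock BART checkpoint from Hugging Face Transformers; the checkpoint name is an illustrative assumption (the paper fine-tunes on RNSum), and the classwise step that first groups commits by change type is omitted here.

from transformers import BartForConditionalGeneration, BartTokenizer

# A generic summarization checkpoint stands in for a model fine-tuned on RNSum.
tok = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

commits = [
    "fix: handle empty config files without crashing",
    "feat: add --verbose flag to the CLI",
    "refactor: split parser into tokenizer and grammar modules",
]
# Concatenate the commit messages and summarize them into release-note text.
inputs = tok(" ".join(commits), return_tensors="pt", truncation=True, max_length=1024)
ids = model.generate(**inputs, num_beams=4, max_length=128)
print(tok.decode(ids[0], skip_special_tokens=True))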

2020

A Visually-grounded First-person Dialogue Dataset with Verbal and Non-verbal Responses
Hisashi Kamezawa | Noriki Nishida | Nobuyuki Shimizu | Takashi Miyazaki | Hideki Nakayama
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)

In real-world dialogue, first-person visual information about where the other speakers are and what they are paying attention to is crucial for understanding their intentions. Non-verbal responses also play an important role in social interactions. In this paper, we propose a visually-grounded first-person dialogue (VFD) dataset with verbal and non-verbal responses. The VFD dataset provides manually annotated (1) first-person images of agents, (2) utterances of human speakers, (3) eye-gaze locations of the speakers, and (4) the agents’ verbal and non-verbal responses. We present experimental results obtained using the proposed VFD dataset and recent neural network models (e.g., BERT, ResNet). The results demonstrate that first-person vision helps neural network models correctly understand human intentions, and that producing non-verbal responses is as challenging a task as producing verbal ones. Our dataset is publicly available.
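
A minimal sketch of the kind of multimodal model these experiments suggest, assuming a simple late-fusion design: BERT encodes the utterance, ResNet encodes the first-person image, and a linear head predicts a non-verbal action. The fusion head and label space are assumptions, not the paper’s exact architecture.

import torch
import torch.nn as nn
from torchvision.models import resnet50
from transformers import BertModel

class VerbalNonverbalModel(nn.Module):
    def __init__(self, num_nonverbal_actions: int = 10):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        cnn = resnet50(weights="IMAGENET1K_V2")
        # Drop the ImageNet classifier; keep the 2048-dim pooled features.
        self.cnn = nn.Sequential(*list(cnn.children())[:-1])
        self.head = nn.Linear(768 + 2048, num_nonverbal_actions)

    def forward(self, input_ids, attention_mask, image):
        # image: normalized (batch, 3, 224, 224) first-person view tensor
        text = self.bert(input_ids=input_ids, attention_mask=attention_mask).pooler_output
        vision = self.cnn(image).flatten(1)
        # Concatenate both modalities and score the non-verbal actions.
        return self.head(torch.cat([text, vision], dim=-1))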

2018

Pretraining Sentiment Classifiers with Unlabeled Dialog Data
Toru Shimizu | Nobuyuki Shimizu | Hayato Kobayashi
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

The huge cost of creating labeled training data is a common problem for supervised learning tasks such as sentiment classification. Recent studies have shown that pretraining with unlabeled data via a language model can improve the performance of classification models. In this paper, we take the concept a step further by using a conditional language model instead of an unconditional one. Specifically, we address a sentiment classification task for a tweet analysis service as a case study and propose a pretraining strategy that uses unlabeled dialog data (tweet-reply pairs) via an encoder-decoder model. Experimental results show that our strategy improves the performance of sentiment classifiers and outperforms several state-of-the-art strategies, including language model pretraining.
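
A minimal sketch of the pretraining idea, under assumed architecture sizes rather than the paper’s exact setup: an encoder-decoder is first trained to generate the reply from the tweet (a conditional language model), after which the encoder is reused beneath a small sentiment classification head.

import torch.nn as nn

class Seq2SeqPretrainer(nn.Module):
    """Encoder reads a tweet; the decoder is trained to generate the reply."""
    def __init__(self, vocab: int, dim: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.encoder = nn.LSTM(dim, dim, batch_first=True)
        self.decoder = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab)

    def forward(self, tweet_ids, reply_ids):
        _, state = self.encoder(self.embed(tweet_ids))
        dec, _ = self.decoder(self.embed(reply_ids), state)
        return self.out(dec)  # next-token logits over the reply

class SentimentClassifier(nn.Module):
    """After pretraining, keep the encoder and add a classifier head."""
    def __init__(self, pretrained: Seq2SeqPretrainer, num_classes: int = 3):
        super().__init__()
        self.embed, self.encoder = pretrained.embed, pretrained.encoder
        self.head = nn.Linear(256, num_classes)  # matches the encoder dim

    def forward(self, tweet_ids):
        _, (h, _) = self.encoder(self.embed(tweet_ids))
        return self.head(h[-1])  # classify from the final encoder state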

Visual Question Answering Dataset for Bilingual Image Understanding: A Study of Cross-Lingual Transfer Using Attention Maps
Nobuyuki Shimizu | Na Rong | Takashi Miyazaki
Proceedings of the 27th International Conference on Computational Linguistics

Visual question answering (VQA) is a challenging task that requires a computer system to understand both a question and an image. While there is much research on VQA in English, datasets for other languages are lacking, and English annotations are not directly applicable to those languages. To address this, we created a Japanese VQA dataset through crowdsourced annotation of images from the Visual Genome dataset; it is the first such dataset in Japanese. As a further contribution, we propose a cross-lingual method that uses English annotations to improve a Japanese VQA system. The proposed method builds on a popular VQA method that uses an attention mechanism: attention maps generated from the English questions help improve the Japanese VQA task. In experiments, the proposed method performed better than simply using a monolingual corpus, demonstrating the effectiveness of attention maps for transferring cross-lingual information.
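
A minimal sketch of one way such attention-map transfer could be wired up, not the paper’s exact formulation: an auxiliary loss pulls the Japanese model’s attention over image regions toward the map an English-trained model produces for the same image.

import torch
import torch.nn.functional as F

def attention_transfer_loss(ja_attn: torch.Tensor,
                            en_attn: torch.Tensor) -> torch.Tensor:
    """KL divergence between two attention maps over image regions.

    ja_attn, en_attn: (batch, num_regions) distributions that each sum to 1.
    en_attn comes from a frozen English VQA model and acts as the teacher.
    """
    return F.kl_div(ja_attn.clamp_min(1e-8).log(), en_attn, reduction="batchmean")

# total_loss = vqa_answer_loss + lambda_attn * attention_transfer_loss(ja, en)
# where lambda_attn weights how strongly the English map guides the Japanese one.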

2016

Cross-Lingual Image Caption Generation
Takashi Miyazaki | Nobuyuki Shimizu
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

2010

Features for Detecting Hedge Cues
Nobuyuki Shimizu | Hiroshi Nakagawa
Proceedings of the Fourteenth Conference on Computational Natural Language Learning – Shared Task

2009

Deterministic Shift-Reduce Parsing for Unification-Based Grammars by Using Default Unification
Takashi Ninomiya | Takuya Matsuzaki | Nobuyuki Shimizu | Hiroshi Nakagawa
Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009)

2008

Modeling Chinese Documents with Topical Word-Character Models
Wei Hu | Nobuyuki Shimizu | Hiroshi Nakagawa | Huanye Sheng
Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008)

Metric Learning for Synonym Acquisition
Nobuyuki Shimizu | Masato Hagiwara | Yasuhiro Ogawa | Katsuhiko Toyama | Hiroshi Nakagawa
Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008)

2007

Structural Correspondence Learning for Dependency Parsing
Nobuyuki Shimizu | Hiroshi Nakagawa
Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)

2006

Exact Decoding for Jointly Labeling and Chunking Sequences
Nobuyuki Shimizu | Andrew Haas
Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions

Semantic Discourse Segmentation and Labeling for Route Instructions
Nobuyuki Shimizu
Proceedings of the COLING/ACL 2006 Student Research Workshop

Maximum Spanning Tree Algorithm for Non-projective Labeled Dependency Parsing
Nobuyuki Shimizu
Proceedings of the Tenth Conference on Computational Natural Language Learning (CoNLL-X)

2004

HITIQA: Scenario Based Question Answering
Sharon Small | Tomek Strzalkowski | Ting Liu | Sean Ryan | Robert Salkin | Nobuyuki Shimizu | Paul Kantor | Diane Kelly | Robert Rittman | Nina Wacholder | Boris Yamrom
Proceedings of the Workshop on Pragmatics of Question Answering at HLT-NAACL 2004

HITIQA: Towards Analytical Question Answering
Sharon Small | Tomek Strzalkowski | Ting Liu | Sean Ryan | Robert Salkin | Nobuyuki Shimizu | Paul Kantor | Diane Kelly | Robert Rittman | Nina Wacholder
COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics

2003

HITIQA: An Interactive Question Answering System: A Preliminary Report
Sharon Small | Ting Liu | Nobuyuki Shimizu | Tomek Strzalkowski
Proceedings of the ACL 2003 Workshop on Multilingual Summarization and Question Answering