Kotaro Funakoshi


2022

A-TIP: Attribute-aware Text Infilling via Pre-trained Language Model
Dongyuan Li | Jingyi You | Kotaro Funakoshi | Manabu Okumura
Proceedings of the 29th International Conference on Computational Linguistics

Text infilling aims to restore incomplete texts by filling in blanks, which has attracted more attention recently because of its wide application in ancient text restoration and text rewriting. However, attribute-aware text infilling is yet to be explored, and existing methods seldom focus on the infilling length of each blank or the number/location of blanks. In this paper, we propose an Attribute-aware Text Infilling method via a Pre-trained language model (A-TIP), which contains a text infilling component and a plug-and-play discriminator. Specifically, we first design a unified text infilling component with modified attention mechanisms and intra- and inter-blank positional encoding to better perceive the number of blanks and the infilling length for each blank. Then, we propose a plug-and-play discriminator to guide generation towards the direction of improving attribute relevance without decreasing text fluency. Finally, automatic and human evaluations on three open-source datasets indicate that A-TIP achieves state-of-the-art performance compared with all baselines.
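The plug-and-play guidance described above can be pictured as a reranking signal added to the language model's next-token scores. The sketch below is hypothetical and not the authors' implementation: `lm_log_probs` and `attribute_score` are stand-ins for the infilling model's next-token distribution and the discriminator's attribute score, and the two are simply mixed when choosing a token for a blank.

```python
import numpy as np

# Toy vocabulary and scores; the real A-TIP model operates on a pre-trained
# language model's full vocabulary.
VOCAB = ["great", "terrible", "movie", "plot", "boring"]

def lm_log_probs():
    """Stand-in for the infilling model's next-token log-probabilities."""
    return np.log(np.array([0.30, 0.25, 0.20, 0.15, 0.10]))

def attribute_score():
    """Stand-in for a discriminator scoring each candidate for, e.g., positive sentiment."""
    return np.array([2.0, -2.0, 0.3, 0.1, -1.5])

def guided_choice(weight=0.5):
    # Mix fluency (LM) and attribute relevance (discriminator); a small weight
    # keeps the LM dominant so fluency is not sacrificed.
    scores = lm_log_probs() + weight * attribute_score()
    return VOCAB[int(np.argmax(scores))]

print(guided_choice())  # -> "great" under these toy scores
```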

Generating Repetitions with Appropriate Repeated Words
Toshiki Kawamoto | Hidetaka Kamigaito | Kotaro Funakoshi | Manabu Okumura
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

A repetition is a response that repeats words from the previous speaker's utterance in a dialogue. Repetitions are essential in communication to build trust with others, as investigated in linguistic studies. In this work, we focus on repetition generation. To the best of our knowledge, this is the first neural approach to address repetition generation. We propose Weighted Label Smoothing, a smoothing method for explicitly learning which words to repeat during fine-tuning, and a repetition scoring method that can output more appropriate repetitions during decoding. We conducted automatic and human evaluations in which these methods were applied to the pre-trained language model T5 for generating repetitions. The experimental results indicate that our methods outperformed baselines in both evaluations.
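As an illustration of the idea (not the paper's exact formulation), the sketch below assumes that the label-smoothing mass is redistributed onto the token ids that appeared in the previous speaker's utterance, so the model is explicitly nudged toward repeatable words; `weighted_label_smoothing_loss` and its arguments are hypothetical names.

```python
import torch

def weighted_label_smoothing_loss(logits, target, repeat_ids, eps=0.1):
    """Cross-entropy with the smoothing mass eps placed on `repeat_ids`
    (token ids from the previous speaker's utterance) instead of being
    spread uniformly over the whole vocabulary.

    logits: (vocab_size,) unnormalized scores for one target position
    target: gold token id (int)
    repeat_ids: token ids that are candidates for repetition
    """
    vocab_size = logits.size(0)
    # Soft target distribution: 1 - eps on the gold token,
    # eps shared among the repeatable tokens.
    soft = torch.zeros(vocab_size)
    soft[target] = 1.0 - eps
    soft[torch.tensor(repeat_ids)] += eps / len(repeat_ids)
    log_probs = torch.log_softmax(logits, dim=-1)
    return -(soft * log_probs).sum()

# Toy usage: vocabulary of 6 tokens, gold token 2; tokens 1 and 4 appeared
# in the previous utterance and are therefore encouraged.
loss = weighted_label_smoothing_loss(torch.randn(6), target=2, repeat_ids=[1, 4])
print(loss.item())
```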

Joint Learning-based Heterogeneous Graph Attention Network for Timeline Summarization
Jingyi You | Dongyuan Li | Hidetaka Kamigaito | Kotaro Funakoshi | Manabu Okumura
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Previous studies on the timeline summarization (TLS) task ignored the information interaction between sentences and dates, and adopted pre-defined unlearnable representations for them. They also considered date selection and event detection as two independent tasks, which makes it impossible to integrate their advantages and obtain a globally optimal summary. In this paper, we present a joint learning-based heterogeneous graph attention network for TLS (HeterTls), in which date selection and event detection are combined into a unified framework to improve the extraction accuracy and remove redundant sentences simultaneously. Our heterogeneous graph involves multiple types of nodes, the representations of which are iteratively learned across the heterogeneous graph attention layer. We evaluated our model on four datasets, and found that it significantly outperformed the current state-of-the-art baselines with regard to ROUGE scores and date selection metrics.
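A single update step of a heterogeneous graph attention layer of this general kind can be sketched as follows. This is an illustrative toy, not HeterTls itself: type-specific projections for sentence and date nodes are followed by attention-weighted aggregation over neighbours, and all names and shapes are assumptions.

```python
import torch

def hetero_attention_step(feats, types, adj, proj):
    """feats: (N, d) node features; types: list of 'sent'/'date';
    adj: (N, N) 0/1 adjacency; proj: dict mapping node type -> (d, d) weight."""
    # Type-specific projection of every node.
    h = torch.stack([feats[i] @ proj[types[i]] for i in range(len(types))])
    scores = h @ h.t()                               # pairwise compatibility
    scores = scores.masked_fill(adj == 0, float("-inf"))
    alpha = torch.softmax(scores, dim=-1)            # attention over neighbours
    return alpha @ h                                 # updated representations

d = 4
feats = torch.randn(3, d)
types = ["sent", "sent", "date"]
adj = torch.tensor([[1, 1, 1], [1, 1, 1], [1, 1, 1]])
proj = {"sent": torch.randn(d, d), "date": torch.randn(d, d)}
print(hetero_attention_step(feats, types, adj, proj).shape)  # torch.Size([3, 4])
```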

2021

Generating Weather Comments from Meteorological Simulations
Soichiro Murakami | Sora Tanaka | Masatsugu Hangyo | Hidetaka Kamigaito | Kotaro Funakoshi | Hiroya Takamura | Manabu Okumura
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume

The task of generating weather-forecast comments from meteorological simulations has the following requirements: (i) the changes in numerical values for various physical quantities need to be considered, (ii) the weather comments should be dependent on delivery time and area information, and (iii) the comments should provide useful information for users. To meet these requirements, we propose a data-to-text model that incorporates three types of encoders for numerical forecast maps, observation data, and meta-data. We also introduce weather labels representing weather information, such as sunny and rain, for our model to explicitly describe useful information. We conducted automatic and human evaluations. The results indicate that our model outperformed the baselines in terms of informativeness. We make our code and data publicly available.
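The three-encoder design can be pictured with the minimal sketch below (an assumption-laden toy, not the authors' architecture): separate encoders for forecast maps, observation data, and meta-data are fused into one context vector that a text decoder would condition on.

```python
import torch
import torch.nn as nn

class WeatherContextEncoder(nn.Module):
    def __init__(self, map_dim, obs_dim, meta_dim, hidden=32):
        super().__init__()
        self.map_enc = nn.Linear(map_dim, hidden)    # numerical forecast maps
        self.obs_enc = nn.Linear(obs_dim, hidden)    # observation data
        self.meta_enc = nn.Linear(meta_dim, hidden)  # delivery time / area meta-data
        self.fuse = nn.Linear(3 * hidden, hidden)

    def forward(self, maps, obs, meta):
        parts = [torch.relu(self.map_enc(maps)),
                 torch.relu(self.obs_enc(obs)),
                 torch.relu(self.meta_enc(meta))]
        # Concatenate the three encoder outputs into one context vector.
        return self.fuse(torch.cat(parts, dim=-1))

enc = WeatherContextEncoder(map_dim=16, obs_dim=8, meta_dim=4)
ctx = enc(torch.randn(1, 16), torch.randn(1, 8), torch.randn(1, 4))
print(ctx.shape)  # torch.Size([1, 32])
```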

Towards Table-to-Text Generation with Numerical Reasoning
Lya Hulliyyatus Suadaa | Hidetaka Kamigaito | Kotaro Funakoshi | Manabu Okumura | Hiroya Takamura
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

Recent neural text generation models have shown significant improvement in generating descriptive text from structured data such as tables. One of the remaining important challenges is generating more analytical descriptions that can be inferred from facts in a data source. Template-based generators and pointer-generators are among the potential approaches to table-to-text generation. In this paper, we propose a framework consisting of a pre-trained model and a copy mechanism. The pre-trained models are fine-tuned to produce fluent text that is enriched with numerical reasoning. However, the fine-tuned models still lack fidelity to the table contents. The copy mechanism is incorporated in the fine-tuning step by using general placeholders to avoid producing hallucinated phrases that are not supported by the table while preserving high fluency. In summary, our contributions are (1) a new dataset for numerical table-to-text generation, built from pairs of a table and a paragraph of its description with richer inference from scientific papers, and (2) a table-to-text generation framework enriched with numerical reasoning.
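The placeholder-based copying can be illustrated as below; this is a hypothetical sketch rather than the paper's implementation. Table values in the training target are replaced by indexed placeholders, and generated placeholders are filled back in from the table, which prevents hallucinated values while leaving inferred numbers to the model.

```python
import re

def to_placeholders(text, table_values):
    """Replace each table value occurring in `text` with an indexed placeholder."""
    for i, v in enumerate(table_values):
        text = text.replace(str(v), f"<VAL{i}>")
    return text

def fill_placeholders(text, table_values):
    """Copy the original table values back into the generated text."""
    return re.sub(r"<VAL(\d+)>", lambda m: str(table_values[int(m.group(1))]), text)

table = [83.2, 79.5]
target = "Our model reaches 83.2 points, 3.7 above the 79.5 baseline."
masked = to_placeholders(target, table)   # used as the fine-tuning target
print(masked)                             # "... <VAL0> points, 3.7 above the <VAL1> baseline."
print(fill_placeholders(masked, table))   # restores the original table values
```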

2018

A POS Tagging Model Adapted to Learner English
Ryo Nagata | Tomoya Mizumoto | Yuta Kikuchi | Yoshifumi Kawasaki | Kotaro Funakoshi
Proceedings of the 2018 EMNLP Workshop W-NUT: The 4th Workshop on Noisy User-generated Text

There has been very limited work on the adaptation of Part-Of-Speech (POS) tagging to learner English, despite the fact that POS tagging is widely used in related tasks. In this paper, we explore how we can adapt POS tagging to learner English efficiently and effectively. Based on a discussion of possible causes of POS tagging errors in learner English, we show that deep neural models are particularly suitable for this task. Considering the previous findings and the discussion, we introduce the design of our model based on bidirectional Long Short-Term Memory (LSTM). In addition, we describe how to adapt it to a wide variety of native languages (potentially, hundreds of them). In the evaluation section, we empirically show that our model is effective for POS tagging in learner English, achieving an accuracy of 0.964, which significantly outperforms the state-of-the-art POS tagger. We further investigate the tagging results in detail, revealing which parts of the model design do or do not improve the performance.
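A bidirectional-LSTM tagger of the general kind described can be sketched as follows; this is illustrative only, and the class name, hyperparameters, and tag set size are assumptions rather than the paper's model.

```python
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size, num_tags, emb_dim=64, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, num_tags)   # project BiLSTM states to tags

    def forward(self, token_ids):                    # token_ids: (batch, seq_len)
        states, _ = self.lstm(self.emb(token_ids))
        return self.out(states)                      # (batch, seq_len, num_tags)

tagger = BiLSTMTagger(vocab_size=1000, num_tags=17)
scores = tagger(torch.randint(0, 1000, (1, 6)))
print(scores.argmax(dim=-1))                         # predicted tag ids per token
```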

2016

The dialogue breakdown detection challenge: Task description, datasets, and evaluation metrics
Ryuichiro Higashinaka | Kotaro Funakoshi | Yuka Kobayashi | Michimasa Inaba
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)

Dialogue breakdown detection is a promising technique in dialogue systems. To promote the research and development of such a technique, we organized a dialogue breakdown detection challenge where the task is to detect a system’s inappropriate utterances that lead to dialogue breakdowns in chat. This paper describes the design, datasets, and evaluation metrics for the challenge as well as the methods and results of the submitted runs of the participants.

Nonparametric Bayesian Models for Spoken Language Understanding
Kei Wakabayashi | Johane Takeuchi | Kotaro Funakoshi | Mikio Nakano
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing

2015

Fatal or not? Finding errors that lead to dialogue breakdowns in chat-oriented dialogue systems
Ryuichiro Higashinaka | Masahiro Mizukami | Kotaro Funakoshi | Masahiro Araki | Hiroshi Tsukahara | Yuka Kobayashi
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing

Towards Taxonomy of Errors in Chat-oriented Dialogue Systems
Ryuichiro Higashinaka | Kotaro Funakoshi | Masahiro Araki | Hiroshi Tsukahara | Yuka Kobayashi | Masahiro Mizukami
Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue

2013

A Robotic Agent in a Virtual Environment that Performs Situated Incremental Understanding of Navigational Utterances
Takashi Yamauchi | Mikio Nakano | Kotaro Funakoshi
Proceedings of the SIGDIAL 2013 Conference

2012

A Unified Probabilistic Approach to Referring Expressions
Kotaro Funakoshi | Mikio Nakano | Takenobu Tokunaga | Ryu Iida
Proceedings of the 13th Annual Meeting of the Special Interest Group on Discourse and Dialogue

2011

A Two-Stage Domain Selection Framework for Extensible Multi-Domain Spoken Dialogue Systems
Mikio Nakano | Shun Sato | Kazunori Komatani | Kyoko Matsuyama | Kotaro Funakoshi | Hiroshi G. Okuno
Proceedings of the SIGDIAL 2011 Conference

2010

Non-humanlike Spoken Dialogue: A Design Perspective
Kotaro Funakoshi | Mikio Nakano | Kazuki Kobayashi | Takanori Komatsu | Seiji Yamada
Proceedings of the SIGDIAL 2010 Conference

Automatic Allocation of Training Data for Rapid Prototyping of Speech Understanding based on Multiple Model Combination
Kazunori Komatani | Masaki Katsumaru | Mikio Nakano | Kotaro Funakoshi | Tetsuya Ogata | Hiroshi G. Okuno
Coling 2010: Posters

2009

A Probabilistic Model of Referring Expressions for Complex Objects
Kotaro Funakoshi | Philipp Spanger | Mikio Nakano | Takenobu Tokunaga
Proceedings of the 12th European Workshop on Natural Language Generation (ENLG 2009)

A Speech Understanding Framework that Uses Multiple Language Models and Multiple Understanding Models
Masaki Katsumaru | Mikio Nakano | Kazunori Komatani | Kotaro Funakoshi | Tetsuya Ogata | Hiroshi G. Okuno
Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Short Papers

2008

A Framework for Building Conversational Agents Based on a Multi-Expert Model
Mikio Nakano | Kotaro Funakoshi | Yuji Hasegawa | Hiroshi Tsujino
Proceedings of the 9th SIGdial Workshop on Discourse and Dialogue

Rapid Prototyping of Robust Language Understanding Modules for Spoken Dialogue Systems
Yuichiro Fukubayashi | Kazunori Komatani | Mikio Nakano | Kotaro Funakoshi | Hiroshi Tsujino | Tetsuya Ogata | Hiroshi G. Okuno
Proceedings of the Third International Joint Conference on Natural Language Processing: Volume-I

2007

Analysis of User Reactions to Turn-Taking Failures in Spoken Dialogue Systems
Mikio Nakano | Yuka Nagano | Kotaro Funakoshi | Toshihiko Ito | Kenji Araki | Yuji Hasegawa | Hiroshi Tsujino
Proceedings of the 8th SIGdial Workshop on Discourse and Dialogue

2006

Identifying Repair Targets in Action Control Dialogue
Kotaro Funakoshi | Takenobu Tokunaga
11th Conference of the European Chapter of the Association for Computational Linguistics

Group-Based Generation of Referring Expressions
Kotaro Funakoshi | Satoru Watanabe | Takenobu Tokunaga
Proceedings of the Fourth International Natural Language Generation Conference

2005

Controlling Animated Agents in Natural Language
Kotaro Funakoshi | Takenobu Tokunaga
Companion Volume to the Proceedings of the Conference including Posters/Demos and Tutorial Abstracts

2004

Generation of Relative Referring Expressions based on Perceptual Grouping
Kotaro Funakoshi | Satoru Watanabe | Naoko Kuriyama | Takenobu Tokunaga
COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics

2002

Processing Japanese Self-correction in Speech Dialog Systems
Kotaro Funakoshi | Takenobu Tokunaga | Hozumi Tanaka
COLING 2002: The 19th International Conference on Computational Linguistics