2024
A Multimodal Dialogue System to Lead Consensus Building with Emotion-Displaying
Shinnosuke Nozue | Yuto Nakano | Shoji Moriya | Tomoki Ariyama | Kazuma Kokuta | Suchun Xie | Kai Sato | Shusaku Sone | Ryohei Kamei | Reina Akama | Yuichiroh Matsubayashi | Keisuke Sakaguchi
Proceedings of the 25th Annual Meeting of the Special Interest Group on Discourse and Dialogue
The evolution of large language models has enabled fluent dialogue, increasing interest in the coexistence of humans and avatars. An essential aspect of achieving this coexistence is developing sophisticated dialogue systems that can influence user behavior. Against this background, we propose an effective multimodal dialogue system designed to promote consensus building with humans. Our system employs a slot-filling strategy to guide discussions and attempts to influence users with suggestions through emotional expression and intent conveyance via its avatar. These innovations resulted in our system achieving the highest performance in a competition evaluating consensus building between humans and dialogue systems. We hope that our research will promote further discussion on the development of dialogue systems that enhance consensus building in human collaboration.
2023
TohokuNLP at SemEval-2023 Task 5: Clickbait Spoiling via Simple Seq2Seq Generation and Ensembling
Hiroto Kurita | Ikumi Ito | Hiroaki Funayama | Shota Sasaki | Shoji Moriya | Ye Mengyu | Kazuma Kokuta | Ryujin Hatakeyama | Shusaku Sone | Kentaro Inui
Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)
This paper describes our system submitted to SemEval-2023 Task 5: Clickbait Spoiling. We work on spoiler generation for subtask 2 and develop a system comprising two parts: 1) simple seq2seq spoiler generation and 2) post-hoc model ensembling. Using this simple method, we address the challenge of generating multipart spoilers. On the test set, our submitted system outperformed the baseline by a large margin (approximately 10 BLEU points) on mixed types of spoilers. We also found that our system successfully handled the challenge of multipart spoilers, confirming the effectiveness of our approach.
Can LMs Store and Retrieve 1-to-N Relational Knowledge?
Haruki Nagasawa | Benjamin Heinzerling | Kazuma Kokuta | Kentaro Inui
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)
It has been suggested that pretrained language models can be viewed as knowledge bases. One of the prerequisites for using language models as knowledge bases is how accurately they can store and retrieve world knowledge. It has already been shown that language models can store a large amount of 1-to-1 relational knowledge, such as “country and its capital,” with high memorization accuracy. However, world knowledge includes not only 1-to-1 but also 1-to-N relational knowledge, such as “parent and children,” and it is not clear how accurately language models can handle the latter. To investigate language models’ abilities with 1-to-N relational knowledge, we start by designing the problem settings. Specifically, we characterize 1-to-N relational knowledge and define two essential skills: (i) memorizing multiple objects individually and (ii) retrieving all stored objects at once, without excesses or deficiencies. We inspect LMs’ ability to handle 1-to-N relational knowledge on controlled synthetic data. As a result, we report that it is possible to memorize multiple objects with high accuracy, but that generalizing the retrieval ability (specifically, enumeration) is challenging.