Kazuma Kokuta
2023
TohokuNLP at SemEval-2023 Task 5: Clickbait Spoiling via Simple Seq2Seq Generation and Ensembling
Hiroto Kurita | Ikumi Ito | Hiroaki Funayama | Shota Sasaki | Shoji Moriya | Ye Mengyu | Kazuma Kokuta | Ryujin Hatakeyama | Shusaku Sone | Kentaro Inui
Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)
This paper describes our system submitted to SemEval-2023 Task 5: Clickbait Spoiling. We work on spoiler generation (subtask 2) and develop a system that comprises two parts: 1) simple seq2seq spoiler generation and 2) post-hoc model ensembling. Using this simple method, we address the challenge of generating multipart spoilers. On the test set, our submitted system outperformed the baseline by a large margin (approximately 10 BLEU points) on mixed types of spoilers. We also found that our system successfully handled the challenge of multipart spoilers, confirming the effectiveness of our approach.
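As a rough illustration of the two-part pipeline the abstract describes, the sketch below generates spoiler candidates with several fine-tuned seq2seq checkpoints and then ensembles them post hoc. The checkpoint paths and the re-scoring rule (average per-token log-likelihood under all models) are illustrative assumptions for this sketch, not the authors' exact setup.

```python
# Hypothetical sketch of seq2seq spoiler generation plus post-hoc ensembling.
# CHECKPOINTS and the scoring rule are assumptions, not the paper's method.
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

CHECKPOINTS = ["ckpt-seed-0", "ckpt-seed-1", "ckpt-seed-2"]  # hypothetical paths


def generate_candidates(source: str) -> list[str]:
    # One beam-searched spoiler candidate per fine-tuned checkpoint.
    candidates = []
    for path in CHECKPOINTS:
        tokenizer = AutoTokenizer.from_pretrained(path)
        model = AutoModelForSeq2SeqLM.from_pretrained(path)
        inputs = tokenizer(source, return_tensors="pt", truncation=True)
        output_ids = model.generate(**inputs, max_new_tokens=64, num_beams=4)
        candidates.append(tokenizer.decode(output_ids[0], skip_special_tokens=True))
    return candidates


def ensemble_score(source: str, candidate: str) -> float:
    # Average per-token log-likelihood of the candidate under every model.
    scores = []
    for path in CHECKPOINTS:
        tokenizer = AutoTokenizer.from_pretrained(path)
        model = AutoModelForSeq2SeqLM.from_pretrained(path)
        enc = tokenizer(source, return_tensors="pt", truncation=True)
        labels = tokenizer(text_target=candidate, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(**enc, labels=labels).loss  # mean token-level NLL
        scores.append(-loss.item())
    return sum(scores) / len(scores)


def spoil(source: str) -> str:
    # Post-hoc ensembling: keep the candidate all models agree is most likely.
    candidates = generate_candidates(source)
    return max(candidates, key=lambda c: ensemble_score(source, c))
```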
Can LMs Store and Retrieve 1-to-N Relational Knowledge?
Haruki Nagasawa | Benjamin Heinzerling | Kazuma Kokuta | Kentaro Inui
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)
It has been suggested that pretrained language models can be viewed as knowledge bases. One prerequisite for using language models as knowledge bases is how accurately they can store and retrieve world knowledge. It has already been shown that language models can store much 1-to-1 relational knowledge, such as "country and its capital," with high memorization accuracy. On the other hand, world knowledge includes not only 1-to-1 but also 1-to-N relational knowledge, such as "parent and children." However, it is not clear how accurately language models can handle 1-to-N relational knowledge. To investigate language models' abilities with respect to 1-to-N relational knowledge, we start by designing the problem settings. Specifically, we characterize 1-to-N relational knowledge and define two essential skills: (i) memorizing multiple objects individually and (ii) retrieving all stored objects at once, without excesses or deficiencies. We inspect LMs' ability to handle 1-to-N relational knowledge on controlled, synthesized data. As a result, we report that memorizing multiple objects with high accuracy is possible, but generalizing the retrieval ability (specifically, enumeration) is challenging.
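A minimal sketch of how the two skills defined in the abstract could be scored on synthetic 1-to-N data, assuming a simple parent-children fact format and a generic `query_model` callable; both the data format and the query interface are hypothetical stand-ins, not the paper's actual probing setup.

```python
# Hypothetical sketch of the two evaluation skills on synthetic 1-to-N data:
# (i) per-object memorization and (ii) exact-set enumeration, i.e. retrieving
# all stored objects with no extras or omissions. The fact templates and the
# query_model stub are illustrative assumptions, not the paper's setup.


def make_synthetic_kb(n_subjects: int = 100, n_objects: int = 3) -> dict[str, set[str]]:
    # Each synthetic "parent" maps to a fixed set of synthetic "children".
    return {
        f"subj_{i}": {f"obj_{i}_{j}" for j in range(n_objects)}
        for i in range(n_subjects)
    }


def memorization_accuracy(kb, query_model) -> float:
    # Skill (i): probe each (subject, object) pair individually.
    hits = total = 0
    for subj, objs in kb.items():
        for obj in objs:
            hits += query_model(f"Is {obj} a child of {subj}?") == "yes"
            total += 1
    return hits / total


def enumeration_accuracy(kb, query_model) -> float:
    # Skill (ii): credit only if the returned set matches exactly,
    # i.e. no excesses and no deficiencies.
    correct = 0
    for subj, objs in kb.items():
        predicted = set(query_model(f"List all children of {subj}.").split(", "))
        correct += predicted == objs
    return correct / len(kb)
```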
Co-authors
- Kentaro Inui 2
- Hiroto Kurita 1
- Ikumi Ito 1
- Hiroaki Funayama 1
- Shota Sasaki 1
- Shoji Moriya 1
- Ye Mengyu 1
- Ryujin Hatakeyama 1
- Shusaku Sone 1
- Benjamin Heinzerling 1
- Haruki Nagasawa 1