Yige Chen
2022
Contrastive Learning enhanced Author-Style Headline Generation
Hui Liu | Weidong Guo | Yige Chen | Xiangyang Li
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Headline generation is the task of generating an appropriate headline for a given article, which can be further used for machine-aided writing or for improving the click-through rate. Existing work uses only the article itself during generation and does not take the writing style of headlines into consideration. In this paper, we propose a novel Seq2Seq model called CLH3G (Contrastive Learning enhanced Historical Headlines based Headline Generation), which uses the historical headlines of articles that an author wrote in the past to improve headline generation for the author's current articles. By taking historical headlines into account, we can integrate the author's stylistic features into our model and generate a headline that is not only appropriate for the article but also consistent with the author's style. To learn the author's stylistic features efficiently, we further introduce a contrastive learning based auxiliary task for the encoder of our model. In addition, we propose two methods that use the learned stylistic features to guide both the pointer and the decoder during generation. Experimental results show that historical headlines of the same author improve headline generation significantly, and that both the contrastive learning module and the two style-feature fusion methods further boost performance.
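The abstract does not include an implementation, but as a minimal sketch of what a contrastive learning auxiliary task over author-style representations might look like, the snippet below computes an InfoNCE-style loss in PyTorch. The function name, tensor shapes, and temperature value are illustrative assumptions, not details from the paper.

```python
# Minimal sketch (assumption, not the paper's code): an InfoNCE-style
# contrastive loss over author-style embeddings. Row i of `style_a` and
# `style_b` encode two disjoint sets of headlines by the same author
# (a positive pair); all other rows in the batch serve as negatives.
import torch
import torch.nn.functional as F

def style_contrastive_loss(style_a: torch.Tensor,
                           style_b: torch.Tensor,
                           temperature: float = 0.1) -> torch.Tensor:
    a = F.normalize(style_a, dim=-1)          # (batch, dim), unit-norm
    b = F.normalize(style_b, dim=-1)
    logits = a @ b.t() / temperature          # (batch, batch) cosine similarities
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)   # diagonal entries are the positives
```

In a setup like the paper's, such a loss would presumably be added to the generation objective as an auxiliary term; the weighting between the two terms is another detail the abstract leaves unspecified.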
Yet Another Format of Universal Dependencies for Korean
Yige Chen | Eunkyul Leah Jo | Yundong Yao | KyungTae Lim | Miikka Silfverberg | Francis M. Tyers | Jungyeul Park
Proceedings of the 29th International Conference on Computational Linguistics
In this study, we propose a morpheme-based scheme for Korean dependency parsing and apply the proposed scheme to Universal Dependencies. We present the linguistic rationale that motivates the adoption of the morpheme-based format, and we develop scripts that automatically convert between the original format used by Universal Dependencies and the proposed morpheme-based format. The effectiveness of the proposed format for Korean dependency parsing is then verified with both statistical and neural models, including UDPipe and Stanza, together with our carefully constructed morpheme-based word embeddings for Korean. morphUD yields better parsing results on all Korean UD treebanks, and we also present a detailed error analysis.
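The released conversion scripts are not reproduced here, but the following hypothetical sketch illustrates the core idea of a word-to-morpheme conversion: each word-level (eojeol) dependency row is expanded into one row per morpheme, with heads remapped accordingly. The `analyze` callable, the tuple layout, and the convention that the final morpheme carries the word's syntactic role are all illustrative assumptions, not the authors' scheme.

```python
# Hypothetical sketch (not the authors' released scripts): expand
# word-level dependency rows into morpheme-level rows. Convention
# assumed here, one of several possible: the final morpheme of each
# word inherits the word's head and relation, and the remaining
# morphemes attach to that final morpheme with a generic "dep" label.
from typing import Callable, List, Tuple

Morph = Tuple[str, str]     # (morpheme surface form, POS tag)
Row = Tuple[str, int, str]  # (word form, head word id, deprel); head 0 = root

def expand_to_morphemes(rows: List[Row],
                        analyze: Callable[[str], List[Morph]]):
    # First pass: record the morpheme id of each word's final morpheme.
    head_morph_of, next_id = {}, 0
    for wid, (form, _, _) in enumerate(rows, start=1):
        next_id += len(analyze(form))
        head_morph_of[wid] = next_id
    # Second pass: emit morpheme rows with heads remapped to morpheme ids.
    out, mid = [], 0
    for wid, (form, head, deprel) in enumerate(rows, start=1):
        morphs = analyze(form)
        for i, (surface, pos) in enumerate(morphs):
            mid += 1
            if i == len(morphs) - 1:   # word-final morpheme keeps the relation
                out.append((mid, surface, pos, head_morph_of.get(head, 0), deprel))
            else:                      # word-internal morpheme
                out.append((mid, surface, pos, head_morph_of[wid], "dep"))
    return out

# Toy example: "나는 먹었다" ("I ate") with a dictionary-based analyzer.
toy = {"나는": [("나", "NP"), ("는", "JX")],
       "먹었다": [("먹", "VV"), ("었", "EP"), ("다", "EF")]}
rows = [("나는", 2, "nsubj"), ("먹었다", 0, "root")]
for row in expand_to_morphemes(rows, toy.get):
    print(row)   # e.g. (2, '는', 'JX', 5, 'nsubj') ... (5, '다', 'EF', 0, 'root')
```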