Zijian Huang
2025
A Survey of Pun Generation: Datasets, Evaluations and Methodologies
Yuchen Su | Yonghua Zhu | Ruofan Wang | Zijian Huang | Diana Benavides-Prado | Michael J. Witbrock
Findings of the Association for Computational Linguistics: EMNLP 2025
Pun generation seeks to creatively modify linguistic elements in text to produce humour or evoke double meanings, while preserving coherence and contextual appropriateness, making it useful in creative writing and entertainment across various media and contexts. Although this field has been widely studied in computational linguistics, there are currently no surveys that specifically focus on pun generation. To bridge this gap, this paper provides a comprehensive review of pun generation datasets and methods across different stages, including traditional approaches, deep learning techniques, and pre-trained language models. Additionally, we summarise both automated and human evaluation metrics used to assess the quality of generated puns. Finally, we discuss the research challenges and propose promising directions for future work.
2024
SKGSum: Structured Knowledge-Guided Document Summarization
Qiqi Wang | Ruofan Wang | Kaiqi Zhao | Robert Amor | Benjamin Liu | Jiamou Liu | Xianda Zheng | Zijian Huang
Findings of the Association for Computational Linguistics: ACL 2024
According to the Genre Theory of linguistics, a summary structure is inherent to certain types of texts. Such structures help readers efficiently locate information within summaries. However, most existing automatic summarization methods overlook summary structure, producing summaries that emphasize the most prominent information while omitting essential details from other sections. The few summarizers that do account for summary structure rely heavily on predefined structure labels in the source document and ground-truth summaries. To address these shortcomings, we developed Structured Knowledge-Guided Summarization (SKGSum) and its variant, SKGSum-W, which require no structure labels; instead, they rely on a set of automatically extracted summary points to generate summaries. We evaluate the proposed methods on three real-world datasets. The results indicate that our methods not only improve summary quality, in terms of ROUGE and BERTScore, but also broaden the types of documents that can be effectively summarized.
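To make the point-guided idea concrete, here is a minimal sketch of conditioning a generic summarizer on automatically extracted summary points; the toy point extractor, the separator format, and the BART model are illustrative assumptions, not SKGSum's actual pipeline:

```python
# A hedged sketch of guiding a summarizer with automatically extracted
# "summary points". The extraction heuristic, separator format, and model
# choice are illustrative assumptions, not SKGSum's actual pipeline.
from transformers import pipeline

def extract_points(document: str, k: int = 3) -> list[str]:
    """Toy point extractor: first sentence of each of the first k paragraphs."""
    paragraphs = [p.strip() for p in document.split("\n\n") if p.strip()]
    return [p.split(". ")[0] for p in paragraphs[:k]]

def guided_summarize(document: str) -> str:
    """Prepend the extracted points so the generator attends to each section."""
    points = extract_points(document)
    guided_input = " | ".join(points) + " </s> " + document
    summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
    out = summarizer(guided_input, max_length=80, min_length=20)
    return out[0]["summary_text"]
```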
2023
[MASK] Insertion: a robust method for anti-adversarial attacks
Xinrong Hu | Ce Xu | Junlong Ma | Zijian Huang | Jie Yang | Yi Guo | Johan Barthelemy
Findings of the Association for Computational Linguistics: EACL 2023
Adversarial attacks perturb input sequences to mislead a trained model into false predictions. To enhance model robustness, defense methods are accordingly employed, based either on data augmentation (involving adversarial samples) or on model enhancement (modifying the training loss and/or model architecture). In contrast to previous work, this paper revisits masked language modeling (MLM) and presents a simple yet efficient algorithm against adversarial attacks, termed [MASK] insertion for defense (MI4D). Specifically, MI4D simply inserts [MASK] tokens into input sequences during training and inference, maximizing the intersection of the new convex hull (which MI4D creates) with the original one (which the clean input forms). As neither additional adversarial samples nor model modification is required, MI4D is as computationally efficient as traditional fine-tuning. Comprehensive experiments were conducted using three benchmark datasets and four attack methods. MI4D yields an average improvement in accuracy of between 3.2 and 11.1 absolute points compared with six state-of-the-art defense baselines.
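The insertion step itself is simple enough to sketch. Below is a minimal illustration of the [MASK]-insertion idea, assuming a BERT-style tokenizer from Hugging Face transformers; the insertion rate and random positions are illustrative assumptions, not the paper's exact settings:

```python
# A minimal sketch of the [MASK]-insertion idea (not the authors' released
# code). The 10% insertion rate and random positions are illustrative.
import random

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def insert_masks(text: str, rate: float = 0.1, seed: int = 0) -> str:
    """Randomly insert [MASK] tokens between tokens of the input sequence."""
    rng = random.Random(seed)
    tokens = tokenizer.tokenize(text)
    out = []
    for tok in tokens:
        out.append(tok)
        if rng.random() < rate:
            out.append(tokenizer.mask_token)  # "[MASK]" for BERT tokenizers
    return tokenizer.convert_tokens_to_string(out)

# Apply the same transformation at training and inference time, then
# fine-tune the classifier as usual on the masked inputs.
print(insert_masks("adversarial attacks perturb input sequences"))
```

Because the change is purely on the input side, no adversarial samples or architecture modifications are needed, which is consistent with the abstract's claim that MI4D matches the cost of ordinary fine-tuning.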