2023
Causal Intervention for Abstractive Related Work Generation
Jiachang Liu | Qi Zhang | Chongyang Shi | Usman Naseem | Shoujin Wang | Liang Hu | Ivor Tsang
Findings of the Association for Computational Linguistics: EMNLP 2023
Abstractive related work generation has attracted increasing attention in generating coherent related work that helps readers grasp the current research. However, most existing models ignore the inherent causality during related work generation, leading to spurious correlations which degrade the models’ generation quality and generalizability. In this study, we argue that causal intervention can address such limitations and improve the quality and coherence of generated related work. To this end, we propose a novel Causal Intervention Module for Related Work Generation (CaM) to effectively capture causalities in the generation process. Specifically, we first model the relations among the sentence order, document (reference) correlations, and transitional content in related work generation using a causal graph. Then, to implement causal interventions and mitigate the negative impact of spurious correlations, we use do-calculus to derive ordinary conditional probabilities and identify causal effects through CaM. Finally, we subtly fuse CaM with Transformer to obtain an end-to-end related work generation framework. Extensive experiments on two real-world datasets show that CaM can effectively promote the model to learn causal relations and thus produce related work of higher quality and coherence.
Multiview Clickbait Detection via Jointly Modeling Subjective and Objective Preference
Chongyang Shi | Yijun Yin | Qi Zhang | Liang Xiao | Usman Naseem | Shoujin Wang | Liang Hu
Findings of the Association for Computational Linguistics: EMNLP 2023
Clickbait posts tend to spread inaccurate or misleading information to manipulate people’s attention and emotions, which greatly harms the credibility of social media. Existing clickbait detection models rely on analyzing the objective semantics in posts or correlating posts with article content only. However, these models fail to identify and exploit the manipulation intention of clickbait from a user’s subjective perspective, leading to limited capability to explore comprehensive clues of clickbait. To address such an issue, we propose a multiview clickbait detection model, named MCDM, to model subjective and objective preferences simultaneously. MCDM introduces two novel complementary modules for modeling subjective feeling and objective content relevance, respectively. The subjective feeling module adopts a user-centric approach to capture subjective features of posts, such as language patterns and emotional inclinations. The objective module explores news elements from posts and models article content correlations to capture objective clues for clickbait detection. Extensive experimental results on two real-world datasets show that our proposed MCDM outperforms state-of-the-art approaches for clickbait detection, verifying the effectiveness of integrating subjective and objective preferences for detecting clickbait.
Reducing Knowledge Noise for Improved Semantic Analysis in Biomedical Natural Language Processing Applications
Usman Naseem | Surendrabikram Thapa | Qi Zhang | Liang Hu | Anum Masood | Mehwish Nasim
Proceedings of the 5th Clinical Natural Language Processing Workshop
Graph-based techniques have gained traction for representing and analyzing data in various natural language processing (NLP) tasks. Knowledge graph-based language representation models have shown promising results in leveraging domain-specific knowledge for NLP tasks, particularly in the biomedical NLP field. However, such models have limitations, including knowledge noise and neglect of contextual relationships, leading to potential semantic errors and reduced accuracy. To address these issues, this paper proposes two novel methods. The first method combines a knowledge graph-based language model with nearest-neighbor models to incorporate semantic and category information from neighboring instances. The second method integrates a knowledge graph-based language model with graph neural networks (GNNs) to leverage feature information from neighboring nodes in the graph. Experiments on relation extraction (RE) and classification tasks on English and Chinese datasets demonstrate significant performance improvements with both methods, highlighting their potential for enhancing the performance of language models and improving NLP applications in the biomedical domain.
Temporal Tides of Emotional Resonance: A Novel Approach to Identify Mental Health on Social Media
Usman Naseem | Surendrabikram Thapa | Qi Zhang | Junaid Rashid | Liang Hu | Mehwish Nasim
Proceedings of the 11th International Workshop on Natural Language Processing for Social Media