Hao Xu


2024

POP-CEE: Position-oriented Prompt-tuning Model for Causal Emotion Entailment
Zhihan Zhou | Xue Gu | Yujie Zhao | Hao Xu
Findings of the Association for Computational Linguistics: ACL 2024

The objective of the Causal Emotion Entailment (CEE) task is to identify the causes of the target emotional utterances in a given conversation. Most existing studies have focused on a fine-tuning paradigm based on a pretrained model, e.g., the BERT model. However, there are gaps between the pretraining task and the CEE task. Although a pretrained model enhances contextual comprehension to some extent, it cannot acquire specific knowledge that is relevant to the CEE task. In addition, in a typical CEE task, the positions of emotion utterances and cause utterances in a conversation follow distinctive distributions that vary with emotion type. Existing methods employ a fixed-size window to capture the relationship between neighboring utterances; however, they ignore the specific semantic associations between emotion and cause utterances. To address these issues, we propose the Position-oriented Prompt-tuning (POP-CEE) model, which solves the CEE task in an end-to-end manner. Specifically, we model the CEE task by designing prompts with multiple unified goals and by exploring the positional relationship between emotion and cause utterances using a position constraint module. Experimental results demonstrate that the proposed POP-CEE model achieves state-of-the-art performance on a benchmark dataset. Our code and data can be found at: https://github.com/Zh0uzh/POP-CEE.
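
As a rough illustration of the prompt-tuning idea described in this abstract (not the paper's actual template or position constraint module), the sketch below builds a cloze-style prompt for one candidate utterance and scores a yes/no-style verbalizer with an off-the-shelf masked language model; the template wording and the verbalizer words are assumptions.

```python
# Minimal sketch of cloze-style prompt scoring for CEE with a masked LM.
# The template and the verbalizer words are illustrative assumptions,
# not the template proposed in the POP-CEE paper.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

def is_cause(target_utt, target_emotion, candidate_utt):
    prompt = (
        f'The target utterance "{target_utt}" expresses {target_emotion}. '
        f'Utterance "{candidate_utt}" is [MASK] the cause of this emotion.'
    )
    # Restrict the mask prediction to the two verbalizer words.
    preds = fill_mask(prompt, targets=["indeed", "not"])
    scores = {p["token_str"]: p["score"] for p in preds}
    return scores.get("indeed", 0.0) > scores.get("not", 0.0)

print(is_cause("I finally passed the exam!", "joy",
               "I studied hard for weeks."))
```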

Ancient Chinese Glyph Identification Powered by Radical Semantics
Yang Chi | Fausto Giunchiglia | Chuntao Li | Hao Xu
Findings of the Association for Computational Linguistics: ACL 2024

The ancestors of modern Chinese characters, the ancient characters from about 1300 BC to 200 BC, are not fixed in their written glyphs. At the same or different points in time, one character can possess multiple glyphs that differ in shape or radicals. Nearly half of the ancient glyphs have not been deciphered yet. This paper proposes a novel task of ancient Chinese glyph identification, which aims to infer the Chinese character label for unknown ancient Chinese glyphs that are not in the training set, based on image and radical information. Specifically, we construct a Chinese glyph knowledge graph (CGKG) that associates glyphs from different historical periods according to radical semantics, and propose a multimodal Chinese glyph identification framework (MCGI) that fuses visual, textual, and graph data. Experiments on a real Chinese glyph dataset spanning over 1,000 years demonstrate the effectiveness of our method and report the potential of each modality for this task. This work provides a preliminary reference for automatic ancient Chinese character decipherment at the glyph level.
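
The following minimal sketch illustrates the general idea of fusing visual, textual, and graph embeddings of a glyph and matching the result against candidate character embeddings by cosine similarity; the dimensions, fusion scheme, and retrieval step are assumptions for illustration, not the MCGI architecture.

```python
# Minimal sketch of multimodal glyph-to-character matching: fuse precomputed
# visual, textual, and graph embeddings of a glyph, then retrieve the closest
# candidate character embedding by cosine similarity. Dimensions and the
# fusion/retrieval scheme are illustrative assumptions, not the MCGI model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlyphEncoder(nn.Module):
    def __init__(self, d_vis=512, d_txt=768, d_graph=128, d_out=256):
        super().__init__()
        self.proj = nn.Linear(d_vis + d_txt + d_graph, d_out)

    def forward(self, vis, txt, graph):
        fused = torch.cat([vis, txt, graph], dim=-1)
        return F.normalize(self.proj(fused), dim=-1)

encoder = GlyphEncoder()
glyph = encoder(torch.randn(1, 512), torch.randn(1, 768), torch.randn(1, 128))
candidates = F.normalize(torch.randn(5000, 256), dim=-1)  # candidate character embeddings
best = (glyph @ candidates.T).argmax(dim=-1)
print(best)  # index of the most similar candidate character
```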

An Effective Span-based Multimodal Named Entity Recognition with Consistent Cross-Modal Alignment
Yongxiu Xu | Hao Xu | Heyan Huang | Shiyao Cui | Minghao Tang | Longzheng Wang | Hongbo Xu
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

With the increasing availability of multimodal content on social media, consisting primarily of text and images, multimodal named entity recognition (MNER) has gained widespread attention. A fundamental challenge of MNER lies in effectively aligning the different modalities. However, the majority of current approaches rely on a word-based sequence labeling framework and align the image and text at inconsistent semantic levels (whole image-words or regions-words). This misalignment may lead to inferior entity recognition performance. To address this issue, we propose an effective span-based method, named SMNER, which achieves a more consistent multimodal alignment from the perspectives of information theory and cross-modal interaction, respectively. Specifically, we first introduce a cross-modal information bottleneck module for global-level multimodal alignment (whole image-whole text). This module encourages the semantic distribution of the image to be closer to the semantic distribution of the text, enabling visual noise to be filtered out. Next, we introduce a cross-modal attention module for local-level multimodal alignment (regions-spans), which captures the correlations between regions in the image and spans in the text, enabling a more precise alignment of the two modalities. Extensive experiments conducted on two benchmark datasets demonstrate that SMNER outperforms the state-of-the-art baselines.
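
A minimal sketch of the local-level (regions-spans) cross-modal attention described above, using a single attention layer in which text span representations attend over image region features; the dimensions and the single-layer setup are assumptions, not the SMNER module itself.

```python
# Minimal sketch of local-level cross-modal attention: text span
# representations query image region features. Dimensions and the single
# attention layer are illustrative assumptions, not the SMNER model.
import torch
import torch.nn as nn

d_model = 256
cross_attn = nn.MultiheadAttention(embed_dim=d_model, num_heads=4, batch_first=True)

spans = torch.randn(2, 12, d_model)    # (batch, candidate spans, dim)
regions = torch.randn(2, 49, d_model)  # (batch, image regions, dim), e.g. a 7x7 grid

# Each span queries the visual regions; the output is a visually grounded
# span representation that could feed a downstream span classifier.
grounded_spans, attn_weights = cross_attn(query=spans, key=regions, value=regions)
print(grounded_spans.shape, attn_weights.shape)  # (2, 12, 256) (2, 12, 49)
```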

EmoPrompt-ECPE: Emotion Knowledge-aware Prompt-tuning for Emotion-Cause Pair Extraction
Xue Gu | Zhihan Zhou | Ziyao Meng | Jian Li | Tiago Gomes | Adriano Tavares | Hao Xu
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

Emotion-cause pair extraction (ECPE) focuses on extracting all potential emotion clauses and their corresponding cause clauses from unannotated documents. Existing methods achieve promising results with the help of fine-tuning and prompt paradigms, but they have three downsides. First, most approaches cannot distinguish between emotion-cause pairs that belong to different types of emotions, limiting their applicability. Second, existing prompt methods use a one-to-one mapping from label words to categories, which introduces considerable bias into the results. Third, existing methods perform the cause extraction task supported by explicit semantic understanding or basic prompt templates, ignoring the implicit information contained in the cause clauses themselves. To solve these issues, we propose an Emotion knowledge-aware Prompt-tuning for Emotion-Cause Pair Extraction (EmoPrompt-ECPE) method, which integrates knowledge of emotion categories into the ECPE task and mines the implicit knowledge of cause clauses. Specifically, we inject the latent knowledge of the cause clauses and the emotion types into the prompt template. In addition, we extend the emotion labels to a many-to-one mapping from label words to categories using an external emotion word base. Furthermore, we apply cosine similarity filtering to the label word base to reduce the noise introduced by this external knowledge. Experiments on both Chinese and English benchmark datasets show that our approach achieves state-of-the-art results. Our code and data can be found at: https://github.com/xy-xiaotudou/EmoPrompt-ECPE.
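
As a rough illustration of the cosine similarity filtering step mentioned above, the sketch below keeps only those expanded label words whose embeddings are sufficiently close to the anchor emotion label; the embedding source, candidate words, and threshold are assumptions, not the paper's label word base.

```python
# Minimal sketch of filtering an expanded label-word set by cosine similarity
# to the anchor emotion label. The embedding source, candidate words, and
# threshold are illustrative assumptions, not the paper's label word base.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

def filter_label_words(anchor, candidates, emb, threshold=0.6):
    # emb(word) is assumed to return a word embedding, e.g. from a pretrained model.
    anchor_vec = emb(anchor)
    return [w for w in candidates if cosine(anchor_vec, emb(w)) >= threshold]

# Example with toy 2-d embeddings (hypothetical values, for illustration only).
toy = {"happiness": np.array([1.0, 0.1]), "joy": np.array([0.9, 0.2]),
       "delight": np.array([0.8, 0.3]), "table": np.array([0.0, 1.0])}
print(filter_label_words("happiness", ["joy", "delight", "table"], toy.get))
# -> ['joy', 'delight']
```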

Hierarchical Topic Modeling via Contrastive Learning and Hyperbolic Embedding
Zhicheng Lin | HeGang Chen | Yuyin Lu | Yanghui Rao | Hao Xu | Hanjiang Lai
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

Hierarchical topic modeling, which can mine implicit semantics in a corpus and automatically construct hierarchical relationships between topics, has received considerable attention recently. However, current hierarchical topic models are mainly based on Euclidean space, which cannot fully retain the implicit hierarchical semantic information in the corpus, leading to unreasonable structures in the generated topic hierarchy. On the other hand, existing Generative Adversarial Network (GAN) based neural topic models perform satisfactorily, but they remain constrained by mode collapse due to the discontinuity of the latent space. To solve these problems, building on the hypothesis that hyperbolic space better preserves hierarchy, we propose a novel GAN-based hierarchical topic model that mines high-quality topics by introducing contrastive learning to capture information from documents. Furthermore, the distinct tree-like property of hyperbolic space preserves the implicit hierarchical semantics of documents in the topic embeddings, which are projected into the hyperbolic space. Finally, we use a multi-head self-attention mechanism to learn the implicit hierarchical semantics of topics and mine topic structure information. Experiments on real-world corpora demonstrate the remarkable performance of our model in terms of topic coherence and topic diversity, as well as the rationality of the topic hierarchy.
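
For reference, the sketch below shows the standard Poincaré-ball distance commonly used when embedding hierarchies into hyperbolic space; it illustrates the geometry the abstract relies on, not the paper's full model.

```python
# Standard Poincare-ball distance, a common choice for hyperbolic embeddings:
# d(u, v) = arccosh(1 + 2 * ||u - v||^2 / ((1 - ||u||^2) * (1 - ||v||^2)))
# This is the textbook formula, not the paper's specific model.
import torch

def poincare_distance(u, v, eps=1e-7):
    sq_u = u.pow(2).sum(-1).clamp(max=1 - eps)
    sq_v = v.pow(2).sum(-1).clamp(max=1 - eps)
    sq_diff = (u - v).pow(2).sum(-1)
    x = 1 + 2 * sq_diff / ((1 - sq_u) * (1 - sq_v))
    return torch.acosh(x.clamp(min=1 + eps))

u = torch.tensor([0.1, 0.2])
v = torch.tensor([0.4, -0.3])
print(poincare_distance(u, v))  # distances grow quickly near the ball boundary
```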

2022

A Simple Contrastive Learning Framework for Interactive Argument Pair Identification via Argument-Context Extraction
Lida Shi | Fausto Giunchiglia | Rui Song | Daqian Shi | Tongtong Liu | Xiaolei Diao | Hao Xu
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing

Interactive argument pair identification is an emerging research task in argument mining, aiming to identify whether two arguments are interactively related. Prior work has pointed out that the context of an argument is essential for improving identification performance. However, current context-based methods achieve limited improvements, since the entire context typically contains much irrelevant information. In this paper, we propose a simple contrastive learning framework that solves this problem by extracting valuable information from the context. The framework constructs hard argument-context samples and obtains robust and uniform representations by introducing contrastive learning. We also propose an argument-context extraction module that enhances information extraction by discarding irrelevant blocks. Experimental results show that our method achieves state-of-the-art performance on the benchmark dataset. Further analysis demonstrates the effectiveness of the proposed modules and visually shows more compact semantic representations.
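
A minimal sketch of the in-batch contrastive (InfoNCE-style) objective that such frameworks typically build on, pairing each argument representation with its extracted context; the encoder inputs, temperature, and pairing scheme are assumptions, not the paper's exact training objective.

```python
# Minimal sketch of an in-batch contrastive (InfoNCE) objective over paired
# argument and context representations; the temperature and pairing scheme
# are illustrative assumptions, not the paper's exact loss.
import torch
import torch.nn.functional as F

def info_nce(arg_emb, ctx_emb, temperature=0.07):
    # arg_emb, ctx_emb: (batch, dim); row i of each tensor forms a positive pair,
    # and all other rows in the batch act as negatives.
    arg_emb = F.normalize(arg_emb, dim=-1)
    ctx_emb = F.normalize(ctx_emb, dim=-1)
    logits = arg_emb @ ctx_emb.T / temperature   # (batch, batch) similarity matrix
    labels = torch.arange(arg_emb.size(0))       # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

loss = info_nce(torch.randn(8, 256), torch.randn(8, 256))
print(loss)
```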

ZiNet: Linking Chinese Characters Spanning Three Thousand Years
Yang Chi | Fausto Giunchiglia | Daqian Shi | Xiaolei Diao | Chuntao Li | Hao Xu
Findings of the Association for Computational Linguistics: ACL 2022

Modern Chinese characters evolved from ancient forms dating back roughly 3,000 years. To date, tens of thousands of glyphs of ancient characters have been discovered, and they must be deciphered by experts to interpret unearthed documents. Experts usually need to compare each ancient character under examination with similar known ones across all historical periods. However, this process is inevitably limited by human memory and experience: it is time-consuming, and the associations found are restricted to a small scope. To help researchers discover characters with similar glyphs, this paper introduces ZiNet, the first diachronic knowledge base describing the relationships and evolution of Chinese characters and words. In addition, powered by the knowledge of radical systems in ZiNet, this paper introduces a glyph similarity measurement between ancient Chinese characters, which can capture similar glyph pairs that are potentially related in origin or semantics. Results show strong positive correlations between scores from our method and those from human experts. Finally, qualitative analysis and potential future applications are presented.
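
As a simplified stand-in for the radical-powered glyph similarity described above, the sketch below scores two glyphs by the Jaccard overlap of their radical sets; the decompositions and the overlap measure are assumptions, not ZiNet's actual formula.

```python
# Minimal sketch of a radical-based glyph similarity: Jaccard overlap between
# the radical sets of two glyphs. This is a simplified stand-in, not ZiNet's
# actual measurement.
def glyph_similarity(radicals_a, radicals_b):
    a, b = set(radicals_a), set(radicals_b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical radical decompositions, for illustration only.
print(glyph_similarity(["water", "horse"], ["water", "bird"]))  # 0.333...
print(glyph_similarity(["water", "horse"], ["fire", "bird"]))   # 0.0
```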

2020

A Large Scale Speech Sentiment Corpus
Eric Chen | Zhiyun Lu | Hao Xu | Liangliang Cao | Yu Zhang | James Fan
Proceedings of the Twelfth Language Resources and Evaluation Conference

We present a multimodal corpus for sentiment analysis based on the existing Switchboard-1 Telephone Speech Corpus released by the Linguistic Data Consortium. This corpus extends the Switchboard-1 Telephone Speech Corpus by adding sentiment labels from 3 different human annotators for every transcript segment. Each sentiment label can be one of three options: positive, negative, or neutral. Annotators were recruited using Google Cloud’s data labeling service, and the labeling task was conducted over the internet. The corpus contains a total of 49,500 labeled speech segments covering 140 hours of audio. To the best of our knowledge, this is the largest multimodal corpus for sentiment analysis that includes both speech and text features.
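
One common way to consume such triple-annotated labels is a per-segment majority vote, sketched below; this aggregation scheme is an assumption for illustration, not part of the corpus release.

```python
# Minimal sketch of aggregating the three sentiment labels per segment by
# majority vote; this aggregation scheme is an illustrative assumption, not
# necessarily how the corpus authors recommend consuming the labels.
from collections import Counter

def majority_label(labels):
    # labels: the three annotator labels for one segment,
    # each one of "positive", "negative", "neutral".
    (label, count), = Counter(labels).most_common(1)
    return label if count >= 2 else "no_agreement"

print(majority_label(["positive", "positive", "neutral"]))  # positive
print(majority_label(["positive", "negative", "neutral"]))  # no_agreement
```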