Zhehuan Zhao

Also published as: 哲焕


2024

PGA-SciRE:基于大语言模型的数据增强框架进行科学领域的关系抽取(PGA-SciRE: Harnessing LLM on Data Augmentation for Enhancing Scientific Relation Extraction)
Yang Zhou (周洋) | Shimin Dan (单世民) | Hongkui Wei (魏宏夔) | Zhehuan Zhao (赵哲焕) | Wenshuo Feng (冯文铄)
Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 1: Main Conference)

Relation extraction aims to identify the relations between pairs of entities mentioned in text. Advances in large language models (LLMs) have had a substantial impact on natural language processing tasks. In this work, targeting relation extraction in the scientific domain, we propose a data augmentation framework named PGA to improve model performance on scientific relation extraction. The framework introduces two augmentation strategies: using an LLM to paraphrase the original training samples, producing pseudo-samples with the same meaning but different wording and form; and instructing the LLM to generate sentences that implicitly convey the relation and entity labels of the original training samples. These two kinds of pseudo-samples then join the original dataset in training the relation extraction model. In our experiments, the PGA framework improved the F1 scores of three mainstream relation extraction models in the scientific domain. Obtaining samples from an LLM also effectively reduces the cost of manually annotating data.
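The two augmentation strategies the abstract describes can be sketched as follows. This is a minimal illustration of the data flow, not the authors' implementation: the prompt wording, the sample fields (`text`, `head`, `tail`, `relation`), and the `fake_llm` stand-in are all assumptions.

```python
def paraphrase_prompt(sentence, head, tail):
    """Prompt an LLM to paraphrase a training sentence while keeping
    the two entity mentions intact (paraphrase pseudo-samples)."""
    return (
        f"Rewrite the sentence below with different wording but the same meaning. "
        f"Keep the entity mentions '{head}' and '{tail}' unchanged.\n"
        f"Sentence: {sentence}"
    )

def generation_prompt(relation, head, tail):
    """Prompt an LLM to write a new sentence that implies the given
    relation label between the two entities (generation pseudo-samples)."""
    return (
        f"Write one scientific sentence expressing the relation "
        f"'{relation}' between '{head}' and '{tail}'."
    )

def augment(train_set, llm):
    """Return the original samples plus both kinds of pseudo-samples."""
    augmented = list(train_set)
    for s in train_set:
        # Paraphrase pseudo-sample: same labels, new surface form.
        augmented.append({**s, "text": llm(paraphrase_prompt(s["text"], s["head"], s["tail"]))})
        # Generation pseudo-sample: a new sentence conditioned on the labels.
        augmented.append({**s, "text": llm(generation_prompt(s["relation"], s["head"], s["tail"]))})
    return augmented

# Toy stand-in for a real LLM call, just to show the data flow.
def fake_llm(prompt):
    return "LLM output for: " + prompt.splitlines()[-1]

sample = {"text": "BERT improves NER.", "head": "BERT", "tail": "NER", "relation": "USED-FOR"}
print(len(augment([sample], fake_llm)))  # → 3: one original plus two pseudo-samples
```

The augmented list (originals plus pseudo-samples) would then be fed to any standard relation extraction model's training loop.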

Beyond Linguistic Cues: Fine-grained Conversational Emotion Recognition via Belief-Desire Modelling
Bo Xu | Longjiao Li | Wei Luo | Mehdi Naseriparsa | Zhehuan Zhao | Hongfei Lin | Feng Xia
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

Emotion recognition in conversation (ERC) is essential for dialogue systems to identify the emotions expressed by speakers. Although previous studies have made significant progress, accurately recognizing and interpreting similar fine-grained emotions while properly accounting for individual variability remains a challenge. One particularly under-explored area is the role of individual beliefs and desires in modelling emotion. Inspired by the Belief-Desire Theory of Emotion, we propose a novel method for conversational emotion recognition that incorporates both belief and desire to accurately identify emotions. We extract emotion-eliciting events from utterances and construct graphs that represent beliefs and desires in conversations. By applying message passing between nodes, our graph effectively models the utterance context, the speaker’s global state, and the interaction between emotional beliefs, desires, and utterances. We evaluate our model’s performance through extensive experiments on four popular ERC datasets, comparing it with multiple state-of-the-art models. The experimental results demonstrate the superiority of our proposed model and validate the effectiveness of each of its modules.
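The core mechanism, message passing between belief, desire, and utterance nodes, can be illustrated with one mean-aggregation round in plain Python. The node names, feature dimensions, and averaging rule here are assumptions for illustration; the paper's actual graph network will use learned transformations.

```python
def message_pass(node_feats, edges):
    """One round of message passing: each node's new feature is the average
    of its own feature and the mean of its in-neighbours' features."""
    incoming = {n: [] for n in node_feats}
    for src, dst in edges:
        incoming[dst].append(node_feats[src])
    updated = {}
    for n, feat in node_feats.items():
        if incoming[n]:
            # Mean of incoming messages, dimension by dimension.
            msg = [sum(vals) / len(vals) for vals in zip(*incoming[n])]
            updated[n] = [(f + m) / 2 for f, m in zip(feat, msg)]
        else:
            updated[n] = list(feat)
    return updated

# A tiny graph: belief and desire nodes send messages to an utterance node.
feats = {"utterance": [0.0, 0.0], "belief": [1.0, 0.0], "desire": [0.0, 1.0]}
edges = [("belief", "utterance"), ("desire", "utterance")]
print(message_pass(feats, edges)["utterance"])  # → [0.25, 0.25]
```

After enough rounds, the utterance representation mixes in belief and desire information, which is the intuition behind classifying emotions from the updated utterance nodes.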

ESCP: Enhancing Emotion Recognition in Conversation with Speech and Contextual Prefixes
Xiujuan Xu | Xiaoxiao Shi | Zhehuan Zhao | Yu Liu
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

Emotion Recognition in Conversation (ERC) aims to analyze the speaker’s emotional state in a conversation. Fully mining the information in multimodal and historical utterances plays a crucial role in model performance. However, recent work in ERC focuses on modeling historical utterances and generally concatenates multimodal features directly, which neglects deep multimodal information while introducing redundancy. To address the shortcomings of existing models, we propose a novel model, termed Enhancing Emotion Recognition in Conversation with Speech and Contextual Prefixes (ESCP). ESCP employs a directed acyclic graph (DAG) to model historical utterances in a conversation and incorporates a contextual prefix containing the sentiment and semantics of historical utterances. By adding speech and contextual prefixes, inter- and intra-modal emotion information is efficiently modeled using the prior knowledge of a large-scale pre-trained model. Experiments conducted on several public benchmarks demonstrate that the proposed approach achieves state-of-the-art (SOTA) performance. These results affirm the effectiveness of the novel ESCP model and underscore the significance of incorporating speech and contextual prefixes to guide the pre-trained model.
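The DAG over historical utterances can be sketched with one common construction for conversation DAGs: each utterance receives edges from every earlier utterance back to (and including) the current speaker's previous turn. This exact rule is an assumption for illustration, since the abstract does not specify the edge construction.

```python
def build_dag(speakers):
    """Build DAG edges (earlier -> later) over utterances: utterance i is
    connected to each preceding utterance back to, and including, the
    same speaker's previous turn (a common construction; assumed here)."""
    edges = []
    for i, spk in enumerate(speakers):
        j = i - 1
        while j >= 0:
            edges.append((j, i))
            if speakers[j] == spk:  # stop once we reach this speaker's last turn
                break
            j -= 1
    return edges

# A two-party conversation A, B, A, B:
print(build_dag(["A", "B", "A", "B"]))
# → [(0, 1), (1, 2), (0, 2), (2, 3), (1, 3)]
```

Each utterance node would then aggregate information along these edges, with the speech and contextual prefixes prepended to the pre-trained model's input to inject acoustic and historical cues.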