Zida Yan

Also published as: 闫自达


2024

基于方面引导的图文渐进融合的多模态方面级情感分析方法(Aspect-Guided Progressive Fusion of Text and Image for Multimodal Aspect-Based Sentiment Analysis)
Zida Yan (闫自达) | Junjun Guo (郭军军) | Zhengtao Yu (余正涛)
Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 1: Main Conference)

Multimodal aspect-based sentiment analysis aims to identify the sentiment polarity of a specific aspect by combining image information with text information. However, as two different modalities, image and text differ significantly in data representation and semantic expression; narrowing the modality gap and fusing cross-modal features are therefore two key problems in this task. To address them, this paper proposes an aspect-guided progressive image-text fusion method for multimodal aspect-based sentiment analysis. The method takes the aspect information shared by the image and the text as a pivot: it first applies aspect-guided image-text contrastive learning and contrast-based cross-modal semantic interaction to reduce modality differences and promote semantic interaction, then integrates visual and textual information in a multimodal feature space through aspect-guided, contrast-based multimodal semantic fusion to promote cross-modal feature fusion, thereby improving multimodal sentiment analysis performance. Experimental results on three multimodal aspect-based sentiment analysis benchmark datasets demonstrate the effectiveness of the proposed method, which outperforms most state-of-the-art multimodal aspect-based sentiment analysis methods.
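
The abstract describes an aspect-as-pivot contrastive stage but gives no implementation details. As a rough, hypothetical sketch of what aspect-guided image-text contrastive learning could look like, assuming a standard symmetric InfoNCE objective with both modalities conditioned on the aspect representation (all function and variable names are illustrative, not the authors' code):

import torch
import torch.nn.functional as F

def aspect_guided_contrastive_loss(img_feats, txt_feats, aspect_feats, temperature=0.07):
    # Symmetric InfoNCE over image-text pairs, with both modalities first
    # conditioned on the shared aspect representation (the "pivot").
    # All inputs: (batch, dim). Hypothetical sketch, not the authors' code;
    # additive conditioning is an assumption, as the abstract does not
    # specify the exact aspect-guidance mechanism.
    img = F.normalize(img_feats + aspect_feats, dim=-1)
    txt = F.normalize(txt_feats + aspect_feats, dim=-1)

    # Pairwise cosine similarities; matched image-text pairs lie on the diagonal.
    logits = img @ txt.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)

    # Align image-to-text and text-to-image directions symmetrically.
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2

# Example with random features (batch of 8, 256-dim embeddings):
loss = aspect_guided_contrastive_loss(torch.randn(8, 256), torch.randn(8, 256), torch.randn(8, 256))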