Xumeng Liu
2024
Look before You Leap: Dual Logical Verification for Knowledge-based Visual Question Generation
Xumeng Liu | Wenya Guo | Ying Zhang | Xubo Liu | Yu Zhao | Shenglong Yu | Xiaojie Yuan
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Knowledge-based Visual Question Generation (KB-VQG) aims to generate visual questions that require outside knowledge beyond the image. Existing approaches are answer-aware, incorporating answers into the question-generation process. However, these methods focus only on leveraging the semantics of the inputs to propose questions, ignoring the logical coherence among the generated questions (Q), images (V), answers (A), and the corresponding acquired outside knowledge (K). This results in many unexpected, low-quality questions that lack insight and diversity, some of which have no corresponding answer at all. To address this issue, we inject logical verification into the processes of knowledge acquisition and question generation, yielding a model we name LV²-Net. By checking the logical structure among V, A, K, and the ground-truth and generated Q twice across the whole KB-VQG procedure, LV²-Net can propose diverse and insightful knowledge-based visual questions. Experimental results on two commonly used datasets demonstrate the superiority of LV²-Net. Our code will be released to the public soon.
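The abstract describes a two-pass ("dual") verification loop: one logical check when acquiring outside knowledge, and a second when selecting among generated questions. Below is a minimal, hypothetical sketch of that control flow, not the authors' LV²-Net code; the `coherence` scorer, the candidate pools, and all function names are illustrative assumptions (a real system would use a trained verifier over V, A, K, and Q).

```python
# Hypothetical sketch of dual logical verification for KB-VQG (not the
# authors' LV^2-Net code). `coherence` is a crude stand-in scorer.

def coherence(candidate: str, context: str) -> float:
    """Toy logical-coherence score: token overlap between candidate and context."""
    c, ctx = set(candidate.lower().split()), set(context.lower().split())
    return len(c & ctx) / max(len(c), 1)

def acquire_knowledge(image_desc: str, answer: str, facts: list[str]) -> str:
    """Verification pass 1: keep the fact most logically consistent with (V, A)."""
    context = f"{image_desc} {answer}"
    return max(facts, key=lambda f: coherence(f, context))

def generate_question(image_desc: str, answer: str, knowledge: str,
                      candidates: list[str]) -> str:
    """Verification pass 2: pick the candidate question most coherent with (V, A, K)."""
    context = f"{image_desc} {answer} {knowledge}"
    return max(candidates, key=lambda q: coherence(q, context))

if __name__ == "__main__":
    image = "a red double-decker bus on a london street"
    answer = "london"
    facts = [
        "double-decker buses are iconic in london",
        "bananas are rich in potassium",
    ]
    candidates = [
        "in which city are red double-decker buses iconic?",
        "what fruit is rich in potassium?",
    ]
    k = acquire_knowledge(image, answer, facts)
    q = generate_question(image, answer, k, candidates)
    print(k)  # -> the london fact survives the first verification pass
    print(q)  # -> the bus question survives the second pass
```

The point of the two separate passes is that an incoherent knowledge fact is rejected before question generation ever sees it, and an incoherent question is rejected before it reaches the output, which is how the abstract motivates the gain in quality and answerability.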
2023
AoM: Detecting Aspect-oriented Information for Multimodal Aspect-Based Sentiment Analysis
Ru Zhou | Wenya Guo | Xumeng Liu | Shenglong Yu | Ying Zhang | Xiaojie Yuan
Findings of the Association for Computational Linguistics: ACL 2023
Multimodal aspect-based sentiment analysis (MABSA) aims to extract aspects from text-image pairs and recognize their sentiments. Existing methods make great efforts to align the whole image to the corresponding aspects. However, different regions of the image may relate to different aspects in the same sentence, and coarsely establishing image-aspect alignment introduces noise into aspect-based sentiment analysis (i.e., visual noise). Besides, the sentiment of a specific aspect can also be interfered with by descriptions of other aspects (i.e., textual noise). Considering both kinds of noise, this paper proposes an Aspect-oriented Method (AoM) to detect aspect-relevant semantic and sentiment information. Specifically, an aspect-aware attention module is designed to simultaneously select the textual tokens and image blocks that are semantically related to the aspects. To accurately aggregate sentiment information, we explicitly introduce sentiment embeddings into AoM and use a graph convolutional network to model vision-text and text-text interactions. Extensive experiments demonstrate the superiority of AoM over existing methods.
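As a concrete reading of the two components the abstract names, here is a minimal PyTorch sketch: an aspect-aware attention step that weights textual tokens (or image blocks) by an aspect query, and a single graph-convolution layer over a joint vision-text node graph. It is a hypothetical illustration under assumed dimensions and module names, not the authors' AoM implementation.

```python
# Hypothetical PyTorch sketch of the two components named in the abstract
# (not the authors' AoM code); dimensions and names are assumptions.

import torch
import torch.nn as nn

class AspectAwareAttention(nn.Module):
    """Weights tokens/image blocks by relevance to an aspect query,
    down-weighting aspect-irrelevant content (the visual/textual noise)."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, aspect, tokens):
        # aspect: (B, 1, D) query; tokens: (B, N, D) text tokens or image blocks
        fused, weights = self.attn(aspect, tokens, tokens)
        return fused, weights  # weights show which tokens/blocks were selected

class GCNLayer(nn.Module):
    """One graph-convolution step over a joint vision-text node graph."""
    def __init__(self, dim: int):
        super().__init__()
        self.lin = nn.Linear(dim, dim)

    def forward(self, x, adj):
        # x: (B, N, D) node features; adj: (B, N, N) normalized adjacency
        return torch.relu(self.lin(torch.bmm(adj, x)))

if __name__ == "__main__":
    B, N, D = 2, 6, 32
    aspect = torch.randn(B, 1, D)          # embedded aspect term
    nodes = torch.randn(B, N, D)           # mixed text-token / image-block nodes
    adj = torch.softmax(torch.randn(B, N, N), dim=-1)  # toy normalized adjacency
    fused, w = AspectAwareAttention(D)(aspect, nodes)
    out = GCNLayer(D)(nodes, adj)
    print(fused.shape, w.shape, out.shape)  # (2, 1, 32) (2, 1, 6) (2, 6, 32)
```

The attention weights make the aspect-relevance selection inspectable, while the graph convolution propagates sentiment cues along the vision-text and text-text edges the abstract mentions.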
Co-authors
- Wenya Guo 2
- Ying Zhang 2
- Shenglong Yu 2
- Xiaojie Yuan 2
- Xubo Liu 1