Leveraging Customer Feedback for Multi-modal Insight Extraction

Sandeep Mukku, Abinesh Kanagarajan, Pushpendu Ghosh, Chetan Aggarwal


Abstract
Businesses can benefit from customer feedback in different modalities, such as text and images, to enhance their products and services. However, it is difficult to extract actionable and relevant pairs of text segments and images from customer feedback in a single pass. In this paper, we propose a novel multi-modal method that fuses image and text information in a latent space and decodes it to extract the relevant feedback segments using an image-text grounded text decoder. We also introduce a weakly-supervised data generation technique that produces training data for this task. We evaluate our model on unseen data and demonstrate that it can effectively mine actionable insights from multi-modal customer feedback, outperforming the existing baselines by 14 points in F1 score.
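The abstract describes the method only at a high level: modality-specific embeddings are fused in a shared latent space, and a decoder grounded on that fused representation extracts relevant feedback segments. The toy sketch below illustrates just the fusion-and-retrieval idea; the encoders, dimensions, and the similarity-based "decoder" here are hypothetical stand-ins for illustration, not the authors' actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
D_TEXT, D_IMG, D_LATENT = 8, 6, 4

# Hypothetical encoders, stubbed as fixed random linear projections.
W_text = rng.normal(size=(D_TEXT, D_LATENT))
W_img = rng.normal(size=(D_IMG, D_LATENT))
W_fuse = rng.normal(size=(2 * D_LATENT, D_LATENT))

def encode_text(feat: np.ndarray) -> np.ndarray:
    return feat @ W_text

def encode_image(feat: np.ndarray) -> np.ndarray:
    return feat @ W_img

def fuse(text_vec: np.ndarray, img_vec: np.ndarray) -> np.ndarray:
    """Concatenate the two modality embeddings and project them
    into a single shared latent vector."""
    return np.concatenate([text_vec, img_vec]) @ W_fuse

def rank_segments(fused: np.ndarray, segment_vecs: np.ndarray) -> np.ndarray:
    """Stand-in for the grounded decoder: score candidate feedback
    segments by cosine similarity to the fused latent and return
    their indices, most relevant first."""
    sims = segment_vecs @ fused / (
        np.linalg.norm(segment_vecs, axis=1) * np.linalg.norm(fused)
    )
    return np.argsort(sims)[::-1]

# Dummy review: one text feature vector, one image feature vector,
# and five candidate segment embeddings (all random for the sketch).
text_feat = rng.normal(size=D_TEXT)
img_feat = rng.normal(size=D_IMG)
segments = rng.normal(size=(5, D_LATENT))

fused = fuse(encode_text(text_feat), encode_image(img_feat))
ranking = rank_segments(fused, segments)
```

In the paper itself the decoding step is generative (an image-text grounded text decoder) rather than retrieval by similarity; the sketch only shows why fusing both modalities before decoding lets a single pass attend to evidence from either source.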
Anthology ID:
2024.naacl-industry.22
Volume:
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Yi Yang, Aida Davani, Avi Sil, Anoop Kumar
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
266–278
URL:
https://aclanthology.org/2024.naacl-industry.22
Cite (ACL):
Sandeep Mukku, Abinesh Kanagarajan, Pushpendu Ghosh, and Chetan Aggarwal. 2024. Leveraging Customer Feedback for Multi-modal Insight Extraction. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track), pages 266–278, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
Leveraging Customer Feedback for Multi-modal Insight Extraction (Mukku et al., NAACL 2024)
PDF:
https://preview.aclanthology.org/jeptaln-2024-ingestion/2024.naacl-industry.22.pdf