Shuyu Wei
2025
Investigating and Enhancing Vision-Audio Capability in Omnimodal Large Language Models
Rui Hu | Delai Qiu | Shuyu Wei | Jiaming Zhang | Yining Wang | Shengping Liu | Jitao Sang
Findings of the Association for Computational Linguistics: ACL 2025
Omnimodal Large Language Models (OLLMs) have shown significant progress in integrating vision and text, but still struggle to integrate vision and audio, often exhibiting suboptimal performance when processing audio queries compared to text queries. This disparity stems primarily from insufficient alignment between the vision and audio modalities during training, which leads to inadequate attention to visual information when audio queries are used. To mitigate this issue, we propose a Self-Knowledge Distillation (Self-KD) training method in which the vision-text component of the OLLM serves as the teacher and the vision-audio component as the student. This enables the model to process audio in a manner analogous to its text processing. Our experiments demonstrate that Self-KD effectively enhances the vision-audio capabilities of OLLMs by learning from the vision-text components, which in turn improves the interaction between audio and images and yields better performance on multimodal tasks.
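The Self-KD idea described in the abstract can be sketched as a soft-label distillation loss: the output distribution of the vision-text (teacher) pathway supervises the vision-audio (student) pathway on the same visual input. The snippet below is a minimal illustration, not the paper's implementation; the function names, the temperature value, and the standard T² loss scaling are assumptions.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert a list of logits into a probability distribution,
    softened by the given temperature."""
    z = [x / temperature for x in logits]
    m = max(z)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in z]
    total = sum(exps)
    return [e / total for e in exps]

def self_kd_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) over temperature-softened distributions.

    In the Self-KD setting, teacher_logits would come from the frozen
    vision-text pathway and student_logits from the vision-audio
    pathway being trained. The T^2 factor is the usual scaling that
    keeps gradient magnitudes comparable across temperatures.
    """
    p = softmax(teacher_logits, temperature)  # teacher distribution
    q = softmax(student_logits, temperature)  # student distribution
    kl = sum(pi * (math.log(pi + 1e-12) - math.log(qi + 1e-12))
             for pi, qi in zip(p, q))
    return kl * temperature ** 2

# Identical teacher and student logits give zero distillation loss;
# mismatched logits give a positive loss the student can minimize.
same = [2.0, 0.5, -1.0]
assert abs(self_kd_loss(same, same)) < 1e-9
assert self_kd_loss([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]) > 0.0
```

In practice the loss would be computed per output token and averaged over the sequence, with the teacher pathway's parameters held fixed.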
2024
CDEval: A Benchmark for Measuring the Cultural Dimensions of Large Language Models
Yuhang Wang | Yanxu Zhu | Chao Kong | Shuyu Wei | Xiaoyuan Yi | Xing Xie | Jitao Sang
Proceedings of the 2nd Workshop on Cross-Cultural Considerations in NLP
As the scaling of Large Language Models (LLMs) has dramatically enhanced their capabilities, there has been a growing focus on the alignment problem to ensure their responsible and ethical use. While existing alignment efforts predominantly concentrate on universal values such as the HHH (helpful, honest, and harmless) principle, the aspect of culture, which is inherently pluralistic and diverse, has not received adequate attention. This work introduces a new benchmark, CDEval, aimed at evaluating the cultural dimensions of LLMs. CDEval is constructed by combining GPT-4's automated generation with human verification, covering six cultural dimensions across seven domains. Our comprehensive experiments provide intriguing insights into the culture of mainstream LLMs, highlighting both consistencies and variations across different dimensions and domains. The findings underscore the importance of integrating cultural considerations in LLM development, particularly for applications in diverse cultural settings. This benchmark serves as a valuable resource for cultural studies of LLMs, paving the way for more culturally aware and sensitive models.