Zhi Zhong


2025

Cross-Modal Learning for Music-to-Music-Video Description Generation
Zhuoyuan Mao | Mengjie Zhao | Qiyu Wu | Zhi Zhong | Wei-Hsiang Liao | Hiromi Wakaki | Yuki Mitsufuji
Proceedings of the 10th Workshop on Representation Learning for NLP (RepL4NLP-2025)

Music-to-music-video generation is a challenging task due to the intrinsic differences between the music and video modalities. The advent of powerful text-to-video diffusion models has opened a promising pathway for music-video (MV) generation by first addressing the music-to-MV description task and subsequently leveraging these models for video generation. In this study, we focus on the MV description generation task and propose a comprehensive pipeline encompassing training data construction and multimodal model fine-tuning. We fine-tune existing pre-trained multimodal models on our newly constructed music-to-MV description dataset, built on the Music4All dataset, which integrates both musical and visual information. Our experimental results demonstrate that music representations can be effectively mapped to the textual domain, enabling the generation of meaningful MV descriptions directly from music inputs. We also identify key components in the dataset construction pipeline that critically impact the quality of MV descriptions and highlight specific musical attributes that warrant greater focus for improved MV description generation.
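
The abstract describes fine-tuning pre-trained multimodal models so that music representations map into the textual domain. As a purely illustrative sketch (the paper's actual architecture, dimensions, and class names are not given here, so everything below is an assumption), one common recipe projects pooled music-encoder features into a short prefix of pseudo-token embeddings for a pre-trained text decoder, which is then fine-tuned to generate MV descriptions:

```python
import torch
import torch.nn as nn

class MusicToTextAdapter(nn.Module):
    """Hypothetical sketch: project pooled music-encoder features into the
    input embedding space of a pre-trained text decoder, so descriptions can
    be generated directly from music inputs."""

    def __init__(self, music_dim: int = 768, text_dim: int = 4096, n_prefix: int = 16):
        super().__init__()
        self.n_prefix = n_prefix
        # Map pooled music features to a short "prefix" of pseudo-token embeddings.
        self.proj = nn.Sequential(
            nn.Linear(music_dim, text_dim),
            nn.GELU(),
            nn.Linear(text_dim, text_dim * n_prefix),
        )

    def forward(self, music_feats: torch.Tensor) -> torch.Tensor:
        # music_feats: (batch, music_dim) pooled audio embeddings
        batch = music_feats.size(0)
        prefix = self.proj(music_feats)               # (batch, text_dim * n_prefix)
        return prefix.view(batch, self.n_prefix, -1)  # (batch, n_prefix, text_dim)

# Usage: prepend the projected prefix to the decoder's token embeddings, then
# fine-tune with the usual next-token cross-entropy loss on description text.
adapter = MusicToTextAdapter()
dummy_music = torch.randn(2, 768)
print(adapter(dummy_music).shape)  # torch.Size([2, 16, 4096])
```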

2024

On the Language Encoder of Contrastive Cross-modal Models
Mengjie Zhao | Junya Ono | Zhi Zhong | Chieh-Hsin Lai | Yuhta Takida | Naoki Murata | Wei-Hsiang Liao | Takashi Shibuya | Hiromi Wakaki | Yuki Mitsufuji
Findings of the Association for Computational Linguistics: ACL 2024

Contrastive cross-modal models such as CLIP and CLAP aid various vision-language (VL) and audio-language (AL) tasks. However, there has been limited investigation of, and improvement in, their language encoder, the central component that encodes natural language descriptions of images/audio into vector representations. We extensively evaluate how unsupervised and supervised sentence embedding training affects language encoder quality and cross-modal task performance. In VL pretraining, we find that sentence embedding training enhances language encoder quality and aids in cross-modal tasks, improving contrastive VL models such as CyCLIP. Sentence embedding training also benefits AL tasks when the amount of training data is large. We analyze the representation spaces to understand the strengths of sentence embedding training, and find that it improves text-space uniformity at the cost of decreased cross-modal alignment.
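
To make the two representation-space diagnostics mentioned in the abstract concrete, the sketch below computes alignment and uniformity in the commonly used formulation of Wang and Isola (2020): alignment is the mean squared distance between paired, normalized embeddings, and uniformity is the log mean Gaussian potential over all pairs within one space. The paper's exact metric definitions, dimensions, and variable names are not given here, so this is an assumption-laden illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def alignment(text_emb: torch.Tensor, modal_emb: torch.Tensor) -> torch.Tensor:
    """Cross-modal alignment: mean squared distance between paired,
    L2-normalized text and image/audio embeddings (lower is better)."""
    text_emb = F.normalize(text_emb, dim=-1)
    modal_emb = F.normalize(modal_emb, dim=-1)
    return (text_emb - modal_emb).norm(dim=-1).pow(2).mean()

def uniformity(emb: torch.Tensor, t: float = 2.0) -> torch.Tensor:
    """Uniformity of one embedding space: log of the mean Gaussian potential
    over all pairs of L2-normalized embeddings (lower = more uniform)."""
    emb = F.normalize(emb, dim=-1)
    return torch.pdist(emb, p=2).pow(2).mul(-t).exp().mean().log()

# Toy usage: 128 paired text/image embeddings of dimension 512.
text = torch.randn(128, 512)
image = torch.randn(128, 512)
print(alignment(text, image).item())  # cross-modal alignment
print(uniformity(text).item())        # text-space uniformity
```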

2013

Towards Robust Linguistic Analysis using OntoNotes
Sameer Pradhan | Alessandro Moschitti | Nianwen Xue | Hwee Tou Ng | Anders Björkelund | Olga Uryupina | Yuchen Zhang | Zhi Zhong
Proceedings of the Seventeenth Conference on Computational Natural Language Learning

2012

Word Sense Disambiguation Improves Information Retrieval
Zhi Zhong | Hwee Tou Ng
Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

2010

It Makes Sense: A Wide-Coverage Word Sense Disambiguation System for Free Text
Zhi Zhong | Hwee Tou Ng
Proceedings of the ACL 2010 System Demonstrations

2008

Word Sense Disambiguation Using OntoNotes: An Empirical Study
Zhi Zhong | Hwee Tou Ng | Yee Seng Chan
Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing

2007

NUS-PT: Exploiting Parallel Texts for Word Sense Disambiguation in the English All-Words Tasks
Yee Seng Chan | Hwee Tou Ng | Zhi Zhong
Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007)