Jia-Yi Liao


2021

A Study on Using Transfer Learning to Improve BERT Model for Emotional Classification of Chinese Lyrics
Jia-Yi Liao | Ya-Hsuan Lin | Kuan-Cheng Lin | Jia-Wei Chang
Proceedings of the 33rd Conference on Computational Linguistics and Speech Processing (ROCLING 2021)

The explosive growth of music libraries has made music information retrieval and recommendation a critical issue. Recommendation systems based on music emotion recognition are gradually gaining attention. Most studies build music emotion classification models from audio data rather than lyrics. In addition, because of the richness of English-language resources, most existing studies focus on English lyrics and rarely on Chinese. For this reason, we propose an approach that uses the BERT pre-trained model and transfer learning to improve emotion classification of Chinese lyrics. Without any task-specific training on Chinese lyric emotion data, the results are as follows: (a) using BERT alone reaches only 50% classification accuracy; (b) using BERT with transfer learning from the CVAW, CVAP, and CVAT datasets achieves 71% classification accuracy.
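
The abstract describes the general recipe only at a high level, so the following is a minimal sketch of that recipe rather than the authors' released code: fine-tune a public Chinese BERT checkpoint for sequence classification, optionally initializing from an encoder already adapted on valence-arousal data such as CVAT before training on labeled lyrics. The model name `bert-base-chinese`, the four-class label set, and the function and dataset names are illustrative assumptions.

```python
# Sketch: BERT fine-tuning for Chinese lyric emotion classification.
# Assumptions (not from the paper): bert-base-chinese checkpoint, 4 emotion
# classes (e.g. quadrants of the valence-arousal plane), HuggingFace Transformers.
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import BertTokenizerFast, BertForSequenceClassification

MODEL_NAME = "bert-base-chinese"   # assumed public Chinese BERT checkpoint
NUM_EMOTIONS = 4                   # assumed label set

tokenizer = BertTokenizerFast.from_pretrained(MODEL_NAME)

class LyricDataset(Dataset):
    """Pairs of (lyric text, emotion label id)."""
    def __init__(self, texts, labels):
        self.enc = tokenizer(texts, truncation=True, padding=True,
                             max_length=128, return_tensors="pt")
        self.labels = torch.tensor(labels)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: v[i] for k, v in self.enc.items()}
        item["labels"] = self.labels[i]
        return item

def fine_tune(texts, labels, init_path=MODEL_NAME, epochs=3, lr=2e-5):
    """Fine-tune BERT on labeled lyrics.

    Pass init_path pointing to an encoder already adapted on valence-arousal
    data (e.g. CVAT) to mimic the transfer-learning setting; pass the plain
    checkpoint to mimic the BERT-only baseline.
    """
    model = BertForSequenceClassification.from_pretrained(
        init_path, num_labels=NUM_EMOTIONS)
    loader = DataLoader(LyricDataset(texts, labels), batch_size=16, shuffle=True)
    optim = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for batch in loader:
            optim.zero_grad()
            loss = model(**batch).loss
            loss.backward()
            optim.step()
    return model
```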