Jiaxuan Liu


2025

DiffStyleTTS: Diffusion-based Hierarchical Prosody Modeling for Text-to-Speech with Diverse and Controllable Styles
Jiaxuan Liu | Zhaoci Liu | Yajun Hu | Yingying Gao | Shilei Zhang | Zhenhua Ling
Proceedings of the 31st International Conference on Computational Linguistics

Human speech exhibits rich and flexible prosodic variation. To address the one-to-many mapping from text to prosody in a reasonable and flexible manner, we propose DiffStyleTTS, a multi-speaker acoustic model based on a conditional diffusion module and improved classifier-free guidance, which models speech prosodic features hierarchically and uses different prosodic styles to guide prosody prediction. Experiments show that our method outperforms all baselines in naturalness and achieves faster synthesis than the three diffusion-based baselines. In addition, by adjusting the guidance scale, DiffStyleTTS effectively controls how strongly the chosen style guides the synthetic prosody.
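
As background for the controllability claim above, here is a minimal, generic sketch of classifier-free guidance with an adjustable guidance scale. The model interface and argument names are illustrative assumptions, not the DiffStyleTTS implementation.

```python
import torch

def cfg_denoise(model, x_t, t, style_cond, guidance_scale: float = 2.0):
    """Generic classifier-free guidance step (illustrative, not the
    paper's code): blend unconditional and conditional noise predictions.
    `model`, `style_cond`, and the `cond=` keyword are assumptions."""
    # Unconditional prediction: the style condition is dropped.
    eps_uncond = model(x_t, t, cond=None)
    # Conditional prediction guided by the prosodic style embedding.
    eps_cond = model(x_t, t, cond=style_cond)
    # guidance_scale = 1.0 recovers the plain conditional prediction;
    # larger values push the output further toward the conditioned prosody.
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)
```

In this standard formulation, raising the scale strengthens the influence of the style condition on the denoised output, which matches the abstract's description of tuning guidance intensity.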

Dianchi at SemEval-2025 Task 11: Multilabel Emotion Recognition via Orthogonal Knowledge Distillation
Zhenlan Wang | Jiaxuan Liu
Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)

This paper presents KDBERT-MLDistill, a novel framework for multi-label emotion recognition developed for SemEval-2025 Task 11. To address fine-grained emotion misdetection and overfitting on small data, the method combines BERT-based text encoding with orthogonal knowledge distillation. Key innovations include: (1) orthogonal regularization on the classifier weights to minimize redundant feature correlations, coupled with dynamic pseudo-labeling for periodic data augmentation; and (2) a hierarchical distillation mechanism in which dual teacher-student models iteratively exchange parameters to balance knowledge retention and exploration.
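
To illustrate innovation (1), the following is a minimal sketch of an orthogonality penalty on classifier weights. The helper name and the way the penalty is combined with the task loss are assumptions for illustration, not the KDBERT-MLDistill code.

```python
import torch

def orthogonal_penalty(weight: torch.Tensor) -> torch.Tensor:
    """Generic orthogonal regularizer (illustrative): penalize
    ||W W^T - I||_F^2 so per-label weight vectors stay decorrelated."""
    # Normalize rows so the penalty targets direction, not magnitude.
    w = torch.nn.functional.normalize(weight, dim=1)
    gram = w @ w.t()  # pairwise cosine similarities between label weights
    identity = torch.eye(gram.size(0), device=weight.device)
    return ((gram - identity) ** 2).sum()

# Illustrative usage with a multi-label classification head:
# classifier = torch.nn.Linear(hidden_dim, num_emotions)
# loss = bce_loss + lambda_orth * orthogonal_penalty(classifier.weight)
```

Driving the off-diagonal entries of the Gram matrix toward zero discourages redundant, highly correlated label weight vectors, which is the stated aim of the regularization.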