Zhili Liu
2025
Mixture of insighTful Experts (MoTE): The Synergy of Reasoning Chains and Expert Mixtures in Self-Alignment
Zhili Liu | Yunhao Gou | Kai Chen | Lanqing Hong | Jiahui Gao | Fei Mi | Yu Zhang | Zhenguo Li | Xin Jiang | Qun Liu | James Kwok
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Corrupted but Not Broken: Understanding and Mitigating the Negative Impacts of Corrupted Data in Visual Instruction Tuning
Yunhao Gou | Hansi Yang | Zhili Liu | Kai Chen | Yihan Zeng | Lanqing Hong | Zhenguo Li | Qun Liu | Bo Han | James Kwok | Yu Zhang
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
2024
ProxyQA: An Alternative Framework for Evaluating Long-Form Text Generation with Large Language Models
Haochen Tan | Zhijiang Guo | Zhan Shi | Lu Xu | Zhili Liu | Yunlong Feng | Xiaoguang Li | Yasheng Wang | Lifeng Shang | Qun Liu | Linqi Song
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)