Zhichao Xu
2026
Learning to Ideate for Machine Learning Engineering Agents
Yunxiang Zhang | Kang Zhou | Zhichao Xu | Kiran Ramnath | Yun Zhou | Sangmin Woo | Haibo Ding | Lin Lee Cheong
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 2: Short Papers)
Existing machine learning engineering (MLE) agents struggle to iteratively optimize their implemented algorithms for effectiveness. To address this, we introduce MLE-Ideator, a dual-agent framework that separates ideation from implementation. In our system, an implementation agent can request strategic help from a dedicated Ideator. We show this approach is effective in two ways. First, in a training-free setup, our framework significantly outperforms implementation-only agent baselines on MLE-Bench. Second, we demonstrate that the Ideator can be trained with reinforcement learning (RL) to generate more effective ideas. With only 1K training samples from 10 MLE tasks, our RL-trained Qwen3-8B Ideator achieves an 11.5% relative improvement compared to its untrained counterpart and surpasses Claude Sonnet 3.5. These results highlight a promising path toward training strategic AI systems for scientific discovery.
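The dual-agent loop described above can be pictured with a short sketch. The following Python is a minimal, hypothetical illustration of an implementation agent that requests strategic help from a separate Ideator between attempts; the `llm_call`, `evaluate`, and prompt formats are invented placeholders, not the paper's actual interfaces.

```python
# Minimal sketch of a dual-agent ideation/implementation loop (hypothetical interfaces).

def llm_call(role: str, prompt: str) -> str:
    """Stand-in for a real LLM call; returns a canned response for illustration."""
    return f"[{role}] response to: {prompt[:40]}..."

def ideator(task: str, history: list[str]) -> str:
    """Ideator agent: proposes a strategic idea given the task and past attempts."""
    prompt = f"Task: {task}\nPast attempts:\n" + "\n".join(history) + "\nPropose one improvement idea."
    return llm_call("ideator", prompt)

def evaluate(solution: str) -> float:
    """Placeholder evaluation; a real agent would execute the code and read a validation metric."""
    return 0.0

def implementer(task: str, idea: str) -> tuple[str, float]:
    """Implementation agent: turns an idea into a solution and returns (solution, score)."""
    solution = llm_call("implementer", f"Task: {task}\nIdea to implement: {idea}")
    return solution, evaluate(solution)

def solve(task: str, n_rounds: int = 5) -> str:
    best_solution, best_score, history = "", float("-inf"), []
    for _ in range(n_rounds):
        idea = ideator(task, history)                # strategic help on request
        solution, score = implementer(task, idea)    # implementation + evaluation
        history.append(f"idea={idea!r}, score={score:.3f}")
        if score > best_score:
            best_solution, best_score = solution, score
    return best_solution
```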
BayesFlow: A Probability Inference Framework for Meta-Agent Assisted Workflow Generation
Bo Yuan | Yun Zhou | Zhichao Xu | Kiran Ramnath | Aosong Feng | Balasubramaniam Srinivasan
Findings of the Association for Computational Linguistics: EACL 2026
Automatic workflow generation is the process of automatically synthesizing sequences of LLM calls, tool invocations, and post-processing steps for complex end-to-end tasks. Most prior methods cast this task as an optimization problem with limited theoretical grounding. We propose to cast workflow generation as Bayesian inference over a posterior distribution on workflows, and introduce Bayesian Workflow Generation (BWG), a sampling framework that builds workflows step-by-step using parallel look-ahead rollouts for importance weighting and a sequential in-loop refiner for pool-wide improvements. We prove that, without the refiner, the weighted empirical distribution converges to the target posterior. We instantiate BWG as BayesFlow, a training-free algorithm for workflow construction. Across six benchmark datasets, BayesFlow improves accuracy by up to 9 percentage points over SOTA workflow generation baselines and by up to 65 percentage points over zero-shot prompting, establishing BWG as a principled upgrade to search-based workflow design.
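As a rough illustration of step-by-step workflow sampling with look-ahead importance weighting, the toy sketch below maintains a pool of partial workflows, extends each one step at a time, weights the extensions by cheap look-ahead rollouts, and resamples the pool. The step vocabulary, proposal, and reward are invented stand-ins; the in-loop refiner and the convergence machinery from the paper are omitted.

```python
import random

STEPS = ["retrieve", "summarize", "call_tool", "verify", "answer"]

def propose_step(workflow):
    """Proposal distribution over the next step (placeholder: uniform)."""
    return random.choice(STEPS)

def toy_task_score(workflow):
    """Placeholder reward: prefers workflows that retrieve before answering."""
    return 1.0 if "retrieve" in workflow and workflow[-1] == "answer" else 0.1

def lookahead_score(workflow, n_rollouts=4):
    """Estimate how promising a partial workflow is via cheap look-ahead rollouts."""
    scores = []
    for _ in range(n_rollouts):
        completed = workflow + [random.choice(STEPS) for _ in range(2)]
        scores.append(toy_task_score(completed))
    return sum(scores) / len(scores)

def bwg_sample(n_particles=8, horizon=4):
    particles = [[] for _ in range(n_particles)]
    for _ in range(horizon):
        # Extend every particle, weight by look-ahead rollouts, then resample the pool.
        particles = [p + [propose_step(p)] for p in particles]
        weights = [lookahead_score(p) for p in particles]
        particles = random.choices(particles, weights=weights, k=n_particles)
    return max(particles, key=toy_task_score)

print(bwg_sample())
```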
Diffusion Language Model Inference with Monte Carlo Tree Search
Zheng Huang | Kiran Ramnath | Yueyan Chen | Aosong Feng | Sangmin Woo | Balasubramaniam Srinivasan | Zhichao Xu | Kang Zhou | Shuai Wang | Haibo Ding | Lin Lee Cheong
Findings of the Association for Computational Linguistics: EACL 2026
Diffusion language models (DLMs) have recently emerged as a compelling alternative to autoregressive generation, offering parallel generation and improved global coherence. During inference, DLMs generate text by iteratively denoising masked sequences in parallel; however, determining which positions to unmask and which tokens to commit forms a large combinatorial search problem. Existing inference methods approximate this search using heuristics, which often yield suboptimal decoding paths; other approaches instead rely on additional training to guide token selection. To provide a principled search mechanism for DLM inference, we introduce MEDAL, an inference-time scaling framework that integrates Monte Carlo Tree SEarch initialization for Diffusion LAnguage Model inference. We employ Monte Carlo Tree Search at the initialization stage to explore promising unmasking trajectories, providing a robust starting point for subsequent refinement. This design enables efficient inference-time scaling, allowing generation quality to improve as the search budget increases, without additional training. Across multiple benchmarks, MEDAL achieves up to 22.0% improvement over existing inference strategies, establishing a new paradigm for search-based inference in DLMs.
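To make the combinatorial search over unmasking decisions concrete, here is a compact, toy Monte Carlo Tree Search over (position, token) commitments for a short masked sequence. The "denoiser" reward is a synthetic placeholder and the setup is far simpler than an actual DLM, so treat it as a sketch of the search pattern rather than MEDAL itself.

```python
import math, random

VOCAB = ["a", "b", "c"]
LENGTH = 4

def candidate_actions(state):
    """All (position, token) pairs for still-masked positions."""
    return [(i, t) for i, s in enumerate(state) if s is None for t in VOCAB]

def rollout_score(state):
    """Complete the sequence randomly and score it (placeholder reward: fraction of 'a's)."""
    filled = [s if s is not None else random.choice(VOCAB) for s in state]
    return filled.count("a") / LENGTH

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children = {}          # action -> Node
        self.visits, self.value = 0, 0.0

    def ucb_child(self, c=1.4):
        return max(self.children.items(),
                   key=lambda kv: kv[1].value / (kv[1].visits + 1e-9)
                   + c * math.sqrt(math.log(self.visits + 1) / (kv[1].visits + 1e-9)))

def mcts(root_state, n_sim=200):
    root = Node(root_state)
    for _ in range(n_sim):
        node = root
        # Selection: follow UCB while the node is fully expanded.
        while node.children and len(node.children) == len(candidate_actions(node.state)):
            _, node = node.ucb_child()
        # Expansion: try one untried (position, token) action, if any remain.
        untried = [a for a in candidate_actions(node.state) if a not in node.children]
        if untried:
            pos, tok = random.choice(untried)
            child_state = list(node.state)
            child_state[pos] = tok
            node.children[(pos, tok)] = Node(child_state, parent=node)
            node = node.children[(pos, tok)]
        # Simulation + backpropagation.
        reward = rollout_score(node.state)
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Return the most-visited first commitment as the initialization choice.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

print(mcts([None] * LENGTH))
```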
SALT: Step-level Advantage Assignment for Long-horizon Agents via Trajectory Graph
Jiazheng Li | Yawei Wang | Qiaojing Yan | Yijun Tian | Zhichao Xu | Huan Song | Panpan Xu | Lin Lee Cheong
Findings of the Association for Computational Linguistics: EACL 2026
Large Language Models (LLMs) have demonstrated remarkable capabilities, enabling language agents to excel at single-turn tasks. However, their application to complex, multi-step, and long-horizon tasks remains challenging. While reinforcement learning (RL) offers a promising avenue for addressing these challenges, mainstream approaches typically rely solely on sparse, outcome-based rewards — a limitation that becomes especially problematic for group-based RL algorithms lacking critic models, such as Group Relative Policy Optimization (GRPO). In such methods, uniformly rewarding or penalizing all actions within a trajectory can lead to training instability and suboptimal policies, because beneficial and detrimental actions are often entangled across multi-step interactions. To address this challenge, we propose SALT, a novel and lightweight framework that provides a finer-grained advantage assignment, derived solely from outcome rewards. We achieve this by constructing a graph from trajectories of the same prompt, which allows us to quantify the quality of each step and assign advantages accordingly. Crucially, SALT is designed as a plug-and-play module that seamlessly integrates with existing group-based RL algorithms — requiring no modifications to the rollout procedure and introducing negligible computational overhead. Extensive experiments on the WebShop, ALFWorld, and AppWorld benchmarks with various model sizes demonstrate that SALT consistently improves performance. We also conduct a thorough analysis to validate the design choices behind SALT and offer actionable insights.
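A simplified reading of step-level advantage assignment from outcome-only rewards can be sketched as follows: merge trajectories for the same prompt into a graph keyed by step content, score each node by the outcomes of the trajectories passing through it, and center by the group mean. This is an illustrative approximation, not SALT's exact graph construction.

```python
from collections import defaultdict

def step_advantages(trajectories):
    """trajectories: list of (steps, outcome_reward) rollouts for the SAME prompt."""
    group_mean = sum(r for _, r in trajectories) / len(trajectories)

    # Build the graph: map each step (node) to the outcomes of trajectories containing it.
    node_outcomes = defaultdict(list)
    for steps, reward in trajectories:
        for step in set(steps):
            node_outcomes[step].append(reward)

    # Advantage of a step = mean outcome through that node, centered by the group mean.
    node_adv = {s: sum(v) / len(v) - group_mean for s, v in node_outcomes.items()}

    # Per-trajectory, per-step advantages to plug into a GRPO-style update.
    return [[node_adv[s] for s in steps] for steps, _ in trajectories]

rollouts = [
    (["search(query)", "click(item_3)", "buy()"], 1.0),
    (["search(query)", "click(item_7)", "buy()"], 0.0),
    (["search(query)", "click(item_3)", "leave()"], 0.0),
]
for adv in step_advantages(rollouts):
    print([round(a, 2) for a in adv])
```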
2025
MAIN-RAG: Multi-Agent Filtering Retrieval-Augmented Generation
Chia-Yuan Chang | Zhimeng Jiang | Vineeth Rakesh | Menghai Pan | Chin-Chia Michael Yeh | Guanchu Wang | Mingzhi Hu | Zhichao Xu | Yan Zheng | Mahashweta Das | Na Zou
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Large Language Models (LLMs) are becoming essential tools for various natural language processing tasks but often suffer from generating outdated or incorrect information. Retrieval-Augmented Generation (RAG) addresses this issue by incorporating external, real-time information retrieval to ground LLM responses. However, existing RAG systems frequently struggle with the quality of retrieved documents, as irrelevant or noisy documents degrade performance, increase computational overhead, and undermine response reliability. To tackle this problem, we propose Multi-Agent Filtering Retrieval-Augmented Generation (MAIN-RAG), a training-free RAG framework that leverages multiple LLM agents to collaboratively filter and score retrieved documents. Specifically, MAIN-RAG introduces an adaptive filtering mechanism that dynamically adjusts the relevance filtering threshold based on score distributions, effectively minimizing noise while maintaining high recall of relevant documents. The proposed approach leverages inter-agent consensus to ensure robust document selection without requiring additional training data or fine-tuning. Experimental results across four QA benchmarks demonstrate that MAIN-RAG consistently outperforms traditional RAG approaches, achieving a 2–11% improvement in answer accuracy while reducing the number of irrelevant retrieved documents. Quantitative analysis further reveals that our approach achieves superior response consistency and answer accuracy over baseline methods, offering a competitive and practical alternative to training-based solutions.
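One way an adaptive, distribution-based filtering threshold could look is sketched below: several judge agents score each retrieved document, the per-document consensus is the mean score, and the cutoff is derived from the score distribution (here, mean minus one standard deviation). The judge function and the threshold rule are illustrative choices, not the paper's exact mechanism.

```python
import statistics

def toy_agent_score(query, doc, seed):
    """Placeholder judge: lexical overlap plus a small per-agent offset."""
    overlap = len(set(query.split()) & set(doc.split()))
    return min(1.0, overlap / max(1, len(query.split())) + 0.05 * seed)

def judge_scores(query, doc, n_agents=3):
    """In a real system, each agent would prompt an LLM to rate relevance in [0, 1]."""
    return [toy_agent_score(query, doc, seed=i) for i in range(n_agents)]

def filter_documents(query, docs):
    # Consensus score per document = mean over agents.
    scores = [statistics.mean(judge_scores(query, d)) for d in docs]
    # Adaptive threshold derived from the score distribution rather than a fixed cutoff.
    threshold = statistics.mean(scores) - statistics.pstdev(scores)
    return [d for d, s in zip(docs, scores) if s >= threshold]

docs = ["capital of France is Paris", "recipe for pancakes", "Paris is in France"]
print(filter_documents("What is the capital of France", docs))
```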
A Systematic Survey of Automatic Prompt Optimization Techniques
Kiran Ramnath | Kang Zhou | Sheng Guan | Soumya Smruti Mishra | Xuan Qi | Zhengyuan Shen | Shuai Wang | Sangmin Woo | Sullam Jeoung | Yawei Wang | Haozhu Wang | Han Ding | Yuzhe Lu | Zhichao Xu | Yun Zhou | Balasubramaniam Srinivasan | Qiaojing Yan | Yueyan Chen | Haibo Ding | Panpan Xu | Lin Lee Cheong
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Since the advent of large language models (LLMs), prompt engineering has been a crucial step for eliciting desired responses for various Natural Language Processing (NLP) tasks. However, prompt engineering remains an impediment for end users due to rapid advances in models, tasks, and associated best practices. To mitigate this, Automatic Prompt Optimization (APO) methods have recently emerged that use automated techniques to improve the performance of LLMs on various tasks. In this paper, we present a comprehensive survey summarizing the current progress and remaining challenges in this field. We provide a formal definition of APO and a 5-part unifying framework, and then rigorously categorize all relevant works based on their salient features within it. We hope to spur further research guided by our framework.
SLOT: Structuring the Output of Large Language Models
Zhengyuan Shen | Darren Yow-Bang Wang | Soumya Smruti Mishra | Zhichao Xu | Yifei Teng | Haibo Ding
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: Industry Track
Structured outputs are essential for large language models (LLMs) in critical applications like agents and information extraction. Despite their capabilities, LLMs often generate outputs that deviate from predefined schemas, significantly hampering reliable application development. We present SLOT (Structured LLM Output Transformer), a model-agnostic approach that transforms unstructured LLM outputs into precise structured formats. While existing solutions predominantly rely on constrained decoding techniques or are tightly coupled with specific models, SLOT employs a fine-tuned lightweight language model as a post-processing layer, achieving flexibility across various LLMs and schema specifications. We introduce SLOTBench, curated by a data synthesis pipeline, alongside a formal evaluation methodology that quantifies both schema accuracy and content fidelity. Our results demonstrate that a fine-tuned Mistral-7B model with constrained decoding achieves near-perfect schema accuracy (99.5%) and content similarity (94.0%), outperforming Claude-3.5-Sonnet by substantial margins (+25 and +20 percentage points, respectively). Notably, even compact models like Llama-3.2-1B can match or exceed the structured output capabilities of much larger proprietary models when equipped with SLOT, enabling reliable structured generation in resource-constrained environments. SLOTBench will be released upon legal approval.
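A post-processing layer of this kind can be sketched in a few lines: a lightweight structuring model rewrites free-form output into JSON, which is then validated against the target schema before use. The `small_structuring_model` stub and the minimal schema checker below are hypothetical stand-ins for the fine-tuned model and a full JSON Schema validator.

```python
import json

SCHEMA = {"required": ["name", "year"], "types": {"name": str, "year": int}}

def small_structuring_model(unstructured: str, schema: dict) -> str:
    """Placeholder for the fine-tuned lightweight post-processor; output is faked here."""
    return '{"name": "SLOT", "year": 2025}'

def validate(obj: dict, schema: dict) -> bool:
    """Minimal check: every required key is present with the expected type."""
    return all(k in obj and isinstance(obj[k], schema["types"][k]) for k in schema["required"])

def structure_output(raw_llm_output: str, schema: dict) -> dict:
    candidate = small_structuring_model(raw_llm_output, schema)
    obj = json.loads(candidate)
    if not validate(obj, schema):
        raise ValueError("post-processed output does not satisfy the schema")
    return obj

print(structure_output("SLOT was introduced at the EMNLP Industry Track in 2025.", SCHEMA))
```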
IPR: Intelligent Prompt Routing with User-Controlled Quality-Cost Trade-offs
Aosong Feng | Balasubramaniam Srinivasan | Yun Zhou | Zhichao Xu | Kang Zhou | Sheng Guan | Yueyan Chen | Xian Wu | Ninad Kulkarni | Yi Zhang | Zhengyuan Shen | Dmitriy Bespalov | Soumya Smruti Mishra | Yifei Teng | Darren Yow-Bang Wang | Haibo Ding | Lin Lee Cheong
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: Industry Track
Routing incoming queries to the most cost-effective LLM while maintaining response quality poses a fundamental challenge in optimizing performance-cost trade-offs for large-scale commercial systems. We present IPR, a quality-constrained Intelligent Prompt Routing framework that dynamically selects optimal models based on predicted response quality and user-specified tolerance levels. IPR introduces three key innovations: (1) a modular architecture with lightweight quality estimators trained on 1.5M prompts annotated with calibrated quality scores, enabling fine-grained quality prediction across model families; (2) a user-controlled routing mechanism with tolerance parameter 𝜏 ∈ [0,1] that provides explicit control over quality-cost trade-offs; and (3) an extensible design using frozen encoders with model-specific adapters, reducing new model integration from days to hours. To rigorously train and evaluate IPR, we curate an industrial-level IPR dataset, a comprehensive benchmark containing 1.5 million examples with response quality annotations across 11 LLM candidates. Deployed on a major cloud platform, IPR achieves a 43.9% cost reduction while maintaining quality parity with the strongest model in the Claude family and processes requests with sub-150ms latency.
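One plausible reading of the tolerance mechanism is sketched below: among models whose predicted quality is within τ of the best predicted quality, route to the cheapest. The candidate list, costs, and quality estimates are invented, and the rule shown illustrates the quality-cost trade-off rather than the deployed routing policy.

```python
CANDIDATES = [
    {"name": "small",  "cost": 1.0},
    {"name": "medium", "cost": 4.0},
    {"name": "large",  "cost": 10.0},
]

def predict_quality(prompt: str, model_name: str) -> float:
    """Placeholder for the per-model lightweight quality estimator."""
    return {"small": 0.72, "medium": 0.85, "large": 0.91}[model_name]

def route(prompt: str, tau: float) -> str:
    quality = {m["name"]: predict_quality(prompt, m["name"]) for m in CANDIDATES}
    best = max(quality.values())
    # Among models predicted to lose at most tau quality vs. the best, pick the cheapest.
    acceptable = [m for m in CANDIDATES if quality[m["name"]] >= best - tau]
    return min(acceptable, key=lambda m: m["cost"])["name"]

print(route("Summarize this contract.", tau=0.05))   # strict tolerance -> larger model
print(route("Summarize this contract.", tau=0.25))   # loose tolerance  -> cheaper model
```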
Distillation versus Contrastive Learning: How to Train Your Rerankers
Zhichao Xu | Zhiqi Huang | Shengyao Zhuang | Vivek Srikumar
Proceedings of the 14th International Joint Conference on Natural Language Processing and the 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics
Training effective text rerankers is crucial for information retrieval. Two strategies are widely used: contrastive learning (optimizing directly on ground-truth labels) and knowledge distillation (transferring knowledge from a larger reranker). While both have been studied extensively, a clear comparison of their effectiveness for training cross-encoder rerankers under practical conditions is needed. This paper empirically compares these strategies by training rerankers of different sizes (0.5B, 1.5B, 3B, 7B) and architectures (Transformer, Recurrent) using both methods on the same data, with a strong contrastive learning model acting as the distillation teacher. Our results show that knowledge distillation generally yields better in-domain and out-of-domain ranking performance than contrastive learning when distilling from a more performant teacher model. This finding is consistent across student model sizes and architectures. However, distilling from a teacher of the same capacity does not provide the same advantage, particularly for out-of-domain tasks. These findings offer practical guidance for choosing a training strategy based on available teacher models. We recommend using knowledge distillation to train smaller rerankers if a larger, more performant teacher is accessible; in its absence, contrastive learning remains a robust baseline. Our code implementation is made available to facilitate reproducibility.
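The two training signals being compared are standard and easy to write down. The PyTorch sketch below shows an InfoNCE-style contrastive loss over a group of candidate passages for one query, versus a temperature-scaled KL distillation loss against teacher scores; the random tensors stand in for cross-encoder outputs.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(student_scores: torch.Tensor, positive_idx: int) -> torch.Tensor:
    """InfoNCE-style loss: softmax over the candidate group, cross-entropy on the positive."""
    return F.cross_entropy(student_scores.unsqueeze(0), torch.tensor([positive_idx]))

def distillation_loss(student_scores: torch.Tensor, teacher_scores: torch.Tensor,
                      temperature: float = 1.0) -> torch.Tensor:
    """KL divergence between student and teacher score distributions over the group."""
    s = F.log_softmax(student_scores / temperature, dim=-1)
    t = F.softmax(teacher_scores / temperature, dim=-1)
    return F.kl_div(s, t, reduction="batchmean") * temperature ** 2

student = torch.randn(8, requires_grad=True)   # scores for 8 passages of one query
teacher = torch.randn(8)                       # scores from a stronger teacher reranker
print(contrastive_loss(student, positive_idx=0).item())
print(distillation_loss(student, teacher).item())
```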
CSPLADE: Learned Sparse Retrieval with Causal Language Models
Zhichao Xu | Aosong Feng | Yijun Tian | Haibo Ding | Lin Lee Cheong
Proceedings of the 14th International Joint Conference on Natural Language Processing and the 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics
In recent years, dense retrieval has been the focus of information retrieval (IR) research. While effective, dense retrieval produces uninterpretable dense vectors and suffers from large index sizes. Learned sparse retrieval (LSR) has emerged as a promising alternative, achieving competitive retrieval performance while also being able to leverage the classical inverted index data structure for efficient retrieval. However, few works have explored scaling LSR beyond BERT scale. In this work, we identify two challenges in training large language models (LLMs) for LSR: (1) training instability during the early stage of contrastive training; (2) suboptimal performance due to the pre-trained LLM's unidirectional attention. To address these challenges, we propose two corresponding techniques: (1) a lightweight adaptation training phase to eliminate training instability; (2) two model variants to enable bidirectional information flow. With these techniques, we are able to train LSR models with an 8B-scale LLM and achieve competitive retrieval performance with reduced index size. Furthermore, we are among the first to analyze the performance-efficiency tradeoff of LLM-based LSR models through the lens of model quantization. Our findings provide insights into adapting LLMs for efficient retrieval modeling.
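For readers unfamiliar with learned sparse retrieval, the sketch below shows the common SPLADE-style encoding that such models build on: vocabulary logits are passed through log(1 + ReLU(.)) and max-pooled over the sequence, giving a sparse vector scored by dot product. The random logits are placeholders; the paper's causal-LM adaptations and quantization analysis are not shown.

```python
import torch

def splade_encode(logits: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """logits: (seq_len, vocab_size); attention_mask: (seq_len,) with 1 for real tokens."""
    weights = torch.log1p(torch.relu(logits))          # non-negative term weights
    weights = weights * attention_mask.unsqueeze(-1)   # ignore padding positions
    return weights.max(dim=0).values                   # max-pool over the sequence

vocab_size, seq_len = 1000, 12
query_vec = splade_encode(torch.randn(seq_len, vocab_size), torch.ones(seq_len))
doc_vec = splade_encode(torch.randn(seq_len, vocab_size), torch.ones(seq_len))

score = torch.dot(query_vec, doc_vec)              # sparse dot-product relevance score
print(score.item(), int((query_vec > 0).sum()))    # score and number of active terms
```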
Found in Translation: Measuring Multilingual LLM Consistency as Simple as Translate then Evaluate
Ashim Gupta | Maitrey Mehta | Zhichao Xu | Vivek Srikumar
Proceedings of the 14th International Joint Conference on Natural Language Processing and the 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics
Large language models (LLMs) provide detailed and impressive responses to queries in English. However, are they really consistent when responding to the same query in other languages? The popular way of evaluating the multilingual performance of LLMs requires expensive-to-collect annotated datasets. Further, evaluating tasks like open-ended generation, where multiple correct answers may exist, is nontrivial. Instead, we propose to evaluate the predictability of model responses across different languages. In this work, we propose a framework to evaluate an LLM's cross-lingual consistency based on a simple Translate then Evaluate strategy. We instantiate this evaluation framework along two dimensions of consistency: information and empathy. Our results reveal pronounced inconsistencies in popular LLM responses across thirty languages, with severe performance deficits in certain language families and scripts, underscoring critical weaknesses in their multilingual capabilities. These findings necessitate cross-lingual evaluations that are consistent along multiple dimensions. We invite practitioners to use our framework for future multilingual LLM benchmarking.
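The Translate then Evaluate idea can be illustrated with a toy script: generate responses to the same query in several languages, translate them all into English, and measure agreement against the English response. The `generate` and `translate_to_english` stubs and the token-overlap similarity are placeholders for an LLM, an MT system, and the paper's actual consistency metrics.

```python
def generate(query: str, language: str) -> str:
    """Placeholder LLM: canned responses in English, Hindi, and Swahili."""
    return {"en": "Paris is the capital of France.",
            "hi": "फ्रांस की राजधानी पेरिस है।",
            "sw": "Mji mkuu wa Ufaransa ni Paris."}[language]

def translate_to_english(text: str, source_language: str) -> str:
    """Placeholder MT system with canned translations."""
    return {"en": text,
            "hi": "The capital of France is Paris.",
            "sw": "The capital city of France is Paris."}[source_language]

def token_overlap(a: str, b: str) -> float:
    ta = set(a.lower().replace(".", "").split())
    tb = set(b.lower().replace(".", "").split())
    return len(ta & tb) / len(ta | tb)

def consistency(query: str, languages: list[str]) -> float:
    english_versions = [translate_to_english(generate(query, l), l) for l in languages]
    reference = english_versions[0]                     # English response as the anchor
    others = english_versions[1:]
    return sum(token_overlap(reference, v) for v in others) / len(others)

print(round(consistency("What is the capital of France?", ["en", "hi", "sw"]), 3))
```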
State Space Models are Strong Text Rerankers
Zhichao Xu | Jinghua Yan | Ashim Gupta | Vivek Srikumar
Proceedings of the 10th Workshop on Representation Learning for NLP (RepL4NLP-2025)
Transformers dominate NLP and IR, but their inference inefficiencies and challenges in extrapolating to longer contexts have sparked interest in alternative model architectures. Among these, state space models (SSMs) like Mamba offer promising advantages, particularly in inference time complexity. Despite their potential, SSMs' effectiveness at text reranking, a task requiring fine-grained query-document interaction and long-context understanding, remains underexplored. This study benchmarks SSM-based architectures (specifically, Mamba-1 and Mamba-2) against transformer-based models across various scales, architectures, and pre-training objectives, focusing on performance and efficiency in text reranking tasks. We find that (1) Mamba architectures achieve competitive text ranking performance, comparable to transformer-based models of similar size; (2) they are less efficient in training and inference compared to transformers with flash attention; and (3) Mamba-2 outperforms Mamba-1 in both performance and efficiency. These results underscore the potential of state space models as a transformer alternative and highlight areas for improvement in future IR applications.
2024
In-Context Example Ordering Guided by Label Distributions
Zhichao Xu | Daniel Cohen | Bei Wang | Vivek Srikumar
Findings of the Association for Computational Linguistics: NAACL 2024
By allowing models to predict without task-specific training, in-context learning (ICL) with pretrained LLMs has enormous potential in NLP. However, a number of problems persist in ICL. In particular, its performance is sensitive to the choice and order of in-context examples. Given the same set of in-context examples with different orderings, model performance may vary from near random to near state-of-the-art. In this work, we formulate in-context example ordering as an optimization problem. We examine three problem settings that differ in the assumptions they make about what is known about the task. Inspired by the idea of learning from label proportions, we propose two principles for in-context example ordering guided by the model's probability predictions. We apply our proposed principles to thirteen text classification datasets and nine different autoregressive LLMs with 700M to 13B parameters. We demonstrate that our approach outperforms the baselines by improving classification accuracy, reducing model miscalibration, and selecting better in-context examples.
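A toy version of ordering selection guided by label distributions might look like the following: enumerate candidate orderings, query the model with a content-free probe, and keep the ordering whose predicted label distribution is closest (in KL divergence) to the known label proportions. The model call is a random stand-in and the selection rule is a simplified reading of the paper's principles.

```python
import itertools, math, random

LABELS = ["positive", "negative"]
PRIOR = {"positive": 0.5, "negative": 0.5}     # assumed-known label proportions

def predicted_label_distribution(ordering, probe="N/A"):
    """Placeholder: a real implementation would score each label with the LLM."""
    rng = random.Random(hash(tuple(ordering)) % (2 ** 32))
    p = rng.uniform(0.2, 0.8)
    return {"positive": p, "negative": 1 - p}

def kl(p, q):
    return sum(p[l] * math.log(p[l] / q[l]) for l in LABELS)

def best_ordering(examples):
    candidates = list(itertools.permutations(examples))
    # Keep the ordering whose probe-time label distribution best matches the prior.
    return min(candidates, key=lambda o: kl(PRIOR, predicted_label_distribution(o)))

examples = [("great movie", "positive"), ("terrible plot", "negative"), ("loved it", "positive")]
print(best_ordering(examples))
```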
Multi-dimensional Evaluation of Empathetic Dialogue Responses
Zhichao Xu | Jiepu Jiang
Findings of the Association for Computational Linguistics: EMNLP 2024
Empathy is critical for effective and satisfactory conversational communication. Prior efforts to measure conversational empathy mostly focus on expressed communicative intents, that is, the way empathy is expressed. Yet, these works ignore the fact that conversation is also a collaboration involving both speakers and listeners. In contrast, we propose a multi-dimensional empathy evaluation framework to measure both expressed intents from the speaker's perspective and perceived empathy from the listener's perspective. We apply our analytical framework to examine internal customer-service dialogues. We find the two dimensions (expressed intent types and perceived empathy) are interconnected, while perceived empathy has high correlations with dialogue satisfaction levels. To reduce the annotation cost, we explore different options to automatically measure conversational empathy: prompting LLMs and training language model-based classifiers. Our experiments show that prompting methods, even with popular models like GPT-4 and Flan family models, perform relatively poorly on both public and our internal datasets. In contrast, instruction-finetuned classifiers based on FlanT5 family models outperform prior works and competitive baselines. We conduct a detailed ablation study to give more insights into the instruction finetuning method's strong performance.
Beyond Perplexity: Multi-dimensional Safety Evaluation of LLM Compression
Zhichao Xu | Ashim Gupta | Tao Li | Oliver Bentham | Vivek Srikumar
Findings of the Association for Computational Linguistics: EMNLP 2024
Increasingly, model compression techniques enable large language models (LLMs) to be deployed in real-world applications. As a result of this momentum towards local deployment, compressed LLMs will interact with a large population. Prior work on compression typically prioritizes preserving perplexity, which is directly analogous to training loss. The impact of compression methods on other critical aspects of model behavior, particularly safety, requires systematic assessment. To this end, we investigate the impact of model compression along four dimensions: (1) degeneration harm, i.e., bias and toxicity in generation; (2) representational harm, i.e., biases in discriminative tasks; (3) dialect bias; and (4) language modeling and downstream task performance. We examine a wide spectrum of LLM compression techniques, including unstructured pruning, semi-structured pruning, and quantization. Our analysis reveals that compression can lead to unexpected consequences. Although compression may unintentionally alleviate LLMs' degeneration harm, it can still exacerbate representational harm. Furthermore, increasing compression produces a divergent impact on different protected groups. Finally, different compression methods have drastically different safety impacts: for example, quantization mostly preserves bias while pruning degrades quickly. Our findings underscore the importance of integrating safety assessments into the development of compressed LLMs to ensure their reliability across real-world applications.
Co-authors
- Lin Lee Cheong 6
- Haibo Ding 6
- Vivek Srikumar 5
- Aosong Feng 4
- Kiran Ramnath 4
- Balasubramaniam Srinivasan 4
- Kang Zhou 4
- Yun Zhou 4
- Yueyan Chen 3
- Ashim Gupta 3
- Soumya Smruti Mishra 3
- Zhengyuan Shen 3
- Sangmin Woo 3
- Sheng Guan 2
- Yifei Teng 2
- Yijun Tian 2
- Shuai Wang 2
- Yawei Wang 2
- Darren Yow-Bang Wang 2
- Panpan Xu 2
- Qiaojing Yan 2
- Oliver Bentham 1
- Dmitriy Bespalov 1
- Chia-Yuan Chang 1
- Daniel Cohen 1
- Mahashweta Das 1
- Han Ding 1
- Mingzhi Hu 1
- Zhiqi Huang 1
- Zheng Huang 1
- Sullam Jeoung 1
- Jiepu Jiang 1
- Zhimeng Jiang 1
- Ninad Kulkarni 1
- Tao Li 1
- Jiazheng Li 1
- Yuzhe Lu 1
- Maitrey Mehta 1
- Menghai Pan 1
- Xuan Qi 1
- Vineeth Rakesh 1
- Huan Song 1
- Bei Wang 1
- Guanchu Wang 1
- Haozhu Wang 1
- Xian Wu 1
- Jinghua Yan 1
- Chin-Chia Michael Yeh 1
- Bo Yuan 1
- Yi Zhang 1
- Yunxiang Zhang 1
- Yan Zheng 1
- Shengyao Zhuang 1
- Na Zou 1