Lionel Ni
2026
Continual Pretraining on Encrypted Synthetic Data for Privacy-Preserving LLMs
Honghao Liu | Xuhui Jiang | Chengjin Xu | Cehao Yang | Yiran Cheng | Lionel Ni | Jian Guo
Findings of the Association for Computational Linguistics: EACL 2026
Preserving privacy in sensitive data while pretraining large language models on small, domain-specific corpora presents a significant challenge. In this work, we take an exploratory step toward privacy-preserving continual pretraining by proposing an entity-based framework that synthesizes encrypted training data to protect personally identifiable information (PII). Our approach constructs a weighted entity graph to guide data synthesis and applies deterministic encryption to PII entities, enabling LLMs to encode new knowledge through continual pretraining while granting authorized access to sensitive data through decryption keys. Our results on limited-scale datasets demonstrate that our pretrained models outperform base models and ensure PII security, while exhibiting a modest performance gap compared to models trained on unencrypted synthetic data. We further show that increasing the number of entities and leveraging graph-based synthesis improves model performance, and that encrypted models retain instruction-following capabilities with long retrieved contexts. We discuss the security implications and limitations of deterministic encryption, positioning this work as an initial investigation into the design space of encrypted data pretraining for privacy-preserving LLMs. Our code is available at https://github.com/DataArcTech/SoE.
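The abstract describes deterministically encrypting PII entities so that continual pretraining sees consistent tokens while key holders can recover the originals. As an illustration only, here is a minimal stdlib sketch of that idea using a keyed-HMAC pseudonym plus a key-holder-side reverse mapping; the class name, token format, and mapping-based decryption are assumptions for this sketch, not the paper's actual scheme (a real system would use a proper deterministic cipher such as AES-SIV).

```python
import base64
import hashlib
import hmac


class DeterministicPIIEncoder:
    """Toy stand-in for deterministic PII encryption: the same entity
    under the same key always maps to the same token, so training data
    stays internally consistent, while a key-holder-side mapping lets
    authorized users recover the original entity."""

    def __init__(self, key: bytes):
        self.key = key
        self._reverse: dict[str, str] = {}  # held only by authorized key owners

    def encrypt(self, entity: str) -> str:
        # Keyed HMAC makes tokens deterministic per key but unlinkable
        # across keys; truncate and base64-encode for a compact token.
        digest = hmac.new(self.key, entity.encode(), hashlib.sha256).digest()
        token = "PII_" + base64.urlsafe_b64encode(digest[:9]).decode()
        self._reverse[token] = entity
        return token

    def decrypt(self, token: str) -> str:
        # Only a holder of the reverse mapping (i.e., the key owner in
        # this sketch) can resolve tokens back to raw PII.
        return self._reverse[token]
```

Because the mapping is deterministic, every mention of the same entity in the synthetic corpus is replaced by one stable token, which is what allows the model to encode relations between encrypted entities during pretraining.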
2025
Alpha-GPT: Human-AI Interactive Alpha Mining for Quantitative Investment
Saizhuo Wang | Hang Yuan | Leon Zhou | Lionel Ni | Heung-Yeung Shum | Jian Guo
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: System Demonstrations
One of the most important tasks in quantitative investment research is mining new alphas (effective trading signals or factors). Traditional alpha mining methods, whether hand-crafted factor synthesis or algorithmic factor mining (e.g., search with genetic programming), have inherent limitations, especially in implementing the ideas of quant researchers. In this work, we propose a new alpha mining paradigm by introducing human-AI interaction, and a novel prompt engineering algorithmic framework to implement this paradigm by leveraging the power of large language models. Moreover, we develop Alpha-GPT, a new interactive alpha mining system framework that provides a heuristic way to “understand” the ideas of quant researchers and outputs creative, insightful, and effective alphas. We demonstrate the effectiveness and advantages of Alpha-GPT via a number of alpha mining experiments. In particular, we evaluated Alpha-GPT’s performance in the WorldQuant International Quant Championship, where it demonstrated results comparable to those of top-performing human participants, ranking among the top 10 out of more than 41,000 teams worldwide. These findings suggest Alpha-GPT’s significant potential for generating highly effective alphas that may surpass human capabilities in quantitative investment strategies.
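For readers unfamiliar with the term, an "alpha" in this sense is typically a formula that maps market data to a trading signal. The sketch below shows a generic textbook example (trailing-return momentum); it is purely illustrative and is not an alpha produced by Alpha-GPT or used in the paper.

```python
def momentum_alpha(prices: list[float], lookback: int = 5) -> list:
    """Minimal illustrative alpha: the trailing return over `lookback`
    periods. A positive value suggests upward momentum. The first
    `lookback` entries are None because no trailing window exists yet."""
    signal: list = [None] * lookback
    for t in range(lookback, len(prices)):
        signal.append(prices[t] / prices[t - lookback] - 1.0)
    return signal
```

Systems like the one described above search over (or generate) families of such formulas, which is why expressing a researcher's idea as a concrete formula is the central bottleneck the paper targets.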