Xiaopeng Yang

Also published as: 小珪


2023

ChatGPT's conversational style of interaction lowered the barrier to using large models, and it therefore quickly became popular worldwide. Although OpenAI has not disclosed ChatGPT's technical approach, some follow-up work claims to have reproduced ChatGPT's performance on top of open-source base models. However, although these models show performance similar to ChatGPT's on certain benchmarks, they still fall short of it in actual knowledge and reasoning ability. To approach the performance of ChatGPT, or even GPT-4, we need deeper research into the training of base models. This paper discusses the data and model architectures used for base-model training: it first summarizes the sources of current pretraining data and the basic processing pipeline, and analyzes code pretraining data and Chinese pretraining data, which have so far received relatively little attention; it then reviews the network architectures of existing base models and explains the motivations behind these architectural adjustments.
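The abstract refers to a basic processing pipeline for pretraining data but does not spell it out here. As an illustration only, a minimal sketch of two steps commonly found in such pipelines, heuristic quality filtering and exact deduplication, might look like the following; all function names and thresholds are hypothetical and are not taken from the paper.

```python
import hashlib


def quality_filter(doc: str, min_chars: int = 200, max_symbol_ratio: float = 0.1) -> bool:
    """Heuristic quality filter: drop very short documents and documents
    dominated by non-alphanumeric symbols (hypothetical thresholds)."""
    if len(doc) < min_chars:
        return False
    symbols = sum(1 for ch in doc if not ch.isalnum() and not ch.isspace())
    return symbols / max(len(doc), 1) <= max_symbol_ratio


def dedup_exact(docs):
    """Exact deduplication by hashing document contents."""
    seen = set()
    for doc in docs:
        digest = hashlib.sha256(doc.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            yield doc


def preprocess(raw_docs):
    """Compose the two stages: filter first, then deduplicate."""
    filtered = (d for d in raw_docs if quality_filter(d))
    return list(dedup_exact(filtered))
```

Real pipelines typically add language identification, fuzzy (near-duplicate) deduplication, and domain-specific handling for code and non-English text, which is where the paper's analysis of code and Chinese pretraining data would come in.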

2019

Variational autoencoders (VAEs) and Wasserstein autoencoders (WAEs) have achieved noticeable progress in open-domain response generation. By introducing latent variables in a continuous space, these models can capture utterance-level semantics, e.g., topic and syntactic properties, and thus generate informative and diversified responses. In this work, we improve the WAE for response generation. In addition to utterance-level information, we also model user-level information in a continuous latent space. Specifically, we embed user-level and utterance-level information into two multimodal distributions and combine them into a mixed distribution, which serves as the prior distribution of the WAE in our proposed model, named PersonaWAE. Experimental results on a large-scale real-world dataset confirm the superiority of our model in generating informative and personalized responses: it outperforms state-of-the-art models in both automatic and human evaluations.
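To make the mixed prior concrete, here is a minimal sketch of one way to combine a user-level and an utterance-level multimodal distribution into a single mixture used as a prior. It assumes each level is parameterized as a Gaussian mixture and that the two mixtures are blended with a fixed weight; all names, dimensions, and the uniform component choice are hypothetical simplifications, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class MixedPrior(nn.Module):
    """Illustrative mixed prior: user-level and utterance-level embeddings
    each parameterize a Gaussian mixture; a sample is drawn from one of
    the two mixtures (hypothetical simplification of a PersonaWAE-style prior)."""

    def __init__(self, user_dim, utt_dim, latent_dim, n_components=5, user_weight=0.5):
        super().__init__()
        self.n = n_components
        self.z = latent_dim
        self.user_weight = user_weight
        # Each head maps conditioning info to per-component means and log-variances.
        self.user_head = nn.Linear(user_dim, n_components * 2 * latent_dim)
        self.utt_head = nn.Linear(utt_dim, n_components * 2 * latent_dim)

    def _sample_gmm(self, params):
        # Reshape to [batch, components, (mu, logvar), latent_dim].
        mu, logvar = params.view(-1, self.n, 2, self.z).unbind(dim=2)
        # Pick one mixture component uniformly per example (hypothetical choice).
        idx = torch.randint(self.n, (mu.size(0),), device=mu.device)
        rows = torch.arange(mu.size(0), device=mu.device)
        mu_k, logvar_k = mu[rows, idx], logvar[rows, idx]
        return mu_k + torch.randn_like(mu_k) * (0.5 * logvar_k).exp()

    def forward(self, user_emb, utt_emb):
        # Mix the two multimodal distributions: with probability user_weight,
        # sample from the user-level mixture, otherwise the utterance-level one.
        z_user = self._sample_gmm(self.user_head(user_emb))
        z_utt = self._sample_gmm(self.utt_head(utt_emb))
        pick_user = (torch.rand(user_emb.size(0), 1, device=user_emb.device)
                     < self.user_weight).float()
        return pick_user * z_user + (1.0 - pick_user) * z_utt


# Usage: sample latent codes for a batch of 16 (user_dim=64, utt_dim=128).
prior = MixedPrior(user_dim=64, utt_dim=128, latent_dim=32)
z = prior(torch.randn(16, 64), torch.randn(16, 128))  # -> shape [16, 32]
```

The design point the abstract makes is that both sources of information remain multimodal: rather than collapsing user and utterance signals into a single Gaussian, the prior keeps a mixture for each and blends them, so samples can reflect either persona-driven or content-driven modes.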