2024
Towards an On-device Agent for Text Rewriting
Yun Zhu | Yinxiao Liu | Felix Stahlberg | Shankar Kumar | Yu-Hui Chen | Liangchen Luo | Lei Shu | Renjie Liu | Jindong Chen | Lei Meng
Findings of the Association for Computational Linguistics: NAACL 2024
Large Language Models (LLMs) have demonstrated impressive capabilities for text rewriting. However, creating a smaller yet potent language model for text rewriting presents two formidable challenges: costly data collection and the absence of emergent capabilities. In this paper, we present solutions to address these challenges. We propose a new instruction tuning method to develop a mobile text rewriting model that leverages LLM-generated data and heuristic reinforcement learning, eliminating the need for human data collection. Moreover, to bridge the performance gap arising from the constrained model size, we propose a cascading approach based on confidence levels distilled from the large server model's critiques. To evaluate text rewriting for mobile scenarios, we introduce MessageRewriteEval, a human-labeled benchmark that focuses on rewriting messages through natural language instructions. Through empirical experiments on the public benchmark EditEval and our new benchmark, we demonstrate that our on-device model surpasses the current state-of-the-art LLMs in text rewriting while maintaining a significantly reduced model size. We also demonstrate that our proposed cascading approach further improves model performance.
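A minimal sketch of the confidence-based cascading idea described in the abstract: serve the on-device rewrite unless its confidence falls below a threshold, otherwise escalate to the server LLM. The `rewrite_with_confidence` interface, the threshold value, and the stub models are illustrative assumptions, not the paper's implementation.

```python
class StubModel:
    """Stand-in for an on-device or server rewriting model."""

    def rewrite_with_confidence(self, instruction, text):
        # Real on-device model would also emit a distilled confidence score.
        return text.upper(), 0.9

    def rewrite(self, instruction, text):
        return text.upper()


def cascade_rewrite(instruction, text, on_device, server, threshold=0.8):
    """Return the on-device rewrite when its confidence clears the
    threshold; otherwise fall back to the large server model."""
    rewrite, confidence = on_device.rewrite_with_confidence(instruction, text)
    if confidence >= threshold:
        return rewrite
    return server.rewrite(instruction, text)


print(cascade_rewrite("Make it formal", "hey there", StubModel(), StubModel()))
```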
Proofread: Fixes All Errors with One Tap
Renjie Liu | Yanxiang Zhang | Yun Zhu | Haicheng Sun | Yuanbo Zhang | Michael Huang | Shanqing Cai | Lei Meng | Shumin Zhai
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)
The impressive capabilities of Large Language Models (LLMs) provide a powerful approach to reimagining users' typing experience. This paper demonstrates the Proofread feature in Gboard, a virtual keyboard running on mobile phones. Proofread enables seamless sentence-level and paragraph-level corrections with a single tap. We describe the complete system, from data generation and metrics design to model tuning and deployment. To obtain models of sufficient quality, we implement a careful synthetic data pipeline tailored to online use cases, design multifaceted metrics, and employ a two-stage tuning approach to acquire the dedicated LLM for the feature: Supervised Fine Tuning (SFT) for foundational quality, followed by Reinforcement Learning (RL) tuning for targeted refinement. Specifically, we find that sequential tuning on the Rewrite and proofread tasks yields the best quality in the SFT stage, and we propose global and direct rewards in the RL tuning stage for further improvement. Extensive experiments on a human-labeled golden set showed that our tuned PaLM2-XS model achieved an 85.56% good ratio. We launched the feature to Pixel 8 devices by serving the model on TPU v5 in Google Cloud, with thousands of daily active users. Serving latency was significantly reduced by quantization, bucket inference, text segmentation, and speculative decoding. Our demo can be seen on YouTube.
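An illustrative sketch of bucket inference, one of the serving optimizations the abstract lists: inputs are padded to a small set of fixed lengths so the accelerator reuses one compiled graph per bucket instead of recompiling for every sequence length. The bucket boundaries and pad id below are assumptions, not the deployed configuration.

```python
import bisect

BUCKETS = [32, 64, 128, 256]  # assumed bucket boundaries (token counts)
PAD_ID = 0                    # assumed padding token id


def pad_to_bucket(token_ids):
    """Pad a token sequence to the smallest bucket length that fits it."""
    idx = bisect.bisect_left(BUCKETS, len(token_ids))
    if idx == len(BUCKETS):
        raise ValueError("input longer than the largest bucket")
    target = BUCKETS[idx]
    return token_ids + [PAD_ID] * (target - len(token_ids))


print(len(pad_to_bucket([5, 17, 23])))  # -> 32
```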
RedCoast: A Lightweight Tool to Automate Distributed Training of LLMs on Any GPU/TPUs
Bowen Tan | Yun Zhu | Lijuan Liu | Hongyi Wang | Yonghao Zhuang | Jindong Chen | Eric Xing | Zhiting Hu
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 3: System Demonstrations)
The recent progress of AI can be largely attributed to large language models (LLMs). However, their escalating memory requirements introduce challenges for machine learning (ML) researchers and engineers. Addressing this requires developers to partition a large model to distribute it across multiple GPUs or TPUs. This necessitates considerable coding and intricate configuration efforts with existing model parallel tools, such as Megatron-LM, DeepSpeed, and Alpa. These tools require users' expertise in machine learning systems (MLSys), creating a bottleneck in LLM development, particularly for developers without an MLSys background. In this work, we present RedCoast (Redco), a lightweight and user-friendly tool crafted to automate distributed training and inference for LLMs, as well as to simplify ML pipeline development. The design of Redco emphasizes two key aspects. First, to automate model parallelism, our study identifies two straightforward rules for generating tensor parallel strategies for any given LLM. Integrating these rules into Redco enables effortless distributed LLM training and inference, eliminating the need for additional coding or complex configuration. We demonstrate its effectiveness by applying Redco to a set of LLM architectures, such as GPT-J, LLaMA, T5, and OPT, up to 66B parameters. Second, we propose a mechanism that allows diverse ML pipelines to be customized through the definition of merely three functions, avoiding redundant and formulaic code such as multi-host processing. This mechanism proves adaptable across a spectrum of ML algorithms, from foundational language modeling to complex algorithms like meta-learning and reinforcement learning. As a result, Redco implementations exhibit significantly fewer lines of code than their official counterparts. RedCoast (Redco) has been released under the Apache 2.0 license at https://github.com/tanyuqian/redco.
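A hypothetical sketch of the three-function idea: the user supplies only collate, loss, and predict functions, while the tool owns sharding, multi-host handling, and the training loop. The function names and signatures below are illustrative assumptions for a toy linear model, not RedCoast's actual API (see the linked repository for that).

```python
import numpy as np


def collate_fn(examples):
    """Turn raw examples into model-ready arrays."""
    return {"x": np.stack([e["x"] for e in examples]),
            "y": np.array([e["y"] for e in examples])}


def loss_fn(params, batch):
    """Mean squared error of a linear model."""
    pred = batch["x"] @ params
    return float(np.mean((pred - batch["y"]) ** 2))


def pred_fn(params, batch):
    """Predictions for a collated batch."""
    return batch["x"] @ params


# Everything beyond these three functions (device placement, tensor
# parallelism, checkpointing) would be handled by the tool, not the user.
examples = [{"x": np.ones(4), "y": 1.0}, {"x": np.zeros(4), "y": 0.0}]
batch = collate_fn(examples)
print(loss_fn(np.zeros(4), batch))
```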
Cluster Language Model for Improved E-Commerce Retrieval and Ranking: Leveraging Query Similarity and Fine-Tuning for Personalized Results
Duleep Rathgamage Don | Ying Xie | Le Yu | Simon Hughes | Yun Zhu
Proceedings of the Seventh Workshop on e-Commerce and NLP @ LREC-COLING 2024
This paper proposes a novel method to improve the accuracy of product search in e-commerce by utilizing a cluster language model. The method aims to address the limitations of the bi-encoder architecture while adding a minimal additional training burden. The approach involves labeling top products for each query, generating semantically similar query clusters using the K-Means clustering algorithm, and fine-tuning a global language model into cluster language models on individual clusters. The parameters of each cluster language model are fine-tuned to efficiently learn local manifolds in the feature space, capturing the nuances of the query types within each cluster. Inference is performed by assigning a new query to its respective cluster and using the corresponding cluster language model for retrieval. The proposed method yields more accurate and personalized retrieval results, offering a superior alternative to the popular bi-encoder-based retrieval models in semantic search.
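A minimal sketch of the cluster-and-route mechanism: embed training queries, cluster the embeddings with K-Means, then assign each new query to a cluster and use that cluster's fine-tuned model. The embedding dimensionality, cluster count, and per-cluster model mapping are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans


def build_clusters(query_embeddings, n_clusters=8):
    """Fit K-Means over training-query embeddings."""
    kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    kmeans.fit(query_embeddings)
    return kmeans


def route_query(query_embedding, kmeans, cluster_models):
    """Assign a new query to its cluster and return that cluster's
    fine-tuned language model for retrieval."""
    cluster_id = int(kmeans.predict(query_embedding.reshape(1, -1))[0])
    return cluster_models[cluster_id]


# Random vectors stand in for real query embeddings here.
embeddings = np.random.rand(1000, 64)
kmeans = build_clusters(embeddings)
models = {i: f"cluster_model_{i}" for i in range(8)}  # placeholder models
print(route_query(embeddings[0], kmeans, models))
```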
2023
An Efficient Conversational Smart Compose System
Yun Zhu | Xiayu Chen | Lei Shu | Bowen Tan | Xinying Song | Lijuan Liu | Maria Wang | Jindong Chen | Ning Ruan
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)
Online conversation is a ubiquitous way to share information and connect with everyone, but repetitive, idiomatic text typing costs users a lot of time. This paper demonstrates a simple yet effective cloud-based smart compose system that improves human-to-human conversation efficiency. Heuristics from different perspectives are designed to achieve the best trade-off between quality and latency. On the modeling side, a decoder-only model exploits the previous turns of conversation history in a computationally lightweight manner. In addition, a novel phrase tokenizer is proposed to further reduce latency without degrading composing quality, and a caching mechanism is applied to the serving framework. The demo video of the system is available at https://youtu.be/U1KXkaqr60g. We open-sourced our phrase tokenizer at https://github.com/tensorflow/text.
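An illustrative sketch of the serving-side caching idea, assuming lookups are keyed on the (conversation context, typed prefix) pair; the cache size and the stub model call are assumptions, not the deployed system's design.

```python
from functools import lru_cache


@lru_cache(maxsize=4096)
def suggest(context: str, prefix: str) -> str:
    """Return a completion for the typed prefix; repeated lookups for
    the same (context, prefix) pair skip the model entirely."""
    return _run_model(context, prefix)


def _run_model(context: str, prefix: str) -> str:
    # Stand-in for the decoder-only model inference on the server.
    return prefix + " [model completion]"


print(suggest("Hi, are you free tomorrow?", "Sure, let"))  # model call
print(suggest("Hi, are you free tomorrow?", "Sure, let"))  # cache hit
```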
2015
A hybrid system for Chinese-English patent machine translation
Hongzheng Li | Kai Zhao | Renfen Hu | Yun Zhu | Yaohong Jin
Proceedings of the 6th Workshop on Patent and Scientific Literature Translation
2014
Local Phrase Reordering Model for Chinese-English Patent Machine Translation
Xiaodie Liu | Yun Zhu | Yaohong Jin
Proceedings of the Third CIPS-SIGHAN Joint Conference on Chinese Language Processing