Yameng Huang
2022
CULG: Commercial Universal Language Generation
Haonan Li | Yameng Huang | Yeyun Gong | Jian Jiao | Ruofei Zhang | Timothy Baldwin | Nan Duan
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track
Pre-trained language models (PLMs) have dramatically improved performance on many natural language processing (NLP) tasks in domains such as finance and healthcare. However, the application of PLMs to the domain of commerce, especially marketing and advertising, remains less studied. In this work, we adapt pre-training methods to the domain of commerce by proposing CULG, a large-scale commercial universal language generation model pre-trained on a corpus drawn from 10 markets across 7 languages. We propose 4 commercial generation tasks and a two-stage training strategy for pre-training, and demonstrate that the proposed strategy yields performance improvements on three generation tasks as compared to single-stage pre-training. Extensive experiments show that our model outperforms other models by a large margin on commercial generation tasks, and we conclude with a discussion of additional applications over other markets, languages, and tasks.
2020
An Enhanced Knowledge Injection Model for Commonsense Generation
Zhihao Fan | Yeyun Gong | Zhongyu Wei | Siyuan Wang | Yameng Huang | Jian Jiao | Xuanjing Huang | Nan Duan | Ruofei Zhang
Proceedings of the 28th International Conference on Computational Linguistics
Commonsense generation aims at generating a plausible everyday scenario description based on a set of provided concepts. Mining the relationships among concepts from scratch is non-trivial; we therefore retrieve prototypes from external knowledge to assist in understanding the scenario for better description generation. We integrate two additional modules for prototype modeling into the pre-trained encoder-decoder model to enhance the knowledge injection procedure. We conduct experiments on the CommonGen benchmark, and experimental results show that our method significantly improves performance on all metrics.