Xiyou Zhou
2021
HULK: An Energy Efficiency Benchmark Platform for Responsible Natural Language Processing
Xiyou Zhou | Zhiyu Chen | Xiaoyong Jin | William Yang Wang
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations
Computation-intensive pretrained models have been taking the lead on many natural language processing benchmarks such as GLUE. However, energy efficiency in the process of model training and inference has become a critical bottleneck. We introduce HULK, a multi-task energy efficiency benchmarking platform for responsible natural language processing. With HULK, we compare pretrained models’ energy efficiency from the perspectives of time and cost. Baseline benchmarking results are provided for further analysis. The fine-tuning efficiency of different pretrained models can differ significantly across tasks, and a smaller number of parameters does not necessarily imply better efficiency. We analyze this phenomenon and demonstrate a method for comparing the multi-task efficiency of pretrained models. Our platform is available at https://hulkbenchmark.github.io/.
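To make the time-and-cost perspective concrete, here is a minimal sketch of how per-step fine-tuning time might be measured for a model; this is an illustration, not the HULK platform's actual code, and it assumes a PyTorch model whose forward pass returns an object with a `.loss` attribute (as HuggingFace models do). The `benchmark_finetune_time` helper is hypothetical.

```python
import time
import torch

def benchmark_finetune_time(model, dataloader, optimizer, device="cuda", max_steps=100):
    """Rough wall-clock timing of fine-tuning steps for one model.

    Illustrative sketch only: assumes model(**batch) returns an object
    with a .loss attribute, as HuggingFace transformers models do.
    """
    model.to(device).train()
    if device == "cuda":
        torch.cuda.synchronize()  # make sure queued GPU work doesn't skew the timer
    start = time.perf_counter()
    steps = 0
    for batch in dataloader:
        batch = {k: v.to(device) for k, v in batch.items()}
        optimizer.zero_grad()
        loss = model(**batch).loss
        loss.backward()
        optimizer.step()
        steps += 1
        if steps >= max_steps:
            break
    if device == "cuda":
        torch.cuda.synchronize()  # wait for the last step to finish before stopping the clock
    elapsed = time.perf_counter() - start
    return elapsed / steps  # average seconds per fine-tuning step
```

Multiplying such a per-step time by the number of steps a model needs to reach a target accuracy yields a time-to-accuracy figure, and multiplying that by an instance's hourly price converts time into monetary cost, which is the kind of comparison the abstract describes.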
2020
Logic2Text: High-Fidelity Natural Language Generation from Logical Forms
Zhiyu Chen | Wenhu Chen | Hanwen Zha | Xiyou Zhou | Yunkai Zhang | Sairam Sundaresan | William Yang Wang
Findings of the Association for Computational Linguistics: EMNLP 2020
Previous studies on Natural Language Generation (NLG) from structured data have primarily focused on surface-level descriptions of record sequences. However, for complex structured data, e.g., multi-row tables, it is often desirable for an NLG system to describe interesting facts drawn from logical inferences across records. Given only the table, it is hard for existing models to produce controllable and high-fidelity logical generations. In this work, we formulate high-fidelity NLG as generation from logical forms in order to obtain controllable and faithful generations. We present a new large-scale dataset, Logic2Text, with 10,753 descriptions involving common logic types paired with their underlying logical forms. The logical forms show diversified graph structures over a free schema, which poses great challenges to a model’s ability to understand the semantics. We experiment with (1) fully supervised training on the full dataset and (2) a few-shot setting with only hundreds of paired examples; we compare several popular generation models and analyze their performance. We hope our dataset can encourage research towards building an advanced NLG system capable of natural, faithful, and human-like generation. The dataset and code are available at https://github.com/czyssrs/Logic2Text.
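As a quick orientation to the pairing the abstract describes, here is a minimal sketch of loading and inspecting a few examples; the file path and field names (`sent` for the description, `logic_str` for the linearized logical form) are assumptions based on the GitHub repository and may differ from the released format.

```python
import json

# Illustrative sketch: the path and the "sent" / "logic_str" field names
# are assumptions about the Logic2Text release, not confirmed API.
with open("dataset/train.json", encoding="utf-8") as f:
    examples = json.load(f)

for ex in examples[:3]:
    print("logical form:", ex["logic_str"])  # linearized logical form over the table
    print("description: ", ex["sent"])       # paired natural-language description
    print()
```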