Han Peng
2024
LLMBox: A Comprehensive Library for Large Language Models
Tianyi Tang | Hu Yiwen | Bingqian Li | Wenyang Luo | ZiJing Qin | Haoxiang Sun | Jiapeng Wang | Shiyi Xu | Xiaoxue Cheng | Geyang Guo | Han Peng | Bowen Zheng | Yiru Tang | Yingqian Min | Yushuo Chen | Jie Chen | Ranchi Zhao | Luran Ding | Yuhao Wang | Zican Dong | Xia Chunxuan | Junyi Li | Kun Zhou | Xin Zhao | Ji-Rong Wen
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)
To facilitate research on large language models (LLMs), this paper presents a comprehensive and unified library, LLMBox, to ease the development, use, and evaluation of LLMs. The library offers three main merits: (1) a unified data interface that supports the flexible implementation of various training strategies, (2) comprehensive evaluation that covers extensive tasks, datasets, and models, and (3) practical considerations, especially regarding user-friendliness and efficiency. With our library, users can easily reproduce existing methods, train new models, and conduct comprehensive performance comparisons. To rigorously test LLMBox, we conduct extensive experiments across a diverse range of evaluation settings, and the results demonstrate the effectiveness and efficiency of our library in supporting various LLM-related implementations. A detailed introduction and usage guidance can be found at https://github.com/RUCAIBox/LLMBox.
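As a rough illustration of the "unified data interface" idea described in the abstract, the sketch below shows how a single example format can drive evaluation of arbitrary models over multiple datasets. All names here (EvalExample, evaluate) are hypothetical and do not reflect LLMBox's actual API; see the repository for the real interface.

```python
# Hypothetical sketch (not the actual LLMBox API): a unified evaluation loop
# in the spirit of "one data interface, many models and datasets".
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class EvalExample:
    """Unified record shared by training and evaluation code paths."""
    prompt: str
    reference: str


def evaluate(
    generate: Callable[[str], str],          # any model wrapped as prompt -> completion
    datasets: Dict[str, List[EvalExample]],  # dataset name -> examples
) -> Dict[str, float]:
    """Exact-match accuracy per dataset; real libraries support many more metrics."""
    scores = {}
    for name, examples in datasets.items():
        correct = sum(
            generate(ex.prompt).strip() == ex.reference.strip() for ex in examples
        )
        scores[name] = correct / max(len(examples), 1)
    return scores


if __name__ == "__main__":
    # Toy model and dataset purely for illustration.
    toy_model = lambda prompt: "4" if "2 + 2" in prompt else ""
    data = {"toy-arithmetic": [EvalExample("What is 2 + 2?", "4")]}
    print(evaluate(toy_model, data))  # {'toy-arithmetic': 1.0}
```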
2022
Rethinking Positional Encoding in Tree Transformer for Code Representation
Han Peng | Ge Li | Yunfei Zhao | Zhi Jin
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Transformers are now widely used in code representation, and several recent works further develop tree Transformers to capture the syntactic structure of source code. Specifically, novel tree positional encodings have been proposed to incorporate inductive bias into the Transformer. In this work, we propose a novel tree Transformer that encodes node positions based on our new description method for tree structures. Technically, both the local and global soft biases shown in previous works are introduced as positional encodings of our Transformer model. Our model outperforms strong baselines on code summarization and completion tasks across two languages, demonstrating its effectiveness. Besides, extensive experiments and ablation studies show that combining both local and global paradigms is still helpful in improving model performance. We release our code at https://github.com/AwdHanPeng/TreeTransformer.
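The following is a minimal sketch, not the authors' implementation, of the general idea of combining a "local" soft bias (e.g., proximity in the syntax tree) and a "global" soft bias (e.g., node depth) as additive terms on attention scores. The toy tree, bias values, and function name are illustrative assumptions only.

```python
# Illustrative sketch only (not the paper's model): adding both a local bias
# (tree-adjacency) and a global bias (node depth) to raw attention scores.
import numpy as np


def biased_attention(scores: np.ndarray,
                     local_bias: np.ndarray,
                     global_bias: np.ndarray) -> np.ndarray:
    """Softmax over attention scores plus two additive positional-bias terms.

    scores, local_bias, global_bias: (n_nodes, n_nodes) matrices.
    """
    logits = scores + local_bias + global_bias
    logits -= logits.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(logits)
    return weights / weights.sum(axis=-1, keepdims=True)


if __name__ == "__main__":
    n = 4
    rng = np.random.default_rng(0)
    scores = rng.normal(size=(n, n))
    # Hypothetical biases: favor adjacent nodes locally, shallow nodes globally.
    local_bias = np.where(np.eye(n, k=1) + np.eye(n, k=-1) > 0, 1.0, 0.0)
    depth = np.array([0, 1, 1, 2], dtype=float)         # node depths in a toy tree
    global_bias = -0.1 * np.broadcast_to(depth, (n, n))  # penalize deeper keys
    print(biased_attention(scores, local_bias, global_bias).round(3))
```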
Co-authors
- Ge Li 1
- Yunfei Zhao 1
- Zhi Jin 1
- Tianyi Tang 1
- Hu Yiwen 1