Jing Bai


2025

Teaching LLMs to Plan, Not Just Solve: Plan Learning Boosts LLMs Generalization in Reasoning Tasks
Tianlong Wang | Junzhe Chen | Weibin Liao | Xueting Han | Jing Bai
Findings of the Association for Computational Linguistics: EMNLP 2025

Reinforcement learning (RL) on self-generated data has emerged as a promising paradigm for improving reasoning in large language models (LLMs). However, RL relies on accurate reward signals, which are scarce in many domains, making it critical to train models that can generalize to unseen problems. Existing methods often focus on task-specific or domain-specific reasoning, give little consideration to generalization, and may degrade performance on other tasks. To address this, we distinguish between abstract plans, which represent high-level problem-solving strategies, and concrete solutions, and propose that learning plans develops transferable general reasoning capabilities and promotes better generalization. Building on this insight, we propose PlanLearn, a framework that combines plan-based search with Step-level Advantage Preference Optimization (Step-APO) to optimize plan learning. Experimental results show that PlanLearn, trained exclusively on GSM8K and MATH, not only significantly improves in-domain performance but also improves results on out-of-domain benchmarks such as HumanEval (+12.2%), GPQA (+8.6%), ARC-C (+4.0%), MMLU-STEM (+2.2%), and BBH (+1.8%). The code is available at https://github.com/tianlwang/PlanLearn.
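The abstract does not spell out the Step-APO objective; the following is only a rough, hypothetical sketch of what a step-level, advantage-weighted DPO-style preference loss could look like. The tensor names, the advantage weighting, and the toy inputs are assumptions, not the paper's formulation (see the repository above for the actual method).

```python
# Hypothetical sketch of a step-level, advantage-weighted, DPO-style
# preference loss in the spirit of Step-APO; names and weighting are assumed.
import torch
import torch.nn.functional as F

def step_apo_loss(logp_chosen, logp_rejected,
                  ref_logp_chosen, ref_logp_rejected,
                  advantage_gap, beta=0.1):
    """logp_* are summed log-probs of the chosen/rejected plan step under the
    policy; ref_logp_* are the same under a frozen reference model;
    advantage_gap is an estimated advantage difference between the two steps."""
    # Implicit reward margin, as in DPO: beta * (policy log-ratio - ref log-ratio).
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # Weight each preference pair by how much better the chosen step is estimated to be.
    return -(advantage_gap * F.logsigmoid(margin)).mean()

# Toy usage: random numbers stand in for real per-step log-probabilities.
b = 4
policy_chosen = torch.randn(b, requires_grad=True)
policy_rejected = torch.randn(b, requires_grad=True)
loss = step_apo_loss(policy_chosen, policy_rejected,
                     torch.randn(b), torch.randn(b),
                     advantage_gap=torch.rand(b))
loss.backward()
print(loss.item())
```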

2022

Enhancing Self-Attention with Knowledge-Assisted Attention Maps
Jiangang Bai | Yujing Wang | Hong Sun | Ruonan Wu | Tianmeng Yang | Pengfei Tang | Defu Cao | Mingliang Zhang | Yunhai Tong | Yaming Yang | Jing Bai | Ruofei Zhang | Hao Sun | Wei Shen
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Large-scale pre-trained language models have attracted extensive attention in the research community and shown promising results on various natural language processing tasks. However, the attention maps, which record the attention scores between tokens in the self-attention mechanism, are sometimes ineffective because they are learned implicitly, without the guidance of explicit semantic knowledge. We therefore aim to infuse explicit external knowledge into pre-trained language models to further boost their performance. Existing work on knowledge infusion largely depends on multi-task learning frameworks, which are inefficient and require large-scale re-training whenever new knowledge is considered. In this paper, we propose a novel and generic solution, KAM-BERT, which directly incorporates knowledge-generated attention maps into the self-attention mechanism. It requires only a few extra parameters and supports efficient fine-tuning once new knowledge is added. KAM-BERT achieves consistent improvements on various academic datasets for natural language understanding. It also outperforms other state-of-the-art methods that infuse knowledge into Transformer-based architectures. Moreover, we apply our model to an industry-scale ad relevance application and show its advantages in a real-world scenario.
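As a rough illustration of the idea of feeding an externally generated, knowledge-based attention map into self-attention (not the KAM-BERT implementation; the additive fusion and the scalar gate below are assumptions):

```python
# Minimal sketch: add a knowledge-derived attention map, scaled by a gate,
# to the standard scaled dot-product attention scores before the softmax.
import torch
import torch.nn.functional as F

def knowledge_assisted_attention(q, k, v, knowledge_map, gate):
    """q, k, v        : (batch, heads, seq, dim) projections
    knowledge_map  : (batch, seq, seq) scores derived from external knowledge
    gate           : scalar weight balancing learned and knowledge-based scores"""
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5           # (b, h, s, s)
    scores = scores + gate * knowledge_map.unsqueeze(1)    # broadcast over heads
    return F.softmax(scores, dim=-1) @ v

# Toy shapes standing in for one Transformer layer's projections.
b, h, s, d = 2, 12, 16, 64
q = k = v = torch.randn(b, h, s, d)
km = torch.randn(b, s, s)
out = knowledge_assisted_attention(q, k, v, km, gate=torch.tensor(0.5))
print(out.shape)  # torch.Size([2, 12, 16, 64])
```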

2021

Syntax-BERT: Improving Pre-trained Transformers with Syntax Trees
Jiangang Bai | Yujing Wang | Yiren Chen | Yaming Yang | Jing Bai | Jing Yu | Yunhai Tong
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume

Pre-trained language models like BERT achieve superior performance on various NLP tasks without explicit consideration of syntactic information. Meanwhile, syntactic information has been shown to be crucial for the success of NLP applications. However, how to incorporate syntax trees effectively and efficiently into pre-trained Transformers remains unsettled. In this paper, we address this problem by proposing a novel framework named Syntax-BERT. The framework works in a plug-and-play fashion and is applicable to an arbitrary pre-trained checkpoint based on the Transformer architecture. Experiments on various natural language understanding datasets verify the effectiveness of syntax trees and show consistent improvements over multiple pre-trained models, including BERT, RoBERTa, and T5.
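A toy sketch of deriving an attention mask from a dependency parse, in the spirit of the plug-and-play syntax masks described above; the actual mask families in Syntax-BERT differ, and the parent-pointer input and distance cutoff here are assumptions:

```python
# Build a boolean attention mask that only allows attention between tokens
# within a small distance of each other along the parse tree's parent chain.
import torch

def syntax_mask(heads, max_dist=2):
    """heads[i] is the index of token i's parent in the dependency tree
    (-1 for the root). Returns an (n, n) boolean mask."""
    n = len(heads)
    mask = torch.eye(n, dtype=torch.bool)       # every token attends to itself
    for i in range(n):
        j, dist = i, 0
        while heads[j] != -1 and dist < max_dist:
            j, dist = heads[j], dist + 1
            mask[i, j] = mask[j, i] = True       # connect token to its ancestors
    return mask

# Toy parse for "the cat sat", with "sat" as the root.
print(syntax_mask([1, 2, -1]))
```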

2020

LadaBERT: Lightweight Adaptation of BERT through Hybrid Model Compression
Yihuan Mao | Yujing Wang | Chufan Wu | Chen Zhang | Yang Wang | Quanlu Zhang | Yaming Yang | Yunhai Tong | Jing Bai
Proceedings of the 28th International Conference on Computational Linguistics

BERT is a cutting-edge language representation model pre-trained on a large corpus that achieves superior performance on various natural language understanding tasks. However, a major obstacle to applying BERT in online services is that it is memory-intensive and leads to unsatisfactory latency for user requests, making model compression necessary. Existing solutions leverage the knowledge distillation framework to learn a smaller model that imitates the behaviors of BERT. However, the knowledge distillation procedure is itself expensive, as it requires sufficient training data to imitate the teacher model. In this paper, we address this issue with a tailored solution named LadaBERT (Lightweight adaptation of BERT through hybrid model compression), which combines the advantages of different model compression methods, including weight pruning, matrix factorization, and knowledge distillation. LadaBERT achieves state-of-the-art accuracy on various public datasets while reducing training overhead by an order of magnitude.
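For illustration only, a minimal sketch of the three compression ingredients the abstract names (weight pruning, matrix factorization, and knowledge distillation); the sparsity level, rank, temperature, and the way LadaBERT actually interleaves these during training are assumptions:

```python
# Illustrative building blocks for hybrid compression of a single weight matrix.
import torch
import torch.nn.functional as F

def magnitude_prune(w, sparsity=0.5):
    """Zero out the smallest-magnitude entries of a weight matrix."""
    k = int(w.numel() * sparsity)
    thresh = w.abs().flatten().kthvalue(k).values
    return w * (w.abs() > thresh)

def low_rank_factorize(w, rank=64):
    """Approximate w with two smaller factors via truncated SVD (w ~ a @ b)."""
    u, s, vh = torch.linalg.svd(w, full_matrices=False)
    return u[:, :rank] * s[:rank], vh[:rank]

def distill_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft-label KL distillation between student and teacher outputs."""
    t = temperature
    return F.kl_div(F.log_softmax(student_logits / t, dim=-1),
                    F.softmax(teacher_logits / t, dim=-1),
                    reduction="batchmean") * t * t

# Toy usage on a random 768x768 weight matrix and random logits.
w = torch.randn(768, 768)
a, b = low_rank_factorize(magnitude_prune(w), rank=64)
print((w - a @ b).norm() / w.norm())                 # relative reconstruction error
print(distill_loss(torch.randn(4, 10), torch.randn(4, 10)))
```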

Evaluation of Pretrained BERT Model by Using Sentence Clustering
Naoki Shibayama | Rui Cao | Jing Bai | Wen Ma | Hiroyuki Shinnou
Proceedings of the 34th Pacific Asia Conference on Language, Information and Computation

2018

Domain Adaptation for Sentiment Analysis using Keywords in the Target Domain as the Learning Weight
Jing Bai | Hiroyuki Shinnou | Kanako Komiya
Proceedings of the 32nd Pacific Asia Conference on Language, Information and Computation

2010

Cross-Market Model Adaptation with Pairwise Preference Data for Web Search Ranking
Jing Bai | Fernando Diaz | Yi Chang | Zhaohui Zheng | Keke Chen
Coling 2010: Posters

2006

Context-Dependent Term Relations for Information Retrieval
Jing Bai | Jian-Yun Nie | Guihong Cao
Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing