Nanning Zheng


2023

How Do In-Context Examples Affect Compositional Generalization?
Shengnan An | Zeqi Lin | Qiang Fu | Bei Chen | Nanning Zheng | Jian-Guang Lou | Dongmei Zhang
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Compositional generalization (understanding unseen combinations of seen primitives) is an essential reasoning capability in human intelligence. The AI community has mainly studied this capability by fine-tuning neural networks on large numbers of training samples, while it remains unclear whether and how in-context learning, the prevailing few-shot paradigm based on large language models, exhibits compositional generalization. In this paper, we present CoFe, a test suite to investigate in-context compositional generalization. We find that compositional generalization performance can be easily affected by the selection of in-context examples, raising the research question of what the key factors are for making good in-context examples for compositional generalization. We study three potential factors: similarity, diversity, and complexity. Our systematic experiments indicate that in-context examples should be structurally similar to the test case, diverse from each other, and individually simple. Furthermore, we observe two strong limitations: in-context compositional generalization on fictional words is much weaker than that on commonly used ones, and it remains critical that the in-context examples cover the required linguistic structures even though the backbone model has been pre-trained on large corpora. We hope our analysis will facilitate the understanding and utilization of the in-context learning paradigm.

Skill-Based Few-Shot Selection for In-Context Learning
Shengnan An | Bo Zhou | Zeqi Lin | Qiang Fu | Bei Chen | Nanning Zheng | Weizhu Chen | Jian-Guang Lou
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing

*In-context learning* is the paradigm that adapts large language models to downstream tasks by providing a few examples. *Few-shot selection*—selecting appropriate examples for each test instance separately—is important for in-context learning. In this paper, we propose **Skill-KNN**, a skill-based few-shot selection method for in-context learning. The key advantages of Skill-KNN are: (1) it addresses the problem that existing methods based on pre-trained embeddings can easily be biased by surface natural language features that are unimportant for the target task; (2) it requires no training or fine-tuning of any model, making it suitable for frequently expanding or changing example banks. The key insight is to optimize the inputs fed into the embedding model rather than tuning the model itself. Technically, Skill-KNN generates a skill-based description for each test case and candidate example via a pre-processing few-shot prompting step, thus eliminating unimportant surface features. Experimental results across five cross-domain semantic parsing datasets and six backbone models show that Skill-KNN significantly outperforms existing methods.
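
The abstract describes the pipeline only at a high level: rewrite each candidate example and test case into a skill-based description, embed those descriptions with a frozen embedding model, and pick the nearest candidates as in-context examples. The sketch below illustrates that selection loop under stated assumptions; `describe_skills` and `embed` are hypothetical stand-ins (in the paper these roles are played by a few-shot LLM prompt and a pre-trained embedding model), and the code is not the authors' released implementation.

```python
import numpy as np


def describe_skills(utterance: str) -> str:
    """Hypothetical stand-in: rewrite an input into a skill-based description.

    In Skill-KNN this step uses pre-processing few-shot prompting of an LLM,
    so surface wording is dropped and only task-relevant skills remain.
    """
    # Placeholder: a real system would call an LLM here.
    return f"skills required to handle: {utterance}"


def embed(text: str, dim: int = 256) -> np.ndarray:
    """Hypothetical stand-in for a frozen pre-trained sentence embedding model."""
    # Deterministic hash-seeded vector so the sketch runs without external models.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)


def select_examples(test_case: str, example_bank: list[str], k: int = 4) -> list[str]:
    """Pick the k candidates whose skill descriptions are closest to the test case's."""
    query = embed(describe_skills(test_case))
    scores = [float(query @ embed(describe_skills(ex))) for ex in example_bank]
    top = np.argsort(scores)[::-1][:k]
    return [example_bank[i] for i in top]
```

The selected examples would then be placed ahead of the test input in the few-shot prompt. Because only the inputs fed to the embedding model change, nothing is trained or fine-tuned, which corresponds to advantage (2) in the abstract.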

2021

Learning Algebraic Recombination for Compositional Generalization
Chenyao Liu | Shengnan An | Zeqi Lin | Qian Liu | Bei Chen | Jian-Guang Lou | Lijie Wen | Nanning Zheng | Dongmei Zhang
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021