Zhao Song
2025
Circuit Complexity Bounds for RoPE-based Transformer Architecture
Bo Chen | Xiaoyu Li | Yingyu Liang | Jiangxuan Long | Zhenmei Shi | Zhao Song | Jiahao Zhang
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Characterizing the expressive power of the Transformer architecture is critical to understanding its capacity limits and scaling laws. Recent works provide circuit complexity bounds for Transformer-like architectures. On the other hand, positional embedding has emerged as a crucial technique in modern large language models, capturing positional information effectively and delivering strong performance in long-context scenarios. In this work, we take a circuit complexity perspective and rigorously analyze Transformers augmented with widely adopted positional embeddings. We prove that, under standard complexity assumptions, such models remain incapable of efficiently solving canonical tasks such as arithmetic formula evaluation and Boolean formula value computation. Our results expose a fundamental expressivity limitation that persists despite the remarkable empirical success of positionally enhanced Transformers. Beyond tightening known complexity bounds, our findings offer new theoretical insights for designing future architectures with provably stronger reasoning and compositional capabilities.
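As background for the rotary position embedding (RoPE) referenced in the title, a minimal sketch of the standard RoPE computation is shown below; the function name, dimensions, and base value are illustrative assumptions rather than details taken from this paper.

```python
import numpy as np

def rope(x, base=10000.0):
    """Apply rotary position embedding to x of shape (seq_len, dim), dim even.

    Each consecutive feature pair (x[2i], x[2i+1]) at position p is rotated
    by the angle p * base**(-2i/dim), so relative positions show up as
    rotation differences in the query-key dot product.
    """
    seq_len, dim = x.shape
    half = dim // 2
    # Per-pair rotation frequencies and per-position angles.
    freqs = base ** (-np.arange(half) * 2.0 / dim)          # (half,)
    angles = np.arange(seq_len)[:, None] * freqs[None, :]   # (seq_len, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

# Rotated queries/keys are used in ordinary attention scores q @ k.T.
q = rope(np.random.randn(8, 64))
k = rope(np.random.randn(8, 64))
scores = q @ k.T / np.sqrt(64)
```

Because the rotation angle depends only on position, the resulting query-key dot products depend on relative positions, which is the positional mechanism that RoPE-based Transformers rely on.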
Towards Infinite-Long Prefix in Transformer
Yingyu Liang | Zhenmei Shi | Zhao Song | Chiwun Yang
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Prompting and context-based fine-tuning methods, which we call Prefix Learning, have been proposed to enhance the performance of language models on various downstream tasks. They are empirically efficient and effective, matching the performance of full-parameter fine-tuning, but their theoretical understanding is limited. In this paper, we aim to address this limitation by studying their ability from the perspective of prefix length. In particular, we provide a convergence guarantee for training an ultra-long prefix in a stylized setting using the Neural Tangent Kernel (NTK) framework. Based on this strong theoretical guarantee, we design and implement an algorithm that only needs to introduce and fine-tune a few extra trainable parameters, instead of an infinite-long prefix, in each layer of a transformer, and can approximate prefix attention to a guaranteed polynomially small error. Preliminary experimental results on vision, natural language, and math data show that our method achieves superior or competitive performance compared to existing methods such as full-parameter fine-tuning, P-Tuning V2, and LoRA. This demonstrates that our method is promising for parameter-efficient fine-tuning.
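For context, the sketch below illustrates plain prefix attention as used in prefix-learning methods such as P-Tuning V2: trainable prefix keys and values are prepended to the input's keys and values before softmax attention. The names and shapes are illustrative assumptions, and this is not the paper's approximation algorithm.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def prefix_attention(q, k, v, prefix_k, prefix_v):
    """Single-head attention with trainable prefix keys/values prepended.

    q, k, v:            (seq_len, d) projections of the input tokens
    prefix_k, prefix_v: (prefix_len, d) trainable, input-independent
    """
    k_all = np.concatenate([prefix_k, k], axis=0)   # (prefix_len + seq_len, d)
    v_all = np.concatenate([prefix_v, v], axis=0)
    scores = q @ k_all.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v_all

d, seq_len, prefix_len = 64, 16, 8
q = k = v = np.random.randn(seq_len, d)
out = prefix_attention(q, k, v,
                       np.random.randn(prefix_len, d),
                       np.random.randn(prefix_len, d))
```

The question the paper studies is how this behaves as prefix_len grows very large, and whether the effect of such a long prefix can be captured by a small number of extra trainable parameters per layer.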
2020
TextHide: Tackling Data Privacy in Language Understanding Tasks
Yangsibo Huang | Zhao Song | Danqi Chen | Kai Li | Sanjeev Arora
Findings of the Association for Computational Linguistics: EMNLP 2020
An unsolved challenge in distributed or federated learning is to effectively mitigate privacy risks without slowing down training or reducing accuracy. In this paper, we propose TextHide, which aims to address this challenge for natural language understanding tasks. It requires all participants to add a simple encryption step to prevent an eavesdropping attacker from recovering private text data. Such an encryption step is efficient and only affects task performance slightly. In addition, TextHide fits well with the popular framework of fine-tuning pre-trained language models (e.g., BERT) for any sentence or sentence-pair task. We evaluate TextHide on the GLUE benchmark, and our experiments show that TextHide can effectively defend against attacks on shared gradients or representations, with an average accuracy reduction of only 1.9%. We also present an analysis of the security of TextHide using a conjecture about the computational intractability of a mathematical problem.
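As a rough illustration of the kind of lightweight "encryption step" described above, the sketch below mixes one sentence representation with other examples in the batch and applies a random sign mask before sharing. This is an assumption-laden simplification in the spirit of TextHide/InstaHide-style mixing, not the paper's exact scheme or security analysis; all names and parameters are hypothetical.

```python
import numpy as np

def hide_representation(reps, idx, k=4, rng=None):
    """Illustrative mix-and-mask step on a batch of sentence representations.

    reps: (batch, d) encoder outputs (e.g., [CLS] vectors from a BERT encoder).
    The representation at `idx` is mixed with k - 1 other randomly chosen
    representations and then multiplied element-wise by a random +/-1 mask,
    so the raw representation is never shared directly.
    """
    if rng is None:
        rng = np.random.default_rng()
    batch, d = reps.shape
    others = rng.choice([j for j in range(batch) if j != idx],
                        size=k - 1, replace=False)
    weights = rng.dirichlet(np.ones(k))          # convex mixing weights
    mixed = weights[0] * reps[idx]
    for w, j in zip(weights[1:], others):
        mixed = mixed + w * reps[j]
    sign_mask = rng.choice([-1.0, 1.0], size=d)  # fresh random sign mask
    return mixed * sign_mask

# Example: hide the first sentence's representation in a batch of 32.
reps = np.random.randn(32, 768)
hidden = hide_representation(reps, idx=0)
```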