Longyun Wu




2025

LongAttn: Selecting Long-context Training Data via Token-level Attention
Longyun Wu | Dawei Zhu | Guangxiang Zhao | Zhuocheng Yu | Junfeng Ran | Xiangyu Wong | Lin Sun | Sujian Li
Findings of the Association for Computational Linguistics: ACL 2025

With the development of large language models (LLMs), there has been an increasing demand for stronger long-context capabilities. To enhance these capabilities, constructing high-quality training data with long-range dependencies is crucial. Existing methods for selecting long-context data often rely on sentence-level analysis, which leaves considerable room for improvement in both performance and efficiency. In this paper, we propose a novel token-level framework, LongAttn, which leverages the self-attention mechanism of LLMs to measure the long-range dependencies of the data. By calculating token-level dependency strength and the distribution uniformity of token scores, LongAttn effectively quantifies long-range dependencies, enabling more accurate and efficient data selection. We filter LongABC-32K from open-source long-context datasets (ArXiv, Book, and Code). Through comprehensive experiments, LongAttn demonstrates excellent effectiveness, scalability, and efficiency. We will release our code and the high-quality long-context dataset LongABC-32K in the future.
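
The abstract describes scoring data by token-level dependency strength and distribution uniformity derived from attention. The sketch below is only an illustration of that general idea, not the paper's actual formulation: the function name, the distance threshold, and the strength/entropy combination are assumptions for demonstration, assuming a per-head attention matrix is already available.

```python
# Illustrative sketch (not the official LongAttn implementation): score a sequence
# by how strongly and how evenly its tokens attend to far-away context.
import numpy as np

def long_range_score(attn: np.ndarray, min_distance: int = 512) -> float:
    """attn: (num_tokens, num_tokens) attention matrix for one head/layer,
    rows summing to 1. Returns a scalar that is high when queries place
    attention mass broadly on distant keys (hypothetical scoring rule)."""
    n = attn.shape[0]
    # Keep only attention to keys at least `min_distance` tokens away.
    q_idx, k_idx = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    far_mask = (np.abs(q_idx - k_idx) >= min_distance).astype(attn.dtype)
    far_attn = attn * far_mask

    # Dependency strength: average attention mass each query places on distant keys.
    strength = far_attn.sum(axis=-1).mean()

    # Uniformity: normalized entropy of the distant-attention distribution per query,
    # so mass spread over many distant tokens scores higher than a single spike.
    probs = far_attn / (far_attn.sum(axis=-1, keepdims=True) + 1e-12)
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=-1)
    uniformity = (entropy / np.log(n)).mean()

    return float(strength * uniformity)

if __name__ == "__main__":
    # Toy example with random attention weights over a 2048-token sequence.
    rng = np.random.default_rng(0)
    logits = rng.normal(size=(2048, 2048))
    attn = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
    print(long_range_score(attn, min_distance=512))
```

In such a scheme, sequences would be ranked by this score and the top-scoring ones retained for long-context training; the exact strength and uniformity definitions used by LongAttn are given in the paper.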