Xinbo Wu
2025
Transformer-based Causal Language Models Perform Clustering
Xinbo Wu | Lav R. Varshney
Findings of the Association for Computational Linguistics: NAACL 2025
Even though large language models (LLMs) have demonstrated remarkable capability in solving various natural language tasks, their ability to follow human instructions is still an area of active development. Recent works (Ouyang et al., 2022; Rafailov et al., 2023; Zhang et al., 2023) have shown great improvements in instruction following through additional training on instruction-following tasks. However, the mechanisms responsible for effective instruction-following capabilities remain inadequately understood. Here, we introduce a simplified instruction-following task and use synthetic datasets to analyze a Transformer-based causal language model. Our findings suggest that the model learns task-specific information by clustering data within its hidden space, with this clustering process evolving dynamically during learning. We also demonstrate how this phenomenon assists the model in handling unseen instances, and validate our results in a more realistic setting. We further present applications in pre-training and alignment inspired by this clustering behavior.
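The paper's own probing protocol is not reproduced here, but a minimal sketch of how one might check for such clustering, by pooling hidden states per instance and testing whether k-means clusters align with task identity, could look like the following. The model name (GPT-2), the mean-pooling choice, the toy tasks, and the adjusted-Rand metric are illustrative assumptions, not the authors' setup.

```python
# Hypothetical probe: do hidden states cluster by task?
# Assumptions (not from the paper): GPT-2 as the causal LM, mean pooling over
# the final layer, and k-means + adjusted Rand index as the clustering check.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# Toy instances from two made-up instruction-following "tasks".
texts = ["Reverse the word: apple", "Reverse the word: chair",
         "Add the numbers: 3 5", "Add the numbers: 7 2"]
task_labels = [0, 0, 1, 1]

reps = []
with torch.no_grad():
    for t in texts:
        inputs = tokenizer(t, return_tensors="pt")
        out = model(**inputs, output_hidden_states=True)
        # Mean-pool the last hidden layer into one vector per instance.
        reps.append(out.hidden_states[-1].mean(dim=1).squeeze(0))
reps = torch.stack(reps).numpy()

pred = KMeans(n_clusters=2, n_init=10).fit_predict(reps)
print("agreement with task identity:", adjusted_rand_score(task_labels, pred))
```

High agreement between the k-means assignment and the task labels would be the kind of signal the abstract describes as clustering in the hidden space; the paper studies this on synthetic datasets and during training, which this sketch does not attempt.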
2024
A Meta-Learning Perspective on Transformers for Causal Language Modeling
Xinbo Wu | Lav Varshney
Findings of the Association for Computational Linguistics: ACL 2024
The Transformer architecture has become prominent in developing large causal language models. However, mechanisms to explain its capabilities are not well understood. Focusing on the training process, we establish a meta-learning view of the Transformer architecture when trained for the causal language modeling task, by explicating an inner optimization process that may occur within the Transformer. From within this inner optimization, we further discover and theoretically analyze a special characteristic of the norms of learned token representations in Transformer-based causal language models. Our analysis is supported by experiments conducted on pre-trained large language models and real-world data.
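The specific norm characteristic is established in the paper itself; as a hedged illustration only, one way to inspect token-representation norms layer by layer on a pre-trained model might look like the sketch below. GPT-2 and the example sentence are assumptions for the sketch, not the paper's experimental setup.

```python
# Hypothetical inspection of token-representation norms across layers, in the
# spirit of the analysis described above; GPT-2 and the input sentence are
# illustrative assumptions, not the authors' experiments.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

inputs = tokenizer("The Transformer architecture has become prominent.",
                   return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# hidden_states: tuple of (num_layers + 1) tensors of shape [1, seq_len, dim].
for layer, h in enumerate(out.hidden_states):
    norms = h.squeeze(0).norm(dim=-1)  # L2 norm of each token representation
    print(f"layer {layer:2d}: mean token norm = {norms.mean().item():.2f}")
```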
2016
EDISON: Feature Extraction for NLP, Simplified
Mark Sammons | Christos Christodoulopoulos | Parisa Kordjamshidi | Daniel Khashabi | Vivek Srikumar | Paul Vijayakumar | Mazin Bokhari | Xinbo Wu | Dan Roth
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)
When designing Natural Language Processing (NLP) applications that use Machine Learning (ML) techniques, feature extraction becomes a significant part of the development effort, whether developing a new application or attempting to reproduce results reported for existing NLP tasks. We present EDISON, a Java library of feature generation functions used in a suite of state-of-the-art NLP tools, based on a set of generic NLP data structures. These feature extractors populate simple data structures encoding the extracted features, which the package can also serialize to an intuitive JSON file format that can be easily mapped to formats used by ML packages. EDISON can also be used programmatically with JVM-based (Java/Scala) NLP software to provide input to the feature extractors. The collection of feature extractors is organised hierarchically and a simple search interface is provided. In this paper we include examples that demonstrate the versatility and ease of use of the EDISON feature extraction suite, showing that it can significantly reduce the time developers spend on feature extraction design for NLP systems. The library is publicly hosted at https://github.com/IllinoisCogComp/illinois-cogcomp-nlp/, and we hope that other NLP researchers will contribute to the set of feature extractors. In this way, the community can help simplify reproduction of published results and the integration of ideas from diverse sources when developing new and improved NLP applications.
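EDISON itself is a Java library, and none of the names below come from its actual API; this is only a language-agnostic sketch, written in Python for brevity, of the workflow the abstract describes: feature extractors populate simple per-token records, which are then serialized to JSON so they can be mapped to whatever format an ML package expects.

```python
# Illustration only: these function and field names are hypothetical and are
# not EDISON's API. The sketch mirrors the described pipeline: extract
# features into simple records, then serialize them to JSON for ML tooling.
import json

def word_shape(token: str) -> str:
    """Toy feature extractor: capitalization/digit shape of a token."""
    return "".join("X" if c.isupper() else "d" if c.isdigit() else "x"
                   for c in token)

def extract_features(tokens):
    """Populate a simple per-token feature record (one dict per token)."""
    return [{"token": t,
             "features": {"shape": word_shape(t), "lower": t.lower()}}
            for t in tokens]

records = extract_features(["EDISON", "simplifies", "feature", "extraction", "."])

# Serialize to a JSON format that downstream ML packages could consume.
print(json.dumps({"instances": records}, indent=2))
```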