Lili Yao
2025
Revealing and Mitigating the Local Pattern Shortcuts of Mamba
WangJie You | Zecheng Tang | Juntao Li | Lili Yao | Min Zhang
Findings of the Association for Computational Linguistics: ACL 2025
Large language models (LLMs) have advanced significantly due to the attention mechanism, but their quadratic complexity and linear memory demands limit their performance on long-context tasks. Recently, researchers introduced Mamba, an advanced model built upon State Space Models (SSMs) that offers linear complexity and constant memory. Although Mamba is reported to match or surpass the performance of attention-based models, our analysis reveals a performance gap: Mamba excels in tasks that involve localized key information but faces challenges with tasks that require handling distributed key information. Our controlled experiments suggest that the inconsistency arises from Mamba’s reliance on **local pattern shortcuts** across model scales (10M to 1.4B), which enable Mamba to remember local key information within its limited memory but hinder its ability to retain more dispersed information. Therefore, we introduce a global gate module into the Mamba model to address this issue. Experiments on extensive synthetic tasks, as well as real-world tasks, demonstrate the effectiveness of our method. Notably, with the introduction of only 4M extra parameters, our approach enables the Mamba model (130M) to achieve a significant improvement on tasks with distributed information, increasing its performance from **below 5% to 80%**.
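The abstract names a "global gate module" but does not specify it. As a hedged illustration only (not the authors' implementation; the class name `GlobalGate` and all wiring here are assumptions), a lightweight module of this kind could gate each token against a causal prefix mean so the recurrent state sees non-local context:

```python
import torch
import torch.nn as nn

class GlobalGate(nn.Module):
    """Hypothetical sketch of a global gate for a Mamba-style block.

    NOT the paper's implementation: it only illustrates how a small
    module could inject non-local information. Each token is gated
    against a causal prefix mean of the sequence, so the downstream
    SSM state is conditioned on dispersed, not just local, context.
    """

    def __init__(self, d_model: int):
        super().__init__()
        self.gate_proj = nn.Linear(2 * d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        counts = torch.arange(1, x.size(1) + 1, device=x.device).view(1, -1, 1)
        prefix_mean = x.cumsum(dim=1) / counts        # causal global summary
        gate = torch.sigmoid(self.gate_proj(torch.cat([x, prefix_mean], dim=-1)))
        # Convex mix: the gate decides how much global context flows in.
        return gate * x + (1.0 - gate) * prefix_mean

x = torch.randn(2, 16, 128)
print(GlobalGate(128)(x).shape)  # torch.Size([2, 16, 128])
```

A prefix mean (rather than a full-sequence mean) keeps the sketch causal for autoregressive modeling; the paper's actual module design may differ substantially.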
2020
Using PRMSE to evaluate automated scoring systems in the presence of label noise
Anastassia Loukina | Nitin Madnani | Aoife Cahill | Lili Yao | Matthew S. Johnson | Brian Riordan | Daniel F. McCaffrey
Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications
The effect of noisy labels on the performance of NLP systems has been studied extensively for system training. In this paper, we focus on the effect that noisy labels have on system evaluation. Using automated scoring as an example, we demonstrate that the quality of the human ratings used for system evaluation has a substantial impact on traditional performance metrics, making it impossible to compare systems evaluated against labels of different quality. We propose that a new metric, PRMSE, developed within the educational measurement community, can help address this issue, and we provide practical guidelines for using PRMSE.
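PRMSE (proportional reduction in mean squared error) measures how well system scores predict the latent true score rather than a single noisy human rating. As a hedged sketch, assuming exactly two human ratings per response (the general estimator, e.g. in the `rsmtool` package, handles unequal rating counts), a common Haberman-style form of the estimator is:

```python
import numpy as np

def prmse(system: np.ndarray, h1: np.ndarray, h2: np.ndarray) -> float:
    """Simplified PRMSE estimate, assuming two human ratings per response.

    Idea: correct both the system's MSE and the human-score variance for
    rater error, so the metric targets the latent true score T:
        PRMSE = 1 - MSE(system, T) / Var(T)
    Illustrative sketch, not necessarily the exact estimator in the paper.
    """
    h_bar = (h1 + h2) / 2.0                      # mean human rating per response
    var_err = np.mean((h1 - h2) ** 2) / 2.0      # rater error variance V_e
    # Averaging k = 2 ratings leaves error variance V_e / 2 in h_bar.
    mse_true = np.mean((system - h_bar) ** 2) - var_err / 2.0
    var_true = np.var(h_bar) - var_err / 2.0
    return 1.0 - mse_true / var_true

# Synthetic check: a system closer to the true score than either rater.
rng = np.random.default_rng(0)
true_score = rng.normal(3.0, 1.0, 1000)
h1 = true_score + rng.normal(0, 0.5, 1000)
h2 = true_score + rng.normal(0, 0.5, 1000)
system = true_score + rng.normal(0, 0.3, 1000)
print(round(prmse(system, h1, h2), 3))  # roughly 1 - 0.09/1.0 ≈ 0.91
```

Because both the numerator and denominator are corrected for rater error, PRMSE stays comparable across evaluation sets whose human labels differ in quality, which is exactly the comparison the abstract argues traditional metrics cannot support.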
2017
Towards Implicit Content-Introducing for Generative Short-Text Conversation Systems
Lili Yao | Yaoyuan Zhang | Yansong Feng | Dongyan Zhao | Rui Yan
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
Human-computer conversation systems are an active research topic. A prevailing way to build such systems is the generative Sequence-to-Sequence (Seq2Seq) neural model. However, the standard Seq2Seq model is prone to generating trivial responses. In this paper, we aim to generate a more meaningful and informative reply when answering a given question. We propose an implicit content-introducing method that incorporates additional information into the Seq2Seq model in a flexible way. Specifically, we fuse the general decoding and the auxiliary cue word information through our proposed hierarchical gated fusion unit. Experiments on real-life data demonstrate that our model consistently outperforms a set of competitive baselines in terms of BLEU scores and human evaluation.
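The gated fusion unit blends the decoder's ordinary hidden state with a cue-word signal. As a hedged, simplified sketch (names and wiring are assumptions; the paper's unit is hierarchical and more involved), a single gated fusion step might look like:

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Simplified sketch of gated fusion between a decoder hidden state
    and a cue-word representation. Illustrative only: the paper's
    hierarchical gated fusion unit is more elaborate.
    """

    def __init__(self, hidden_size: int):
        super().__init__()
        self.gate = nn.Linear(2 * hidden_size, hidden_size)

    def forward(self, h_dec: torch.Tensor, h_cue: torch.Tensor) -> torch.Tensor:
        # h_dec: decoder hidden state, h_cue: cue-word state, both (B, H)
        k = torch.sigmoid(self.gate(torch.cat([h_dec, h_cue], dim=-1)))
        # Element-wise gate decides how much cue-word information flows in.
        return k * h_dec + (1.0 - k) * h_cue

h_dec = torch.randn(4, 256)
h_cue = torch.randn(4, 256)
print(GatedFusion(256)(h_dec, h_cue).shape)  # torch.Size([4, 256])
```

A learned element-wise gate of this kind lets the decoder decide, per dimension and per step, whether to follow its own language-model state or steer toward the cue word, which is the "flexible" incorporation the abstract describes.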