2025
When LRP Diverges from Leave-One-Out in Transformers
Weiqiu You | Siqi Zeng | Yao-Hung Hubert Tsai | Makoto Yamada | Han Zhao
Proceedings of the 8th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP
Leave-One-Out (LOO) provides an intuitive measure of feature importance but is computationally prohibitive. While Layer-Wise Relevance Propagation (LRP) offers a potentially efficient alternative, its axiomatic soundness in modern Transformers remains under-examined. In this work, we first show that the bilinear propagation rules used in recent advances such as AttnLRP violate implementation invariance. We prove this analytically and confirm it empirically in linear attention layers. Second, we revisit CP-LRP as a diagnostic baseline and find that bypassing relevance propagation through the softmax layer, back-propagating relevance only through the value matrices, significantly improves alignment with LOO, particularly in the middle-to-late Transformer layers. Overall, our results suggest that (i) bilinear factorization sensitivity and (ii) softmax propagation error may jointly undermine LRP's ability to approximate LOO in Transformers.
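To make the two notions concrete, below is a minimal, self-contained sketch (not the paper's implementation): a toy single-head attention layer with a scalar readout, LOO importance computed by zeroing out each token, and a gradient-times-input score used only as a rough stand-in for an LRP-style attribution in this bias-free toy model. Detaching the attention weights mimics the CP-LRP idea of bypassing the softmax and propagating relevance only through the value path. All names (toy_attention, loo_importance, grad_times_input) are illustrative, not from the paper.

```python
import torch

torch.manual_seed(0)
d = 8                                  # hidden size of the toy example
x = torch.randn(5, d)                  # embeddings for 5 input tokens
Wq, Wk, Wv = (torch.randn(d, d) for _ in range(3))
w_out = torch.randn(d)                 # scalar readout, so f(x) is a single score


def toy_attention(x, detach_attn=False):
    """Single-head attention followed by a linear readout."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    attn = torch.softmax(q @ k.T / d ** 0.5, dim=-1)
    if detach_attn:
        # CP-LRP-style shortcut: treat the attention weights as constants,
        # so nothing is propagated back through the softmax.
        attn = attn.detach()
    return (attn @ v).sum(dim=0) @ w_out


def loo_importance(x):
    """Leave-One-Out: drop in the score when each token is zeroed out."""
    full = toy_attention(x)
    drops = []
    for i in range(x.shape[0]):
        x_loo = x.clone()
        x_loo[i] = 0.0
        drops.append((full - toy_attention(x_loo)).item())
    return torch.tensor(drops)


def grad_times_input(x, detach_attn):
    """Per-token gradient-times-input, a simple stand-in for an LRP-style
    attribution in this bias-free toy setting."""
    x = x.clone().requires_grad_(True)
    toy_attention(x, detach_attn=detach_attn).backward()
    return (x.grad * x).sum(dim=-1).detach()


print("LOO importance:              ", loo_importance(x))
print("relevance (full graph):      ", grad_times_input(x, detach_attn=False))
print("relevance (softmax bypassed):", grad_times_input(x, detach_attn=True))
```

Comparing the two printed relevance vectors against the LOO scores illustrates, on a toy scale, the kind of alignment question the paper studies when the softmax path is or is not bypassed.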