Yun Jang


2025

A Sentence-Level Visualization of Attention in Large Language Models
Seongbum Seo | Sangbong Yoo | Hyelim Lee | Yun Jang | Ji Hwan Park | Jeong-Nam Kim
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (System Demonstrations)

We introduce SAVIS, a sentence-level attention visualization tool that enhances the interpretability of long documents processed by Large Language Models (LLMs). By computing inter-sentence attention (ISA) through token-level attention aggregation, SAVIS reduces the complexity of attention analysis, enabling users to identify meaningful document-level patterns. The tool offers an interactive interface for exploring how sentences relate to each other in model processing. Our comparative analysis with existing visualization tools demonstrates that SAVIS improves task accuracy and reduces error identification time. Case studies on a range of analysis tasks demonstrate its effectiveness for text analysis applications. Our open-source tool is available at https://pypi.org/project/savis with a screencast video at https://youtu.be/fTZZPHA55So.
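The core idea of aggregating token-level attention into inter-sentence attention can be illustrated with a short sketch. This is a hedged illustration, not the paper's exact formulation: the function name `sentence_attention`, the mean-pooling aggregation, and the toy data are assumptions; SAVIS may use a different aggregation rule.

```python
import numpy as np

def sentence_attention(attn, spans):
    """Aggregate a token-level attention matrix into a sentence-level one.

    attn:  (T, T) array of token-to-token attention weights
    spans: list of (start, end) token index ranges, one per sentence

    Illustrative only: here each ISA entry is the mean attention from the
    tokens of sentence i to the tokens of sentence j.
    """
    n = len(spans)
    isa = np.zeros((n, n))
    for i, (si, se) in enumerate(spans):
        for j, (tj, te) in enumerate(spans):
            isa[i, j] = attn[si:se, tj:te].mean()
    return isa

# Toy example: 4 tokens forming two 2-token "sentences".
attn = np.array([
    [0.7, 0.1, 0.1, 0.1],
    [0.1, 0.7, 0.1, 0.1],
    [0.1, 0.1, 0.7, 0.1],
    [0.1, 0.1, 0.1, 0.7],
])
isa = sentence_attention(attn, [(0, 2), (2, 4)])
# isa is 2x2: diagonal entries reflect strong within-sentence attention.
```

Reducing a T×T token matrix to an S×S sentence matrix (with S much smaller than T) is what makes long-document attention patterns tractable to visualize.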