Kohei Tsuji
2025
SubRegWeigh: Effective and Efficient Annotation Weighing with Subword Regularization
Kohei Tsuji | Tatsuya Hiraoka | Yuchang Cheng | Tomoya Iwakura
Proceedings of the 31st International Conference on Computational Linguistics
NLP datasets may still contain annotation errors, even when they are manually annotated. Researchers have attempted to develop methods that automatically reduce the adverse effect of such errors. However, existing methods are time-consuming because they require training many models to detect errors. This paper proposes a time-saving method that uses a tokenization technique called subword regularization to simulate multiple error detection models. Our proposed method, SubRegWeigh, performs annotation weighting four to five times faster than the existing method. Additionally, SubRegWeigh improved performance on document classification and named entity recognition tasks. In experiments with pseudo-incorrect labels, SubRegWeigh clearly identified the pseudo-incorrect labels as annotation errors. Our code is available at https://github.com/4ldk/SubRegWeigh.
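The core idea in the abstract — sample several tokenizations of the same input and down-weight examples where predictions disagree with the annotated label — can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a toy random word segmenter in place of a real subword-regularization tokenizer (e.g. SentencePiece unigram sampling), a hypothetical `predict` function standing in for a trained model, and a simple agreement-fraction weighting rule.

```python
import random

def sample_segmentation(word, rng):
    """Toy subword regularization: randomly split a word into pieces.
    (Real systems sample segmentations from a unigram LM instead.)"""
    pieces, i = [], 0
    while i < len(word):
        step = rng.randint(1, len(word) - i)  # random piece length
        pieces.append(word[i:i + step])
        i += step
    return pieces

def annotation_weights(examples, predict, k=5, seed=0):
    """Weight each (text, label) pair by how often the model's prediction,
    computed over k sampled tokenizations, agrees with the annotated label.
    A low weight flags a likely annotation error."""
    rng = random.Random(seed)
    weights = []
    for text, label in examples:
        agree = 0
        for _ in range(k):
            tokens = [p for w in text.split()
                      for p in sample_segmentation(w, rng)]
            if predict(tokens) == label:
                agree += 1
        weights.append(agree / k)
    return weights
```

With a tokenization-insensitive stub model, an example whose label the model never reproduces receives weight 0.0, which is how a suspicious annotation would be flagged; a real model's predictions would instead vary across the k tokenizations, yielding graded weights.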
Investigating Neurons and Heads in Transformer-based LLMs for Typographical Errors
Kohei Tsuji | Tatsuya Hiraoka | Yuchang Cheng | Eiji Aramaki | Tomoya Iwakura
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
This paper investigates how LLMs encode inputs with typos. We hypothesize that specific neurons and attention heads recognize typos and fix them internally using local and global contexts. We introduce a method to identify typo neurons and typo heads that become especially active when inputs contain typos. Our experimental results suggest the following: 1) LLMs can fix typos with local contexts when the typo neurons in either the early or late layers are activated, even if those in the other are not. 2) Typo neurons in the middle layers are the core of typo-fixing with global contexts. 3) Typo heads fix typos by attending broadly to the context rather than focusing on specific tokens. 4) Typo neurons and typo heads work not only for typo-fixing but also for understanding general contexts.
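One simple way to operationalize "neurons that work actively when inputs contain typos" is to compare a neuron's activations on matched clean and typo-corrupted inputs and rank neurons by the mean difference. This is a hedged sketch of that generic contrastive-activation idea, not the paper's actual selection criterion; the array shapes `(n_examples, n_neurons)` and the function names are assumptions for illustration.

```python
import numpy as np

def typo_neuron_scores(acts_clean, acts_typo):
    """Score each neuron by how much more active it is on typo inputs
    than on the matched clean inputs, averaged over examples.
    Both arrays have shape (n_examples, n_neurons)."""
    return (acts_typo - acts_clean).mean(axis=0)

def top_typo_neurons(acts_clean, acts_typo, k=10):
    """Return the indices of the k neurons with the largest
    typo-vs-clean activation gap (candidate 'typo neurons')."""
    scores = typo_neuron_scores(acts_clean, acts_typo)
    return np.argsort(scores)[::-1][:k]
```

An analogous score over attention-head outputs (e.g., the change in a head's attention entropy between clean and typo inputs) could serve as a stand-in for identifying "typo heads."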