Length-Aware Multi-Kernel Transformer for Long Document Classification

Guangzeng Han, Jack Tsao, Xiaolei Huang


Abstract
Lengthy documents pose a unique challenge to neural language models due to substantial memory consumption. While existing state-of-the-art (SOTA) models segment long texts into equal-length snippets (e.g., 128 tokens per snippet) or deploy sparse attention networks, these methods introduce new challenges of context fragmentation and poor generalizability, stemming from snippet boundaries that ignore sentence structure and from varying text lengths. For example, our empirical analysis shows that SOTA models consistently overfit to one range of document lengths (e.g., 2000 tokens) while performing worse on texts of other lengths (e.g., 1000 or 4000 tokens). In this study, we propose a Length-Aware Multi-Kernel Transformer (LAMKIT) to address these challenges in long document classification. LAMKIT encodes lengthy documents with diverse transformer-based kernels to bridge context boundaries and vectorizes text length through those kernels to promote model robustness over varying document lengths. Experiments on five standard benchmarks from the health and law domains show that LAMKIT outperforms SOTA models by an absolute improvement of up to 10.9%. We conduct extensive ablation analyses to examine model robustness and effectiveness over varying document lengths.
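The abstract describes the architecture only at a high level, so the following is a minimal, hypothetical PyTorch sketch of the multi-kernel, length-aware idea it outlines. This is not the authors' implementation: the kernel sizes, the use of strided Conv1d as the snippet-pooling "kernel", and the length-binning scheme are all illustrative assumptions made for this sketch.

```python
import torch
import torch.nn as nn

class MultiKernelLengthAwareEncoder(nn.Module):
    """Illustrative sketch (not the paper's code): summarize token
    embeddings with several kernel widths so snippet boundaries vary,
    and prepend a learned length embedding so the classifier can
    condition on document length."""

    def __init__(self, hidden=768, kernel_sizes=(64, 128, 256), max_len_bins=32):
        super().__init__()
        # One pooling "kernel" per snippet width; each summarizes
        # windows of a different size, softening fixed-boundary cuts.
        self.kernels = nn.ModuleList(
            [nn.Conv1d(hidden, hidden, kernel_size=k, stride=k) for k in kernel_sizes]
        )
        # Bucketized document length -> learned embedding ("length vectorization").
        self.len_embed = nn.Embedding(max_len_bins, hidden)
        self.max_len_bins = max_len_bins

    def forward(self, token_embs: torch.Tensor) -> torch.Tensor:
        # token_embs: (batch, seq_len, hidden), e.g. outputs of a snippet
        # encoder; assumes seq_len >= max(kernel_sizes).
        batch, seq_len, hidden = token_embs.shape
        x = token_embs.transpose(1, 2)               # (batch, hidden, seq_len)
        # Each kernel yields snippet vectors at its own granularity.
        snippets = [conv(x).transpose(1, 2) for conv in self.kernels]
        pooled = torch.cat(snippets, dim=1)          # (batch, n_snippets, hidden)
        # Map raw length into a small number of bins (256 tokens per bin
        # here is an arbitrary choice) and embed it.
        bin_idx = torch.full(
            (batch,), min(seq_len // 256, self.max_len_bins - 1),
            dtype=torch.long, device=token_embs.device,
        )
        len_vec = self.len_embed(bin_idx).unsqueeze(1)  # (batch, 1, hidden)
        return torch.cat([len_vec, pooled], dim=1)      # prepend length token
```

A quick shape check under these assumptions: for a batch of two 1024-token documents, `MultiKernelLengthAwareEncoder()(torch.randn(2, 1024, 768))` returns a `(2, 29, 768)` tensor, i.e. one length token plus 16 + 8 + 4 snippet vectors from the three kernel widths.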
Anthology ID: 2024.starsem-1.22
Volume: Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024)
Month: June
Year: 2024
Address: Mexico City, Mexico
Editors: Danushka Bollegala, Vered Shwartz
Venue: *SEM
SIG: SIGLEX
Publisher: Association for Computational Linguistics
Pages: 278–290
URL: https://aclanthology.org/2024.starsem-1.22
Cite (ACL): Guangzeng Han, Jack Tsao, and Xiaolei Huang. 2024. Length-Aware Multi-Kernel Transformer for Long Document Classification. In Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024), pages 278–290, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal): Length-Aware Multi-Kernel Transformer for Long Document Classification (Han et al., *SEM 2024)
PDF: https://preview.aclanthology.org/jeptaln-2024-ingestion/2024.starsem-1.22.pdf