Structural Deep Encoding for Table Question Answering

Raphaël Mouravieff, Benjamin Piwowarski, Sylvain Lamprier


Abstract
Although Transformer-based architectures excel at processing textual information, their naive adaptation to tabular data often involves flattening the table structure. This simplification can lead to the loss of essential inter-dependencies between rows, columns, and cells, while also posing scalability challenges for large tables. To address these issues, prior works have explored special tokens, structured embeddings, and sparse attention patterns. In this paper, we conduct a comprehensive analysis of tabular encoding techniques used in QA, which highlights the crucial role of attention sparsity in preserving the structural information of tables. We also introduce a set of novel sparse attention mask designs for tabular data that not only enhance computational efficiency but also preserve structural integrity, leading to better overall performance.
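As a rough illustration of the kind of structure-aware sparsity the abstract refers to, the sketch below builds an attention mask over a table flattened cell by cell, where each cell may attend only to cells sharing its row or column, plus the question tokens. The flattening scheme, the question-token handling, and the row/column connectivity pattern are assumptions chosen for illustration; they are not the specific mask designs introduced in the paper.

    import numpy as np

    def table_sparse_attention_mask(n_rows, n_cols, n_question_tokens=0):
        """Boolean attention mask for a table flattened row-major, one token
        per cell, optionally preceded by question tokens.

        A cell token attends to itself, to cells in the same row, to cells in
        the same column, and to all question tokens. Question tokens attend
        globally. This is one illustrative sparsity pattern, not the paper's.
        """
        n_cells = n_rows * n_cols
        n_total = n_question_tokens + n_cells
        mask = np.zeros((n_total, n_total), dtype=bool)

        # Question tokens attend to everything; everything attends to them.
        mask[:n_question_tokens, :] = True
        mask[:, :n_question_tokens] = True

        # Row/column connectivity between cell tokens.
        for i in range(n_cells):
            r_i, c_i = divmod(i, n_cols)
            for j in range(n_cells):
                r_j, c_j = divmod(j, n_cols)
                if r_i == r_j or c_i == c_j:
                    mask[n_question_tokens + i, n_question_tokens + j] = True
        return mask

    # Example: a 3x4 table preceded by 5 question tokens.
    mask = table_sparse_attention_mask(3, 4, n_question_tokens=5)
    print(mask.shape, round(mask.mean(), 3))  # fraction of allowed attention pairs

Compared with dense attention over the flattened sequence, a mask of this kind restricts each cell to a number of attended positions that grows with the row and column lengths rather than with the full table size, which is the efficiency/structure trade-off the abstract describes.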
Anthology ID:
2025.findings-acl.121
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2389–2402
URL:
https://preview.aclanthology.org/landing_page/2025.findings-acl.121/
Cite (ACL):
Raphaël Mouravieff, Benjamin Piwowarski, and Sylvain Lamprier. 2025. Structural Deep Encoding for Table Question Answering. In Findings of the Association for Computational Linguistics: ACL 2025, pages 2389–2402, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Structural Deep Encoding for Table Question Answering (Mouravieff et al., Findings 2025)
PDF:
https://preview.aclanthology.org/landing_page/2025.findings-acl.121.pdf