On Eliciting Syntax from Language Models via Hashing

Yiran Wang, Masao Utiyama


Abstract
Unsupervised parsing, also known as grammar induction, aims to infer syntactic structure from raw text. Recently, binary representations have exhibited remarkable information-preserving capabilities at both the lexical and syntactic levels. In this paper, we explore leveraging this capability to deduce parse trees from raw text, relying solely on the grammars implicitly induced within models. To achieve this, we upgrade bit-level CKY from zero-order to first-order so that lexicon and syntax are encoded in a unified binary representation space, switch training from supervised to unsupervised under the contrastive hashing framework, and introduce a novel loss function that imposes stronger yet balanced alignment signals. Our model achieves competitive performance on various datasets; we therefore claim that our method is effective and efficient enough to acquire high-quality parse trees from pre-trained language models at low cost.
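To make the CKY component the abstract mentions concrete, the sketch below shows the standard dynamic program for extracting the highest-scoring binary tree from per-span scores. It is illustrative only, not the paper's bit-level, first-order formulation: the `span_scores` input, the `cky_best_tree` helper, and the toy random scores are assumptions standing in for scores that, in the paper's setting, would presumably be derived from the learned binary codes.

```python
import numpy as np

def cky_best_tree(span_scores):
    """Return the max-scoring binary tree over n tokens, given a score
    for every span (i, j). Standard CKY dynamic program; the scoring
    function itself is a placeholder, not the paper's method."""
    n = span_scores.shape[0]
    best = np.zeros((n, n))                   # best[i][j]: best score of span i..j
    split = -np.ones((n, n), dtype=int)       # split[i][j]: chosen split point
    for length in range(2, n + 1):
        for i in range(0, n - length + 1):
            j = i + length - 1
            # choose the split k maximizing the two subtree scores
            cands = [best[i][k] + best[k + 1][j] for k in range(i, j)]
            k = int(np.argmax(cands))
            best[i][j] = span_scores[i][j] + cands[k]
            split[i][j] = i + k

    def backtrack(i, j):
        # leaves are token indices; internal nodes are (left, right) pairs
        if i == j:
            return i
        k = split[i][j]
        return (backtrack(i, k), backtrack(k + 1, j))

    return backtrack(0, n - 1)

# Toy usage: 4 tokens with random span scores (hypothetical input).
rng = np.random.default_rng(0)
print(cky_best_tree(rng.random((4, 4))))  # nested tuples, e.g. ((0, 1), (2, 3))
```

The dynamic program runs in O(n^3) regardless of how the span scores are produced, which is why a cheap, bit-level scoring scheme would keep the overall cost of tree extraction low.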
Anthology ID:
2024.emnlp-main.479
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
8412–8427
URL:
https://preview.aclanthology.org/jlcl-multiple-ingestion/2024.emnlp-main.479/
DOI:
10.18653/v1/2024.emnlp-main.479
Cite (ACL):
Yiran Wang and Masao Utiyama. 2024. On Eliciting Syntax from Language Models via Hashing. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 8412–8427, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
On Eliciting Syntax from Language Models via Hashing (Wang & Utiyama, EMNLP 2024)
PDF:
https://preview.aclanthology.org/jlcl-multiple-ingestion/2024.emnlp-main.479.pdf