Recurrent Alignment with Hard Attention for Hierarchical Text Rating

Chenxi Lin, Ren Jiayu, Guoxiu He, Zhuoren Jiang, Haiyan Yu, Xiaomin Zhu


Abstract
While large language models (LLMs) excel at understanding and generating plain text, they are not tailored to handle hierarchical text structures or directly predict task-specific properties such as text rating. In fact, selectively and repeatedly grasping the hierarchical structure of large-scale text is pivotal for deciphering its essence. To this end, we propose a novel framework for hierarchical text rating utilizing LLMs, which incorporates Recurrent Alignment with Hard Attention (RAHA). Particularly, the hard attention mechanism prompts a frozen LLM to selectively focus on pertinent leaf texts associated with the root text and to generate symbolic representations of their relationships. Inspired by the gradual stabilization of a Markov chain, the recurrent alignment strategy feeds predicted ratings iteratively back into the prompts of another trainable LLM, aligning it to progressively approximate the desired target. Experimental results demonstrate that RAHA outperforms existing state-of-the-art methods on three hierarchical text rating datasets. Theoretical and empirical analysis confirms RAHA's ability to gradually converge towards the underlying target through multiple inferences. Additional experiments on plain text rating datasets verify the effectiveness of this Markov-like alignment. Our data and code are available at https://github.com/ECNU-Text-Computing/Markov-LLM.
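
The Markov-like recurrent alignment loop described above can be illustrated with a short sketch. The code below is a minimal, hedged illustration, not the authors' implementation: the names (recurrent_alignment_rating, llm_predict, toy_llm) are hypothetical, and a simple keyword-overlap filter stands in for the LLM-driven hard attention over leaf texts. The actual code is in the linked Markov-LLM repository.

```python
# A minimal, self-contained sketch of the Markov-like "recurrent alignment"
# loop described in the abstract. All names here (recurrent_alignment_rating,
# llm_predict, toy_llm) are hypothetical placeholders rather than the authors'
# API; the actual implementation is in the linked Markov-LLM repository.

from typing import Callable, List, Optional


def recurrent_alignment_rating(
    root_text: str,
    leaf_texts: List[str],
    llm_predict: Callable[[str], float],
    max_steps: int = 5,
    tol: float = 1e-2,
) -> float:
    """Iteratively refine a rating by feeding the previous prediction back
    into the prompt, stopping when successive ratings stabilize (analogous
    to a Markov chain approaching its stationary distribution)."""
    # Crude stand-in for hard attention: keep only leaf texts that share
    # vocabulary with the root text (the paper instead prompts a frozen LLM
    # to select pertinent leaves and emit symbolic relation representations).
    root_tokens = set(root_text.lower().split())
    selected = [t for t in leaf_texts if root_tokens & set(t.lower().split())]

    rating: Optional[float] = None
    for _ in range(max_steps):
        prior = "none" if rating is None else f"{rating:.2f}"
        prompt = (
            f"Root text: {root_text}\n"
            f"Relevant leaf texts: {' | '.join(selected)}\n"
            f"Previous rating: {prior}\n"
            "Predict the rating:"
        )
        new_rating = llm_predict(prompt)
        if rating is not None and abs(new_rating - rating) < tol:
            return new_rating  # successive predictions have stabilized
        rating = new_rating
    return rating if rating is not None else 0.0


if __name__ == "__main__":
    # Toy stand-in for the trainable LLM: nudges each rating halfway
    # toward a fixed target of 4.0, so the sequence visibly converges.
    def toy_llm(prompt: str) -> float:
        prev = prompt.split("Previous rating: ")[1].splitlines()[0]
        current = 2.0 if prev == "none" else float(prev)
        return current + 0.5 * (4.0 - current)

    print(recurrent_alignment_rating(
        "a study of LLMs for rating",
        ["LLMs for hierarchical rating", "unrelated note"],
        toy_llm,
    ))
```

Running the toy example shows successive ratings drifting monotonically toward a fixed point, mirroring the Markov-chain-style stabilization the abstract describes.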
Anthology ID: 2024.emnlp-main.1037
Volume: Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month: November
Year: 2024
Address: Miami, Florida, USA
Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 18643–18657
URL: https://aclanthology.org/2024.emnlp-main.1037
DOI: 10.18653/v1/2024.emnlp-main.1037
Cite (ACL): Chenxi Lin, Ren Jiayu, Guoxiu He, Zhuoren Jiang, Haiyan Yu, and Xiaomin Zhu. 2024. Recurrent Alignment with Hard Attention for Hierarchical Text Rating. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 18643–18657, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal): Recurrent Alignment with Hard Attention for Hierarchical Text Rating (Lin et al., EMNLP 2024)
PDF: https://preview.aclanthology.org/dois-2013-emnlp/2024.emnlp-main.1037.pdf