Do LLMs Encode Frame Semantics? Evidence from Frame Identification

Jayanth Krishna Chundru, Rudrashis Poddar, Jie Cao, Tianyu Jiang


Abstract
We investigate whether large language models encode latent knowledge of frame semantics, focusing on frame identification, a core challenge in frame semantic parsing that involves selecting the appropriate semantic frame for a target word in context. Using the FrameNet lexical resource, we evaluate models under prompt-based inference and observe that they can perform frame identification effectively even without explicit supervision. To assess the impact of task-specific training, we fine-tune models on FrameNet data, which substantially improves in-domain accuracy while generalizing well to out-of-domain benchmarks. Further analysis shows that the models can generate semantically coherent frame definitions, highlighting their internalized understanding of frame semantics.
Anthology ID:
2025.emnlp-main.1499
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
29476–29488
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1499/
Cite (ACL):
Jayanth Krishna Chundru, Rudrashis Poddar, Jie Cao, and Tianyu Jiang. 2025. Do LLMs Encode Frame Semantics? Evidence from Frame Identification. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 29476–29488, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Do LLMs Encode Frame Semantics? Evidence from Frame Identification (Chundru et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1499.pdf
Checklist:
2025.emnlp-main.1499.checklist.pdf