Injecting Frame Semantics into Large Language Models via Prompt-Based Fine-Tuning

Shahid Iqbal Rai, Danilo Croce, Roberto Basili

Abstract
Large Language Models (LLMs) have demonstrated remarkable generalization across diverse NLP tasks, yet they often produce outputs lacking semantic coherence due to insufficient grounding in structured linguistic knowledge. This paper proposes a novel method for injecting Frame Semantics into a pretrained LLaMA model using Low-Rank Adaptation (LoRA). Leveraging FrameNet (a rich resource of over 1,000 semantic frames), we construct a training corpus comprising structured triples of frame definitions, frame elements, and lexical units. Our method encodes these examples into the model via LoRA adapters and evaluates performance using zero-shot prompting for textual entailment and semantic role labeling (SRL) over FrameNet. Experimental results show that our adapted frame-aware LLM substantially outperforms the baseline across closed, open-ended, and multiple-choice prompts. Moreover, we observe significant improvements in SRL accuracy, demonstrating the efficacy of combining frame-semantic theory with parameter-efficient fine-tuning.
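
The pipeline the abstract outlines (serialize each FrameNet frame into a prompt-style training example, then attach LoRA adapters to a frozen pretrained LLaMA backbone) can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not the authors' released code: the checkpoint name, the prompt template, and the LoRA hyperparameters (r, alpha, target modules) are placeholders, and FrameNet is read through NLTK's framenet corpus reader.

# Minimal sketch: FrameNet triples -> prompt-style corpus -> LoRA adapters.
# Assumptions (not from the paper): the Llama-2-7b checkpoint, this prompt
# template, and the LoRA hyperparameters below.
from nltk.corpus import framenet as fn   # needs nltk.download('framenet_v17')
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

def frame_to_example(frame):
    """Serialize one frame as a (definition, frame elements, lexical units) prompt."""
    fes = ", ".join(frame.FE)        # iterating the dict yields FE names
    lus = ", ".join(frame.lexUnit)   # LU names such as 'abandon.v'
    return (f"Frame: {frame.name}\n"
            f"Definition: {frame.definition}\n"
            f"Frame elements: {fes}\n"
            f"Lexical units: {lus}")

corpus = [frame_to_example(f) for f in fn.frames()]   # 1,000+ frames

base = "meta-llama/Llama-2-7b-hf"                     # illustrative checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA trains small low-rank update matrices on the attention projections
# while the base weights stay frozen; r and alpha here are common defaults,
# not values reported in the paper.
lora_cfg = LoraConfig(r=16, lora_alpha=32,
                      target_modules=["q_proj", "v_proj"],
                      lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()   # typically well under 1% of all weights

The serialized corpus would then be tokenized and passed to a standard causal-LM training loop (e.g. transformers.Trainer); at evaluation time, the adapted model is queried with zero-shot prompts (closed, open-ended, and multiple-choice), with no further weight updates.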
Anthology ID:
2025.starsem-1.3
Volume:
Proceedings of the 14th Joint Conference on Lexical and Computational Semantics (*SEM 2025)
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Lea Frermann, Mark Stevenson
Venue:
*SEM
Publisher:
Association for Computational Linguistics
Pages:
31–47
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.starsem-1.3/
Cite (ACL):
Shahid Iqbal Rai, Danilo Croce, and Roberto Basili. 2025. Injecting Frame Semantics into Large Language Models via Prompt-Based Fine-Tuning. In Proceedings of the 14th Joint Conference on Lexical and Computational Semantics (*SEM 2025), pages 31–47, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Injecting Frame Semantics into Large Language Models via Prompt-Based Fine-Tuning (Rai et al., *SEM 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.starsem-1.3.pdf