Composable Cross-prompt Essay Scoring by Merging Models

Sanwoo Lee, Kun Liang, Yunfang Wu


Abstract
Recent advances in cross-prompt automated essay scoring typically train models jointly on all available source domains, often requiring simultaneous access to unlabeled target domain samples. However, using all sources can lead to suboptimal transfer and high computational cost. Moreover, repeatedly accessing the source essays for continual adaptation raises privacy concerns. We propose a source-free adaptation approach that selectively merges the parameters of individually trained source models without further access to the source datasets. In particular, we mix the task vectors—the parameter updates from fine-tuning—via a weighted sum to efficiently simulate selective joint-training. We use Bayesian optimization to determine the mixing weights using our proposed Prior-encoded Information Maximization (PIM), an unsupervised objective which promotes score discriminability by leveraging useful priors pre-computed from the sources. Experimental results with LLMs on in-dataset and cross-dataset adaptation show that our method (1) consistently outperforms joint-training on all sources, (2) maintains superior robustness compared to other merging methods, and (3) excels under severe distribution shifts where recent leading cross-prompt methods struggle, all while retaining computational efficiency.
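The core merging step the abstract describes—mixing task vectors via a weighted sum—can be sketched as follows. This is a minimal illustration with toy parameter dictionaries, not the paper's implementation; the function name `merge_task_vectors` and the toy weights are assumptions for the example.

```python
import numpy as np

def merge_task_vectors(base, source_models, weights):
    """Merge individually trained source models by a weighted sum
    of their task vectors.

    A task vector is the parameter update from fine-tuning:
    tau_i = theta_i - theta_base. The merged model is
    theta_base + sum_i w_i * tau_i, which simulates selective
    joint-training without any further access to the source data.
    """
    merged = {name: p.copy() for name, p in base.items()}
    for w, model in zip(weights, source_models):
        for name in merged:
            merged[name] += w * (model[name] - base[name])
    return merged

# Toy example: two "source" models over a single parameter tensor.
base = {"w": np.zeros(3)}
src_a = {"w": np.array([1.0, 0.0, 0.0])}
src_b = {"w": np.array([0.0, 2.0, 0.0])}
merged = merge_task_vectors(base, [src_a, src_b], weights=[0.5, 0.25])
# merged["w"] -> [0.5, 0.5, 0.0]
```

In the paper's setting, the mixing weights are not fixed by hand as above but searched with Bayesian optimization against the unsupervised PIM objective, so that sources unhelpful for the target prompt receive small or zero weight.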
Anthology ID:
2025.emnlp-main.1240
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
24397–24411
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1240/
Cite (ACL):
Sanwoo Lee, Kun Liang, and Yunfang Wu. 2025. Composable Cross-prompt Essay Scoring by Merging Models. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 24397–24411, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Composable Cross-prompt Essay Scoring by Merging Models (Lee et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1240.pdf
Checklist:
 2025.emnlp-main.1240.checklist.pdf