Evaluating Structured Output Robustness of Small Language Models for Open Attribute-Value Extraction from Clinical Notes

Nikita Neveditsin, Pawan Lingras, Vijay Kumar Mago


Abstract
We present a comparative analysis of the parseability of structured outputs generated by small language models for open attribute-value extraction from clinical notes. We evaluate three widely used serialization formats: JSON, YAML, and XML, and find that JSON consistently yields the highest parseability. Structural robustness improves with targeted prompting and larger models, but declines for longer documents and certain note types. Our error analysis identifies recurring format-specific failure patterns. These findings offer practical guidance for selecting serialization formats and designing prompts when deploying language models in privacy-sensitive clinical settings.
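The parseability criterion the abstract evaluates can be sketched as a simple check: attempt to deserialize a model's output with the standard parser for each format and record whether it succeeds. The helper below is a minimal illustration under assumed criteria (strict parser acceptance), not the paper's actual evaluation code; the function name and format keys are hypothetical, and YAML support assumes the third-party PyYAML package.

```python
import json
import xml.etree.ElementTree as ET


def is_parseable(text: str, fmt: str) -> bool:
    """Return True if `text` deserializes cleanly under the given format.

    A sketch of a parseability check: an output counts as robust only if
    the stock parser for its target format accepts it without error.
    """
    parsers = {
        "json": json.loads,
        "xml": ET.fromstring,
    }
    try:
        import yaml  # PyYAML; optional third-party dependency
        parsers["yaml"] = yaml.safe_load
    except ImportError:
        pass  # YAML checking unavailable without PyYAML

    parser = parsers[fmt]  # KeyError signals an unknown/unavailable format
    try:
        parser(text)
        return True
    except Exception:  # JSONDecodeError, ET.ParseError, yaml.YAMLError, ...
        return False
```

For example, a truncated XML output such as `<note><dx>flu</note>` fails the check, while the well-formed `<note><dx>flu</dx></note>` passes; aggregating such pass/fail outcomes over a corpus yields a per-format parseability rate.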
Anthology ID:
2025.acl-srw.19
Volume:
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Jin Zhao, Mingyang Wang, Zhu Liu
Venues:
ACL | WS
Publisher:
Association for Computational Linguistics
Pages:
286–296
URL:
https://preview.aclanthology.org/landing_page/2025.acl-srw.19/
Cite (ACL):
Nikita Neveditsin, Pawan Lingras, and Vijay Kumar Mago. 2025. Evaluating Structured Output Robustness of Small Language Models for Open Attribute-Value Extraction from Clinical Notes. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop), pages 286–296, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Evaluating Structured Output Robustness of Small Language Models for Open Attribute-Value Extraction from Clinical Notes (Neveditsin et al., ACL 2025)
PDF:
https://preview.aclanthology.org/landing_page/2025.acl-srw.19.pdf