ConCodeEval: Evaluating Large Language Models for Code Constraints in Domain-Specific Languages

Mehant Kammakomati, Sameer Pimparkhede, Srikanth G. Tamilselvam, Prince Kumar, Pushpak Bhattacharyya


Abstract
System-level programming is essential for modern enterprise infrastructure, enabling the automation and management of complex systems through declarative code. Developers write this code based on schemas, which are themselves a form of code that defines constraints such as data types and required fields. These schemas help ensure operational correctness and smooth integration across systems. However, as enterprise schemas grow complex, manually writing code that adheres to their constraints becomes challenging for developers. Large Language Models (LLMs) have demonstrated potential in code generation and natural language understanding, particularly in zero-shot and few-shot settings, yet their ability to handle constraints represented in code rather than natural language, which is essential for system-level programming, has not been explored. Hence, we introduce ConCodeEval, a study across two key dimensions, format and constraint efficacy, with a first-of-its-kind benchmark comprising two novel experiments on code constraints across five representations (JSON, YAML, XML, Python, and natural language). Our findings suggest that a deliberate choice of representation can lead to optimal use of LLMs in enterprise use cases involving constraints. Nonetheless, LLMs continue to struggle significantly with code constraints, motivating further innovation in this direction.
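To make the kind of constraint the abstract describes concrete, here is a hypothetical sketch (not taken from the paper or its benchmark): a schema expressed as data, defining the two constraint kinds mentioned above, data types and required fields, together with a minimal checker. The field names (`name`, `replicas`) and the checker itself are illustrative assumptions.

```python
# Hypothetical schema: declares required fields and expected data types,
# the kinds of constraints a deployment schema might impose.
schema = {
    "required": ["name", "replicas"],
    "types": {"name": str, "replicas": int},
}

def violations(doc, schema):
    """Return a list of constraint violations for `doc` against `schema`."""
    errors = []
    # Required-field constraints: every listed field must be present.
    for field in schema["required"]:
        if field not in doc:
            errors.append(f"missing required field: {field}")
    # Type constraints: present fields must match their declared type.
    for field, expected in schema["types"].items():
        if field in doc and not isinstance(doc[field], expected):
            errors.append(f"wrong type for {field}: expected {expected.__name__}")
    return errors

# A document violating both constraint kinds: `name` is absent and
# `replicas` is a string rather than an integer.
doc = {"replicas": "three"}
print(violations(doc, schema))
# -> ['missing required field: name', 'wrong type for replicas: expected int']
```

Writing such code by hand is simple for a two-field schema but quickly becomes error-prone as enterprise schemas accumulate nested objects and cross-field rules, which is the difficulty the study probes in LLMs.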
Anthology ID:
2025.acl-industry.104
Volume:
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 6: Industry Track)
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Georg Rehm, Yunyao Li
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
1466–1479
URL:
https://preview.aclanthology.org/landing_page/2025.acl-industry.104/
Cite (ACL):
Mehant Kammakomati, Sameer Pimparkhede, Srikanth G. Tamilselvam, Prince Kumar, and Pushpak Bhattacharyya. 2025. ConCodeEval: Evaluating Large Language Models for Code Constraints in Domain-Specific Languages. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 6: Industry Track), pages 1466–1479, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
ConCodeEval: Evaluating Large Language Models for Code Constraints in Domain-Specific Languages (Kammakomati et al., ACL 2025)
PDF:
https://preview.aclanthology.org/landing_page/2025.acl-industry.104.pdf