@inproceedings{dhaini-etal-2025-regulation,
    title = "From Regulation to Interaction: Expert Views on Aligning Explainable {AI} with the {EU} {AI} Act",
    author = "Dhaini, Mahdi  and
      Ondrus, Lukas  and
      Kasneci, Gjergji",
    editor = "Blodgett, Su Lin  and
      Curry, Amanda Cercas  and
      Dev, Sunipa  and
      Li, Siyan  and
      Madaio, Michael  and
      Wang, Jack  and
      Wu, Sherry Tongshuang  and
      Xiao, Ziang  and
      Yang, Diyi",
    booktitle = "Proceedings of the Fourth Workshop on Bridging Human-Computer Interaction and Natural Language Processing (HCI+NLP)",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2025.hcinlp-1.19/",
    pages = "230--239",
    ISBN = "979-8-89176-353-1",
    abstract = "Explainable AI (XAI) aims to support people who interact with high-stakes AI-driven decisions, and the EU AI Act mandates that users must be able to interpret system outputs appropriately. Although the Act requires users to interpret outputs and mandates human oversight, it offers no technical guidance for implementing explainability, leaving interpretability methods opaque to non-experts and compliance obligations unclear. To address these gaps, we interviewed eight experts to explore (1) how explainability is defined and perceived under the Act, (2) the practical and regulatory obstacles to XAI implementation, and (3) recommended solutions and future directions. Our findings reveal that experts view explainability as context- and audience-dependent, face challenges from regulatory vagueness and technical trade-offs, and advocate for domain-specific rules, hybrid methods, and user-centered explanations. These insights provide a basis for a potential framework to align XAI methods{---}particularly for AI and Natural Language Processing (NLP) systems{---}with regulatory requirements, and suggest actionable steps for policymakers and practitioners."
}