@inproceedings{fan-etal-2024-goldcoin,
    title = "{G}old{C}oin: Grounding Large Language Models in Privacy Laws via Contextual Integrity Theory",
    author = "Fan, Wei  and
      Li, Haoran  and
      Deng, Zheye  and
      Wang, Weiqi  and
      Song, Yangqiu",
    editor = "Al-Onaizan, Yaser  and
      Bansal, Mohit  and
      Chen, Yun-Nung",
    booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2024",
    address = "Miami, Florida, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2024.emnlp-main.195/",
    doi = "10.18653/v1/2024.emnlp-main.195",
    pages = "3321--3343",
    abstract = "Privacy issues arise prominently during the inappropriate transmission of information between entities. Existing research primarily studies privacy by exploring various privacy attacks, defenses, and evaluations within narrowly predefined patterns, while neglecting that privacy is not an isolated, context-free concept limited to traditionally sensitive data (e.g., social security numbers), but intertwined with intricate social contexts that complicate the identification and analysis of potential privacy violations. The advent of Large Language Models (LLMs) offers unprecedented opportunities for incorporating the nuanced scenarios outlined in privacy laws to tackle these complex privacy issues. However, the scarcity of open-source relevant case studies restricts the efficiency of LLMs in aligning with specific legal statutes. To address this challenge, we introduce a novel framework, GoldCoin, designed to efficiently ground LLMs in privacy laws for judicially assessing privacy violations. Our framework leverages the theory of contextual integrity as a bridge, creating numerous synthetic scenarios grounded in relevant privacy statutes (e.g., HIPAA), to assist LLMs in comprehending the complex contexts for identifying privacy risks in the real world. Extensive experimental results demonstrate that GoldCoin markedly enhances LLMs' capabilities in recognizing privacy risks across real court cases, surpassing the baselines on different judicial tasks."
}