Hanyin Shao
2024
Quantifying Association Capabilities of Large Language Models and Its Implications on Privacy Leakage
Hanyin Shao | Jie Huang | Shen Zheng | Kevin Chang
Findings of the Association for Computational Linguistics: EACL 2024
The advancement of large language models (LLMs) brings notable improvements across various applications while simultaneously raising concerns about potential private data exposure. One notable capability of LLMs is their ability to form associations between different pieces of information, which becomes a concern when that information is personally identifiable information (PII). This paper delves into the association capabilities of language models, aiming to uncover the factors that influence their proficiency in associating information. Our study reveals that as models scale up, their capacity to associate entities/information intensifies, particularly when target pairs exhibit shorter co-occurrence distances or higher co-occurrence frequencies. However, there is a distinct performance gap when associating commonsense knowledge versus PII, with the latter showing lower accuracy. Although the proportion of accurately predicted PII is relatively small, LLMs can still predict specific instances of email addresses and phone numbers when provided with appropriate prompts. These findings underscore the potential risk to PII confidentiality posed by the evolving capabilities of LLMs, especially as they continue to grow in scale and power.
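The abstract's link between association accuracy and co-occurrence statistics can be made concrete with a toy computation. The Python sketch below counts how often a hypothetical name/email pair co-occurs in a corpus and how far apart the two mentions fall; the corpus, the entity pair, and the character-level distance metric are all illustrative assumptions, not the paper's actual setup.

```python
# Toy sketch of the co-occurrence statistics discussed above: frequency
# and distance of a target pair. All data here is fabricated; the paper's
# actual corpus and distance metric (likely token-level) are not reproduced.
corpus = [
    "Contact John Doe at johndoe@example.com for details.",
    "John Doe wrote the report; reach him at johndoe@example.com.",
    "The report was reviewed by Jane Roe.",
]
pair = ("John Doe", "johndoe@example.com")

freq, distances = 0, []
for doc in corpus:
    i, j = doc.find(pair[0]), doc.find(pair[1])
    if i != -1 and j != -1:  # both entities appear in the same document
        freq += 1
        distances.append(abs(j - i))  # character offset between mentions

print(f"co-occurrence frequency: {freq}")
if distances:
    print(f"mean co-occurrence distance: {sum(distances) / len(distances):.1f} chars")
```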
2022
Understanding Jargon: Combining Extraction and Generation for Definition Modeling
Jie Huang | Hanyin Shao | Kevin Chen-Chuan Chang | Jinjun Xiong | Wen-mei Hwu
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Can machines know what twin prime is? From the composition of this phrase, machines may guess that twin prime is a certain kind of prime, but it is still difficult to deduce exactly what twin stands for without additional knowledge. Here, twin prime is jargon: a specialized term used by experts in a particular field. Explaining jargon is challenging since it usually requires domain knowledge to understand. Recently, there has been increasing interest in automatically extracting and generating definitions of words. However, existing approaches, whether extractive or generative, perform poorly on jargon. In this paper, we propose to combine extraction and generation for jargon definition modeling: first extract self- and correlative definitional information about the target jargon from the Web, then generate the final definitions by incorporating the extracted definitional information. Our framework is remarkably simple yet effective: experiments demonstrate that our method can generate high-quality definitions for jargon and significantly outperforms state-of-the-art models, e.g., raising the BLEU score from 8.76 to 22.66 and the human-annotated score from 2.34 to 4.04.
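A rough Python sketch of the extract-then-generate idea described in the abstract is given below. It is not the authors' pipeline: the seq2seq model (an off-the-shelf, non-fine-tuned BART), the hard-coded snippets standing in for the web-extraction step, and the input format are all assumptions for illustration.

```python
# Sketch of extract-then-generate definition modeling. In practice the
# snippets would come from web extraction and the generator would be
# fine-tuned for definition generation; both are stubbed here.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")

term = "twin prime"
# Step 1 (extraction): definitional information about the target jargon,
# hard-coded here in place of real web extraction.
snippets = [
    "A twin prime is a prime that is either 2 less or 2 more than another prime.",
    "For example, (3, 5) and (11, 13) are twin prime pairs.",
]
# Step 2 (generation): condition the generator on the term plus the
# extracted definitional information.
source = f"define: {term} context: " + " ".join(snippets)
inputs = tokenizer(source, return_tensors="pt", truncation=True)
ids = model.generate(**inputs, max_new_tokens=40, num_beams=4)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```

Without fine-tuning, the output is not a real definition; the point of the sketch is only the shape of the two-step pipeline, in which extracted definitional text conditions the generator.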
Are Large Pre-Trained Language Models Leaking Your Personal Information?
Jie Huang | Hanyin Shao | Kevin Chen-Chuan Chang
Findings of the Association for Computational Linguistics: EMNLP 2022
In this paper, we analyze whether Pre-Trained Language Models (PLMs) are prone to leaking personal information. Specifically, we query PLMs for email addresses using contexts of the email address or prompts containing the owner's name. We find that PLMs do leak personal information due to memorization. However, since the models are weak at association, the risk of specific personal information being extracted by attackers is low. We hope this work can help the community better understand the privacy risks of PLMs and bring new insights toward making PLMs safe.
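The two query types the abstract mentions can be sketched as below, assuming a small HuggingFace causal LM as a stand-in for the PLMs studied; the prompt templates and the name are fabricated placeholders, not the paper's exact prompts.

```python
# Minimal sketch contrasting a "context" query (text that may have
# preceded an address in training data, probing memorization) with an
# "association" query built from the owner's name. Placeholders only.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompts = {
    "context (memorization)": "Best regards,\nJohn Doe\nEmail:",
    "association": "The email address of John Doe is",
}
for setting, prompt in prompts.items():
    inputs = tokenizer(prompt, return_tensors="pt")
    out = model.generate(
        **inputs,
        max_new_tokens=15,
        do_sample=False,  # greedy decoding: the model's most likely completion
        pad_token_id=tokenizer.eos_token_id,
    )
    completion = tokenizer.decode(out[0][inputs["input_ids"].shape[1]:])
    print(f"{setting}: {completion.strip()}")
```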
Co-authors
- Jie Huang 3
- Kevin Chen-Chuan Chang 2
- Jinjun Xiong 1
- Wen-mei Hwu 1
- Shen Zheng 1