Da-Chen Lian
2024
The Semantic Relations in LLMs: An Information-theoretic Compression Approach
Yu-Hsiang Tseng | Pin-Er Chen | Da-Chen Lian | Shu-Kai Hsieh
Proceedings of the Workshop: Bridging Neurons and Symbols for Natural Language Processing and Knowledge Graphs Reasoning (NeusymBridge) @ LREC-COLING-2024
Compressibility is closely related to the predictability of texts from an information-theoretic viewpoint. As large language models (LLMs) are trained to maximize the conditional probabilities of upcoming words, they may capture the subtleties and nuances of the semantic constraints underlying the texts, and texts aligning with the encoded semantic constraints are more compressible than those that do not. This paper systematically tests whether and how LLMs can act as compressors of semantic pairs. Using semantic relations from the English and Chinese Wordnets, we empirically demonstrate that texts with correct semantic pairings are more compressible than those with incorrect ones, as measured by the proposed compression advantages index. We also show, with the Pythia model suite and a model fine-tuned on Chinese Wordnet, that compression capacities are modulated by the data the model has seen. These findings are consistent with the view that LLMs encode semantic knowledge as underlying constraints learned from texts and can act as compressors of semantic information, or potentially of other structured knowledge.
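The intuition behind treating an LLM as a compressor can be illustrated with a short sketch: under an arithmetic-coding view, a token sequence costs about -log2 p(token | prefix) bits, so a sequence the model finds more predictable is more compressible. The snippet below is a minimal, hypothetical illustration only; it assumes the compression advantage is taken as the difference in per-token code length between an incorrect and a correct semantic pairing, which may differ from the exact index defined in the paper, and the example sentences and model choice are placeholders.

```python
# Minimal sketch: an LLM as a compressor of semantic pairs (illustrative only).
# Assumes "compression advantage" = difference in average per-token code length
# (bits) between an incorrect and a correct pairing; the paper's exact index may differ.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/pythia-160m"  # any causal LM from the Pythia suite
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def bits_per_token(text: str) -> float:
    """Average code length in bits per token: -log2 p(token | prefix)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=input_ids, the loss is mean next-token cross-entropy in nats.
        loss = model(ids, labels=ids).loss
    return loss.item() / math.log(2)

# Hypothetical hypernymy templates: correct vs. incorrect semantic pairing.
correct = "A sparrow is a kind of bird."
incorrect = "A sparrow is a kind of furniture."

advantage = bits_per_token(incorrect) - bits_per_token(correct)
print(f"Compression advantage (bits/token): {advantage:.3f}")  # expected > 0
```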
2023
Evaluating Interfaced LLM Bias
Kai-Ching Yeh | Jou-An Chi | Da-Chen Lian | Shu-Kai Hsieh
Proceedings of the 35th Conference on Computational Linguistics and Speech Processing (ROCLING 2023)