Shisen Yue
2024
Do Large Language Models Understand Conversational Implicature - A case study with a Chinese sitcom
Shisen Yue | Siyuan Song | Xinyuan Cheng | Hai Hu
Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 1: Main Conference)
Understanding the non-literal meaning of an utterance is critical for large language models (LLMs) to become human-like social communicators. In this work, we introduce SwordsmanImp, the first Chinese multi-turn-dialogue-based dataset aimed at conversational implicature, sourced from dialogues in the Chinese sitcom My Own Swordsman. It includes 200 carefully handcrafted questions, all annotated for which Gricean maxims have been violated. We test eight closed-source and open-source LLMs under two tasks: a multiple-choice question task and an implicature explanation task. Our results show that GPT-4 attains human-level accuracy (94%) on multiple-choice questions. CausalLM follows with a 78.5% accuracy. Other models, including GPT-3.5 and several open-source models, demonstrate lower accuracy, ranging from 20% to 60% on multiple-choice questions. Human raters were asked to rate the explanations of the implicatures generated by LLMs on their reasonability, logic and fluency. While all models generate largely fluent and self-consistent text, their explanations score low on reasonability except for GPT-4, suggesting that most LLMs cannot produce satisfactory explanations of the implicatures in the conversation. Moreover, we find that LLMs' performance does not vary significantly by Gricean maxim, suggesting that LLMs do not seem to process implicatures derived from different maxims differently. Our data and code are available at https://github.com/sjtu-compling/llm-pragmatics.
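A minimal sketch of one common way to score a multiple-choice implicature item with a causal LM: rank candidate answers by their log-likelihood given the dialogue. This is illustrative only; the prompt template, model name, and dataset fields here are assumptions, not the paper's evaluation protocol (the actual SwordsmanImp format is in the linked repository).

```python
# Hypothetical sketch: rank answer options by LM log-likelihood.
# Prompt wording, model choice, and data fields are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"  # placeholder; a Chinese-capable checkpoint would be used in practice
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

def option_logprob(dialogue: str, option: str) -> float:
    """Sum of log-probabilities of the option tokens conditioned on the dialogue."""
    prompt = f"{dialogue}\nThe speaker actually means: "
    prompt_len = tok(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tok(prompt + option, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # log P(token_t | tokens_<t); row i-1 of logits predicts token i
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = full_ids[0, 1:]
    start = prompt_len - 1  # first predicted option token (boundary effects ignored)
    idx = torch.arange(start, targets.shape[0])
    return logprobs[idx, targets[start:]].sum().item()

def predict(dialogue: str, options: list[str]) -> int:
    """Index of the highest-likelihood option."""
    return max(range(len(options)), key=lambda i: option_logprob(dialogue, options[i]))
```

Accuracy on such a task would then simply be the fraction of items where the top-ranked option matches the annotated answer.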
Frequency Explains the Inverse Correlation of Large Language Models’ Size, Training Data Amount, and Surprisal’s Fit to Reading Times
Byung-Doh Oh | Shisen Yue | William Schuler
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Recent studies have shown that as Transformer-based language models become larger and are trained on very large amounts of data, the fit of their surprisal estimates to naturalistic human reading times degrades. The current work presents a series of analyses showing that word frequency is a key explanatory factor underlying these two trends. First, residual errors from four language model families on four corpora show that the inverse correlation between model size and fit to reading times is the strongest on the subset of least frequent words, which is driven by excessively accurate predictions of larger model variants. Additionally, training dynamics reveal that during later training steps, all model variants learn to predict rare words and that larger model variants do so more accurately, which explains the detrimental effect of both training data amount and model size on fit to reading times. Finally, a feature attribution analysis demonstrates that larger model variants are able to accurately predict rare words based on both an effectively longer context window size as well as stronger local associations compared to smaller model variants. Taken together, these results indicate that Transformer-based language models’ surprisal estimates diverge from human-like expectations due to the superhumanly complex associations they learn for predicting rare words.
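For readers unfamiliar with the central quantity, the sketch below shows how per-token surprisal is typically computed from a causal LM: surprisal of token t is -log2 P(t | preceding tokens). GPT-2 is used here only as a stand-in; the paper analyzes several model families and sizes, and aggregates subword surprisals to the word level before fitting reading times.

```python
# Illustrative surprisal computation with a causal LM (not the paper's pipeline).
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def token_surprisals(text: str) -> list[tuple[str, float]]:
    """Surprisal in bits for every token after the first."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)        # log P(token_t | tokens_<t)
    targets = ids[0, 1:]
    nats = -logprobs[torch.arange(targets.shape[0]), targets]   # -ln P
    bits = nats / math.log(2)                                    # convert to log base 2
    return [(tok.decode(t), s.item()) for t, s in zip(targets, bits)]

for token, s in token_surprisals("The old man the boat."):
    print(f"{token!r}\t{s:.2f} bits")
```

Rare words receive high surprisal unless the model has learned strong associations for them, which is why the paper's frequency-based analyses focus on the least frequent subset of words.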