2025
Rolling the DICE on Idiomaticity: How LLMs Fail to Grasp Context
Maggie Mi | Aline Villavicencio | Nafise Sadat Moosavi
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Human processing of idioms heavily depends on interpreting the surrounding context in which they appear. While large language models (LLMs) have achieved impressive performance on idiomaticity detection benchmarks, this success may be driven by reasoning shortcuts present in existing datasets. To address this, we introduce a novel, controlled contrastive dataset (DICE) specifically designed to assess whether LLMs can effectively leverage context to disambiguate idiomatic meanings. Furthermore, we investigate the influence of collocational frequency and sentence probability—proxies for human processing known to affect idiom resolution—on model performance. Our results show that LLMs frequently fail to resolve idiomaticity when it depends on contextual understanding, performing better on sentences deemed more likely by the model. Additionally, idiom frequency influences performance but does not guarantee accurate interpretation. Our findings emphasize the limitations of current models in grasping contextual meaning and highlight the need for more context-sensitive evaluation.
2024
Sign of the Times: Evaluating the use of Large Language Models for Idiomaticity Detection
Dylan Phelps | Thomas Pickard | Maggie Mi | Edward Gow-Smith | Aline Villavicencio
Proceedings of the Joint Workshop on Multiword Expressions and Universal Dependencies (MWE-UD) @ LREC-COLING 2024
Despite the recent ubiquity of large language models and their high zero-shot prompted performance across a wide range of tasks, it is still not known how well they perform on tasks which require processing of potentially idiomatic language. In particular, how well do such models perform in comparison to encoder-only models fine-tuned specifically for idiomaticity tasks? In this work, we attempt to answer this question by looking at the performance of a range of LLMs (both local and software-as-a-service models) on three idiomaticity datasets: SemEval 2022 Task 2a, FLUTE, and MAGPIE. Overall, we find that whilst these models do give competitive performance, they do not match the results of fine-tuned task-specific models, even at the largest scales (e.g. for GPT-4). Nevertheless, we do see consistent performance improvements across model scale. Additionally, we investigate prompting approaches to improve performance, and discuss the practicalities of using LLMs for these tasks.
ShefCDTeam at SemEval-2024 Task 4: A Text-to-Text Model for Multi-Label Classification
Meredith Gibbons | Maggie Mi | Xingyi Song | Aline Villavicencio
Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)
This paper presents our findings for SemEval-2024 Task 4. We submit only to Subtask 1, applying the text-to-text framework using a FLAN-T5 model with a combination of parameter-efficient fine-tuning methods: low-rank adaptation and prompt tuning. Overall, we find that the system performs well in English, but performance is limited in Bulgarian, North Macedonian and Arabic. Our analysis raises interesting questions about the effects of label order and label names when applying the text-to-text framework.
2023
Mmi01 at The BabyLM Challenge: Linguistically Motivated Curriculum Learning for Pretraining in Low-Resource Settings
Maggie Mi
Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning