@inproceedings{sorensen-choi-2025-opt,
    title = "Opt-{ICL} at {L}e{W}i{D}i-2025: Maximizing In-Context Signal from Rater Examples via Meta-Learning",
    author = "Sorensen, Taylor  and
      Choi, Yejin",
    editor = "Abercrombie, Gavin  and
      Basile, Valerio  and
      Frenda, Simona  and
      Tonelli, Sara  and
      Dudy, Shiran",
    booktitle = "Proceedings of the The 4th Workshop on Perspectivist Approaches to NLP",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2025.nlperspectives-1.20/",
    pages = "228--241",
    ISBN = "979-8-89176-350-0",
    abstract = "Many natural language processing (NLP) tasks involve subjectivity, ambiguity, or legitimate disagreement between annotators. In this paper, we outline our system for modeling human variation. Our system leverages language models' (LLMs) in-context learning abilities, along with a two-step meta-learning training procedure for 1) post-training on many datasets requiring in-context learning and 2) specializing the model via in-context meta-learning to the particular data distribution of interest. We also evaluate the performance of our system submission to the Learning With Disagreements (LeWiDi) competition, where it was the overall winner on both tasks. Additionally, we perform an ablation study to measure the importance of each system component. We find that including rater examples in-context is crucial for our system{'}s performance, dataset-specific fine-tuning is helpful on the larger datasets, post-training on other in-context datasets is helpful on one of the competition datasets, and that performance improves with model scale."
}