@inproceedings{haley-2024-unreasonable,
    title = "The unreasonable effectiveness of large language models for low-resource clause-level morphology: In-context generalization or prior exposure?",
    author = "Haley, Coleman",
    editor = "Mager, Manuel  and
      Ebrahimi, Abteen  and
      Rijhwani, Shruti  and
      Oncevay, Arturo  and
      Chiruzzo, Luis  and
      Pugh, Robert  and
      von der Wense, Katharina",
    booktitle = "Proceedings of the 4th Workshop on Natural Language Processing for Indigenous Languages of the Americas (AmericasNLP 2024)",
    month = jun,
    year = "2024",
    address = "Mexico City, Mexico",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2024.americasnlp-1.20/",
    doi = "10.18653/v1/2024.americasnlp-1.20",
    pages = "174--178",
    abstract = "This paper describes the submission of Team ``Giving it a Shot'' to the AmericasNLP 2024 Shared Task on Creation of Educational Materials for Indigenous Languages. We use a simple few-shot prompting approach with several state of the art large language models, achieving competitive performance on the shared task, with our best system placing third overall. We perform a preliminary analysis to determine to what degree the performance of our model is due to prior exposure to the task languages, finding that generally our performance is better explained as being derived from in-context learning capabilities."
}