@inproceedings{choi-etal-2025-pre,
    title = "Pre-trained Transformer Models for Standard-to-Standard Alignment Study",
    author = "Choi, Hye-Jeong  and
      Butterfuss, Reese  and
      Fan, Meng",
    editor = "Wilson, Joshua  and
      Ormerod, Christopher  and
      Beiting Parrish, Magdalen",
    booktitle = "Proceedings of the Artificial Intelligence in Measurement and Education Conference (AIME-Con): Full Papers",
    month = oct,
    year = "2025",
    address = "Wyndham Grand Pittsburgh, Downtown, Pittsburgh, Pennsylvania, United States",
    publisher = "National Council on Measurement in Education (NCME)",
    url = "https://preview.aclanthology.org/ingest-emnlp/2025.aimecon-main.33/",
    pages = "306--311",
    ISBN = "979-8-218-84228-4",
    abstract = "The current study evaluated the accuracy of five pre-trained large language models (LLMs) in matching human judgment in a standard-to-standard alignment study. Results demonstrated comparable performance across LLMs despite differences in scale and computational demands. Additionally, incorporating domain labels as auxiliary information did not enhance LLM performance. These findings provide initial evidence for the viability of open-source LLMs to facilitate alignment studies and offer insights into the utility of auxiliary information."
}