@inproceedings{jiang-etal-2025-comparison,
    title = "Comparison of {AI} and Human Scoring on A Visual Arts Assessment",
    author = "Jiang, Ning  and
      Huang, Yue  and
      Chen, Jie",
    editor = "Wilson, Joshua  and
      Ormerod, Christopher  and
      Beiting Parrish, Magdalen",
    booktitle = "Proceedings of the Artificial Intelligence in Measurement and Education Conference (AIME-Con): Works in Progress",
    month = oct,
    year = "2025",
    address = "Wyndham Grand Pittsburgh, Downtown, Pittsburgh, Pennsylvania, United States",
    publisher = "National Council on Measurement in Education (NCME)",
    url = "https://preview.aclanthology.org/ingest-emnlp/2025.aimecon-wip.18/",
    pages = "147--154",
    ISBN = "979-8-218-84229-1",
    abstract = "This study examines reliability and comparability of Generative AI scores versus human ratings on two performance tasks{---}text-based and drawing-based{---}in a fourth-grade visual arts assessment. Results show GPT-4 is consistent, aligned with humans but more lenient, and its agreement with humans is slightly lower than that between human raters."
}