2025
Social Bias in Popular Question-Answering Benchmarks
Angelie Kraft
|
Judith Simon
|
Sonja Schimmler
Proceedings of the 14th International Joint Conference on Natural Language Processing and the 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics
Question-answering (QA) and reading comprehension (RC) benchmarks are commonly used for assessing the capabilities of large language models (LLMs) to retrieve and reproduce knowledge. However, we demonstrate that popular QA and RC benchmarks do not cover questions about different demographics or regions in a representative way. We perform a content analysis of 30 benchmark papers and a quantitative analysis of 20 respective benchmark datasets to learn (1) who is involved in the benchmark creation, (2) whether the benchmarks exhibit social bias, or whether this is addressed or prevented, and (3) whether the demographics of the creators and annotators correspond to particular biases in the content. Most benchmark papers analyzed provide insufficient information about those involved in benchmark creation, particularly the annotators. Notably, just one (WinoGrande) explicitly reports measures taken to address social representation issues. Moreover, the data analysis revealed gender, religion, and geographic biases across a wide range of encyclopedic, commonsense, and scholarly benchmarks. Our work adds to the mounting criticism of AI evaluation practices and highlights biased benchmarks as a potential source of LLM bias, as they incentivize biased inference heuristics.
Proceedings of the Fifth Workshop on Scholarly Document Processing (SDP 2025)
Tirthankar Ghosal
|
Philipp Mayr
|
Amanpreet Singh
|
Aakanksha Naik
|
Georg Rehm
|
Dayne Freitag
|
Dan Li
|
Sonja Schimmler
|
Anita De Waard
Proceedings of the Fifth Workshop on Scholarly Document Processing (SDP 2025)
Overview of the Fifth Workshop on Scholarly Document Processing
Tirthankar Ghosal
|
Philipp Mayr
|
Anita De Waard
|
Aakanksha Naik
|
Amanpreet Singh
|
Dayne Freitag
|
Georg Rehm
|
Sonja Schimmler
|
Dan Li
Proceedings of the Fifth Workshop on Scholarly Document Processing (SDP 2025)
The workshop on Scholarly Document Processing (SDP) started in 2020 to accelerate research, inform policy, and educate the public on natural language processing for scientific text. The fifth iteration of the workshop, SDP 2025, was held at the 63rd Annual Meeting of the Association for Computational Linguistics (ACL 2025) in Vienna as a hybrid event. The workshop saw a great increase in interest, with 26 submissions, of which 11 were accepted for the research track. The program consisted of a research track, invited talks, and four shared tasks: (1) SciHal25: Hallucination Detection for Scientific Content, (2) SciVQA: Scientific Visual Question Answering, (3) ClimateCheck: Scientific Factchecking of Social Media Posts on Climate Change, and (4) Software Mention Detection in Scholarly Publications (SOMD 25). In addition to the four shared task overview papers, 18 shared task reports were accepted. The program was geared towards NLP, information extraction, information retrieval, and data mining for scholarly documents, with an emphasis on identifying and providing solutions to open challenges.