Alfredo Gomez


2025

SimBA: Simplifying Benchmark Analysis Using Performance Matrices Alone
Nishant Subramani | Alfredo Gomez | Mona T. Diab
Findings of the Association for Computational Linguistics: EMNLP 2025

Modern language models are evaluated on large benchmarks, which are difficult to make sense of, especially for model selection. Looking at the raw evaluation numbers themselves through a model-centric lens, we propose SimBA, a three-phase framework to Simplify Benchmark Analysis. The three phases of SimBA are: stalk, where we conduct dataset and model comparisons; prowl, where we discover a representative subset; and pounce, where we use the representative subset to predict performance on a held-out set of models. Applying SimBA to three popular LM benchmarks (HELM, MMLU, and BigBenchLite) reveals that, across all three benchmarks, datasets and models relate strongly to one another (stalk). We develop a representative set discovery algorithm which covers a benchmark using raw evaluation scores alone. Using our algorithm, we find that with 6.25% (1/16), 1.7% (1/58), and 28.4% (21/74) of the datasets for HELM, MMLU, and BigBenchLite respectively, we achieve coverage levels of at least 95% (prowl). Additionally, using just these representative subsets, we can both preserve model ranks and predict performance on a held-out set of models with near zero mean-squared error (pounce). Taken together, SimBA can help model developers improve efficiency during model training and help dataset creators validate whether a newly created dataset differs from existing datasets in a benchmark. Our code is open source, available at https://github.com/nishantsubramani/simba.
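To make the "representative subset" idea concrete, the following is a minimal sketch, not SimBA's published algorithm (see the linked repository for that), of a greedy, correlation-based subset search over a model-by-dataset performance matrix. The function name, the coverage criterion (mean best absolute Pearson correlation between each dataset and some selected dataset), and the toy data are all assumptions made for illustration.

import numpy as np

def greedy_representative_subset(perf, coverage_target=0.95):
    """Greedily select datasets whose score vectors 'cover' the rest.

    perf: (n_models, n_datasets) matrix of raw evaluation scores.
    Coverage here is an illustrative proxy: the mean, over datasets, of the
    best absolute Pearson correlation with any selected dataset.
    """
    n_models, n_datasets = perf.shape
    # Pairwise |correlation| between dataset score vectors (columns of perf).
    corr = np.abs(np.corrcoef(perf.T))
    selected = []
    covered = np.zeros(n_datasets)  # best |correlation| to the subset so far

    while len(selected) < n_datasets:
        # Add the dataset that raises average coverage the most.
        gains = [
            np.maximum(covered, corr[j]).mean() if j not in selected else -np.inf
            for j in range(n_datasets)
        ]
        best = int(np.argmax(gains))
        selected.append(best)
        covered = np.maximum(covered, corr[best])
        if covered.mean() >= coverage_target:
            break
    return selected, covered.mean()

# Toy usage: 10 models evaluated on 8 datasets.
rng = np.random.default_rng(0)
scores = rng.random((10, 8))
subset, coverage = greedy_representative_subset(scores)
print(subset, round(coverage, 3))

On real benchmark matrices the selected subset could then be used, as in the abstract, to rank models or to predict held-out models' full-benchmark scores; that prediction step is omitted here.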

2024

Proceedings of the Eighth Widening NLP Workshop
Atnafu Lambebo Tonja | Alfredo Gomez | Chanjun Park | Hellina Hailu Nigatu | Santosh T.Y.S.S | Tanvi Anand | Wiem Ben Rim
Proceedings of the Eighth Widening NLP Workshop

2023

Proceedings of the Seventh Widening NLP Workshop (WiNLP 2023)
Bonaventure F. P. Dossou | Isidora Tourni | Hatem Haddad | Shaily Bhatt | Fatemehsadat Mireshghallah | Sunipa Dev | Tanvi Anand | Weijia Xu | Atnafu Lambebo Tonja | Alfredo Gomez | Chanjun Park
Proceedings of the Seventh Widening NLP Workshop (WiNLP 2023)

2019

Reading KITTY: Pitch Range as an Indicator of Reading Skill
Alfredo Gomez | Alicia Ngo | Alessandra Otondo | Julie Medero
Proceedings of the 2019 Workshop on Widening NLP

While affective outcomes are generally positive for the use of eBooks and computer-based reading tutors in teaching children to read, learning outcomes are often poorer (Korat and Shamir, 2004). We describe the first iteration of Reading Kitty, an iOS application that uses NLP and speech processing to focus children’s time on close reading and prosody in oral reading, while maintaining an emphasis on creativity and artifact creation. We also share preliminary results demonstrating that pitch range can be used to automatically predict readers’ skill level.
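As an illustration of the pitch-range feature the abstract refers to, here is a minimal sketch assuming librosa's pyin pitch tracker; it is not the paper's implementation, and the percentile bounds and semitone conversion are arbitrary illustrative choices.

import numpy as np
import librosa

def pitch_range_semitones(audio_path):
    """Estimate pitch range (in semitones) over the voiced frames of a recording."""
    y, sr = librosa.load(audio_path, sr=None)
    f0, voiced_flag, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
    )
    voiced = f0[voiced_flag & ~np.isnan(f0)]
    if voiced.size == 0:
        return 0.0
    # Spread between the 5th and 95th F0 percentiles, expressed in semitones.
    lo, hi = np.percentile(voiced, [5, 95])
    return 12.0 * np.log2(hi / lo)

A scalar feature like this could then feed a simple classifier or regressor of reading skill; the paper's actual modeling setup may differ.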