Analysis of LLM as a grammatical feature tagger for African American English
Rahul Porwal, Alice Rozet, Jotsna Gowda, Pryce Houck, Kevin Tang, Sarah Moeller
Abstract
African American English (AAE) presents unique challenges in natural language processing (NLP). This research systematically compares the performance of available rule-based, transformer-based, and large language models (LLMs) capable of identifying key grammatical features of AAE, namely Habitual Be and Multiple Negation. These features were selected for their distinct grammatical complexity and frequency of occurrence. The evaluation involved sentence-level binary classification tasks using both zero-shot and few-shot strategies. The analysis reveals that while LLMs show promise compared to the baseline, they are influenced by biases such as recency and by unrelated features of the text, such as formality. This study highlights the necessity for improved model training and architectural adjustments to better accommodate AAE's unique linguistic characteristics. Data and code are available.
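To make the sentence-level binary classification setup concrete, the sketch below builds zero-shot and few-shot prompts for Habitual Be detection. It is a minimal illustration only: the prompt wording, the demonstration sentences, and the `build_few_shot_prompt` helper are assumptions for exposition, not the authors' actual prompts or released code.

```python
# Illustrative sketch (not the authors' code): sentence-level binary
# classification of Habitual Be with zero-shot and few-shot prompts.

ZERO_SHOT_TEMPLATE = (
    "Does the following sentence contain a habitual 'be' "
    "(an uninflected 'be' marking a recurring action or state)? "
    "Answer Yes or No.\n\n"
    "Sentence: {sentence}\n"
    "Answer:"
)

# Hypothetical labeled demonstrations for the few-shot condition.
FEW_SHOT_EXAMPLES = [
    ("She be working late on Fridays.", "Yes"),  # recurring action
    ("She is working late tonight.", "No"),      # one-time event
]


def build_few_shot_prompt(sentence: str) -> str:
    """Prepend labeled demonstrations before the target sentence."""
    demos = "\n\n".join(
        f"Sentence: {s}\nAnswer: {label}" for s, label in FEW_SHOT_EXAMPLES
    )
    return demos + "\n\n" + ZERO_SHOT_TEMPLATE.format(sentence=sentence)


if __name__ == "__main__":
    target = "They be playing ball after school."
    print(ZERO_SHOT_TEMPLATE.format(sentence=target))
    print("---")
    print(build_few_shot_prompt(target))
    # In an evaluation of this kind, each prompt would be sent to an LLM
    # and the Yes/No answer compared against a gold label per sentence.
```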
- Anthology ID: 2025.findings-naacl.431
- Volume: Findings of the Association for Computational Linguistics: NAACL 2025
- Month: April
- Year: 2025
- Address: Albuquerque, New Mexico
- Editors: Luis Chiruzzo, Alan Ritter, Lu Wang
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 7744–7756
- URL: https://preview.aclanthology.org/fix-sig-urls/2025.findings-naacl.431/
- Cite (ACL): Rahul Porwal, Alice Rozet, Jotsna Gowda, Pryce Houck, Kevin Tang, and Sarah Moeller. 2025. Analysis of LLM as a grammatical feature tagger for African American English. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 7744–7756, Albuquerque, New Mexico. Association for Computational Linguistics.
- Cite (Informal): Analysis of LLM as a grammatical feature tagger for African American English (Porwal et al., Findings 2025)
- PDF: https://preview.aclanthology.org/fix-sig-urls/2025.findings-naacl.431.pdf