The Case for Scalable, Data-Driven Theory: A Paradigm for Scientific Progress in NLP

Julian Michael


Abstract
I propose a paradigm for scientific progress in NLP centered around developing scalable, data-driven theories of linguistic structure. The idea is to collect data in tightly scoped, carefully defined ways which allow for exhaustive annotation of behavioral phenomena of interest, and then use machine learning to construct explanatory theories of these phenomena which can form building blocks for intelligible AI systems. After laying some conceptual groundwork, I describe several investigations into data-driven theories of shallow semantic structure using Question-Answer driven Semantic Role Labeling (QA-SRL), a schema for annotating verbal predicate-argument relations using highly constrained question-answer pairs. While this only scratches the surface of the complex language behaviors of interest in AI, I outline principles for data collection and theoretical modeling which can inform future scientific progress. This note summarizes and draws heavily on my PhD thesis.
Anthology ID: 2023.bigpicture-1.4
Volume: Proceedings of the Big Picture Workshop
Month: December
Year: 2023
Address: Singapore
Editors: Yanai Elazar, Allyson Ettinger, Nora Kassner, Sebastian Ruder, Noah A. Smith
Venue: BigPicture
Publisher: Association for Computational Linguistics
Pages: 40–52
URL: https://aclanthology.org/2023.bigpicture-1.4
DOI: 10.18653/v1/2023.bigpicture-1.4
Cite (ACL): Julian Michael. 2023. The Case for Scalable, Data-Driven Theory: A Paradigm for Scientific Progress in NLP. In Proceedings of the Big Picture Workshop, pages 40–52, Singapore. Association for Computational Linguistics.
Cite (Informal): The Case for Scalable, Data-Driven Theory: A Paradigm for Scientific Progress in NLP (Michael, BigPicture 2023)
PDF: https://preview.aclanthology.org/emnlp-22-attachments/2023.bigpicture-1.4.pdf