David Dobolyi
2025
Textagon: Boosting Language Models with Theory-guided Parallel Representations
John P. Lalor | Ruiyang Qin | David Dobolyi | Ahmed Abbasi
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)
Pretrained language models have significantly advanced the state of the art in generating distributed representations of text. However, they do not account for the wide variety of available expert-generated language resources and lexicons that explicitly encode linguistic/domain knowledge. Such lexicons can be paired with learned embeddings to further enhance NLP prediction and linguistic inquiry. In this work we present Textagon, a Python package for generating parallel representations of text based on predefined lexicons and for selecting the representations that provide the most information. We discuss the motivation behind the software and its implementation, and present two case studies demonstrating its operational utility.
2021
Constructing a Psychometric Testbed for Fair Natural Language Processing
Ahmed Abbasi | David Dobolyi | John P. Lalor | Richard G. Netemeyer | Kendall Smith | Yi Yang
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Psychometric measures of ability, attitudes, perceptions, and beliefs are crucial for understanding user behavior in various contexts including health, security, e-commerce, and finance. Traditionally, psychometric dimensions have been measured and collected using survey-based methods. Inferring such constructs from user-generated text could allow timely, unobtrusive collection and analysis. In this paper we describe our efforts to construct a corpus for psychometric natural language processing (NLP) related to important dimensions such as trust, anxiety, numeracy, and literacy in the health domain. We discuss our multi-step process to align user text with their survey-based response items and provide an overview of the resulting testbed, which encompasses survey-based psychometric measures and accompanying user-generated text from 8,502 respondents. Our testbed also encompasses self-reported demographic information, including race, sex, age, income, and education, thereby affording opportunities for measuring bias and benchmarking the fairness of text classification methods. We report preliminary results on the use of the text to predict/categorize users' survey response labels, and on the fairness of these models. We also discuss the important implications of our work and the resulting testbed for future NLP research on psychometrics and fairness.
Co-authors
- Ahmed Abbasi 2
- John P. Lalor 2
- Richard G. Netemeyer 1
- Ruiyang Qin 1
- Kendall Smith 1
- Yi Yang 1