Classification of Semantic Paraphasias: Optimization of a Word Embedding Model

Katy McKinney-Bock, Steven Bedrick


Abstract
In clinical assessment of people with aphasia, impairment in the ability to recall and produce words for objects (anomia) is assessed using a confrontation naming task, in which the participant views a target stimulus and speaks a corresponding label. Vector space word embedding models have shown initial promise in assessing the semantic similarity of target-production pairs in order to automate scoring of this task; however, the resulting models are highly dependent on their training parameters. To select an optimal family of models, we fit a beta regression model to the distribution of performance metrics across a set of 2,880 grid-search models and evaluate the resulting first- and second-order effects to explore how parameterization affects model performance. Comparing against SimLex-999, we show that clinical data can be used in an evaluation task, with optimal parameter settings comparable to those of standard NLP evaluation datasets.
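The sketch below illustrates the kind of analysis the abstract describes: fitting a beta regression to a bounded performance metric collected over a hyperparameter grid search and inspecting first- and second-order effects. It is not the authors' code; the file name, column names (dim, window, min_count, score), and the use of statsmodels' BetaModel (available in statsmodels >= 0.12) are assumptions for illustration.

```python
# Minimal sketch: beta regression over grid-search results (illustrative only).
import pandas as pd
from statsmodels.othermod.betareg import BetaModel  # requires statsmodels >= 0.12

# Hypothetical results file: one row per trained embedding model, with
# hyperparameter columns and a performance metric "score" in [0, 1].
df = pd.read_csv("grid_search_results.csv")

# Beta regression requires the response to lie strictly inside (0, 1);
# apply the standard Smithson & Verkuilen (2006) shrinkage to avoid exact 0/1.
n = len(df)
df["score_adj"] = (df["score"] * (n - 1) + 0.5) / n

# First-order (main) effects plus second-order (pairwise interaction) effects
# of the embedding hyperparameters on the bounded performance metric.
formula = "score_adj ~ (dim + window + min_count) ** 2"
result = BetaModel.from_formula(formula, data=df).fit()
print(result.summary())
```

A beta likelihood is a natural choice here because similarity-based evaluation scores are bounded and often heteroskedastic, which an ordinary linear model on the raw metric would not capture.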
Anthology ID:
W19-2007
Volume:
Proceedings of the 3rd Workshop on Evaluating Vector Space Representations for NLP
Month:
June
Year:
2019
Address:
Minneapolis, USA
Venues:
NAACL | RepEval | WS
Publisher:
Association for Computational Linguistics
Pages:
52–62
URL:
https://aclanthology.org/W19-2007
DOI:
10.18653/v1/W19-2007
Cite (ACL):
Katy McKinney-Bock and Steven Bedrick. 2019. Classification of Semantic Paraphasias: Optimization of a Word Embedding Model. In Proceedings of the 3rd Workshop on Evaluating Vector Space Representations for NLP, pages 52–62, Minneapolis, USA. Association for Computational Linguistics.
Cite (Informal):
Classification of Semantic Paraphasias: Optimization of a Word Embedding Model (McKinney-Bock & Bedrick, 2019)
PDF:
https://preview.aclanthology.org/update-css-js/W19-2007.pdf