Team Stanford ACMLab at SemEval 2022 Task 4: Textual Analysis of PCL Using Contextual Word Embeddings

Upamanyu Dass-Vattam, Spencer Wallace, Rohan Sikand, Zach Witzel, Jillian Tang


Abstract
We propose the use of a contextual embedding-based neural model on strictly textual inputs to detect the presence of patronizing or condescending language (PCL). We fine-tuned a pre-trained BERT model to detect whether or not a paragraph contained PCL (Subtask 1), and fine-tuned a second pre-trained BERT model to identify the linguistic techniques used to convey the PCL (Subtask 2). Results show that this approach is viable for binary classification of PCL, but breaks down when attempting to identify the specific PCL techniques. Our system placed 32nd out of 79 on Subtask 1 and 40th out of 49 on Subtask 2.
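The abstract does not give implementation details, so the following is only a minimal sketch of how the Subtask 1 setup could look, assuming the Hugging Face Transformers library, the bert-base-uncased checkpoint, and illustrative hyperparameters (all assumptions, not the authors' exact configuration). Subtask 2 would swap the two-class head for a multi-label head over the technique categories.

```python
# Hypothetical sketch of the binary PCL classifier (Subtask 1); checkpoint,
# learning rate, and example text are illustrative, not taken from the paper.
import torch
from transformers import BertTokenizerFast, BertForSequenceClassification

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

paragraphs = ["These poor families need our help to survive."]  # hypothetical input paragraph
labels = torch.tensor([1])                                       # 1 = contains PCL, 0 = no PCL

inputs = tokenizer(paragraphs, truncation=True, padding=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
outputs = model(**inputs, labels=labels)  # cross-entropy loss over the two classes
outputs.loss.backward()
optimizer.step()

# For Subtask 2, one could instead load the model with
# problem_type="multi_label_classification" and num_labels equal to the
# number of PCL technique categories, with one binary label per category.
```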
Anthology ID:
2022.semeval-1.56
Volume:
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
Month:
July
Year:
2022
Address:
Seattle, United States
Venue:
SemEval
SIGs:
SIGLEX | SIGSEM
Publisher:
Association for Computational Linguistics
Pages:
418–420
URL:
https://aclanthology.org/2022.semeval-1.56
DOI:
10.18653/v1/2022.semeval-1.56
Cite (ACL):
Upamanyu Dass-Vattam, Spencer Wallace, Rohan Sikand, Zach Witzel, and Jillian Tang. 2022. Team Stanford ACMLab at SemEval 2022 Task 4: Textual Analysis of PCL Using Contextual Word Embeddings. In Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022), pages 418–420, Seattle, United States. Association for Computational Linguistics.
Cite (Informal):
Team Stanford ACMLab at SemEval 2022 Task 4: Textual Analysis of PCL Using Contextual Word Embeddings (Dass-Vattam et al., SemEval 2022)
PDF:
https://preview.aclanthology.org/auto-file-uploads/2022.semeval-1.56.pdf