Tanin Zeraati
2023
Harnessing Dataset Cartography for Improved Compositional Generalization in Transformers
Osman İnce | Tanin Zeraati | Semih Yagcioglu | Yadollah Yaghoobzadeh | Erkut Erdem | Aykut Erdem
Findings of the Association for Computational Linguistics: EMNLP 2023
Neural networks have revolutionized language modeling and excelled in various downstream tasks. However, the extent to which these models achieve compositional generalization comparable to human cognitive abilities remains a topic of debate. While existing approaches in the field have mainly focused on novel architectures and alternative learning paradigms, we introduce a pioneering method harnessing the power of dataset cartography (Swayamdipta et al., 2020). By strategically identifying a subset of compositional generalization data using this approach, we achieve a remarkable improvement in model accuracy, yielding enhancements of up to 10% on CFQ and COGS datasets. Notably, our technique incorporates dataset cartography as a curriculum learning criterion, eliminating the need for hyperparameter tuning while consistently achieving superior performance. Our findings highlight the untapped potential of dataset cartography in unleashing the full capabilities of compositional generalization within Transformer models.
2022
UTNLP at SemEval-2022 Task 6: A Comparative Analysis of Sarcasm Detection Using Generative-based and Mutation-based Data Augmentation
Amirhossein Abaskohi | Arash Rasouli | Tanin Zeraati | Behnam Bahrak
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
Sarcasm refers to the use of words to mock, irritate, or amuse someone, and it is commonly used on social media. The metaphorical and creative nature of sarcasm presents a significant difficulty for sentiment analysis systems based on affective computing. This paper presents the methodology and results of our team, UTNLP, in the SemEval-2022 shared task 6 on sarcasm detection. We put different models and data augmentation approaches to the test and report on which ones work best. The tests begin with traditional machine learning models and progress to transformer-based and attention-based models. We employed data augmentation based on data mutation and data generation. Using RoBERTa and mutation-based data augmentation, our best approach achieved an F1-score of 0.38 in the competition’s evaluation phase. After the competition, we fixed our model’s flaws and achieved an F1-score of 0.414.