Abstract
This paper describes our system for Task 4 of SemEval-2021: Reading Comprehension of Abstract Meaning (ReCAM). We participated in all subtasks, where the main goal was to predict an abstract word missing from a statement. We fine-tuned the pre-trained masked language models BERT and ALBERT and used an ensemble of the two as our submitted system for Subtask 1 (ReCAM-Imperceptibility) and Subtask 2 (ReCAM-Nonspecificity). For Subtask 3 (ReCAM-Intersection), we submitted the ALBERT model alone, as it gave the best results. Among the multiple approaches we tried, the Masked Language Modeling (MLM)-based approach worked best.
- Anthology ID:
- 2021.semeval-1.19
- Volume:
- Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)
- Month:
- August
- Year:
- 2021
- Address:
- Online
- Editors:
- Alexis Palmer, Nathan Schneider, Natalie Schluter, Guy Emerson, Aurelie Herbelot, Xiaodan Zhu
- Venue:
- SemEval
- SIG:
- SIGLEX
- Publisher:
- Association for Computational Linguistics
- Note:
- Pages:
- 175–182
- Language:
- URL:
- https://aclanthology.org/2021.semeval-1.19
- DOI:
- 10.18653/v1/2021.semeval-1.19
- Cite (ACL):
- Abhishek Mittal and Ashutosh Modi. 2021. ReCAM@IITK at SemEval-2021 Task 4: BERT and ALBERT based Ensemble for Abstract Word Prediction. In Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021), pages 175–182, Online. Association for Computational Linguistics.
- Cite (Informal):
- ReCAM@IITK at SemEval-2021 Task 4: BERT and ALBERT based Ensemble for Abstract Word Prediction (Mittal & Modi, SemEval 2021)
- PDF:
- https://preview.aclanthology.org/aacl-23-doi-ingestion/2021.semeval-1.19.pdf
- Code
- amittal151/SemEval-2021-Task4_models
- Data
- ReCAM
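The abstract describes ensembling BERT and ALBERT scores for the masked-word options. A minimal sketch of that combination step, assuming each model has already produced a probability per answer option (the paper does not specify the combination rule, so simple averaging is used here; `ensemble_predict` is a hypothetical helper, not code from the authors' repository):

```python
def ensemble_predict(model_probs):
    """Combine per-option probabilities from several MLM scorers.

    model_probs: list of lists, one inner list per model (e.g. BERT, ALBERT),
    each holding one probability per candidate answer option.
    Returns the index of the option with the highest averaged probability.
    """
    n_options = len(model_probs[0])
    n_models = len(model_probs)
    # Average each option's probability across models.
    avg = [sum(probs[i] for probs in model_probs) / n_models
           for i in range(n_options)]
    # Pick the option with the highest ensemble score.
    return max(range(n_options), key=avg.__getitem__)


# Example: BERT slightly prefers option 1, ALBERT agrees -> ensemble picks 1.
bert_probs = [0.10, 0.70, 0.20, 0.00, 0.00]
albert_probs = [0.25, 0.55, 0.10, 0.05, 0.05]
print(ensemble_predict([bert_probs, albert_probs]))  # -> 1
```

In practice each inner list would come from running the masked-LM head over the `[MASK]` position and reading off the probabilities of the five candidate words.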