Abstract
This paper describes our system for SemEval-2021 Task 4: Reading Comprehension of Abstract Meaning, which placed 1st on subtask 1 and 2nd on subtask 2 on the leaderboard. We propose an ensemble of ELECTRA-based models with task-adaptive pretraining and a multi-head attention multiple-choice classifier on top of the pre-trained model. The main contributions of our system are 1) revealing the performance discrepancy of different transformer-based pretraining models on the downstream task, and 2) presenting an efficient method to generate large task-adaptive corpora for pretraining. We also investigated several pretraining strategies and contrastive learning objectives. Our system achieves a test accuracy of 95.11 and 94.89 on subtask 1 and subtask 2, respectively.
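As a rough illustration of the classifier described in the abstract, the sketch below pools ELECTRA token representations with multi-head attention and scores each answer option. This is a minimal sketch assuming a HuggingFace-style ELECTRA encoder; the checkpoint name, head dimensions, and pooling details are assumptions for illustration, not the authors' exact implementation.

```python
# Illustrative multiple-choice head with multi-head attention pooling
# over ELECTRA outputs. Hyperparameters and pooling scheme are assumed,
# not taken from the paper.
import torch
import torch.nn as nn
from transformers import ElectraModel

class MultiHeadAttnChoiceClassifier(nn.Module):
    def __init__(self, model_name="google/electra-large-discriminator", num_heads=8):
        super().__init__()
        self.encoder = ElectraModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        # Learnable query vector that attends over the token representations
        # of each (passage, question, option) sequence.
        self.query = nn.Parameter(torch.randn(1, 1, hidden))
        self.attn = nn.MultiheadAttention(hidden, num_heads, batch_first=True)
        self.scorer = nn.Linear(hidden, 1)

    def forward(self, input_ids, attention_mask):
        # input_ids: (batch, num_choices, seq_len) -- one sequence per option.
        batch, num_choices, seq_len = input_ids.shape
        flat_ids = input_ids.view(-1, seq_len)
        flat_mask = attention_mask.view(-1, seq_len)
        hidden_states = self.encoder(flat_ids, attention_mask=flat_mask).last_hidden_state
        query = self.query.expand(flat_ids.size(0), -1, -1)
        # Attention-pool each sequence into a single vector, ignoring padding.
        pooled, _ = self.attn(query, hidden_states, hidden_states,
                              key_padding_mask=(flat_mask == 0))
        # One score per option, reshaped to (batch, num_choices).
        return self.scorer(pooled.squeeze(1)).view(batch, num_choices)
```

Training would pair these logits with `nn.CrossEntropyLoss` against the gold option index, and the ensemble mentioned in the abstract could combine such models, e.g. by averaging their logits.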
- Anthology ID: 2021.semeval-1.5
- Volume: Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)
- Month: August
- Year: 2021
- Address: Online
- Venue: SemEval
- SIG: SIGLEX
- Publisher: Association for Computational Linguistics
- Pages: 51–58
- URL: https://aclanthology.org/2021.semeval-1.5
- DOI: 10.18653/v1/2021.semeval-1.5
- Cite (ACL): Jing Zhang, Yimeng Zhuang, and Yinpei Su. 2021. TA-MAMC at SemEval-2021 Task 4: Task-adaptive Pretraining and Multi-head Attention for Abstract Meaning Reading Comprehension. In Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021), pages 51–58, Online. Association for Computational Linguistics.
- Cite (Informal): TA-MAMC at SemEval-2021 Task 4: Task-adaptive Pretraining and Multi-head Attention for Abstract Meaning Reading Comprehension (Zhang et al., SemEval 2021)
- PDF: https://preview.aclanthology.org/paclic-22-ingestion/2021.semeval-1.5.pdf
- Data: GLUE, MultiNLI, NEWSROOM, ReCAM