NLP-IIS@UT at SemEval-2021 Task 4: Machine Reading Comprehension using the Long Document Transformer

Hossein Basafa, Sajad Movahedi, Ali Ebrahimi, Azadeh Shakery, Heshaam Faili


Abstract
This paper presents a technical report of our submission to Task 4 of SemEval-2021: Reading Comprehension of Abstract Meaning. In this task, the goal is to predict the correct answer to a question given a context. The contexts are usually lengthy and require the model to have a large receptive field, so common contextualized language models such as BERT lose representation quality and performance because of their limited input length. To tackle this problem, we used the Longformer model to better process long sequences. Furthermore, we utilized the method proposed for the WikiHop benchmark in the Longformer paper, which improved accuracy on the task data from 23.01% and 22.95% (the baselines for Subtasks 1 and 2, respectively) to 70.30% and 64.38%.
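As an illustration of the approach described in the abstract, the sketch below shows how Longformer can be applied to a ReCAM-style multiple-choice question with the Hugging Face transformers library. This is not the authors' code: the model name, the example inputs, and the per-option pairing of question and context are assumptions made here for illustration, and the explicit global-attention mask only approximates the WikiHop-style setup from the Longformer paper that the authors adapted.

import torch
from transformers import LongformerTokenizer, LongformerForMultipleChoice

MODEL = "allenai/longformer-base-4096"
tokenizer = LongformerTokenizer.from_pretrained(MODEL)
model = LongformerForMultipleChoice.from_pretrained(MODEL)
model.eval()

# Hypothetical example: a long context, a cloze-style question with an
# "@placeholder" slot, and five candidate answers (as in the ReCAM task).
context = "A very long news article goes here ..."
question = "The author argues that the @placeholder of the plan was unclear."
options = ["purpose", "cost", "origin", "outcome", "timing"]

# Pair each filled-in question with the full context, one pair per option.
first = [question.replace("@placeholder", opt) for opt in options]
second = [context] * len(options)
enc = tokenizer(first, second, padding="longest", truncation="only_second",
                max_length=4096, return_tensors="pt")

# Put global attention on the question/candidate segment (tokens up to the
# first </s>), roughly mirroring the WikiHop-style setup used in the paper.
sep_id = tokenizer.sep_token_id
global_mask = torch.zeros_like(enc["input_ids"])
for i in range(enc["input_ids"].size(0)):
    first_sep = (enc["input_ids"][i] == sep_id).nonzero()[0].item()
    global_mask[i, : first_sep + 1] = 1

# Multiple-choice models expect tensors of shape (batch, num_choices, seq_len).
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}
inputs["global_attention_mask"] = global_mask.unsqueeze(0)

with torch.no_grad():
    logits = model(**inputs).logits        # shape: (1, num_choices)
print("predicted option:", options[logits.argmax(dim=-1).item()])

Note that the published system scores candidates jointly in a single long sequence following the Longformer WikiHop recipe; the per-option formulation above is a simplification chosen to keep the example self-contained.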
Anthology ID:
2021.semeval-1.23
Volume:
Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)
Month:
August
Year:
2021
Address:
Online
Editors:
Alexis Palmer, Nathan Schneider, Natalie Schluter, Guy Emerson, Aurelie Herbelot, Xiaodan Zhu
Venue:
SemEval
SIG:
SIGLEX
Publisher:
Association for Computational Linguistics
Pages:
205–210
URL:
https://aclanthology.org/2021.semeval-1.23
DOI:
10.18653/v1/2021.semeval-1.23
Cite (ACL):
Hossein Basafa, Sajad Movahedi, Ali Ebrahimi, Azadeh Shakery, and Heshaam Faili. 2021. NLP-IIS@UT at SemEval-2021 Task 4: Machine Reading Comprehension using the Long Document Transformer. In Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021), pages 205–210, Online. Association for Computational Linguistics.
Cite (Informal):
NLP-IIS@UT at SemEval-2021 Task 4: Machine Reading Comprehension using the Long Document Transformer (Basafa et al., SemEval 2021)
PDF:
https://preview.aclanthology.org/emnlp-22-attachments/2021.semeval-1.23.pdf
Data
WikiHop