Team Enigma at ArgMining-EMNLP 2021: Leveraging Pre-trained Language Models for Key Point Matching

Manav Kapadnis, Sohan Patnaik, Siba Panigrahi, Varun Madhavan, Abhilash Nandy


Abstract
We present the system description for our submission to the Key Point Analysis Shared Task at ArgMining 2021. Track 1 of the shared task requires participants to develop methods to predict the match score between each pair of arguments and key points, given that they belong to the same topic and stance. We leveraged existing state-of-the-art pre-trained language models and incorporated additional data and features extracted from the inputs (topics, key points, and arguments) to improve performance. In the evaluation phase, we achieved mAP strict and mAP relaxed scores of 0.872 and 0.966, respectively, securing 5th place on the leaderboard. In the post-evaluation phase, we achieved mAP strict and mAP relaxed scores of 0.921 and 0.982, respectively.
Anthology ID:
2021.argmining-1.21
Volume:
Proceedings of the 8th Workshop on Argument Mining
Month:
November
Year:
2021
Address:
Punta Cana, Dominican Republic
Venue:
ArgMining
Publisher:
Association for Computational Linguistics
Pages:
200–205
URL:
https://aclanthology.org/2021.argmining-1.21
DOI:
10.18653/v1/2021.argmining-1.21
Cite (ACL):
Manav Kapadnis, Sohan Patnaik, Siba Panigrahi, Varun Madhavan, and Abhilash Nandy. 2021. Team Enigma at ArgMining-EMNLP 2021: Leveraging Pre-trained Language Models for Key Point Matching. In Proceedings of the 8th Workshop on Argument Mining, pages 200–205, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Team Enigma at ArgMining-EMNLP 2021: Leveraging Pre-trained Language Models for Key Point Matching (Kapadnis et al., ArgMining 2021)
PDF:
https://preview.aclanthology.org/auto-file-uploads/2021.argmining-1.21.pdf
Software:
2021.argmining-1.21.Software.zip
Code:
manavkapadnis/enigma_argmining