NLITrans at SemEval-2018 Task 12: Transfer of Semantic Knowledge for Argument Comprehension

Timothy Niven, Hung-Yu Kao


Abstract
The Argument Reasoning Comprehension Task is a difficult challenge requiring significant language understanding and complex reasoning over world knowledge. Given the small size of the dataset, we focus on transferring a sentence encoder to bootstrap more complex architectures. Our best model uses a pre-trained BiLSTM to encode input sentences, learns task-specific features for the argument and warrants, then performs independent argument-warrant matching. This model achieves a mean test set accuracy of 61.31%. Encoder transfer yields a significant gain for our best model over random initialization. Sharing parameters for independent warrant evaluation provides regularization and effectively doubles the size of the dataset. We demonstrate that this regularization comes from ignoring statistical correlations between warrant positions. We also report an experiment in which our best model matches warrants only to reasons, ignoring claims; performance remains competitive, suggesting that our model is not necessarily learning the intended task.
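The architecture described in the abstract (a transferred BiLSTM sentence encoder, task-specific projections, and independent scoring of the two warrants with shared parameters) can be sketched as follows. This is an illustrative reconstruction, not the authors' released code (see the linked IKMLab/arct repository for that): all layer names, dimensions, and the pooling/combination choices here are assumptions.

```python
import torch
import torch.nn as nn

class IndependentWarrantMatcher(nn.Module):
    """Hedged sketch of the described setup: a BiLSTM (which in the paper
    would be pre-trained and transferred, e.g. from NLI) encodes sentences;
    task-specific layers build argument and warrant features; a single
    shared scorer evaluates each warrant independently against the argument.
    Sizes and combination functions are illustrative assumptions."""

    def __init__(self, embed_dim=50, hidden_dim=64):
        super().__init__()
        # Stand-in for the transferred sentence encoder.
        self.encoder = nn.LSTM(embed_dim, hidden_dim,
                               batch_first=True, bidirectional=True)
        enc_dim = 2 * hidden_dim
        self.arg_proj = nn.Linear(enc_dim, enc_dim)      # argument features
        self.warrant_proj = nn.Linear(enc_dim, enc_dim)  # warrant features
        self.score = nn.Linear(enc_dim, 1)               # shared scorer

    def encode(self, x):
        # Max-pool encoder states over time for a fixed-size sentence vector.
        out, _ = self.encoder(x)
        return out.max(dim=1).values

    def forward(self, claim, reason, warrant0, warrant1):
        # Argument representation from claim + reason (illustrative choice).
        arg = self.arg_proj(self.encode(claim) + self.encode(reason))
        # The same parameters score both warrants, so each training instance
        # contributes two matching examples (the "doubled dataset" effect),
        # and warrant-position correlations are never observed by the scorer.
        s0 = self.score(torch.tanh(arg * self.warrant_proj(self.encode(warrant0))))
        s1 = self.score(torch.tanh(arg * self.warrant_proj(self.encode(warrant1))))
        return torch.cat([s0, s1], dim=1)  # one logit per warrant

model = IndependentWarrantMatcher()
B, T, E = 2, 7, 50  # batch, tokens, embedding dim (dummy inputs)
logits = model(torch.randn(B, T, E), torch.randn(B, T, E),
               torch.randn(B, T, E), torch.randn(B, T, E))
print(logits.shape)
```

The reason-only ablation mentioned in the abstract would correspond to dropping the claim from the argument representation; that it stays competitive is what suggests the model may exploit cues other than the intended claim-warrant reasoning.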
Anthology ID:
S18-1185
Volume:
Proceedings of the 12th International Workshop on Semantic Evaluation
Month:
June
Year:
2018
Address:
New Orleans, Louisiana
Editors:
Marianna Apidianaki, Saif M. Mohammad, Jonathan May, Ekaterina Shutova, Steven Bethard, Marine Carpuat
Venue:
SemEval
SIG:
SIGLEX
Publisher:
Association for Computational Linguistics
Pages:
1099–1103
URL:
https://aclanthology.org/S18-1185
DOI:
10.18653/v1/S18-1185
Cite (ACL):
Timothy Niven and Hung-Yu Kao. 2018. NLITrans at SemEval-2018 Task 12: Transfer of Semantic Knowledge for Argument Comprehension. In Proceedings of the 12th International Workshop on Semantic Evaluation, pages 1099–1103, New Orleans, Louisiana. Association for Computational Linguistics.
Cite (Informal):
NLITrans at SemEval-2018 Task 12: Transfer of Semantic Knowledge for Argument Comprehension (Niven & Kao, SemEval 2018)
PDF:
https://preview.aclanthology.org/improve-issue-templates/S18-1185.pdf
Code:
IKMLab/arct