Abstract
We utilize multi-task learning to improve argument mining in persuasive online discussions, in which both micro-level and macro-level argumentation must be taken into consideration. Our models learn to identify argument components and the relations between them simultaneously. We also tackle the low precision that arises from imbalanced relation data by experimenting with SMOTE and XGBoost. Our approaches improve over baselines that use the same pre-trained language model but handle the argument component task and the two relation tasks separately. Furthermore, our results suggest that the choice of tasks incorporated into multi-task learning matters, as using all relevant tasks does not always lead to the best performance.
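As a rough illustration (not the authors' implementation), the imbalance-handling idea mentioned in the abstract might look like the following sketch, which pairs SMOTE oversampling with an XGBoost classifier on hypothetical relation-pair features; the feature values and class sizes here are synthetic stand-ins.

```python
# Minimal sketch, assuming relation identification is framed as binary
# classification over argument-component pairs with a scarce positive class.
import numpy as np
from imblearn.over_sampling import SMOTE
from xgboost import XGBClassifier

rng = np.random.default_rng(0)

# Hypothetical pair features: most candidate pairs hold no relation (label 0),
# so the positive relation class (label 1) is heavily underrepresented.
X = rng.normal(size=(2000, 32))
y = np.concatenate([np.zeros(1900, dtype=int), np.ones(100, dtype=int)])

# SMOTE synthesizes new minority-class examples by interpolating between
# nearest neighbors, rebalancing the training distribution.
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)

# Train a gradient-boosted tree classifier on the rebalanced data.
clf = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
clf.fit(X_res, y_res)
```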
- Anthology ID:
- 2021.argmining-1.15
- Volume:
- Proceedings of the 8th Workshop on Argument Mining
- Month:
- November
- Year:
- 2021
- Address:
- Punta Cana, Dominican Republic
- Venue:
- ArgMining
- Publisher:
- Association for Computational Linguistics
- Pages:
- 148–153
- URL:
- https://aclanthology.org/2021.argmining-1.15
- DOI:
- 10.18653/v1/2021.argmining-1.15
- Cite (ACL):
- Nhat Tran and Diane Litman. 2021. Multi-task Learning in Argument Mining for Persuasive Online Discussions. In Proceedings of the 8th Workshop on Argument Mining, pages 148–153, Punta Cana, Dominican Republic. Association for Computational Linguistics.
- Cite (Informal):
- Multi-task Learning in Argument Mining for Persuasive Online Discussions (Tran & Litman, ArgMining 2021)
- PDF:
- https://aclanthology.org/2021.argmining-1.15.pdf