Multi-Task Learning for Argumentation Mining in Low-Resource Settings
Claudia Schulz, Steffen Eger, Johannes Daxenberger, Tobias Kahse, Iryna Gurevych
Abstract
We investigate whether and where multi-task learning (MTL) can improve performance on NLP problems related to argumentation mining (AM), in particular argument component identification. Our results show that MTL performs particularly well (and better than single-task learning) when little training data is available for the main task, a common scenario in AM. Our findings challenge previous assumptions that conceptualizations across AM datasets are divergent and that MTL is difficult for semantic or higher-level tasks.
- Anthology ID: N18-2006
- Volume: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)
- Month: June
- Year: 2018
- Address: New Orleans, Louisiana
- Editors: Marilyn Walker, Heng Ji, Amanda Stent
- Venue: NAACL
- Publisher: Association for Computational Linguistics
- Pages: 35–41
- URL: https://aclanthology.org/N18-2006
- DOI: 10.18653/v1/N18-2006
- Cite (ACL): Claudia Schulz, Steffen Eger, Johannes Daxenberger, Tobias Kahse, and Iryna Gurevych. 2018. Multi-Task Learning for Argumentation Mining in Low-Resource Settings. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 35–41, New Orleans, Louisiana. Association for Computational Linguistics.
- Cite (Informal): Multi-Task Learning for Argumentation Mining in Low-Resource Settings (Schulz et al., NAACL 2018)
- PDF: https://preview.aclanthology.org/ingest-2024-clasp/N18-2006.pdf
- Code: UKPLab/naacl18-multitask_argument_mining
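The paper's MTL models are implemented in the repository above. As a rough illustration of the general idea behind multi-task sequence labeling with hard parameter sharing (a shared encoder feeding one classification head per task), the sketch below may help; all names, dimensions, and task label sets here are illustrative assumptions, not the authors' actual architecture.

```python
import numpy as np

# Minimal sketch of hard parameter sharing for multi-task sequence labeling.
# A single shared encoder produces token representations; each task has its
# own softmax head. Sizes and task names below are illustrative assumptions.

rng = np.random.default_rng(0)

EMB_DIM = 8                                  # token embedding size (assumed)
HIDDEN = 16                                  # shared encoder width (assumed)
TASKS = {"main_am": 5, "aux_chunking": 3}    # label-set size per task (assumed)

# Shared parameters: one encoder used by every task.
W_shared = rng.normal(scale=0.1, size=(EMB_DIM, HIDDEN))
# Task-specific parameters: one output head per task.
heads = {t: rng.normal(scale=0.1, size=(HIDDEN, n)) for t, n in TASKS.items()}

def forward(tokens: np.ndarray, task: str) -> np.ndarray:
    """Tag a token sequence for one task: shared encoding, task-specific head."""
    h = np.tanh(tokens @ W_shared)            # shared representation
    logits = h @ heads[task]                  # task-specific scores
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)  # per-token label distribution

sentence = rng.normal(size=(6, EMB_DIM))      # 6 tokens, already embedded
probs_main = forward(sentence, "main_am")     # shape (6, 5)
probs_aux = forward(sentence, "aux_chunking") # shape (6, 3)
```

During training, batches from the main and auxiliary tasks would update `W_shared` jointly while each head is updated only by its own task; this sharing is what lets auxiliary data help when main-task training data is scarce.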