Abstract
In relevance classification, we aim to judge whether utterances expressed on a topic are relevant to it. The usual method is to train a separate classifier for each topic; however, this easily leads to underfitting in a supervised learning model, since the annotated data available for any single topic can be insufficient. In this paper, we explore features shared across topics and propose a cross-topic relevance embedding aggregation methodology (CREAM) that expands the range of usable training data and transfers what has been learned from source topics to a target topic. Our experiments show that the proposed method captures these shared features from a small amount of annotated data and improves relevance classification performance over other baselines.
- Anthology ID:
- D19-5520
- Volume:
- Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019)
- Month:
- November
- Year:
- 2019
- Address:
- Hong Kong, China
- Venue:
- WNUT
- Publisher:
- Association for Computational Linguistics
- Pages:
- 147–152
- URL:
- https://aclanthology.org/D19-5520
- DOI:
- 10.18653/v1/D19-5520
- Cite (ACL):
- Jiawei Yong. 2019. A Cross-Topic Method for Supervised Relevance Classification. In Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019), pages 147–152, Hong Kong, China. Association for Computational Linguistics.
- Cite (Informal):
- A Cross-Topic Method for Supervised Relevance Classification (Yong, WNUT 2019)
- PDF:
- https://preview.aclanthology.org/starsem-semeval-split/D19-5520.pdf
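The cross-topic idea the abstract describes — pooling annotated data from several source topics so that a target topic with scarce labels can benefit — can be sketched as follows. This is a toy illustration under assumed data, using bag-of-words vectors and a nearest-centroid classifier as stand-ins; it is not the paper's CREAM method, and all topic names and utterances below are hypothetical.

```python
from collections import Counter

# Hypothetical annotated data: (utterance, relevant?) per source topic.
source_topics = {
    "vaccines": [("vaccine side effects study", 1), ("my cat is cute", 0)],
    "climate":  [("rising sea levels report", 1), ("great pizza recipe", 0)],
}
# The target topic has very few annotations of its own.
target_topic = [("new vaccine trial results", 1)]

# Cross-topic training set: pool all source topics together with the
# target data, instead of training on the target topic alone.
pooled = [ex for exs in source_topics.values() for ex in exs] + target_topic
vocab = sorted({w for text, _ in pooled for w in text.split()})

def embed(text):
    # Bag-of-words vector over the shared vocabulary (a crude stand-in
    # for richer embeddings aggregated across topics).
    counts = Counter(text.split())
    return [counts[w] for w in vocab]

def centroid(vectors):
    # Per-dimension mean of a list of equal-length vectors.
    return [sum(col) / len(vectors) for col in zip(*vectors)]

# One centroid per label (0 = irrelevant, 1 = relevant), computed over
# the pooled, cross-topic embeddings.
by_label = {y: [embed(t) for t, yy in pooled if yy == y] for y in (0, 1)}
centroids = {y: centroid(vs) for y, vs in by_label.items()}

def classify(text):
    # Predict the label whose centroid is nearest in squared distance.
    v = embed(text)
    return min(
        centroids,
        key=lambda y: sum((a - b) ** 2 for a, b in zip(v, centroids[y])),
    )
```

Because the relevant/irrelevant centroids are built from every topic's data, an utterance on the low-resource target topic can still be classified using evidence learned from the source topics.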