Abstract
In transductive learning, an unlabeled test set is used for model training. Although this setting deviates from the common assumption of a completely unseen test set, it is applicable in many real-world scenarios in which the texts to be processed are known in advance. Despite its practical advantages, however, transductive learning is underexplored in natural language processing. Here we conduct an empirical study of transductive learning for neural models and demonstrate its utility in syntactic and semantic tasks. Specifically, we fine-tune language models (LMs) on an unlabeled test set to obtain test-set-specific word representations. Through extensive experiments, we show that despite its simplicity, transductive LM fine-tuning consistently improves state-of-the-art neural models in both in-domain and out-of-domain settings.
- Anthology ID:
- D19-1379
- Volume:
- Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)
- Month:
- November
- Year:
- 2019
- Address:
- Hong Kong, China
- Venues:
- EMNLP | IJCNLP
- SIG:
- SIGDAT
- Publisher:
- Association for Computational Linguistics
- Pages:
- 3665–3671
- URL:
- https://aclanthology.org/D19-1379
- DOI:
- 10.18653/v1/D19-1379
- Cite (ACL):
- Hiroki Ouchi, Jun Suzuki, and Kentaro Inui. 2019. Transductive Learning of Neural Language Models for Syntactic and Semantic Analysis. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3665–3671, Hong Kong, China. Association for Computational Linguistics.
- Cite (Informal):
- Transductive Learning of Neural Language Models for Syntactic and Semantic Analysis (Ouchi et al., EMNLP-IJCNLP 2019)
- PDF:
- https://preview.aclanthology.org/remove-xml-comments/D19-1379.pdf
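The transductive pipeline described in the abstract can be sketched conceptually. The toy below is not the paper's neural fine-tuning; it stands in a count-based "model" for a neural LM purely to illustrate the structure of the approach: initialize a model on training text, continue updating it on the *unlabeled* test text, and thereby obtain test-set-specific statistics. The function names (`lm_counts`, `finetune`) and the example sentences are illustrative assumptions, not artifacts from the paper.

```python
from collections import Counter

def lm_counts(tokens):
    """Toy stand-in for LM parameters: unigram counts over a corpus."""
    return Counter(tokens)

def finetune(counts, unlabeled_test_tokens):
    """Transductive step: update the model on the unlabeled test text.

    In the paper this corresponds to continuing LM training on the test
    set (no labels needed); here it is simply adding counts.
    """
    updated = counts.copy()
    updated.update(unlabeled_test_tokens)
    return updated

# Hypothetical training and (unlabeled) test texts.
train = "the parser reads the sentence".split()
test = "the tagger labels the sentence with roles".split()

base = lm_counts(train)
transductive = finetune(base, test)

# Words unseen in training now have in-vocabulary statistics after the
# transductive update -- the intuition behind test-set-specific
# representations.
assert "tagger" not in base
assert transductive["tagger"] == 1
```

In the actual method, the same idea applies to a neural LM: the LM objective requires no labels, so the test sentences themselves can drive the update, and the resulting word representations are then fed to the syntactic or semantic task model.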