Abstract
Zero-shot learning aims to recognize unseen objects using their semantic representations. Most existing works use visual attributes labeled by humans, which are not suitable for large-scale applications. In this paper, we revisit the use of documents as semantic representations. We argue that documents such as Wikipedia pages contain rich visual information, which, however, can easily be buried by the vast number of non-visual sentences. To address this issue, we propose a semi-automatic mechanism for visual sentence extraction that leverages document section headers and the clustering structure of visual sentences. The extracted visual sentences, after a novel weighting scheme to distinguish similar classes, essentially form semantic representations like visual attributes but require much less human effort. On the ImageNet dataset with over 10,000 unseen classes, our representations lead to a 64% relative improvement over the commonly used ones.
- Anthology ID: 2021.naacl-main.250
- Volume: Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
- Month: June
- Year: 2021
- Address: Online
- Venue: NAACL
- Publisher: Association for Computational Linguistics
- Pages: 3117–3128
- URL: https://aclanthology.org/2021.naacl-main.250
- DOI: 10.18653/v1/2021.naacl-main.250
- Cite (ACL): Jihyung Kil and Wei-Lun Chao. 2021. Revisiting Document Representations for Large-Scale Zero-Shot Learning. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3117–3128, Online. Association for Computational Linguistics.
- Cite (Informal): Revisiting Document Representations for Large-Scale Zero-Shot Learning (Kil & Chao, NAACL 2021)
- PDF: https://preview.aclanthology.org/starsem-semeval-split/2021.naacl-main.250.pdf
- Code: heendung/vs-zsl
- Data: AwA, AwA2, ImageNet, aPY
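
The core idea in the abstract, keeping only the sentences of a class's Wikipedia document that live under visually oriented section headers, can be sketched as below. This is an illustrative sketch only, not the authors' released code (see heendung/vs-zsl for that); the header keyword set, function name, and example document are all hypothetical.

```python
# Hypothetical sketch of header-based visual sentence extraction.
# A real pipeline would also use the clustering structure of visual
# sentences and a class-discriminative weighting scheme, as the paper
# describes; here we only show the section-header filter.
VISUAL_HEADERS = {"description", "appearance", "characteristics", "morphology"}

def extract_visual_sentences(sections):
    """sections: list of (header, [sentence, ...]) pairs for one document.

    Returns the sentences under headers that suggest visual content.
    """
    visual = []
    for header, sentences in sections:
        if header.strip().lower() in VISUAL_HEADERS:
            visual.extend(sentences)
    return visual

# Toy Wikipedia-style document for one class.
doc = [
    ("Taxonomy", ["The species was described in 1758."]),
    ("Description", ["The plumage is bright blue.",
                     "The beak is short and conical."]),
    ("Distribution", ["It is found across Europe."]),
]
print(extract_visual_sentences(doc))
# → ['The plumage is bright blue.', 'The beak is short and conical.']
```

The filtered sentences would then be embedded (e.g., with a pretrained language model) to serve as the class's semantic representation in place of human-labeled attributes.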