Shaping Visual Representations with Language for Few-Shot Classification

Jesse Mu, Percy Liang, Noah Goodman


Abstract
By describing the features and abstractions of our world, language is a crucial tool for human learning and a promising source of supervision for machine learning models. We use language to improve few-shot visual classification in the underexplored scenario where natural language task descriptions are available during training, but unavailable for novel tasks at test time. Existing models for this setting sample new descriptions at test time and use those to classify images. Instead, we propose language-shaped learning (LSL), an end-to-end model that regularizes visual representations to predict language. LSL is conceptually simpler, more data efficient, and outperforms baselines in two challenging few-shot domains.
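The abstract compresses the method into a single sentence, so here is a minimal PyTorch sketch of a language-shaped training objective in the spirit of LSL: a few-shot classifier's image embeddings are additionally trained to decode the natural language task descriptions, and the language decoder is discarded at test time. This is an illustrative sketch, not the authors' implementation; the prototypical-network classifier, GRU decoder, layer sizes, and all names (`LSL`, `lam`, etc.) are assumptions made for the example.

```python
# Sketch of an LSL-style objective (illustrative; not the authors' code).
# Assumes a prototypical-network few-shot classifier; names are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LSL(nn.Module):
    def __init__(self, feat_dim=64, vocab_size=1000, hidden_dim=128, lam=10.0):
        super().__init__()
        # Image encoder f(x): any CNN producing a feat_dim embedding.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        # Language decoder g: predicts the task description from the embedding.
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.gru = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.init_h = nn.Linear(feat_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)
        self.lam = lam  # weight on the language regularizer

    def language_loss(self, feats, captions):
        # Teacher-forced next-token prediction of the description tokens.
        h0 = torch.tanh(self.init_h(feats)).unsqueeze(0)
        hidden, _ = self.gru(self.embed(captions[:, :-1]), h0)
        logits = self.out(hidden)
        return F.cross_entropy(
            logits.reshape(-1, logits.size(-1)),
            captions[:, 1:].reshape(-1),
        )

    def forward(self, support, support_labels, query, query_labels, captions):
        # Few-shot classification via distances to class prototypes.
        s_feats = self.encoder(support)
        q_feats = self.encoder(query)
        n_way = support_labels.max().item() + 1
        protos = torch.stack(
            [s_feats[support_labels == c].mean(0) for c in range(n_way)]
        )
        cls_loss = F.cross_entropy(-torch.cdist(q_feats, protos), query_labels)
        # Regularize support embeddings to predict their language descriptions.
        return cls_loss + self.lam * self.language_loss(s_feats, captions)
```

Because language enters only through the auxiliary loss, the trained encoder needs no descriptions at test time, matching the setting described in the abstract.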
Anthology ID: 2020.acl-main.436
Volume: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month: July
Year: 2020
Address: Online
Editors: Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 4823–4830
URL: https://aclanthology.org/2020.acl-main.436
DOI: 10.18653/v1/2020.acl-main.436
Cite (ACL): Jesse Mu, Percy Liang, and Noah Goodman. 2020. Shaping Visual Representations with Language for Few-Shot Classification. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4823–4830, Online. Association for Computational Linguistics.
Cite (Informal): Shaping Visual Representations with Language for Few-Shot Classification (Mu et al., ACL 2020)
PDF: https://preview.aclanthology.org/nschneid-patch-2/2020.acl-main.436.pdf
Video: http://slideslive.com/38929250
Code: jayelm/lsl (+ additional community code)
Data: ShapeWorld