Rosanne Liu
2023
Character-Aware Models Improve Visual Text Rendering
Rosanne Liu | Dan Garrette | Chitwan Saharia | William Chan | Adam Roberts | Sharan Narang | Irina Blok | Rj Mical | Mohammad Norouzi | Noah Constant
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Current image generation models struggle to reliably produce well-formed visual text. In this paper, we investigate a key contributing factor: popular text-to-image models lack character-level input features, making it much harder to predict a word’s visual makeup as a series of glyphs. To quantify this effect, we conduct a series of experiments comparing character-aware vs. character-blind text encoders. In the text-only domain, we find that character-aware models provide large gains on a novel spelling task (WikiSpell). Applying our learnings to the visual domain, we train a suite of image generation models, and show that character-aware variants outperform their character-blind counterparts across a range of novel text rendering tasks (our DrawText benchmark). Our models set a much higher state-of-the-art on visual spelling, with 30+ point accuracy gains over competitors on rare words, despite training on far fewer examples.
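As a rough illustration of the character-aware vs. character-blind distinction the abstract draws, the sketch below (not code from the paper; the toy vocabulary and function names are invented for illustration) contrasts a subword lookup, which maps a whole word to an opaque piece ID, with a character-level view that exposes each glyph and hence the word's spelling.

```python
# Minimal illustration (not the paper's implementation): why a character-blind
# subword tokenizer hides spelling, while character-level input exposes it.

# A toy subword vocabulary; real text encoders learn thousands of pieces,
# but the effect is the same: one opaque ID per piece, with no view of the
# letters inside it.
subword_vocab = {"draw": 101, "text": 102, "drawtext": 103}

def subword_tokenize(word: str) -> list[int]:
    """Greedy longest-match lookup; a 'character-blind' view of the word."""
    ids, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in subword_vocab:
                ids.append(subword_vocab[word[i:j]])
                i = j
                break
        else:
            ids.append(0)  # unknown piece
            i += 1
    return ids

def character_tokenize(word: str) -> list[int]:
    """A 'character-aware' view: one ID per glyph, so spelling is explicit."""
    return [ord(c) for c in word]

print(subword_tokenize("drawtext"))    # [103] -- the spelling is invisible
print(character_tokenize("drawtext"))  # one ID per letter
```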
2021
Language Models are Few-shot Multilingual Learners
Genta Indra Winata | Andrea Madotto | Zhaojiang Lin | Rosanne Liu | Jason Yosinski | Pascale Fung
Proceedings of the 1st Workshop on Multilingual Representation Learning
General-purpose language models have demonstrated impressive capabilities, performing on par with state-of-the-art approaches on a range of downstream natural language processing (NLP) tasks and benchmarks when inferring instructions from very few examples. Here, we evaluate the multilingual skills of the GPT and T5 models in conducting multi-class classification on non-English languages without any parameter updates. We show that, given a few English examples as context, pre-trained language models can predict not only English test samples but also non-English ones. Finally, we find the in-context few-shot cross-lingual prediction results of language models are significantly better than random prediction, and they are competitive compared to the existing state-of-the-art cross-lingual models and translation models.
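The in-context setup the abstract describes, English demonstrations followed by a non-English test sample with no parameter updates, can be sketched roughly as below; `query_model`, the prompt template, and the example sentences are hypothetical stand-ins, not the authors' implementation.

```python
# Minimal sketch (not the paper's code) of in-context few-shot cross-lingual
# classification: a few labelled English examples are concatenated into the
# prompt, a non-English test sample is appended, and a frozen model completes
# the label. `query_model` is a hypothetical text-completion callable.

english_examples = [
    ("I loved this film, it was wonderful.", "positive"),
    ("The service was slow and the food was cold.", "negative"),
]

def build_prompt(test_sentence: str) -> str:
    shots = "\n".join(f"Review: {text}\nLabel: {label}"
                      for text, label in english_examples)
    return f"{shots}\nReview: {test_sentence}\nLabel:"

def classify(test_sentence: str, query_model) -> str:
    # No parameter updates: the pre-trained model only predicts the label
    # tokens that follow the prompt.
    return query_model(build_prompt(test_sentence)).strip()

# Example: an Indonesian review classified from English-only demonstrations.
print(build_prompt("Filmnya sangat bagus dan menyentuh."))
```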