Lessons Learned from GPT-SW3: Building the First Large-Scale Generative Language Model for Swedish
Ariel Ekgren | Amaru Cuba Gyllensten | Evangelia Gogoulou | Alice Heiman | Severine Verlinden | Joey Öhman | Fredrik Carlsson | Magnus Sahlgren
Proceedings of the Thirteenth Language Resources and Evaluation Conference (LREC 2022)
We present GPT-SW3, a 3.5 billion parameter autoregressive language model trained on a newly created 100 GB Swedish corpus. This paper provides insights into data collection and training, while highlighting the challenges of proper model evaluation. The results of quantitative evaluation through perplexity indicate that GPT-SW3 is a competent model in comparison with existing autoregressive models of similar size. Additionally, we perform an extensive prompting study which reveals the strong text generation capabilities of GPT-SW3.
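For readers unfamiliar with the metric, perplexity is the exponentiated average negative log-likelihood the model assigns to held-out text (lower is better). The abstract does not spell out the formula; a sketch of the standard definition for an autoregressive model with parameters \theta over a token sequence x_1, \dots, x_N:

\mathrm{PPL}(x_{1:N}) = \exp\!\left( -\frac{1}{N} \sum_{i=1}^{N} \log p_\theta(x_i \mid x_{<i}) \right)

where p_\theta(x_i \mid x_{<i}) is the model's predicted probability of token x_i given the preceding tokens.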