SubmissionNumber#=%=#12
FinalPaperTitle#=%=#Effect of Multilingual and Domain-adapted Continual Pre-training on Few-shot Promptability
ShortPaperTitle#=%=#
NumberOfPages#=%=#9
CopyrightSigned#=%=#Ken Yano
JobTitle#==#Researcher
Organization#==#National Institute of Advanced Industrial Science and Technology, 2-4-7 Aomi, Koto-Ku, Tokyo, 135-0064, JAPAN
Abstract#==#Continual Pre-training (CPT) can help pre-trained large language models (LLMs) effectively adapt to new or under-trained domains or low-resource languages without re-training from scratch. Nevertheless, CPT is known to affect the model's few-shot transfer ability on emergent tasks. We verified this by comparing performance under few-shot and fine-tuning settings on the same tasks. We used Llama3-ELAINE-medLLM, which was continually pre-trained from Llama3-8B, targeted at the biomedical domain, and adapted to multiple languages (English, Japanese, and Chinese). We compared the performance of Llama3-ELAINE-medLLM and Llama3-8B on three emergent tasks: two domain-related tasks, named entity recognition (NER) and machine translation (MT), and one out-of-domain task, summarization (SUM). Our experimental results show that degradation in few-shot transfer ability does not necessarily reflect the model's underlying potential on the same task after fine-tuning.
Author{1}{Firstname}#=%=#Ken
Author{1}{Lastname}#=%=#Yano
Author{1}{Username}#=%=#yanoken
Author{1}{Email}#=%=#yano.ken@aist.go.jp
Author{1}{Affiliation}#=%=#The National Institute of Advanced Industrial Science and Technology
Author{2}{Firstname}#=%=#Makoto
Author{2}{Lastname}#=%=#Miwa
Author{2}{Username}#=%=#miwa
Author{2}{Email}#=%=#makoto-miwa@toyota-ti.ac.jp
Author{2}{Affiliation}#=%=#Toyota Technological Institute
==========