@inproceedings{tsiamas-etal-2025-improving,
    title = "Improving Language and Modality Transfer in Translation by Character-level Modeling",
    author = "Tsiamas, Ioannis  and
      Dale, David  and
      Costa-juss{\`a}, Marta R.",
    editor = "Che, Wanxiang  and
      Nabende, Joyce  and
      Shutova, Ekaterina  and
      Pilehvar, Mohammad Taher",
    booktitle = "Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2025",
    address = "Vienna, Austria",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2025.acl-long.988/",
    doi = "10.18653/v1/2025.acl-long.988",
    pages = "20171--20187",
    ISBN = "979-8-89176-251-0",
    abstract = "Current translation systems, despite being highly multilingual, cover only 5{\%} of the world{'}s languages. Expanding language coverage to the long-tail of low-resource languages requires data-efficient methods that rely on cross-lingual and cross-modal knowledge transfer. To this end, we propose a character-based approach to improve adaptability to new languages and modalities. Our method leverages SONAR, a multilingual fixed-size embedding space with different modules for encoding and decoding. We use a teacher-student approach with parallel translation data to obtain a character-level encoder. Then, using ASR data, we train a lightweight adapter to connect a massively multilingual CTC ASR model (MMS), to the character-level encoder, potentially enabling speech translation from 1,000+ languages. Experimental results in text translation for 75 languages on FLORES+ demonstrate that our character-based approach can achieve better language transfer than traditional subword-based models, especially outperforming them in low-resource settings, and demonstrating better zero-shot generalizability to unseen languages. Our speech adaptation, maximizing knowledge transfer from the text modality, achieves state-of-the-art results in speech-to-text translation on the FLEURS benchmark on 33 languages, surpassing previous supervised and cascade models, albeit being a zero-shot model with minimal supervision from ASR data."
}