Abstract
This paper investigates how to obtain maximal coverage from a minimal training corpus. At first glance, it seems contradictory to pair minimal training input with a statistical machine translation system: statistics thrive on repetition and therefore capture frequently occurring words well. One challenge has thus been to determine the optimal number of “new” words the system needs in order to be trained appropriately. A further goal is to minimize the human translation time required to train a new language. To support rapid ramp-up of translation, we ran several experiments to determine the minimal amount of data needed to obtain optimal translation results.
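The coverage-versus-corpus-size trade-off sketched in the abstract can be probed with a simple selection heuristic. The Python snippet below is purely illustrative and is not the procedure used in the paper: it greedily picks sentences that introduce the most unseen words, assuming a toy corpus where each line is one sentence; the function name, corpus, and budget parameter are hypothetical.

```python
# Illustrative sketch only (not the paper's method): greedy, coverage-driven
# selection of training sentences. It repeatedly picks the sentence that adds
# the most not-yet-covered words, which is one simple way to see how quickly
# vocabulary coverage grows with the amount of data given to human translators.
from typing import List


def greedy_coverage_selection(sentences: List[str], budget: int) -> List[str]:
    """Pick up to `budget` sentences that greedily maximize new-word coverage."""
    covered = set()
    selected = []
    remaining = list(sentences)
    for _ in range(budget):
        # Score each candidate by how many uncovered words it would add.
        best, best_gain = None, 0
        for sent in remaining:
            gain = len(set(sent.split()) - covered)
            if gain > best_gain:
                best, best_gain = sent, gain
        if best is None:  # no remaining sentence adds new words; stop early
            break
        selected.append(best)
        covered.update(best.split())
        remaining.remove(best)
    return selected


if __name__ == "__main__":
    corpus = [
        "the cat sat on the mat",
        "the dog sat on the rug",
        "a completely different sentence appears here",
    ]
    for s in greedy_coverage_selection(corpus, budget=2):
        print(s)
```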
- Anthology ID:
- 2005.mtsummit-posters.17
- Volume:
- Proceedings of Machine Translation Summit X: Posters
- Month:
- September 13-15
- Year:
- 2005
- Address:
- Phuket, Thailand
- Venue:
- MTSummit
- Pages:
- 438–444
- URL:
- https://aclanthology.org/2005.mtsummit-posters.17
- Cite (ACL):
- Hemali Majithia, Philip Rennart, and Evelyne Tzoukermann. 2005. Rapid Ramp-up for Statistical Machine Translation: Minimal Training for Maximal Coverage. In Proceedings of Machine Translation Summit X: Posters, pages 438–444, Phuket, Thailand.
- Cite (Informal):
- Rapid Ramp-up for Statistical Machine Translation: Minimal Training for Maximal Coverage (Majithia et al., MTSummit 2005)
- PDF:
- https://preview.aclanthology.org/naacl-24-ws-corrections/2005.mtsummit-posters.17.pdf