Comparing a Hand-crafted to an Automatically Generated Feature Set for Deep Learning: Pairwise Translation Evaluation

Despoina Mouratidis, Katia Lida Kermanidis


Abstract
The automatic evaluation of machine translation (MT) has proven to be a very significant research topic. Most automatic evaluation methods focus on the output of MT, computing similarity scores that represent translation quality. This work targets the performance of MT evaluation itself. We present a general scheme for learning to classify, using linguistic information, parallel translations consisting of two MT model outputs and one human (reference) translation. We run three experiments with this scheme using neural networks (NN): the first uses string-based hand-crafted features (Exp1); the second uses embeddings of the reference and the two MT outputs (one from a statistical machine translation (SMT) model and the other from a neural machine translation (NMT) model), learned automatically with an NN (Exp2); and the third combines information from the first two (Exp3). The languages involved are English (EN), Greek (GR) and Italian (IT), and the segments are educational in domain. The proposed language-independent learning scheme that combines information from the two experiments (Exp3) achieves higher classification accuracy than models using BLEU score information, as well as other classification approaches such as Random Forest (RF) and Support Vector Machine (SVM).
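To make the pairwise setup concrete, here is a minimal sketch of the Exp1-style input: a reference and two MT outputs are turned into string-based features, concatenated, and fed to a classifier that picks the better output. The feature choices, function names, and the toy linear classifier are illustrative assumptions, not the authors' exact feature set or network.

```python
# Hypothetical sketch of the pairwise-classification idea (not the paper's
# exact features or model): given a reference and two MT outputs, extract
# simple string-based features for each output and classify which is better.
import math

def handcrafted_features(reference, hypothesis):
    """String-based features comparing one MT output to the reference."""
    ref_tokens = reference.lower().split()
    hyp_tokens = hypothesis.lower().split()
    overlap = len(set(ref_tokens) & set(hyp_tokens))
    return [
        len(hyp_tokens) / max(len(ref_tokens), 1),  # length ratio
        overlap / max(len(hyp_tokens), 1),          # unigram precision
        overlap / max(len(ref_tokens), 1),          # unigram recall
    ]

def pairwise_vector(reference, out_smt, out_nmt):
    """Concatenate the feature vectors of both MT outputs."""
    return (handcrafted_features(reference, out_smt)
            + handcrafted_features(reference, out_nmt))

def classify(vec, weights, bias=0.0):
    """Toy linear classifier: 1 -> prefer the SMT output, 0 -> prefer NMT."""
    score = sum(w * x for w, x in zip(weights, vec)) + bias
    return 1 if 1.0 / (1.0 + math.exp(-score)) >= 0.5 else 0

# Usage with hand-picked weights; in the paper a neural network is trained
# on such vectors (and, in Exp2/Exp3, on learned embeddings) instead.
vec = pairwise_vector("the cat sat on the mat",
                      "the cat sat on mat",
                      "a cat is sitting on the mat")
label = classify(vec, weights=[0.5, 2.0, 2.0, -0.5, -2.0, -2.0])
```

Concatenating per-output feature vectors keeps the scheme language-independent: nothing in the features depends on a specific language pair, only on surface comparison with the reference.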
Anthology ID:
W19-8708
Volume:
Proceedings of the Human-Informed Translation and Interpreting Technology Workshop (HiT-IT 2019)
Month:
September
Year:
2019
Address:
Varna, Bulgaria
Venue:
RANLP
Publisher:
Incoma Ltd., Shoumen, Bulgaria
Pages:
66–74
URL:
https://aclanthology.org/W19-8708
DOI:
10.26615/issn.2683-0078.2019_008
Cite (ACL):
Despoina Mouratidis and Katia Lida Kermanidis. 2019. Comparing a Hand-crafted to an Automatically Generated Feature Set for Deep Learning: Pairwise Translation Evaluation. In Proceedings of the Human-Informed Translation and Interpreting Technology Workshop (HiT-IT 2019), pages 66–74, Varna, Bulgaria. Incoma Ltd., Shoumen, Bulgaria.
Cite (Informal):
Comparing a Hand-crafted to an Automatically Generated Feature Set for Deep Learning: Pairwise Translation Evaluation (Mouratidis & Kermanidis, RANLP 2019)
PDF:
https://preview.aclanthology.org/emnlp-22-attachments/W19-8708.pdf