Abstract
Linear models, which support efficient learning and inference, are the workhorses of statistical machine translation; however, linear decision rules are less attractive from a modeling perspective. In this work, we introduce a technique for learning arbitrary, rule-local, non-linear feature transforms that improve model expressivity, but do not sacrifice the efficient inference and learning associated with linear models. To demonstrate the value of our technique, we discard the customary log transform of lexical probabilities and drop the phrasal translation probability in favor of raw counts. We observe that our algorithm learns a variation of a log transform that leads to better translation quality compared to the explicit log transform. We conclude that non-linear responses play an important role in SMT, an observation that we hope will inform the efforts of feature engineers.
- Anthology ID:
- Q14-1031
- Volume:
- Transactions of the Association for Computational Linguistics, Volume 2
- Year:
- 2014
- Address:
- Cambridge, MA
- Editors:
- Dekang Lin, Michael Collins, Lillian Lee
- Venue:
- TACL
- Publisher:
- MIT Press
- Pages:
- 393–404
- URL:
- https://aclanthology.org/Q14-1031
- DOI:
- 10.1162/tacl_a_00191
- Cite (ACL):
- Jonathan H. Clark, Chris Dyer, and Alon Lavie. 2014. Locally Non-Linear Learning for Statistical Machine Translation via Discretization and Structured Regularization. Transactions of the Association for Computational Linguistics, 2:393–404.
- Cite (Informal):
- Locally Non-Linear Learning for Statistical Machine Translation via Discretization and Structured Regularization (Clark et al., TACL 2014)
- PDF:
- https://preview.aclanthology.org/proper-vol2-ingestion/Q14-1031.pdf
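The abstract's central idea, learning a rule-local non-linear response while keeping a linear model, can be illustrated with a small sketch. This is not the paper's implementation; the bin boundaries, weights, and function names below are hypothetical, chosen only to show how discretizing a raw-count feature into per-bin indicator features lets a linear scorer realize a log-like curve.

```python
import math

# Hypothetical bin boundaries for a raw-count feature. Each bin gets
# its own indicator feature, so a linear model can assign each bin an
# independent weight -- an arbitrary, locally non-linear response.
BOUNDARIES = [1, 2, 5, 10, 50, 100, 1000]


def discretize(count, boundaries=BOUNDARIES):
    """Map a raw count to the index of its bin (illustrative binning)."""
    for i, b in enumerate(boundaries):
        if count <= b:
            return i
    return len(boundaries)  # overflow bin for very large counts


# Stand-in for learned per-bin weights: here we plug in values that a
# learner might plausibly converge to (a roughly log-shaped curve),
# mimicking the paper's observation that training recovers a
# variation of the log transform from raw counts.
WEIGHTS = [math.log(b) for b in BOUNDARIES] + [math.log(5000)]


def score(count):
    """Contribution of the discretized feature to the linear score.

    Exactly one indicator fires per count, so the dot product of the
    indicator vector with the weight vector is just that bin's weight.
    """
    return WEIGHTS[discretize(count)]
```

Because exactly one indicator is active for any count, inference stays as cheap as with the original real-valued feature, yet the learned weights can trace out any per-bin function of the count rather than a single linear coefficient.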