Abstract
In previous work, it has been shown that BERT can adequately align cross-lingual sentences at the word level. Here we investigate whether BERT can also operate as a character-level aligner. The languages examined are English, Fake English, German, and Greek. We show that the closer two languages are, the better BERT can align them at the character level. BERT indeed works well for English to Fake English alignment, but this does not generalize to natural languages to the same extent. Nevertheless, the proximity of two languages does seem to be a factor. English is more closely related to German than to Greek, and this is reflected in how well BERT aligns them: English to German alignment is better than English to Greek. We examine multiple setups and find that the similarity matrices for natural languages exhibit weaker relations the further apart two languages are.
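To make the abstract's setup concrete, here is a minimal sketch (not the authors' released code) of the kind of character-level similarity matrix it describes. It assumes the HuggingFace `transformers` library and the `bert-base-multilingual-cased` checkpoint, and uses the common trick of space-separating characters so each one gets its own token; the English-German sentence pair is an illustrative stand-in.

```python
# Hedged sketch: character-level cross-lingual similarity with mBERT.
# Checkpoint, sentence pair, and the space-separation trick are
# illustrative assumptions, not the paper's exact pipeline.
import torch
from transformers import BertModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-multilingual-cased")
model = BertModel.from_pretrained("bert-base-multilingual-cased")
model.eval()

def char_embeddings(sentence: str) -> torch.Tensor:
    """Embed a sentence one character per token by inserting spaces,
    then return the hidden states of the character positions."""
    chars = list(sentence.replace(" ", ""))          # drop spaces, keep characters
    enc = tokenizer(" ".join(chars), return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]   # (seq_len, hidden_dim)
    return hidden[1:-1]                              # strip [CLS] and [SEP]

src = char_embeddings("cold weather")     # English
tgt = char_embeddings("kaltes Wetter")    # German translation

# Cosine similarity matrix between every source and target character.
src_n = torch.nn.functional.normalize(src, dim=-1)
tgt_n = torch.nn.functional.normalize(tgt, dim=-1)
sim = src_n @ tgt_n.T                                # (len(src), len(tgt))

# A simple argmax alignment: each source character points to its most
# similar target character. Per the abstract, such matrices are sharp
# for English/Fake English and grow weaker as languages diverge
# (English-German vs. English-Greek).
alignment = sim.argmax(dim=1)
print(sim.shape, alignment.tolist())
```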
- Anthology ID: 2021.insights-1.3
- Volume: Proceedings of the Second Workshop on Insights from Negative Results in NLP
- Month: November
- Year: 2021
- Address: Online and Punta Cana, Dominican Republic
- Editors: João Sedoc, Anna Rogers, Anna Rumshisky, Shabnam Tafreshi
- Venue: insights
- Publisher: Association for Computational Linguistics
- Pages: 16–22
- URL: https://aclanthology.org/2021.insights-1.3
- DOI: 10.18653/v1/2021.insights-1.3
- Cite (ACL): Antonis Maronikolakis, Philipp Dufter, and Hinrich Schütze. 2021. BERT Cannot Align Characters. In Proceedings of the Second Workshop on Insights from Negative Results in NLP, pages 16–22, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
- Cite (Informal): BERT Cannot Align Characters (Maronikolakis et al., insights 2021)
- PDF: https://preview.aclanthology.org/landing_page/2021.insights-1.3.pdf