Abstract
We work with Algerian, an under-resourced, non-standardised Arabic variety, for which we compile a new parallel corpus consisting of user-generated textual data matched with normalised and corrected human annotations, following our data-driven and linguistically motivated standard. We use an end-to-end deep neural model designed to deal with context-dependent spelling correction and normalisation. Results indicate that a model with two CNN sub-network encoders and an LSTM decoder performs best, and that word context matters. Additionally, pre-processing the data token-by-token with an edit-distance based aligner significantly improves performance. We get promising results when using spelling correction and normalisation as a pre-processing step for a downstream task, detecting binary Semantic Textual Similarity.
- Anthology ID:
- D19-5518
- Volume:
- Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019)
- Month:
- November
- Year:
- 2019
- Address:
- Hong Kong, China
- Editors:
- Wei Xu, Alan Ritter, Tim Baldwin, Afshin Rahimi
- Venue:
- WNUT
- Publisher:
- Association for Computational Linguistics
- Pages:
- 131–140
- URL:
- https://preview.aclanthology.org/build-pipeline-with-new-library/D19-5518/
- DOI:
- 10.18653/v1/D19-5518
- Cite (ACL):
- Wafia Adouane, Jean-Philippe Bernardy, and Simon Dobnik. 2019. Normalising Non-standardised Orthography in Algerian Code-switched User-generated Data. In Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019), pages 131–140, Hong Kong, China. Association for Computational Linguistics.
- Cite (Informal):
- Normalising Non-standardised Orthography in Algerian Code-switched User-generated Data (Adouane et al., WNUT 2019)
- PDF:
- https://preview.aclanthology.org/build-pipeline-with-new-library/D19-5518.pdf
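The token-by-token, edit-distance based alignment mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the authors' actual aligner: it pairs noisy tokens with their corrected counterparts via dynamic programming, using the normalised character edit distance as the substitution cost and a fixed (hypothetical) gap cost for unmatched tokens.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming character edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def align_tokens(noisy, gold, gap_cost=1.0):
    """Align two token sequences; substitution cost is the normalised
    character edit distance, skipping a token costs gap_cost."""
    n, m = len(noisy), len(gold)
    cost = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        cost[i][0] = i * gap_cost
    for j in range(1, m + 1):
        cost[0][j] = j * gap_cost
    def sub(i, j):
        return levenshtein(noisy[i], gold[j]) / max(len(noisy[i]), len(gold[j]))
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost[i][j] = min(cost[i - 1][j - 1] + sub(i - 1, j - 1),
                             cost[i - 1][j] + gap_cost,
                             cost[i][j - 1] + gap_cost)
    # Backtrace to recover aligned (noisy, gold) token pairs.
    pairs, i, j = [], n, m
    while i > 0 and j > 0:
        if cost[i][j] == cost[i - 1][j - 1] + sub(i - 1, j - 1):
            pairs.append((noisy[i - 1], gold[j - 1])); i -= 1; j -= 1
        elif cost[i][j] == cost[i - 1][j] + gap_cost:
            pairs.append((noisy[i - 1], None)); i -= 1
        else:
            pairs.append((None, gold[j - 1])); j -= 1
    while i > 0:
        pairs.append((noisy[i - 1], None)); i -= 1
    while j > 0:
        pairs.append((None, gold[j - 1])); j -= 1
    return list(reversed(pairs))
```

An aligner of this kind yields one (noisy, corrected) token pair per position, which is what lets a sequence model learn spelling correction token-by-token instead of over whole unaligned sentences.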