@inproceedings{adouane-etal-2019-normalising,
    title = "Normalising Non-standardised Orthography in {A}lgerian Code-switched User-generated Data",
    author = "Adouane, Wafia  and
      Bernardy, Jean-Philippe  and
      Dobnik, Simon",
    editor = "Xu, Wei  and
      Ritter, Alan  and
      Baldwin, Tim  and
      Rahimi, Afshin",
    booktitle = "Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019)",
    month = nov,
    year = "2019",
    address = "Hong Kong, China",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/iwcs-25-ingestion/D19-5518/",
    doi = "10.18653/v1/D19-5518",
    pages = "131--140",
    abstract = "We work with Algerian, an under-resourced non-standardised Arabic variety, for which we compile a new parallel corpus consisting of user-generated textual data matched with normalised and corrected human annotations following data-driven and our linguistically motivated standard. We use an end-to-end deep neural model designed to deal with context-dependent spelling correction and normalisation. Results indicate that a model with two CNN sub-network encoders and an LSTM decoder performs the best, and that word context matters. Additionally, pre-processing data token-by-token with an edit-distance based aligner significantly improves the performance. We get promising results for the spelling correction and normalisation, as a pre-processing step for downstream tasks, on detecting binary Semantic Textual Similarity."
}