@inproceedings{fei-etal-2023-constructing,
    title = "Constructing Code-mixed {U}niversal {D}ependency Forest for Unbiased Cross-lingual Relation Extraction",
    author = "Fei, Hao  and
      Zhang, Meishan  and
      Zhang, Min  and
      Chua, Tat-Seng",
    editor = "Rogers, Anna  and
      Boyd-Graber, Jordan  and
      Okazaki, Naoaki",
    booktitle = "Findings of the Association for Computational Linguistics: ACL 2023",
    month = jul,
    year = "2023",
    address = "Toronto, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2023.findings-acl.599/",
    doi = "10.18653/v1/2023.findings-acl.599",
    pages = "9395--9408",
    abstract = "Latest efforts on cross-lingual relation extraction (XRE) aggressively leverage the language-consistent structural features from the universal dependency (UD) resource, while they may largely suffer from biased transfer (e.g., either target-biased or source-biased) due to the inevitable linguistic disparity between languages. In this work, we investigate an unbiased UD-based XRE transfer by constructing a type of code-mixed UD forest. We first translate the sentence of the source language to the parallel target-side language, for both of which we parse the UD tree respectively. Then, we merge the source-/target-side UD structures as a unified code-mixed UD forest. With such forest features, the gaps of UD-based XRE between the training and predicting phases can be effectively closed. We conduct experiments on the ACE XRE benchmark datasets, where the results demonstrate that the proposed code-mixed UD forests help unbiased UD-based XRE transfer, with which we achieve significant XRE performance gains."
}