An ELECTRA Model for Latin Token Tagging Tasks

Wouter Mercelis, Alek Keersmaekers


Abstract
This report describes the KU Leuven / Brepols-CTLO submission to EvaLatin 2022. We present the results of our current small Latin ELECTRA model, which will be expanded to a larger model in the future. For the lemmatization task, we combine a neural token-tagging approach with the in-house rule-based lemma lists from Brepols’ ReFlex software. The results are decent but suffer from inconsistencies between Brepols’ and EvaLatin’s definitions of a lemma. For POS tagging, our results fall just short of first place in this competition, with proper nouns being the main source of errors. For morphological tagging, there is considerably more room for improvement: the constraints added to our multiclass multilabel model were often not tight enough, leading to missing morphological features. We will further investigate why combining the different morphological features, each of which performs well on its own, leads to these issues.
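To illustrate the failure mode the abstract mentions, the sketch below shows one way a multiclass multilabel morphological tagger can end up with missing features: each feature is predicted by an independent classifier, and a POS-conditioned constraint table filters out illicit or low-confidence predictions. All names, tables, and thresholds here are illustrative assumptions, not the authors’ actual implementation.

```python
# Hypothetical sketch: per-feature classifiers plus POS-conditioned
# constraints. Tables are simplified; real Latin morphology is richer.

# Features a given POS may carry (illustrative).
ALLOWED = {
    "NOUN": {"Case", "Number", "Gender"},
    "VERB": {"Tense", "Mood", "Voice", "Person", "Number"},
}

# Features a given POS must carry (illustrative). Predictions below the
# confidence threshold get dropped, which is how required features can
# go missing when the constraints are not tight enough to restore them.
REQUIRED = {
    "NOUN": {"Case", "Number", "Gender"},
}

def assemble_morphology(pos, feature_scores, threshold=0.5):
    """Keep per-feature predictions that are allowed for this POS and
    confident enough; report which required features end up missing."""
    tag = {}
    for feature, (value, score) in feature_scores.items():
        if feature in ALLOWED.get(pos, set()) and score >= threshold:
            tag[feature] = value
    missing = REQUIRED.get(pos, set()) - tag.keys()
    return tag, missing

# Toy example with made-up scores: a low-confidence Gender prediction
# is filtered out, so the assembled tag lacks a required feature.
tag, missing = assemble_morphology(
    "NOUN",
    {"Case": ("Nom", 0.9), "Number": ("Sing", 0.8), "Gender": ("Fem", 0.3)},
)
print(tag)      # {'Case': 'Nom', 'Number': 'Sing'}
print(missing)  # {'Gender'}
```

A tighter constraint set would, for instance, force the highest-scoring value of every required feature into the tag regardless of threshold, rather than merely filtering.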
Anthology ID:
2022.lt4hala-1.30
Volume:
Proceedings of the Second Workshop on Language Technologies for Historical and Ancient Languages
Month:
June
Year:
2022
Address:
Marseille, France
Venue:
LT4HALA
Publisher:
European Language Resources Association
Pages:
189–192
URL:
https://aclanthology.org/2022.lt4hala-1.30
Cite (ACL):
Wouter Mercelis and Alek Keersmaekers. 2022. An ELECTRA Model for Latin Token Tagging Tasks. In Proceedings of the Second Workshop on Language Technologies for Historical and Ancient Languages, pages 189–192, Marseille, France. European Language Resources Association.
Cite (Informal):
An ELECTRA Model for Latin Token Tagging Tasks (Mercelis & Keersmaekers, LT4HALA 2022)
PDF:
https://preview.aclanthology.org/ingestion-script-update/2022.lt4hala-1.30.pdf