2025
Enhancing AMR Parsing with Group Relative Policy Optimization
Botond Barta | Endre Hamerlik | Milán Nyist | Masato Ito | Judit Acs
Proceedings of the 1st Joint Workshop on Large Language Models and Structure Modeling (XLLM 2025)
We investigate the capabilities of the openly available Llama 3.2 1B language model for Abstract Meaning Representation (AMR) parsing through supervised fine-tuning, further enhanced by reinforcement learning via Group Relative Policy Optimization (GRPO). Existing supervised methods for AMR parsing face limitations due to static loss functions and challenges in capturing complex semantic phenomena. To address this, our GRPO-based approach explicitly optimizes fine-grained semantic rewards, including Smatch scores, frame-argument correctness, and structural validity of logical operations. Experimental results show that supervised fine-tuning alone establishes Llama as a capable English AMR parser, and subsequent GRPO fine-tuning further improves its performance. Our final model achieves higher Smatch scores, consistently respects critical low-level semantic constraints, and outperforms existing parsers on high-level semantic evaluation metrics across diverse linguistic phenomena.
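As a rough illustration of the reward shaping described in the abstract, the sketch below combines a Smatch-style score, a frame-argument signal, and a logical-structure check into a single reward and computes GRPO's group-relative advantages. The component names, weights, and example values are assumptions for illustration, not the paper's implementation.

```python
# Minimal, self-contained sketch of GRPO-style reward shaping for AMR parsing.
# Component names and weights are illustrative assumptions, not the paper's code.

def composite_reward(smatch_f1: float, frame_arg_acc: float, logic_valid: bool) -> float:
    """Combine fine-grained semantic signals into a single scalar reward.

    smatch_f1     -- Smatch F1 of the predicted AMR against the gold graph, in [0, 1]
    frame_arg_acc -- fraction of frame-argument pairs that are correct, in [0, 1]
    logic_valid   -- whether logical operations form a structurally valid graph
    """
    # Illustrative weighting; the paper does not specify these coefficients here.
    return 0.6 * smatch_f1 + 0.3 * frame_arg_acc + 0.1 * float(logic_valid)


def group_relative_advantages(rewards: list[float]) -> list[float]:
    """GRPO advantage: standardize each reward against its sampled group."""
    mean = sum(rewards) / len(rewards)
    std = (sum((r - mean) ** 2 for r in rewards) / len(rewards)) ** 0.5
    return [(r - mean) / (std + 1e-8) for r in rewards]


# Example: rewards for a group of 4 sampled parses of the same sentence.
group = [composite_reward(0.82, 0.9, True),
         composite_reward(0.74, 0.8, True),
         composite_reward(0.61, 0.7, False),
         composite_reward(0.55, 0.5, True)]
print(group_relative_advantages(group))
```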
2024
From News to Summaries: Building a Hungarian Corpus for Extractive and Abstractive Summarization
Botond Barta | Dorina Lakatos | Attila Nagy | Milán Konor Nyist | Judit Ács
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Training summarization models requires substantial amounts of training data. However, for less-resourced languages such as Hungarian, openly available models and datasets are notably scarce. To address this gap, our paper introduces an open-source Hungarian corpus suitable for training abstractive and extractive summarization models. The dataset is assembled from segments of the Common Crawl corpus, which undergo thorough cleaning, preprocessing, and deduplication. In addition to abstractive summarization, we generate sentence-level labels for extractive summarization using sentence similarity. We train baseline models for both extractive and abstractive summarization on the collected dataset. To demonstrate the effectiveness of the trained models, we perform both quantitative and qualitative evaluation. Our models and dataset will be made publicly available, encouraging replication, further research, and real-world applications across various domains.
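One common way to derive extractive labels from an abstractive reference, as the abstract describes, is to mark the article sentences most similar to the summary. The sketch below uses a toy lexical-overlap similarity and a greedy top-k selection; the paper's actual similarity measure and labeling procedure may differ.

```python
# Sketch: derive binary extractive labels from an abstractive reference summary.
# The overlap similarity and top-k selection are illustrative assumptions.

def similarity(a: str, b: str) -> float:
    """Toy lexical-overlap (Jaccard) similarity between two sentences."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(ta | tb), 1)


def extractive_labels(article_sents: list[str], summary_sents: list[str], top_k: int = 3) -> list[int]:
    """Label the article sentences most similar to the reference summary with 1, others with 0."""
    scores = [max(similarity(s, ref) for ref in summary_sents) for s in article_sents]
    chosen = sorted(range(len(article_sents)), key=lambda i: scores[i], reverse=True)[:top_k]
    return [1 if i in chosen else 0 for i in range(len(article_sents))]


article = ["The government announced a new policy.",
           "Critics say the policy is too vague.",
           "The weather was sunny in Budapest."]
summary = ["A new government policy drew criticism for being vague."]
print(extractive_labels(article, summary, top_k=2))  # e.g. [1, 1, 0]
```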
2023
TreeSwap: Data Augmentation for Machine Translation via Dependency Subtree Swapping
Attila Nagy | Dorina Lakatos | Botond Barta | Judit Ács
Proceedings of the 14th International Conference on Recent Advances in Natural Language Processing
Data augmentation methods for neural machine translation are particularly useful when only a limited amount of training data is available, which is often the case for low-resource languages. We introduce a novel augmentation method that generates new sentences by swapping objects and subjects across bisentences. The swap is performed simultaneously in the source and target sentences, based on their dependency parse trees. We name this method TreeSwap. Our results show that TreeSwap achieves consistent improvements over baseline models for 4 language pairs in both directions on resource-constrained datasets. We also explore domain-specific corpora, but find that our method does not yield significant improvements on law, medical, and IT data. We report the scores of similar augmentation methods and find that TreeSwap performs comparably. We also analyze the generated sentences qualitatively and find that the augmentation produces a correct translation in most cases. Our code is available on GitHub.
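To make the subtree-swapping idea concrete, the following simplified, monolingual sketch swaps the direct-object subtrees of two English sentences using spaCy dependency parses. TreeSwap itself performs the swap simultaneously on aligned source and target sentences; the model name, the restriction to direct objects, and the naive whitespace handling here are illustrative assumptions.

```python
# Simplified, monolingual illustration of a TreeSwap-style operation: swap the
# direct-object subtrees of two sentences based on their dependency parses.
# Assumes the spaCy model "en_core_web_sm" is installed; whitespace handling is naive.
import spacy

nlp = spacy.load("en_core_web_sm")


def object_span(doc):
    """Return the (start, end) token indices of the first direct-object subtree, or None."""
    for token in doc:
        if token.dep_ == "dobj":
            subtree = list(token.subtree)
            return subtree[0].i, subtree[-1].i + 1
    return None


def swap_objects(sent_a: str, sent_b: str):
    """Create two new sentences by exchanging the object subtrees of sent_a and sent_b."""
    doc_a, doc_b = nlp(sent_a), nlp(sent_b)
    span_a, span_b = object_span(doc_a), object_span(doc_b)
    if span_a is None or span_b is None:
        return None  # only sentences with a direct object are eligible
    obj_a = doc_a[span_a[0]:span_a[1]].text
    obj_b = doc_b[span_b[0]:span_b[1]].text
    new_a = f"{doc_a[:span_a[0]].text} {obj_b} {doc_a[span_a[1]:].text}"
    new_b = f"{doc_b[:span_b[0]].text} {obj_a} {doc_b[span_b[1]:].text}"
    return new_a.strip(), new_b.strip()


print(swap_objects("The chef cooked a delicious meal.",
                   "The student solved a hard problem."))
```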
2021
SIGMORPHON 2021 Shared Task on Morphological Reinflection: Generalization Across Languages
Tiago Pimentel | Maria Ryskina | Sabrina J. Mielke | Shijie Wu | Eleanor Chodroff | Brian Leonard | Garrett Nicolai | Yustinus Ghanggo Ate | Salam Khalifa | Nizar Habash | Charbel El-Khaissi | Omer Goldman | Michael Gasser | William Lane | Matt Coler | Arturo Oncevay | Jaime Rafael Montoya Samame | Gema Celeste Silva Villegas | Adam Ek | Jean-Philippe Bernardy | Andrey Shcherbakov | Aziyana Bayyr-ool | Karina Sheifer | Sofya Ganieva | Matvey Plugaryov | Elena Klyachko | Ali Salehi | Andrew Krizhanovsky | Natalia Krizhanovsky | Clara Vania | Sardana Ivanova | Aelita Salchak | Christopher Straughn | Zoey Liu | Jonathan North Washington | Duygu Ataman | Witold Kieraś | Marcin Woliński | Totok Suhardijanto | Niklas Stoehr | Zahroh Nuriah | Shyam Ratan | Francis M. Tyers | Edoardo M. Ponti | Grant Aiton | Richard J. Hatcher | Emily Prud’hommeaux | Ritesh Kumar | Mans Hulden | Botond Barta | Dorina Lakatos | Gábor Szolnok | Judit Ács | Mohit Raj | David Yarowsky | Ryan Cotterell | Ben Ambridge | Ekaterina Vylomova
Proceedings of the 18th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology
This year’s iteration of the SIGMORPHON Shared Task on morphological reinflection focuses on typological diversity and cross-lingual variation of morphosyntactic features. In terms of the task, we enrich UniMorph with new data for 32 languages from 13 language families, with most of them being under-resourced: Kunwinjku, Classical Syriac, Arabic (Modern Standard, Egyptian, Gulf), Hebrew, Amharic, Aymara, Magahi, Braj, Kurdish (Central, Northern, Southern), Polish, Karelian, Livvi, Ludic, Veps, Võro, Evenki, Xibe, Tuvan, Sakha, Turkish, Indonesian, Kodi, Seneca, Asháninka, Yanesha, Chukchi, Itelmen, Eibela. We evaluate six systems on the new data and conduct an extensive error analysis of the systems’ predictions. Transformer-based models generally demonstrate superior performance on the majority of languages, achieving >90% accuracy on 65% of them. The languages on which systems yielded low accuracy are mainly under-resourced, with a limited amount of data. Most errors made by the systems are due to allomorphy, honorificity, and form variation. In addition, we observe that systems especially struggle to inflect multiword lemmas. The systems also produce misspelled forms or end up in repetitive loops (e.g., RNN-based models). Finally, we report a large drop in systems’ performance on previously unseen lemmas.
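For readers unfamiliar with the task, a reinflection instance maps a lemma and a UniMorph-style feature bundle to the inflected form. The tiny English examples below are purely illustrative and are not drawn from the shared task data.

```python
# Illustrative reinflection instances: (lemma, feature bundle) -> inflected form.
# UniMorph-style tags shown; these rows are examples only, not shared task data.
examples = [
    ("run",   "V;PST",         "ran"),      # past tense
    ("run",   "V;V.PTCP;PRS",  "running"),  # present participle
    ("goose", "N;PL",          "geese"),    # plural noun
]

for lemma, tags, form in examples:
    print(f"{lemma}\t{tags}\t{form}")
```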
BME Submission for SIGMORPHON 2021 Shared Task 0. A Three Step Training Approach with Data Augmentation for Morphological Inflection
Gábor Szolnok | Botond Barta | Dorina Lakatos | Judit Ács
Proceedings of the 18th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology
We present the BME submission for the SIGMORPHON 2021 Task 0 Part 1, the Generalization Across Typologically Diverse Languages shared task. We use an LSTM encoder-decoder model with three-step training: it is first trained on all languages, then fine-tuned on each language family, and finally fine-tuned on individual languages. We use different data augmentation techniques in the first two steps. Our system outperformed the only other submission. Although it remains worse than the Transformer baseline released by the organizers, our model is simpler and our data augmentation techniques are easily applicable to new languages. We perform ablation studies and show that the augmentation techniques and the three training steps often help but sometimes have a negative effect. Our code is publicly available.
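The three-step schedule (all languages, then language family, then individual language) can be summarized as the following training-loop sketch. The model class, data loader, and augmentation function are trivial stand-ins introduced here for illustration; only the schedule itself reflects the abstract.

```python
# Sketch of the three-step fine-tuning schedule: (1) train on all languages,
# (2) fine-tune per language family, (3) fine-tune per individual language.
# Model, load_examples, and augment are stand-ins, not the submission's code.
import copy


class Model:
    def __init__(self): self.history = []
    def fit(self, data): self.history.append(len(data))   # stand-in for real training
    def copy(self): return copy.deepcopy(self)


def load_examples(lang): return [f"{lang}-example"]        # stand-in data loader
def augment(examples): return examples * 2                 # stand-in augmentation


def three_step_training(languages_by_family):
    model = Model()

    # Step 1: joint training on all languages, with augmentation.
    model.fit([ex for langs in languages_by_family.values()
               for lang in langs for ex in augment(load_examples(lang))])

    per_language = {}
    for family, langs in languages_by_family.items():
        # Step 2: fine-tune a copy on the whole language family, with augmentation.
        family_model = model.copy()
        family_model.fit([ex for lang in langs for ex in augment(load_examples(lang))])

        # Step 3: fine-tune a further copy on each individual language.
        for lang in langs:
            lang_model = family_model.copy()
            lang_model.fit(load_examples(lang))
            per_language[lang] = lang_model
    return per_language


print(three_step_training({"uralic": ["hun", "fin"], "turkic": ["tur"]}).keys())
```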