Susan Üsküdarlı
Also published as: Susan Uskudarli
2026
TimeRes: A Turkish Benchmark For Evaluating Temporal Understanding of Large Language Models
Habib Yağız Demir | Ümit Atlamaz | Susan Üsküdarlı
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 4: Student Research Workshop)
Temporal information is an essential part of communication, and understanding language requires processing it effectively. Despite recent advances, Large Language Models (LLMs) still struggle with temporal understanding. Existing benchmarks primarily focus on English and underexplore how linguistic structure contributes to temporal meaning. As a result, temporal understanding in languages other than English remains largely understudied. In this paper, we introduce TimeRes, a Turkish benchmark for evaluating the temporal understanding of LLMs. TimeRes investigates comprehension of Reichenbach’s temporal points and reported speech through date arithmetic. Our dataset includes 4,600 questions across 4 tasks at two levels of complexity and uses a paired question formulation to distinguish temporal discourse understanding from temporal arithmetic capabilities. We evaluated six LLMs and demonstrate that models struggle to resolve reported speech and fail to generalize across word-order variations.
TurkBench: A Benchmark for Evaluating Turkish Large Language Models
Cagri Toraman | Ahmet Kaan Sever | Ayşe Aysu Cengiz | Elif Ecem Arslan | Görkem Sevinç | Sarp Kantar | Mete Mert Birdal | Yusuf Faruk Güldemir | Ali Buğra Kanburoğlu | Sezen Felekoğlu | Birsen Şahin Kütük | Büşra Tufan | Elif Genç | Serkan Coşkun | Gupse Ekin Demir | Muhammed Emin Arayıcı | Olgun Dursun | Onur Gungor | Susan Üsküdarlı | Abdullah Topraksoy | Esra Darıcı
Proceedings of the Second Workshop on Natural Language Processing for Turkic Languages (SIGTURK 2026)
With the recent surge in the development of large language models, the need for comprehensive and language-specific evaluation benchmarks has become critical. While significant progress has been made in evaluating English-language models, benchmarks for other languages, particularly those with unique linguistic characteristics such as Turkish, remain less developed. Our study introduces TurkBench, a comprehensive benchmark designed to assess the capabilities of generative large language models in the Turkish language. TurkBench comprises 8,151 data samples across 21 distinct subtasks, organized under six main categories of evaluation: Knowledge, Language Understanding, Reasoning, Content Moderation, Turkish Grammar and Vocabulary, and Instruction Following. The diverse range of tasks and the culturally relevant data provide researchers and developers with a valuable tool for evaluating their models and identifying areas for improvement. We further publish our benchmark for online submissions at https://huggingface.co/turkbench.
2024
TURNA: A Turkish Encoder-Decoder Language Model for Enhanced Understanding and Generation
Gökçe Uludoğan | Zeynep Balal | Furkan Akkurt | Meliksah Turker | Onur Gungor | Susan Üsküdarlı
Findings of the Association for Computational Linguistics: ACL 2024
The recent advances in natural language processing have predominantly favored well-resourced English-centric models, resulting in a significant gap with low-resource languages. In this work, we introduce TURNA, a language model developed for the low-resource language Turkish that is capable of both natural language understanding and generation tasks. TURNA is pretrained with an encoder-decoder architecture based on the unified framework UL2, using a diverse corpus that we specifically curated for this purpose. We evaluated TURNA on three generation tasks and five understanding tasks for Turkish. The results show that TURNA outperforms several multilingual models on both understanding and generation tasks and competes with monolingual Turkish models on understanding tasks.
Evaluating the Quality of a Corpus Annotation Scheme Using Pretrained Language Models
Furkan Akkurt | Onur Gungor | Büşra Marşan | Tunga Gungor | Balkiz Ozturk Basaran | Arzucan Özgür | Susan Uskudarli
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Pretrained language models and large language models are increasingly used to assist in a wide variety of natural language tasks. In this work, we explore their use in evaluating the quality of alternative corpus annotation schemes. For this purpose, we analyze two alternative annotations of the Turkish BOUN treebank, versions 2.8 and 2.11, in the Universal Dependencies framework using large language models. Given a suitable prompt generated from the treebank annotations, large language models are used to recover the surface forms of sentences. Based on the idea that large language models capture the characteristics of a language, we expect the better annotation scheme to yield sentences recovered with higher success. Experiments conducted on a subset of the treebank show that the new annotation scheme (2.11) results in a recovery success rate about 2 points higher. All the code developed for this work is available at https://github.com/boun-tabi/eval-ud.
2023
TULAP - An Accessible and Sustainable Platform for Turkish Natural Language Processing Resources
Susan Uskudarli | Muhammet Şen | Furkan Akkurt | Merve Gürbüz | Onur Gungor | Arzucan Özgür | Tunga Güngör
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations
Access to natural language processing resources is essential for their continuous improvement. This can be especially challenging in educational institutions, where the software development effort required to package and release research outcomes may be overwhelming and under-recognized. Access to well-prepared and reliable research outcomes is important both for their developers and for the greater research community. This paper presents an approach to address this concern with two main goals: (1) to create an open-source, easily deployable platform where resources can be shared and explored, and (2) to use this platform to publish open-source Turkish NLP resources (datasets and tools) created by a research lab. The Turkish Natural Language Processing Platform (TULAP) was designed and developed as an easy-to-use platform for sharing dataset and tool resources that supports interactive tool demos. Numerous open-access Turkish NLP resources have been shared on TULAP. All tools are containerized to support portability for custom use. This paper describes the design, implementation, and deployment of TULAP with use cases (available at https://tulap.cmpe.boun.edu.tr/). A short video demonstrating our system is available at https://figshare.com/articles/media/TULAP_Demo/22179047.
Co-authors
- Onur Güngör 4
- Furkan Akkurt 3
- Tunga Gungor 2
- Arzucan Özgür 2
- Muhammed Emin Arayıcı 1
- Elif Ecem Arslan 1
- Ümit Atlamaz 1
- Zeynep Balal 1
- Mete Mert Birdal 1
- Ayşe Aysu Cengiz 1
- Serkan Coşkun 1
- Esra Darıcı 1
- Habib Yağız Demir 1
- Gupse Ekin Demir 1
- Olgun Dursun 1
- Sezen Felekoğlu 1
- Elif Genç 1
- Yusuf Faruk Güldemir 1
- Merve Gürbüz 1
- Ali Buğra Kanburoğlu 1
- Sarp Kantar 1
- Birsen Şahin Kütük 1
- Büşra Marşan 1
- Ahmet Kaan Sever 1
- Görkem Sevinç 1
- Abdullah Topraksoy 1
- Cagri Toraman 1
- Büşra Tufan 1
- Meliksah Turker 1
- Gökçe Uludoğan 1
- Balkız Öztürk Başaran 1
- Muhammet Şen 1