Sebastian Nehrdich


2022

pdf
SansTib, a Sanskrit - Tibetan Parallel Corpus and Bilingual Sentence Embedding Model
Sebastian Nehrdich
Proceedings of the Thirteenth Language Resources and Evaluation Conference

This paper presents the development of SansTib, a Sanskrit–Classical Tibetan parallel corpus automatically aligned at the sentence level, and a bilingual sentence embedding model. The corpus comprises 317,289 sentence pairs and 14,420,771 tokens, a considerable improvement over previous resources for these two languages. The data is incorporated into the BuddhaNexus database to make it accessible to a larger audience. The paper also presents a gold evaluation dataset and assesses the quality of the automatic alignment.

pdf
Accurate Dependency Parsing and Tagging of Latin
Sebastian Nehrdich | Oliver Hellwig
Proceedings of the Second Workshop on Language Technologies for Historical and Ancient Languages

Having access to high-quality grammatical annotations is important for downstream tasks in NLP as well as for corpus-based research. In this paper, we describe experiments with the Latin BERT word embeddings that were recently made available by Bamman and Burns (2020). We show that these embeddings produce competitive results on the low-level task of morpho-syntactic tagging. In addition, we describe a graph-based dependency parser that is trained with these embeddings and that clearly outperforms various baselines.

2018

pdf
Sanskrit Word Segmentation Using Character-level Recurrent and Convolutional Neural Networks
Oliver Hellwig | Sebastian Nehrdich
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

The paper introduces end-to-end neural network models that tokenize Sanskrit by jointly splitting compounds and resolving phonetic merges (Sandhi). Tokenization of Sanskrit depends on local phonetic and distant semantic features that are incorporated using convolutional and recurrent elements. Contrary to most previous systems, our models do not require feature engineering or external linguistic resources, but operate solely on parallel versions of raw and segmented text. The models discussed in this paper clearly improve over previous approaches to Sanskrit word segmentation. As they are language agnostic, we demonstrate that they also outperform the state of the art on the related task of German compound splitting.