Taelin Karidi
2021
On the Relation between Syntactic Divergence and Zero-Shot Performance
Ofir Arviv | Dmitry Nikolaev | Taelin Karidi | Omri Abend
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
We explore the link between the extent to which syntactic relations are preserved in translation and the ease of correctly constructing a parse tree in a zero-shot setting. While previous work suggests such a relation, it tends to focus on the macro level and not on the level of individual edges—a gap we aim to address. As a test case, we take the transfer of Universal Dependencies (UD) parsing from English to a diverse set of languages and conduct two sets of experiments. In one, we analyze zero-shot performance based on the extent to which English source edges are preserved in translation. In another, we apply three linguistically motivated transformations to UD, creating more cross-lingually stable versions of it, and assess their zero-shot parsability. In order to compare parsing performance across different schemes, we perform extrinsic evaluation on the downstream task of cross-lingual relation extraction (RE) using a subset of a standard English RE benchmark translated to Russian and Korean. In both sets of experiments, our results suggest a strong relation between cross-lingual stability and zero-shot parsing performance.
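A rough illustration of the per-edge analysis described above (not the paper's code): the sketch below computes labeled recall of a zero-shot parser separately for gold target-language edges whose English source edge was preserved in translation and for the rest. The function name and data layout are assumptions.

```python
# Minimal sketch, assumed data layout: edges are (head, dependent, relation)
# triples with token indices; `preserved` is the subset of gold edges whose
# English source edge survived translation.
def recall_by_preservation(gold_edges, pred_edges, preserved):
    pred = set(pred_edges)
    hits, totals = {True: 0, False: 0}, {True: 0, False: 0}
    for edge in gold_edges:
        kept = edge in preserved
        totals[kept] += 1
        hits[kept] += edge in pred
    return {("preserved" if k else "divergent"): hits[k] / totals[k]
            for k in totals if totals[k]}
```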
Putting Words in BERT’s Mouth: Navigating Contextualized Vector Spaces with Pseudowords
Taelin Karidi | Yichu Zhou | Nathan Schneider | Omri Abend | Vivek Srikumar
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
We present a method for exploring regions around individual points in a contextualized vector space (particularly, BERT space), as a way to investigate how these regions correspond to word senses. By inducing a contextualized “pseudoword” vector as a stand-in for a static embedding in the input layer, and then performing masked prediction of a word in the sentence, we are able to investigate the geometry of the BERT-space in a controlled manner around individual instances. Using our method on a set of carefully constructed sentences targeting highly ambiguous English words, we find substantial regularity in the contextualized space, with regions that correspond to distinct word senses; but between these regions there are occasionally “sense voids”—regions that do not correspond to any intelligible sense.
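An illustrative sketch of this kind of probe using HuggingFace Transformers (not the paper's method or code): here the "pseudoword" is simply a convex combination of two static word-piece embeddings, whereas the paper induces the vector; the model name, sentence, and sense words are placeholders. The pseudoword is substituted into the input layer and masked predictions are read off at another position.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

emb = model.get_input_embeddings()  # static word-piece embedding table

# The "bank" slot will be overwritten with a pseudoword; we predict the [MASK] slot.
sentence = "He sat on the bank of the [MASK] ."
enc = tok(sentence, return_tensors="pt")
ids = enc["input_ids"][0]
bank_pos = (ids == tok.convert_tokens_to_ids("bank")).nonzero()[0, 0]
mask_pos = (ids == tok.mask_token_id).nonzero()[0, 0]

with torch.no_grad():
    # Two static vectors whose interpolation stands in for the pseudoword.
    v_bank = emb.weight[tok.convert_tokens_to_ids("bank")]
    v_money = emb.weight[tok.convert_tokens_to_ids("money")]
    for alpha in (0.0, 0.5, 1.0):
        pseudo = (1 - alpha) * v_bank + alpha * v_money
        inputs_embeds = emb(enc["input_ids"]).clone()
        inputs_embeds[0, bank_pos] = pseudo       # inject pseudoword at input layer
        logits = model(inputs_embeds=inputs_embeds,
                       attention_mask=enc["attention_mask"]).logits
        top = logits[0, mask_pos].topk(5).indices.tolist()
        print(alpha, tok.convert_ids_to_tokens(top))
```

Moving alpha through the interval shifts the masked-slot predictions, giving a controlled way to walk between regions of the contextualized space.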
2020
Fine-Grained Analysis of Cross-Linguistic Syntactic Divergences
Dmitry Nikolaev | Ofir Arviv | Taelin Karidi | Neta Kenneth | Veronika Mitnik | Lilja Maria Saeboe | Omri Abend
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
The patterns in which the syntax of different languages converges and diverges are often used to inform work on cross-lingual transfer. Nevertheless, little empirical work has been done on quantifying the prevalence of different syntactic divergences across language pairs. We propose a framework for extracting divergence patterns for any language pair from a parallel corpus, building on Universal Dependencies. We show that our framework provides a detailed picture of cross-language divergences, generalizes previous approaches, and lends itself to full automation. We further present a novel dataset, a manually word-aligned subset of the Parallel UD corpus in five languages, and use it to perform a detailed corpus study. We demonstrate the usefulness of the resulting analysis by showing that it can help account for performance patterns of a cross-lingual parser.
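A minimal sketch of the edge-level comparison such a framework performs (assumed names and data layout, not the released code): each English UD edge is mapped through a 1-to-1 word alignment and classified by whether the target-language parse preserves it, relabels it, reverses it, or has no counterpart.

```python
# Assumes parses given as sets of (head, dependent, relation) triples with
# 1-based token indices, and a 1-to-1 alignment {source_index: target_index}.
def edge_divergences(src_edges, tgt_edges, alignment):
    tgt_index = {(h, d): rel for h, d, rel in tgt_edges}
    report = []
    for h, d, rel in src_edges:
        ah, ad = alignment.get(h), alignment.get(d)
        if ah is None or ad is None:
            report.append((rel, "unaligned"))              # no counterpart
        elif tgt_index.get((ah, ad)) == rel:
            report.append((rel, "preserved"))              # same edge, same label
        elif (ah, ad) in tgt_index:
            report.append((rel, "relabeled:" + tgt_index[(ah, ad)]))
        elif (ad, ah) in tgt_index:
            report.append((rel, "reversed"))               # head/dependent flipped
        else:
            report.append((rel, "other"))                  # structural divergence
    return report

# Toy example: English "I like dogs" vs. a target parse with a flipped obj edge.
src = {(2, 1, "nsubj"), (2, 3, "obj")}
tgt = {(2, 1, "nsubj"), (3, 2, "obj")}
print(edge_divergences(src, tgt, {1: 1, 2: 2, 3: 3}))
```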
Co-authors
- Omri Abend 3
- Dmitry Nikolaev 2
- Ofir Arviv 2
- Neta Kenneth 1
- Veronika Mitnik 1