Muhammed Kocyigit
2022
Better Quality Estimation for Low Resource Corpus Mining
Muhammed Kocyigit | Jiho Lee | Derry Wijaya
Findings of the Association for Computational Linguistics: ACL 2022
Quality Estimation (QE) models have the potential to change how we evaluate, and perhaps even train, machine translation models. However, these models still lack the robustness needed for general adoption. We show that state-of-the-art QE models, when tested in a Parallel Corpus Mining (PCM) setting, perform unexpectedly badly due to a lack of robustness to out-of-domain examples. We propose a combination of multitask training, data augmentation, and contrastive learning to achieve better and more robust QE performance. We show that our method significantly improves QE performance on the MLQE challenge and the robustness of QE models when tested in the Parallel Corpus Mining setup. We increase the accuracy in PCM by more than 0.80, making it on par with state-of-the-art PCM methods that use millions of sentence pairs to train their models. In comparison, we use a thousand times less data, 7K parallel sentences in total, and propose a novel low-resource PCM method.
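The abstract above names contrastive learning as one ingredient of the method. As a rough illustration only (not the paper's actual objective or code), the Python sketch below shows an InfoNCE-style contrastive loss over in-batch parallel sentence pairs, where each aligned pair is a positive and all other in-batch pairings serve as negatives; the embedding size, batch size, and temperature are hypothetical placeholders.

```python
# Minimal sketch of an in-batch contrastive (InfoNCE-style) loss for
# parallel sentence pairs. Assumes src_emb[i] and tgt_emb[i] come from
# an aligned pair; everything here is illustrative, not the paper's code.
import torch
import torch.nn.functional as F

def contrastive_loss(src_emb: torch.Tensor, tgt_emb: torch.Tensor,
                     temperature: float = 0.05) -> torch.Tensor:
    src = F.normalize(src_emb, dim=-1)           # unit-length embeddings
    tgt = F.normalize(tgt_emb, dim=-1)
    logits = src @ tgt.T / temperature           # (B, B) cosine similarities
    labels = torch.arange(src.size(0))           # positives on the diagonal
    # Symmetric over both retrieval directions (src->tgt and tgt->src).
    return (F.cross_entropy(logits, labels)
            + F.cross_entropy(logits.T, labels)) / 2

if __name__ == "__main__":
    src = torch.randn(8, 256)   # stand-ins for 8 source-sentence embeddings
    tgt = torch.randn(8, 256)   # stand-ins for their aligned translations
    print(contrastive_loss(src, tgt).item())
```

Pulling aligned pairs together while pushing random in-batch pairs apart is a common way to make representations more robust for mining; the paper combines this idea with multitask training and data augmentation.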
On Measuring Social Biases in Prompt-Based Multi-Task Learning
Afra Feyza Akyürek | Sejin Paik | Muhammed Kocyigit | Seda Akbiyik | Serife Leman Runyun | Derry Wijaya
Findings of the Association for Computational Linguistics: NAACL 2022
Large language models trained on a mixture of NLP tasks that are converted into a text-to-text format using prompts can generalize to novel forms of language and handle novel tasks. A large body of work within prompt engineering attempts to understand the effects of input forms and prompts on achieving superior performance. We consider an alternative measure and inquire whether the way in which an input is encoded affects the social biases promoted in outputs. In this paper, we study T0, a large-scale multi-task text-to-text language model trained using prompt-based learning. We consider two different forms of semantically equivalent inputs: question-answer format and premise-hypothesis format. We use an existing bias benchmark for the former, BBQ, and create the first bias benchmark in natural language inference, BBNLI, with hand-written hypotheses, while also converting each benchmark into the other form. The results on the two benchmarks suggest that, given two different formulations of essentially the same input, T0 acts conspicuously more biased in the question-answering form, which it has seen during training, than in the premise-hypothesis form, which is unlike its training examples. Code and data are released at https://github.com/feyzaakyurek/bbnli.
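To make the format contrast concrete: the sketch below builds the two semantically equivalent encodings the abstract describes, a question-answer prompt (as in BBQ) and a premise-hypothesis prompt (as in BBNLI). The templates and example strings are hypothetical; the actual prompts used with T0 live in the linked repository.

```python
# Hypothetical prompt templates for the two input forms compared in the
# paper; the actual BBQ/BBNLI templates may differ.

def qa_format(context: str, question: str) -> str:
    # Question-answer form, seen during T0's training.
    return f"{context}\nQuestion: {question}\nAnswer:"

def nli_format(premise: str, hypothesis: str) -> str:
    # Premise-hypothesis form, unlike T0's training examples.
    return (f"Premise: {premise}\nHypothesis: {hypothesis}\n"
            "Does the premise entail the hypothesis? Yes, no, or maybe?")

if __name__ == "__main__":
    context = "A doctor and a nurse walked into the room."
    print(qa_format(context, "Who gave the diagnosis?"))
    print(nli_format(context, "The doctor gave the diagnosis."))
```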