Kfir Bar


2021

The IDC System for Sentiment Classification and Sarcasm Detection in Arabic
Abraham Israeli | Yotam Nahum | Shai Fine | Kfir Bar
Proceedings of the Sixth Arabic Natural Language Processing Workshop

Sentiment classification and sarcasm detection attract a lot of attention from the NLP research community. However, solving these two problems for Arabic, and on the basis of social-network data (i.e., Twitter), has received far less attention. In this paper we present dedicated solutions for the sentiment classification and sarcasm detection tasks that were introduced as part of a shared task by Abu Farha et al. (2021). We adapt existing state-of-the-art pretrained transformer models to our needs. In addition, we use a variety of machine-learning techniques, such as down-sampling, augmentation, bagging, and meta-features, to improve the models' performance. We achieve an F1-score of 0.75 on the sentiment classification problem, where the F1-score is calculated over the positive and negative classes (the neutral class is not taken into account). We achieve an F1-score of 0.66 on the sarcasm detection problem, where the F1-score is calculated over the sarcastic class only. In both cases, the reported results are evaluated on ArSarcasm-v2, an extended version of the ArSarcasm dataset (Farha and Magdy, 2020) that was introduced as part of the shared task. These results improve on the state of the art for both tasks.
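
As an illustration of the down-sampling and bagging techniques mentioned in the abstract, here is a minimal Python sketch. It is not the authors' system: a TF-IDF + logistic-regression classifier stands in for the fine-tuned Arabic transformer, and the sampling scheme and all names are illustrative assumptions.

# Sketch of down-sampling + bagging for imbalanced sentiment/sarcasm labels.
# A TF-IDF + logistic-regression pipeline stands in for the fine-tuned
# Arabic transformer described in the paper; data and seeds are illustrative.
import random
from collections import Counter

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline


def downsample(texts, labels, seed=0):
    """Randomly drop majority-class examples so all classes are equally sized."""
    rng = random.Random(seed)
    by_label = {}
    for t, y in zip(texts, labels):
        by_label.setdefault(y, []).append(t)
    n = min(len(items) for items in by_label.values())
    out_texts, out_labels = [], []
    for y, items in by_label.items():
        for t in rng.sample(items, n):
            out_texts.append(t)
            out_labels.append(y)
    return out_texts, out_labels


def train_bagged(texts, labels, n_models=5):
    """Train several classifiers, each on its own down-sampled view of the data."""
    models = []
    for seed in range(n_models):
        xs, ys = downsample(texts, labels, seed=seed)
        clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
        clf.fit(xs, ys)
        models.append(clf)
    return models


def predict_majority(models, texts):
    """Majority vote across the bagged models."""
    votes = [m.predict(texts) for m in models]
    return [Counter(column).most_common(1)[0][0] for column in zip(*votes)]

Each ensemble member sees a different balanced subset, so rare classes (e.g., the sarcastic class) carry more weight per model, and the vote smooths out the variance this introduces.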

Supporting Undotted Arabic with Pre-trained Language Models
Aviad Rom | Kfir Bar
Proceedings of The Fourth International Conference on Natural Language and Speech Processing (ICNLSP 2021)

2020

Transliteration of Judeo-Arabic Texts into Arabic Script Using Recurrent Neural Networks
Ori Terner | Kfir Bar | Nachum Dershowitz
Proceedings of the Fifth Arabic Natural Language Processing Workshop

We trained a model to automatically transliterate Judeo-Arabic texts into Arabic script, enabling Arabic readers to access those writings. We employ a recurrent neural network (RNN) combined with the connectionist temporal classification (CTC) loss to handle unequal input/output lengths. This requires adjustments to the training data to avoid input sequences that are shorter than their corresponding outputs. We also use a pretraining stage with a different loss function to improve network convergence. Since only a single source of parallel text was available for training, we take advantage of the possibility of generating data synthetically. We train a model that is able to memorize words in the output language and that also uses context to resolve ambiguities in the transliteration. We improve on the baseline character error rate of 9.5%, achieving a 2% error rate with our best configuration. To measure the contribution of context to learning, we also tested word-shuffled data, for which the error rises to 2.5%.
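
For readers unfamiliar with the RNN + CTC setup mentioned above, the following PyTorch sketch shows the general shape of such a model and why input sequences must be at least as long as their targets. The architecture, dimensions, and toy batch are assumptions for illustration, not the paper's configuration.

# Minimal character-level RNN transliteration model trained with CTC loss.
# Vocabulary sizes, hidden dimensions, and the toy batch are illustrative.
import torch
import torch.nn as nn


class CTCTransliterator(nn.Module):
    def __init__(self, n_src_chars, n_tgt_chars, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(n_src_chars, 64)
        self.rnn = nn.LSTM(64, hidden, bidirectional=True, batch_first=True)
        # One extra output class for the CTC blank symbol (index 0 here).
        self.proj = nn.Linear(2 * hidden, n_tgt_chars + 1)

    def forward(self, src):                      # src: (batch, src_len)
        h, _ = self.rnn(self.embed(src))         # (batch, src_len, 2*hidden)
        return self.proj(h).log_softmax(dim=-1)  # (batch, src_len, classes)


# Toy batch: CTC requires each input to be at least as long as its target,
# which is why the abstract mentions adjusting the training data.
model = CTCTransliterator(n_src_chars=40, n_tgt_chars=36)
ctc = nn.CTCLoss(blank=0, zero_infinity=True)

src = torch.randint(1, 40, (2, 12))              # two source sequences, length 12
tgt = torch.randint(1, 37, (2, 8))               # two target sequences, length 8
log_probs = model(src).transpose(0, 1)           # CTCLoss expects (T, batch, C)
input_lens = torch.full((2,), 12, dtype=torch.long)
target_lens = torch.full((2,), 8, dtype=torch.long)

loss = ctc(log_probs, tgt, input_lens, target_lens)
loss.backward()

Because CTC emits at most one label per input step, any training pair whose source is shorter than its transliteration must be filtered or padded, as the abstract notes.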

2019

Semantic Characteristics of Schizophrenic Speech
Kfir Bar | Vered Zilberstein | Ido Ziv | Heli Baram | Nachum Dershowitz | Samuel Itzikowitz | Eiran Vadim Harel
Proceedings of the Sixth Workshop on Computational Linguistics and Clinical Psychology

Natural language processing tools are used to automatically detect disturbances in the transcribed speech of schizophrenia inpatients who speak Hebrew. We measure topic mutation over time and show that controls maintain more cohesive speech than inpatients. We also examine differences in how inpatients and controls use adjectives and adverbs to describe content words, and show that those used by controls are more common than those used by inpatients. We provide experimental results and show their potential for automatically detecting schizophrenia in patients based solely on their speech patterns.
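
One simple way to operationalize the "topic mutation over time" idea is to score the similarity of each utterance to the one before it; low average similarity indicates less cohesive speech. The sketch below uses a TF-IDF representation purely as a stand-in for whatever semantic features the paper actually uses, and the transcript is invented.

# Illustrative proxy for topic cohesion across a transcript: cosine similarity
# between consecutive utterances under a simple TF-IDF representation.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def cohesion_scores(utterances):
    """Similarity of each utterance to the one immediately preceding it."""
    vectors = TfidfVectorizer().fit_transform(utterances)
    sims = cosine_similarity(vectors[:-1], vectors[1:])
    return np.diag(sims)


transcript = [
    "I went to the market this morning",
    "The market was crowded so I left early",
    "My brother called about the weekend",
]
print(cohesion_scores(transcript).mean())  # lower mean = more topic mutation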

2016

SLS at SemEval-2016 Task 3: Neural-based Approaches for Ranking in Community Question Answering
Mitra Mohtarami | Yonatan Belinkov | Wei-Ning Hsu | Yu Zhang | Tao Lei | Kfir Bar | Scott Cyphers | Jim Glass
Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)

2014

The Tel Aviv University System for the Code-Switching Workshop Shared Task
Kfir Bar | Nachum Dershowitz
Proceedings of the First Workshop on Computational Approaches to Code Switching

2012

Building an Arabic Multiword Expressions Repository
Abdelati Hawwari | Kfir Bar | Mona Diab
Proceedings of the ACL 2012 Joint Workshop on Statistical Parsing and Semantic Processing of Morphologically Rich Languages

Deriving Paraphrases for Highly Inflected Languages from Comparable Documents
Kfir Bar | Nachum Dershowitz
Proceedings of COLING 2012

2010

Tel Aviv University’s system description for IWSLT 2010
Kfir Bar | Nachum Dershowitz
Proceedings of the 7th International Workshop on Spoken Language Translation: Evaluation Campaign

Using Synonyms for Arabic-to-English Example-Based Translation
Kfir Bar | Nachum Dershowitz
Proceedings of the 9th Conference of the Association for Machine Translation in the Americas: Student Research Workshop

An implementation of a non-structural example-based machine translation system that translates sentences from Arabic to English, using a parallel corpus aligned at the sentence level, is described. Source-language synonyms were derived automatically and used to help locate potential translation examples for fragments of a given input sentence. The smaller the parallel corpus, the greater the contribution provided by synonyms. Taking into account how relevant a potential match's subject matter is to the input also contributes to the quality of the final results.
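
The synonym-assisted example retrieval described above can be sketched as follows: a source fragment matches a corpus sentence if each of its words appears either verbatim or as one of its synonyms. The tiny corpus, synonym table, and matching rule here are invented for illustration and are much simpler than the paper's actual system.

# Hedged sketch of synonym-assisted retrieval of translation examples.
def expand(word, synonyms):
    """The word itself plus any listed source-language synonyms."""
    return {word} | set(synonyms.get(word, []))


def find_examples(fragment, parallel_corpus, synonyms):
    """Return (source, target) pairs whose source sentence covers every
    fragment word, allowing synonym substitutions."""
    matches = []
    for src, tgt in parallel_corpus:
        src_words = set(src.split())
        if all(expand(w, synonyms) & src_words for w in fragment.split()):
            matches.append((src, tgt))
    return matches


corpus = [("ذهب الولد الى المدرسة", "the boy went to school")]
synonyms = {"مضى": ["ذهب"]}
print(find_examples("مضى الولد", corpus, synonyms))

With a small corpus, exact fragment matches are rare, which is why admitting synonym substitutions recovers proportionally more usable examples.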