2023
JCT at SemEval-2023 Tasks 12 A and 12B: Sentiment Analysis for Tweets Written in Low-resource African Languages using Various Machine Learning and Deep Learning Methods, Resampling, and HyperParameter Tuning
Ron Keinan | Yaakov Hacohen-Kerner
Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)
In this paper, we describe our submissions to the SemEval-2023 contest. We tackled Task 12, “AfriSenti-SemEval: Sentiment Analysis for Low-resource African Languages using Twitter Dataset”. We developed different models for 12 African languages and a 13th model for a multilingual dataset built from these 12 languages. We applied a wide variety of word and character n-grams weighted by their TF-IDF values, 4 classical machine learning methods, 2 deep learning methods, and 3 oversampling methods. We also used 12 sentiment lexicons and applied extensive hyperparameter tuning.
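As a rough illustration of the kind of pipeline described in this abstract, the following scikit-learn/imbalanced-learn sketch combines word and character TF-IDF n-grams with a classical classifier and simple oversampling. The n-gram ranges, classifier choice, and hyperparameters are illustrative assumptions, not the submitted configuration.

```python
# Minimal sketch (not the authors' exact pipeline): word + character TF-IDF
# n-grams feeding a classical classifier, with oversampling for class imbalance.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import FeatureUnion
from sklearn.svm import LinearSVC
from imblearn.over_sampling import RandomOverSampler
from imblearn.pipeline import Pipeline

features = FeatureUnion([
    ("word", TfidfVectorizer(analyzer="word", ngram_range=(1, 3))),
    ("char", TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5))),
])

model = Pipeline([
    ("tfidf", features),
    ("oversample", RandomOverSampler(random_state=42)),  # one of several possible resampling methods
    ("clf", LinearSVC(C=1.0)),                            # stand-in for any of the classical methods
])

# texts: list of tweets, labels: e.g. "positive" / "neutral" / "negative"
# model.fit(texts, labels); predictions = model.predict(new_texts)
```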
2022
JCT at SemEval-2022 Task 4-A: Patronism Detection in Posts Written in English using Preprocessing Methods and various Machine Learning Methods
Yaakov HaCohen-Kerner | Ilan Meyrowitsch | Matan Fchima
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
In this paper, we describe our submissions to SemEval-2022 Subtask 4-A, “Patronizing and Condescending Language Detection: Binary Classification”. We developed different models for this subtask. We applied 11 supervised machine learning methods and 9 preprocessing methods. Our best submission was a model we built with BertForSequenceClassification. Our experiments indicate that the preprocessing stage is essential for a successful model. The dataset for this subtask is highly imbalanced, and the F1-scores on the oversampled training dataset were higher than those on the original training dataset.
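For readers unfamiliar with BertForSequenceClassification, here is a minimal Hugging Face Transformers sketch of binary classification with that class. The checkpoint name, sequence length, and placeholder data are assumptions for illustration, not the submitted configuration.

```python
# Minimal sketch of binary classification with BertForSequenceClassification.
import torch
from transformers import BertTokenizerFast, BertForSequenceClassification

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

texts = ["example post one", "example post two"]   # placeholder data
labels = torch.tensor([0, 1])                       # 0 = not PCL, 1 = PCL (illustrative)

enc = tokenizer(texts, padding=True, truncation=True, max_length=128, return_tensors="pt")
outputs = model(**enc, labels=labels)               # returns loss and logits
outputs.loss.backward()                             # an optimizer step would follow in training
```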
JCT at SemEval-2022 Task 6-A: Sarcasm Detection in Tweets Written in English and Arabic using Preprocessing Methods and Word N-grams
Yaakov HaCohen-Kerner | Matan Fchima | Ilan Meyrowitsch
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
In this paper, we describe our submissions to the SemEval-2022 contest. We tackled Subtask 6-A, “iSarcasmEval: Intended Sarcasm Detection In English and Arabic – Binary Classification”. We developed different models for two languages: English and Arabic. We applied 4 supervised machine learning methods, 6 preprocessing methods for English and 3 for Arabic, and 3 oversampling methods. Our best submitted model for the English test dataset was an SVC model that balanced the dataset using SMOTE and removed stop words. For the Arabic test dataset, our best submitted model was an SVC model whose preprocessing removed elongation.
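A minimal sketch of the kind of English pipeline this abstract describes, combining TF-IDF features with stop-word removal, SMOTE oversampling, and an SVC classifier. The n-gram range, kernel, and other settings are illustrative assumptions rather than the submitted system.

```python
# Minimal sketch: stop-word-filtered TF-IDF features, SMOTE balancing, SVC classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline

model = Pipeline([
    ("tfidf", TfidfVectorizer(stop_words="english", ngram_range=(1, 2))),
    ("smote", SMOTE(random_state=42)),      # balances sarcastic vs. non-sarcastic classes
    ("svc", SVC(kernel="linear")),
])

# model.fit(train_tweets, train_labels)
# predictions = model.predict(test_tweets)
```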
2020
Creating Expert Knowledge by Relying on Language Learners: a Generic Approach for Mass-Producing Language Resources by Combining Implicit Crowdsourcing and Language Learning
Lionel Nicolas | Verena Lyding | Claudia Borg | Corina Forascu | Karën Fort | Katerina Zdravkova | Iztok Kosem | Jaka Čibej | Špela Arhar Holdt | Alice Millour | Alexander König | Christos Rodosthenous | Federico Sangati | Umair ul Hassan | Anisia Katinskaia | Anabela Barreiro | Lavinia Aparaschivei | Yaakov HaCohen-Kerner
Proceedings of the Twelfth Language Resources and Evaluation Conference
We introduce in this paper a generic approach to combine implicit crowdsourcing and language learning in order to mass-produce language resources (LRs) for any language for which a crowd of language learners can be involved. We present the approach by explaining its core paradigm that consists in pairing specific types of LRs with specific exercises, by detailing both its strengths and challenges, and by discussing how much these challenges have been addressed at present. Accordingly, we also report on on-going proof-of-concept efforts aiming at developing the first prototypical implementation of the approach in order to correct and extend an LR called ConceptNet based on the input crowdsourced from language learners. We then present an international network called the European Network for Combining Language Learning with Crowdsourcing Techniques (enetCollect) that provides the context to accelerate the implementation of this generic approach. Finally, we exemplify how it can be used in several language learning scenarios to produce a multitude of NLP resources and how it can therefore alleviate the long-standing NLP issue of the lack of LRs.
JCT at SemEval-2020 Task 12: Offensive Language Detection in Tweets Using Preprocessing Methods, Character and Word N-grams
Moshe Uzan | Yaakov HaCohen-Kerner
Proceedings of the Fourteenth Workshop on Semantic Evaluation
In this paper, we describe our submissions to the SemEval-2020 contest. We tackled Task 12, “Multilingual Offensive Language Identification in Social Media”. We developed different models for four languages: Arabic, Danish, Greek, and Turkish. We applied three supervised machine learning methods using various combinations of character and word n-gram features. In addition, we applied various combinations of basic preprocessing methods. Our best submission was a model we built for offensive language identification in Danish using Random Forest. This model was ranked at the 6th position out of 39 submissions, and its result is lower by only 0.0025 than that of the team that won 4th place using entirely non-neural methods. Our experiments indicate that character n-gram features are more helpful than word n-gram features. This phenomenon probably occurs because tweets are better characterized by characters than by words: they are short and contain various special character sequences, e.g., hashtags, shortcuts, slang words, and typos.
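The observation that character n-grams outperform word n-grams can be checked with a small experiment along the lines of the following scikit-learn sketch, which cross-validates a Random Forest over the two feature types. The n-gram ranges, forest size, and placeholder data are assumptions for illustration only.

```python
# Illustrative comparison: character n-grams vs. word n-grams as TF-IDF features
# for a Random Forest classifier, scored with cross-validated macro F1.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Placeholder data; in practice these would be the task's tweets and labels.
tweets = ["placeholder tweet number " + str(i) for i in range(20)]
labels = [i % 2 for i in range(20)]

vectorizers = {
    "char 2-5 grams": TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5)),
    "word 1-2 grams": TfidfVectorizer(analyzer="word", ngram_range=(1, 2)),
}

for name, vec in vectorizers.items():
    pipe = make_pipeline(vec, RandomForestClassifier(n_estimators=300, random_state=0))
    scores = cross_val_score(pipe, tweets, labels, cv=5, scoring="f1_macro")
    print(name, scores.mean())
```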
2019
JCTDHS at SemEval-2019 Task 5: Detection of Hate Speech in Tweets using Deep Learning Methods, Character N-gram Features, and Preprocessing Methods
Yaakov HaCohen-Kerner | Elyashiv Shayovitz | Shalom Rochman | Eli Cahn | Gal Didi | Ziv Ben-David
Proceedings of the 13th International Workshop on Semantic Evaluation
In this paper, we describe our submissions to the SemEval-2019 contest. We tackled Subtask A, “a binary classification where systems have to predict whether a tweet with a given target (women or immigrants) is hateful or not hateful”, a part of Task 5, “Multilingual detection of hate speech against immigrants and women in Twitter (hatEval)”. Our system, JCTDHS (Jerusalem College of Technology Detects Hate Speech), was developed for tweets written in English. We applied various supervised ML methods and various combinations of n-gram features using the TF-IDF scheme. In addition, we applied various combinations of eight basic preprocessing methods. Our best submission was a special bidirectional RNN, which was ranked at the 11th position out of 68 submissions.
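As a rough illustration, a bidirectional RNN classifier of the general kind mentioned in this abstract can be built in a few lines of Keras. The vocabulary size, embedding dimension, and layer sizes below are assumptions, not the configuration reported in the paper.

```python
# Minimal sketch of a bidirectional RNN (LSTM) binary tweet classifier in Keras.
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE = 20000   # assumed size of the tweet vocabulary

model = tf.keras.Sequential([
    layers.Embedding(VOCAB_SIZE, 128),        # learned token embeddings
    layers.Bidirectional(layers.LSTM(64)),    # reads each tweet in both directions
    layers.Dense(1, activation="sigmoid"),    # hateful vs. not hateful
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=5)
```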
JCTICOL at SemEval-2019 Task 6: Classifying Offensive Language in Social Media using Deep Learning Methods, Word/Character N-gram Features, and Preprocessing Methods
Yaakov HaCohen-Kerner | Ziv Ben-David | Gal Didi | Eli Cahn | Shalom Rochman | Elyashiv Shayovitz
Proceedings of the 13th International Workshop on Semantic Evaluation
In this paper, we describe our submissions to SemEval-2019 Task 6, “OffensEval: Identifying and Categorizing Offensive Language in Social Media”, in which we tackled all three sub-tasks. In our system, JCTICOL (Jerusalem College of Technology Identifies and Categorizes Offensive Language), we applied various supervised ML methods and various combinations of word/character n-gram features using the TF-IDF scheme. In addition, we applied various combinations of seven basic preprocessing methods. Our best submission, an RNN model, was ranked at the 25th position out of 65 submissions for the most complex sub-task (C).
2016
Semantically Motivated Hebrew Verb-Noun Multi-Word Expressions Identification
Chaya Liebeskind | Yaakov HaCohen-Kerner
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers
Identification of Multi-Word Expressions (MWEs) lies at the heart of many natural language processing applications. In this research, we deal with a particular type of Hebrew MWEs, Verb-Noun MWEs (VN-MWEs), which combine a verb and a noun with or without other words. Most prior work on MWE classification focused on linguistic and statistical information. In this paper, we claim that it is essential to utilize semantic information. To this end, we propose a semantically motivated indicator for classifying VN-MWEs and define features related to various semantic spaces, which we combine in a supervised classification framework. We empirically demonstrate that our semantic feature set yields better performance than the common linguistic and statistical feature sets and that combining semantic features contributes to the VN-MWE identification task.
A Lexical Resource of Hebrew Verb-Noun Multi-Word Expressions
Chaya Liebeskind | Yaakov HaCohen-Kerner
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)
A verb-noun Multi-Word Expression (MWE) is a combination of a verb and a noun, with or without other words, in which the combination has a meaning different from the meaning of the words considered separately. In this paper, we present a new lexical resource of Hebrew Verb-Noun MWEs (VN-MWEs). The VN-MWEs of this resource were manually collected and annotated from five different web resources. In addition, we analyze the lexical properties of Hebrew VN-MWEs by classifying them into three types: morphological, syntactic, and semantic. These two contributions are essential for designing algorithms for automatic VN-MWE extraction. The analysis suggests some interesting features of VN-MWEs for exploration. The lexical resource makes it possible to sample a set of positive examples of Hebrew VN-MWEs. This set of examples can either be used for training supervised algorithms or as seeds in unsupervised bootstrapping algorithms. Thus, this resource is a first step towards automatic identification of Hebrew VN-MWEs, which is important for natural language understanding, generation, and translation systems.
2015
Distinguishing between True and False Stories using various Linguistic Features
Yaakov Hacohen-Kerner | Rakefet Dilmon | Shimon Friedlich | Daniel Nisim Cohen
Proceedings of the 29th Pacific Asia Conference on Language, Information and Computation: Posters
Automatic Classification of Spoken Languages using Diverse Acoustic Features
Yaakov Hacohen-Kerner | Ruben Hagege
Proceedings of the 29th Pacific Asia Conference on Language, Information and Computation: Posters
2014
Keyphrase Extraction using Textual and Visual Features
Yaakov HaCohen-Kerner | Stefanos Vrochidis | Dimitris Liparas | Anastasia Moumtzidou | Ioannis Kompatsiaris
Proceedings of the Third Workshop on Vision and Language
2010
Detection of Simple Plagiarism in Computer Science Papers
Yaakov HaCohen-Kerner | Aharon Tayeb | Natan Ben-Dror
Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010)
2008
Combined One Sense Disambiguation of Abbreviations
Yaakov HaCohen-Kerner | Ariel Kass | Ariel Peretz
Proceedings of ACL-08: HLT, Short Papers