Mohab El-karef

Also published as: Mohab Elkaref


2021

A Joint Training Approach to Tweet Classification and Adverse Effect Extraction and Normalization for SMM4H 2021
Mohab Elkaref | Lamiece Hassan
Proceedings of the Sixth Social Media Mining for Health (#SMM4H) Workshop and Shared Task

In this work we describe our submissions to the Social Media Mining for Health (SMM4H) 2021 Shared Task. We investigated the effectiveness of a joint training approach to Task 1, specifically the classification, extraction, and normalization of Adverse Drug Effect (ADE) mentions in English tweets. Our approach performed well on the normalization task, achieving an above-average F1 score of 24%, but less so on classification and extraction, with F1 scores of 22% and 37% respectively. Our experiments also showed that a larger dataset with more negative examples led to stronger results than a smaller, more balanced dataset, even when both datasets contained the same positive examples. Finally, we also submitted a tuned BERT model for Task 6: classification of COVID-19 tweets containing symptoms, which achieved an above-average F1 score of 96%.

2019

Recursive LSTM Tree Representation for Arc-Standard Transition-Based Dependency Parsing
Mohab Elkaref | Bernd Bohnet
Proceedings of the Third Workshop on Universal Dependencies (UDW, SyntaxFest 2019)

2015

Domain Adaptation for Dependency Parsing via Self-Training
Juntao Yu | Mohab Elkaref | Bernd Bohnet
Proceedings of the 14th International Conference on Parsing Technologies

2014

Exploring Options for Fast Domain Adaptation of Dependency Parsers
Viktor Pekar | Juntao Yu | Mohab El-karef | Bernd Bohnet
Proceedings of the First Joint Workshop on Statistical Parsing of Morphologically Rich Languages and Syntactic Analysis of Non-Canonical Languages