Miriam Fernandez
2020
Exploiting Citation Knowledge in Personalised Recommendation of Recent Scientific Publications
Anita Khadka | Iván Cantador | Miriam Fernandez
Proceedings of the Twelfth Language Resources and Evaluation Conference
In this paper we address the problem of providing personalised recommendations of recent scientific publications to a particular user, and explore the use of citation knowledge to do so. For this purpose, we have generated a novel dataset that captures authors’ publication history and is enriched with different forms of paper citation knowledge, namely citation graphs, citation positions, citation contexts, and citation types. Through a number of empirical experiments on this dataset, we show that exploiting the extracted knowledge, particularly the citation type, is a promising approach for recommending recently published papers that may not have been cited yet. The dataset, which we make publicly available, also represents a valuable resource for further research on academic information retrieval and filtering.
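One way to picture how citation types can inform recommendation of recent, still-uncited papers is to weight a content-based user profile by how each cited work was used. The sketch below is only a minimal illustration of that idea under assumed citation-type labels, weights, and a bag-of-words similarity; it is not the models or features used in the paper.

```python
# Minimal content-based sketch of weighting a user profile by citation type.
# Citation-type labels, weights and the similarity measure are illustrative
# assumptions; the paper's actual approach may differ.
from collections import Counter

# Assumed weights: citations that "use" or "extend" prior work are taken to
# signal stronger interest than purely "background" citations.
TYPE_WEIGHTS = {"uses": 3.0, "extends": 2.0, "background": 1.0}

def profile_from_history(cited_papers):
    """Build a weighted bag-of-words profile from papers a user has cited.

    `cited_papers` is a list of (abstract_text, citation_type) pairs.
    """
    profile = Counter()
    for text, ctype in cited_papers:
        weight = TYPE_WEIGHTS.get(ctype, 1.0)
        for token in text.lower().split():
            profile[token] += weight
    return profile

def score(candidate_text, profile):
    """Score a recent, possibly uncited candidate paper against the profile."""
    return sum(profile[token] for token in candidate_text.lower().split())

# Hypothetical publication history and candidate pool.
history = [
    ("graph neural networks for citation recommendation", "uses"),
    ("survey of sentiment analysis methods", "background"),
]
candidates = [
    "citation aware recommendation with graph embeddings",
    "a new benchmark for machine translation",
]
profile = profile_from_history(history)
ranked = sorted(candidates, key=lambda c: score(c, profile), reverse=True)
print(ranked[0])  # the citation/graph paper ranks first
```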
2019
SenZi: A Sentiment Analysis Lexicon for the Latinised Arabic (Arabizi)
Taha Tobaili | Miriam Fernandez | Harith Alani | Sanaa Sharafeddine | Hazem Hajj | Goran Glavaš
Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2019)
Arabizi is an informal written form of dialectal Arabic transcribed in Latin alphanumeric characters. It is popular on chat platforms and social media, yet it suffers from a severe lack of natural language processing (NLP) resources. As such, texts written in Arabizi are often disregarded in sentiment analysis tasks for Arabic. In this paper we describe the creation of a sentiment lexicon for Arabizi, enriched with word embeddings. The result is a new Arabizi lexicon consisting of 11.3K positive and 13.3K negative words. We evaluated this lexicon by classifying the sentiment of Arabizi tweets, achieving an F1-score of 0.72. We provide a detailed error analysis to present the challenges that affect sentiment analysis of Arabizi.
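A sentiment lexicon like this is typically applied by matching tweet tokens against the positive and negative word lists. The sketch below shows that general idea with a tiny hypothetical Arabizi sample; it is not the released 11.3K/13.3K-entry SenZi resource, nor the authors' exact scoring scheme.

```python
# Minimal lexicon-based sentiment scoring via plain token matching.
# The word lists are a tiny hypothetical sample, not the SenZi lexicon itself.
POSITIVE = {"7elo", "mni7", "habibi"}   # e.g. "nice", "good", "dear"
NEGATIVE = {"za3lan", "ta3ban"}         # e.g. "upset", "tired"

def classify(tweet: str) -> str:
    """Label a tweet as positive, negative or neutral by counting lexicon hits."""
    tokens = tweet.lower().split()
    pos = sum(token in POSITIVE for token in tokens)
    neg = sum(token in NEGATIVE for token in tokens)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

print(classify("el jaw 7elo ktir"))  # -> positive
```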
2014
On Stopwords, Filtering and Data Sparsity for Sentiment Analysis of Twitter
Hassan Saif | Miriam Fernandez | Yulan He | Harith Alani
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)
Sentiment classification over Twitter is usually affected by the noisy nature (abbreviations, irregular forms) of tweet data. A popular procedure for reducing the noise of textual data is to remove stopwords, either by using pre-compiled stopword lists or by more sophisticated methods for dynamic stopword identification. However, the effectiveness of removing stopwords in the context of Twitter sentiment classification has been debated in recent years. In this paper we investigate whether removing stopwords helps or hampers the effectiveness of Twitter sentiment classification methods. To this end, we apply six different stopword identification methods to Twitter data from six different datasets and observe how removing stopwords affects two well-known supervised sentiment classification methods. We assess the impact of removing stopwords by observing fluctuations in the level of data sparsity, the size of the classifier’s feature space, and its classification performance. Our results show that using pre-compiled lists of stopwords negatively impacts the performance of Twitter sentiment classification approaches. On the other hand, dynamically generating stopword lists by removing infrequent terms that appear only once in the corpus appears to be the optimal method for maintaining high classification performance while reducing data sparsity and shrinking the feature space.
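The best-performing strategy described above, building the stopword list dynamically from terms that occur only once in the corpus, can be sketched as follows. The whitespace tokenisation, toy tweets, and function names are illustrative assumptions, not the authors' implementation.

```python
from collections import Counter

def singleton_stopwords(corpus):
    """Treat every term that appears exactly once in the corpus as a stopword.

    This mirrors the dynamic stopword strategy described in the abstract;
    whitespace tokenisation is a simplifying assumption.
    """
    counts = Counter(token for tweet in corpus for token in tweet.lower().split())
    return {term for term, freq in counts.items() if freq == 1}

def remove_stopwords(corpus, stopwords):
    """Return the corpus with the given stopwords filtered out of each tweet."""
    return [" ".join(t for t in tweet.lower().split() if t not in stopwords)
            for tweet in corpus]

# Hypothetical toy corpus, only to show the effect on the feature space.
tweets = [
    "loving the new phone so much",
    "the battery life is terrible",
    "new update is so so buggy",
]
stops = singleton_stopwords(tweets)
filtered = remove_stopwords(tweets, stops)

vocab_before = {t for tw in tweets for t in tw.split()}
vocab_after = {t for tw in filtered for t in tw.split()}
print(f"feature space: {len(vocab_before)} -> {len(vocab_after)} terms")
```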