2020
Proceedings of the 3rd International Workshop on Rumours and Deception in Social Media (RDSM)
Ahmet Aker | Arkaitz Zubiaga
Proceedings of the 3rd International Workshop on Rumours and Deception in Social Media (RDSM)
2019
Identification of Good and Bad News on Twitter
Piush Aggarwal | Ahmet Aker
Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2019)
Social media plays a major role in disseminating news, both good and bad. However, studies show that news in general has a significant impact on our mental state, and that this influence is greater for bad news. Ideally, we would have a tool that helps filter out the type of news we do not want to consume. In this paper, we provide the basis for such a tool, focusing on Twitter. We release a manually annotated dataset containing 6,853 tweets from 5 different topical categories, with each tweet annotated as good or bad news. We also investigate various machine learning systems and features and evaluate their performance on the newly generated dataset. Finally, we perform a comparative analysis with sentiment, showing that sentiment alone is not enough to distinguish between good and bad news.
SemEval-2019 Task 7: RumourEval, Determining Rumour Veracity and Support for Rumours
Genevieve Gorrell | Elena Kochkina | Maria Liakata | Ahmet Aker | Arkaitz Zubiaga | Kalina Bontcheva | Leon Derczynski
Proceedings of the 13th International Workshop on Semantic Evaluation
Since the first RumourEval shared task in 2017, interest in automated claim validation has greatly increased, as the danger of “fake news” has become a mainstream concern. However, automated support for rumour verification remains in its infancy. It is therefore important that a shared task in this area continues to provide a focus for effort, which is likely to increase. Rumour verification is characterised by the need to consider evolving conversations and news updates to reach a verdict on a rumour’s veracity. As in RumourEval 2017, we provided a dataset of dubious posts and ensuing conversations in social media, annotated both for stance and veracity. The social media rumours stem from a variety of breaking news stories, and the dataset is expanded to include Reddit as well as new Twitter posts. There were two concrete tasks: rumour stance prediction and rumour verification, which we present in detail along with results achieved by participants. We received 22 system submissions (a 70% increase over RumourEval 2017), many of which used state-of-the-art methodology to tackle the challenges involved.
2018
Information Nutrition Labels: A Plugin for Online News Evaluation
Vincentius Kevin | Birte Högden | Claudia Schwenger | Ali Şahan | Neelu Madan | Piush Aggarwal | Anusha Bangaru | Farid Muradov | Ahmet Aker
Proceedings of the First Workshop on Fact Extraction and VERification (FEVER)
In this paper we present NewsScan, a browser plugin that assists online news readers in evaluating the quality of the content they read by providing information nutrition labels for online news articles. In analogy to groceries, where nutrition labels help consumers make choices that they consider best for themselves, information nutrition labels tag online news articles with data that help readers judge the articles they engage with. This paper discusses the choice of the labels, their implementation and visualization.
Uni-DUE Student Team: Tackling fact checking through decomposable attention neural network
Jan Kowollik | Ahmet Aker
Proceedings of the First Workshop on Fact Extraction and VERification (FEVER)
In this paper we present our system for the FEVER Challenge, whose task is to verify claims by extracting information from Wikipedia. Our system has two parts. The first performs a search for candidate sentences by treating each claim as a query. The second filters out noise from these candidates and uses the remaining ones to decide whether they support or refute the claim, or whether there is not enough information to verify it. We show that this system achieves a FEVER score of 0.3927 on the FEVER shared task development set, a 25.5% improvement over the baseline score.
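As a rough illustration of the first, retrieval stage described in this abstract (treating the claim as a query over candidate sentences), the sketch below ranks sentences by TF-IDF cosine similarity. The TF-IDF ranking and the toy data are assumptions made only for illustration; the paper's second stage, the decomposable attention classifier, is not reproduced here.

```python
# Minimal sketch of candidate sentence retrieval: treat the claim as a query
# and rank Wikipedia sentences by TF-IDF cosine similarity. Data is toy data;
# the actual FEVER system adds noise filtering and a neural classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sentences = [
    "Colin Kaepernick became a starting quarterback during the 49ers 63rd season.",
    "The 49ers are a professional American football team.",
    "Kaepernick was born in Milwaukee, Wisconsin.",
]
claim = "Colin Kaepernick became a starter during the 49ers 63rd season."

vectorizer = TfidfVectorizer(ngram_range=(1, 2))
sentence_vectors = vectorizer.fit_transform(sentences)
claim_vector = vectorizer.transform([claim])

# Rank candidate sentences by similarity to the claim and keep the top ones.
scores = cosine_similarity(claim_vector, sentence_vectors)[0]
for score, sentence in sorted(zip(scores, sentences), reverse=True)[:2]:
    print(f"{score:.3f}  {sentence}")
```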
Multi-lingual Argumentative Corpora in English, Turkish, Greek, Albanian, Croatian, Serbian, Macedonian, Bulgarian, Romanian and Arabic
Alfred Sliwa | Yuan Ma | Ruishen Liu | Niravkumar Borad | Seyedeh Ziyaei | Mina Ghobadi | Firas Sabbah | Ahmet Aker
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)
Can Rumour Stance Alone Predict Veracity?
Sebastian Dungs | Ahmet Aker | Norbert Fuhr | Kalina Bontcheva
Proceedings of the 27th International Conference on Computational Linguistics
Prior manual studies of rumours suggested that crowd stance can give insights into the actual rumour veracity. Even though numerous studies of automatic veracity classification of social media rumours have been carried out, none explored the effectiveness of leveraging crowd stance to determine veracity. We use stance as an additional feature to those commonly used in earlier studies. We also model the veracity of a rumour using variants of Hidden Markov Models (HMM) and the collective stance information. This paper demonstrates that HMMs that use stance and tweets’ times as the only features for modelling true and false rumours achieve F1 scores in the range of 80%, outperforming those approaches where stance is used jointly with content and user based features.
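The central modelling idea (an HMM per veracity class over sequences of crowd stance labels, with the class decided by sequence likelihood) can be sketched roughly as follows. This is a minimal illustration on toy data and assumes hmmlearn's CategoricalHMM; the models in the paper additionally incorporate tweet times.

```python
# Rough sketch: one discrete HMM per veracity class, trained on sequences of
# crowd stance labels (0=support, 1=deny, 2=query, 3=comment). A new rumour is
# labelled by whichever class model assigns the higher log-likelihood.
# Toy data; assumes hmmlearn >= 0.2.8 (CategoricalHMM).
import numpy as np
from hmmlearn.hmm import CategoricalHMM

def fit_hmm(sequences, n_states=2):
    X = np.concatenate([np.array(s).reshape(-1, 1) for s in sequences])
    lengths = [len(s) for s in sequences]
    model = CategoricalHMM(n_components=n_states, n_iter=100, random_state=0)
    model.fit(X, lengths)
    return model

true_rumours = [[0, 0, 2, 0, 3], [0, 3, 0, 0]]      # mostly supporting stance
false_rumours = [[0, 1, 1, 2, 1], [2, 1, 1, 3, 1]]  # mostly denying stance

hmm_true = fit_hmm(true_rumours)
hmm_false = fit_hmm(false_rumours)

unseen = np.array([0, 0, 3, 0]).reshape(-1, 1)
label = "true" if hmm_true.score(unseen) > hmm_false.score(unseen) else "false"
print("Predicted veracity:", label)
```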
2017
Simple Open Stance Classification for Rumour Analysis
Ahmet Aker | Leon Derczynski | Kalina Bontcheva
Proceedings of the International Conference Recent Advances in Natural Language Processing, RANLP 2017
Stance classification determines the attitude, or stance, in a (typically short) text. The task has powerful applications, such as the detection of fake news or the automatic extraction of attitudes toward entities or events in the media. This paper describes a surprisingly simple and efficient classification approach to open stance classification in Twitter, for rumour and veracity classification. The approach profits from a novel set of automatically identifiable problem-specific features, which significantly boost classifier accuracy and achieve above state-of-the-art results on recent benchmark datasets. This calls into question the value of using complex sophisticated models for stance classification without first doing informed feature extraction.
An Extensible Multilingual Open Source Lemmatizer
Ahmet Aker | Johann Petrak | Firas Sabbah
Proceedings of the International Conference Recent Advances in Natural Language Processing, RANLP 2017
We present GATE DictLemmatizer, a multilingual open source lemmatizer for the GATE NLP framework that currently supports English, German, Italian, French, Dutch, and Spanish, and is easily extensible to other languages. The software is freely available under the LGPL license. The lemmatization is based on the Helsinki Finite-State Transducer Technology (HFST) and lemma dictionaries automatically created from Wiktionary. We evaluate the performance of the lemmatizers against TreeTagger, which is only freely available for research purposes. Our evaluation shows that DictLemmatizer achieves similar or even better results than TreeTagger for languages where there is support from HFST. The performance drops when there is no support from HFST and the entire lemmatization process is based on lemma dictionaries. However, the results are still satisfactory given the fact that DictLemmatizer is open-source and can be easily extended to other languages. The software for extending the lemmatizer by creating word lists from Wiktionary dictionaries is also freely available as open-source software.
Projection of Argumentative Corpora from Source to Target Languages
Ahmet Aker | Huangpan Zhang
Proceedings of the 4th Workshop on Argument Mining
Argumentative corpora are costly to create and are available in only a few languages, with English dominating the area. In this paper we release the first publicly available Mandarin argumentative corpus. The corpus is created by exploiting the idea of comparable corpora from Statistical Machine Translation. We use existing corpora in English and manually map the claims and premises to comparable corpora in Mandarin. We also implement a simple solution to automate this approach with the view of creating argumentative corpora in other less-resourced languages. In this way we introduce a new task of multi-lingual argument mapping that can be evaluated using our English-Mandarin argumentative corpus. The preliminary results of our automatic argument mapper mirror the simplicity of our approach, but provide a baseline for further improvements.
What works and what does not: Classifier and feature analysis for argument mining
Ahmet Aker | Alfred Sliwa | Yuan Ma | Ruishen Lui | Niravkumar Borad | Seyedeh Ziyaei | Mina Ghobadi
Proceedings of the 4th Workshop on Argument Mining
This paper offers a comparative analysis of the performance of different supervised machine learning methods and feature sets on argument mining tasks. Specifically, we address the tasks of extracting argumentative segments from texts and predicting the structure between those segments. Eight classifiers and different combinations of six feature types reported in previous work are evaluated. The results indicate that the overall best-performing features are the structural ones. Although the performance of classifiers varies depending on the feature combinations and corpora used for training and testing, Random Forest seems to be among the best-performing classifiers. These results build a basis for further development of argument mining techniques and can guide an implementation of argument mining into different applications such as argument-based search.
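The style of comparison reported here (several classifiers crossed with several feature sets, scored by cross-validation) can be outlined as below. The feature matrices and labels are random placeholders, and only two of the eight classifiers are shown; nothing in this sketch reproduces the paper's actual features or corpora.

```python
# Outline of a classifier-by-feature-set comparison in the spirit of the
# analysis above. Feature matrices and labels are random placeholders standing
# in for the structural, lexical, etc. feature sets used in the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=200)  # argumentative segment vs. not
feature_sets = {
    "structural": rng.random((200, 5)),
    "lexical": rng.random((200, 30)),
    "structural+lexical": rng.random((200, 35)),
}
classifiers = {
    "RandomForest": RandomForestClassifier(n_estimators=100, random_state=0),
    "LogReg": LogisticRegression(max_iter=1000),
}

for feat_name, X in feature_sets.items():
    for clf_name, clf in classifiers.items():
        f1 = cross_val_score(clf, X, labels, cv=5, scoring="f1").mean()
        print(f"{feat_name:>20} + {clf_name:<12} F1 = {f1:.3f}")
```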
Automatic Summarization of Online Debates
Nattapong Sanchan | Ahmet Aker | Kalina Bontcheva
Proceedings of the 1st Workshop on Natural Language Processing and Information Retrieval associated with RANLP 2017
Debate summarization is a novel and challenging research area in automatic text summarization that has been largely unexplored. In this paper, we develop a debate summarization pipeline to summarize key topics which are discussed or argued in the two opposing sides of online debates. We take the view that the generation of debate summaries can be achieved by clustering, cluster labeling, and visualization. In our work, we investigate two different clustering approaches for the generation of the summaries. In the first approach, we generate the summaries by applying purely term-based clustering and cluster labeling. The second approach makes use of X-means for clustering and Mutual Information for labeling the clusters. Both approaches are driven by ontologies. We visualize the results using bar charts. We consider our results a useful entry point for users aiming to get a first impression of what is discussed within a debate topic containing a vast number of arguments.
2016
What’s the Issue Here?: Task-based Evaluation of Reader Comment Summarization Systems
Emma Barker | Monica Paramita | Adam Funk | Emina Kurtic | Ahmet Aker | Jonathan Foster | Mark Hepple | Robert Gaizauskas
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)
Automatic summarization of reader comments in on-line news is an extremely challenging task and a capability for which there is a clear need. Work to date has focussed on producing extractive summaries using well-known techniques imported from other areas of language processing. But are extractive summaries of comments what users really want? Do they support users in performing the sorts of tasks they are likely to want to perform with reader comments? In this paper we address these questions by doing three things. First, we offer a specification of one possible summary type for reader comment, based on an analysis of reader comment in terms of issues and viewpoints. Second, we define a task-based evaluation framework for reader comment summarization that allows summarization systems to be assessed in terms of how well they support users in a time-limited task of identifying issues and characterising opinion on issues in comments. Third, we describe a pilot evaluation in which we used the task-based evaluation framework to evaluate a prototype reader comment clustering and summarization system, demonstrating the viability of the evaluation framework and illustrating the sorts of insight such an evaluation affords.
Creation of comparable corpora for English-Urdu, Arabic, Persian
Murad Abouammoh | Kashif Shah | Ahmet Aker
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)
Statistical Machine Translation (SMT) relies on the availability of rich parallel corpora. However, in the case of under-resourced languages or some specific domains, parallel corpora are not readily available. This leads to under-performing machine translation systems in those sparse data settings. To overcome the low availability of parallel resources the machine translation community has recognized the potential of using comparable resources as training data. However, most efforts have been related to European languages and less to Middle Eastern languages. In this study, we report comparable corpora created from news articles for English paired with Arabic, Persian and Urdu. The data has been collected over a period of a year and covers the Arabic, Persian and Urdu languages. Furthermore, using English as a pivot language, comparable corpora that involve more than one language can be created, e.g. English-Arabic-Persian, English-Arabic-Urdu, English-Urdu-Persian, etc. Upon request the data can be provided for research purposes.
USFD at SemEval-2016 Task 1: Putting different State-of-the-Arts into a Box
Ahmet Aker | Frederic Blain | Andres Duque | Marina Fomicheva | Jurica Seva | Kashif Shah | Daniel Beck
Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)
The SENSEI Annotated Corpus: Human Summaries of Reader Comment Conversations in On-line News
Emma Barker | Monica Lestari Paramita | Ahmet Aker | Emina Kurtic | Mark Hepple | Robert Gaizauskas
Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue
Automatic label generation for news comment clusters
Ahmet Aker | Monica Paramita | Emina Kurtic | Adam Funk | Emma Barker | Mark Hepple | Rob Gaizauskas
Proceedings of the 9th International Natural Language Generation conference
2015
Comment-to-Article Linking in the Online News Domain
Ahmet Aker | Emina Kurtic | Mark Hepple | Rob Gaizauskas | Giuseppe Di Fabbrizio
Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue
2014
Assigning Terms to Domains by Document Classification
Robert Gaizauskas | Emma Barker | Monica Lestari Paramita | Ahmet Aker
Proceedings of the 4th International Workshop on Computational Terminology (Computerm)
A Poodle or a Dog? Evaluating Automatic Image Annotation Using Human Descriptions at Different Levels of Granularity
Josiah Wang | Fei Yan | Ahmet Aker | Robert Gaizauskas
Proceedings of the Third Workshop on Vision and Language
Bootstrapping Term Extractors for Multiple Languages
Ahmet Aker | Monica Paramita | Emma Barker | Robert Gaizauskas
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)
Terminology extraction resources are needed for a wide range of human language technology applications, including knowledge management, information extraction, semantic search, cross-language information retrieval and automatic and assisted translation. We present a low-cost method for creating terminology extraction resources for 21 non-English EU languages. Using parallel corpora and a projection method, we create a general POS tagger for these languages. We also investigate the use of EuroVoc terms and a Wikipedia corpus to automatically create a term grammar for each language. Our results show that these automatically generated resources can assist the term extraction process with performance similar to manually generated resources. All resources resulting from this experiment are freely available for download.
Bilingual dictionaries for all EU languages
Ahmet Aker | Monica Paramita | Mārcis Pinnis | Robert Gaizauskas
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)
Bilingual dictionaries can be automatically generated using the GIZA++ tool. However, these dictionaries contain a lot of noise, which negatively affects the quality of the outputs of tools that rely on them. In this work we present three different methods for cleaning noise from automatically generated bilingual dictionaries: an LLR-based, a pivot-based and a transliteration-based approach. We have applied these approaches to the GIZA++ dictionaries – dictionaries covering official EU languages – in order to remove noise. Our evaluation showed that all methods help to reduce noise; however, the best performance is achieved using the transliteration-based approach. We provide all bilingual dictionaries (the original GIZA++ dictionaries and the cleaned ones) free for download, along with the cleaning tools and scripts.
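One of the cleaning strategies named above, the LLR-based filter, can be sketched with Dunning's log-likelihood ratio as below. The co-occurrence counts and the threshold are purely illustrative, not those used in the paper.

```python
# Sketch of a log-likelihood-ratio (LLR) filter for noisy translation pairs.
# Counts come from a corpus: k11 = source and target word co-occur,
# k12/k21 = one occurs without the other, k22 = neither occurs.
import math

def _x_log_x(x):
    return x * math.log(x) if x > 0 else 0.0

def _entropy(*counts):
    return _x_log_x(sum(counts)) - sum(_x_log_x(c) for c in counts)

def llr(k11, k12, k21, k22):
    """Dunning's log-likelihood ratio for a 2x2 co-occurrence table."""
    row = _entropy(k11 + k12, k21 + k22)
    col = _entropy(k11 + k21, k12 + k22)
    mat = _entropy(k11, k12, k21, k22)
    return max(0.0, 2.0 * (row + col - mat))

# Keep a candidate dictionary entry only if its association score is high.
# Hypothetical counts and threshold, for illustration only.
candidates = {("haus", "house"): (80, 5, 7, 9908), ("haus", "banana"): (2, 83, 40, 9875)}
THRESHOLD = 10.0
cleaned = {pair: counts for pair, counts in candidates.items() if llr(*counts) > THRESHOLD}
print(sorted(cleaned))
```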
2013
Extracting bilingual terminologies from comparable corpora
Ahmet Aker | Monica Paramita | Rob Gaizauskas
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
2012
Correlation between Similarity Measures for Inter-Language Linked Wikipedia Articles
Monica Lestari Paramita | Paul Clough | Ahmet Aker | Robert Gaizauskas
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)
Wikipedia articles in different languages have been mined to support various tasks, such as Cross-Language Information Retrieval (CLIR) and Statistical Machine Translation (SMT). Articles on the same topic in different languages are often connected by inter-language links, which can be used to identify similar or comparable content. In this work, we investigate the correlation between similarity measures utilising language-independent and language-dependent features and respective human judgments. A collection of 800 Wikipedia pairs from 8 different language pairs was collected and judged for similarity by two assessors. We report the development of this corpus and inter-assessor agreement between judges across the languages. Results show that similarity measured using language-independent features is comparable to using an approach based on translating non-English documents. In both cases the correlation with human judgments is low but also dependent upon the language pair. The results and corpus generated from this work also provide insights into the measurement of cross-language similarity.
A Corpus of Spontaneous Multi-party Conversation in Bosnian Serbo-Croatian and British English
Emina Kurtić | Bill Wells | Guy J. Brown | Timothy Kempton | Ahmet Aker
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)
In this paper we present a corpus of audio and video recordings of spontaneous, face-to-face multi-party conversation in two languages. Freely available high quality recordings of mundane, non-institutional, multi-party talk are still sparse, and this corpus aims to contribute valuable data suitable for study of multiple aspects of spoken interaction. In particular, it constitutes a unique resource for spoken Bosnian Serbo-Croatian (BSC), an under-resourced language with no spoken resources available at present. The corpus consists of just over 3 hours of free conversation in each of the target languages, BSC and British English (BE). The audio recordings have been made on separate channels using head-set microphones, as well as using a microphone array, containing 8 omni-directional microphones. The data has been segmented and transcribed using segmentation notions and transcription conventions developed from those of the conversation analysis research tradition. Furthermore, the transcriptions have been automatically aligned with the audio at the word and phone level, using the method of forced alignment. In this paper we describe the procedures behind the corpus creation and present the main features of the corpus for the study of conversation.
Assessing Crowdsourcing Quality through Objective Tasks
Ahmet Aker | Mahmoud El-Haj | M-Dyaa Albakour | Udo Kruschwitz
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)
The emergence of crowdsourcing as a commonly used approach to collect vast quantities of human assessments on a variety of tasks represents nothing less than a paradigm shift. This is particularly true in academic research, where it has suddenly become possible to collect (high-quality) annotations rapidly without the need for an expert. In this paper we investigate factors which can influence the quality of the results obtained through Amazon's Mechanical Turk crowdsourcing platform. We investigated the impact of different presentation methods (free text versus radio buttons), workers' base (USA versus India as the main bases of MTurk workers) and payment scale (about $4, $8 and $10 per hour) on the quality of the results. For each run we assessed the results provided by 25 workers on a set of 10 tasks. We ran two different experiments using objective tasks: maths and general text questions. In both tasks the answers are unique, which eliminates the uncertainty usually present in subjective tasks, where it is not clear whether the unexpected answer is caused by a lack of worker's motivation, the worker's interpretation of the task or genuine ambiguity. In this work we present our results comparing the influence of the different factors used. One of the interesting findings is that our results do not confirm previous studies which concluded that an increase in payment attracts more noise. We also find that the country of origin only has an impact in some of the categories and only in general text questions, but there is no significant difference at the top pay.
A light way to collect comparable corpora from the Web
Ahmet Aker | Evangelos Kanoulas | Robert Gaizauskas
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)
Statistical Machine Translation (SMT) relies on the availability of rich parallel corpora. However, in the case of under-resourced languages, parallel corpora are not readily available. To overcome this problem previous work has recognized the potential of using comparable corpora as training data. The process of obtaining such data usually involves (1) downloading a separate list of documents for each language, (2) matching the documents between two languages, usually by comparing the document contents, and finally (3) extracting useful data for SMT from the matched document pairs. This process requires a large amount of time and resources since a huge volume of documents needs to be downloaded to increase the chances of finding good document pairs. In this work we aim to reduce the amount of time and resources spent on tasks 1 and 2. Instead of obtaining full documents we first obtain just titles along with some meta-data such as time and date of publication. Titles can be obtained through Web Search and RSS News feed collections, so that downloading the full documents is not needed. We show experimentally that titles can be used to approximate the comparison that would otherwise be made using full document contents.
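A minimal sketch of the title-based matching idea (comparing titles and publication dates instead of full document contents) is given below. It assumes the non-English titles have already been glossed into English tokens, e.g. via a bilingual dictionary, and all data and thresholds are illustrative only.

```python
# Sketch of matching candidate document pairs using only titles and publication
# dates instead of full texts. Toy data; non-English titles are assumed to have
# been mapped into English tokens beforehand.
from datetime import date

def token_overlap(title_a, title_b):
    a, b = set(title_a.lower().split()), set(title_b.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

english_docs = [("Earthquake strikes northern Turkey", date(2012, 3, 1))]
foreign_docs = [  # titles glossed into English for comparison
    ("Strong earthquake in northern Turkey", date(2012, 3, 1)),
    ("Football cup final postponed", date(2012, 3, 2)),
]

MAX_DAY_GAP, MIN_SIMILARITY = 2, 0.3  # illustrative thresholds
for en_title, en_date in english_docs:
    for fr_title, fr_date in foreign_docs:
        if abs((en_date - fr_date).days) <= MAX_DAY_GAP:
            sim = token_overlap(en_title, fr_title)
            if sim >= MIN_SIMILARITY:
                print(f"match ({sim:.2f}): {en_title!r} <-> {fr_title!r}")
```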
Collecting and Using Comparable Corpora for Statistical Machine Translation
Inguna Skadiņa | Ahmet Aker | Nikos Mastropavlos | Fangzhong Su | Dan Tufis | Mateja Verlic | Andrejs Vasiļjevs | Bogdan Babych | Paul Clough | Robert Gaizauskas | Nikos Glaros | Monica Lestari Paramita | Mārcis Pinnis
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)
Lack of sufficient parallel data for many languages and domains is currently one of the major obstacles to further advancement of automated translation. The ACCURAT project is addressing this issue by researching methods for improving machine translation systems using comparable corpora. In this paper we present tools and techniques developed in the ACCURAT project that allow additional data needed for statistical machine translation to be extracted from comparable corpora. We present methods and tools for acquisition of comparable corpora from the Web and other sources, for evaluation of the comparability of collected corpora, for multi-level alignment of comparable corpora and for extraction of lexical and terminological data for machine translation. Finally, we present initial evaluation results on the utility of collected corpora in domain-adapted machine translation and real-life applications.
Automatic Bilingual Phrase Extraction from Comparable Corpora
Ahmet Aker | Yang Feng | Robert Gaizauskas
Proceedings of COLING 2012: Posters
2011
Multi-Document Summarization by Capturing the Information Users are Interested in
Elena Lloret | Laura Plaza | Ahmet Aker
Proceedings of the International Conference Recent Advances in Natural Language Processing 2011
2010
Multi-Document Summarization Using A* Search and Discriminative Learning
Ahmet Aker | Trevor Cohn | Robert Gaizauskas
Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing
Generating Image Descriptions Using Dependency Relational Patterns
Ahmet Aker | Robert Gaizauskas
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics
Model Summaries for Location-related Images
Ahmet Aker | Robert Gaizauskas
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)
At present there is no publicly available data set to evaluate the performance of different summarization systems on the task of generating location-related extended image captions. In this paper we describe a corpus of human generated model captions in English and German. We have collected 932 model summaries in English from existing image descriptions and machine translated these summaries into German. We also performed post-editing on the translated German summaries to ensure high quality. Both English and German summaries are evaluated using a readability assessment as in DUC and TAC to assess their quality. Our model summaries performed similarly to the ones reported in Dang (2005) and thus are suitable for evaluating automatic summarization systems on the task of generating image descriptions for location related images. In addition, we also investigated whether post-editing of machine-translated model summaries is necessary for automated ROUGE evaluations. We found a high correlation in ROUGE scores between post-edited and non-post-edited model summaries, which indicates that the expensive process of post-editing is not necessary.
2009
Summary Generation for Toponym-referenced Images using Object Type Language Models
Ahmet Aker | Robert Gaizauskas
Proceedings of the International Conference RANLP-2009
2008
Evaluating automatically generated user-focused multi-document summaries for geo-referenced images
Ahmet Aker | Robert Gaizauskas
Coling 2008: Proceedings of the workshop Multi-source Multilingual Information Extraction and Summarization