Andrew Trotman


2021

BERT’s The Word: Sarcasm Target Detection using BERT
Pradeesh Parameswaran | Andrew Trotman | Veronica Liesaputra | David Eyers
Proceedings of the 19th Annual Workshop of the Australasian Language Technology Association

In 2019, the Australasian Language Technology Association (ALTA) organised a shared task to detect the target of sarcastic comments posted on social media. However, there were no winners, as it proved to be a difficult task. In this work, we revisit the task posed by ALTA using transformers, specifically BERT, given the current success of transformer-based models in various NLP tasks. We conducted our experiments on two BERT models (TD-BERT and BERT-AEN). We evaluated our models on the data set provided by ALTA (Reddit) and two additional data sets: ‘book snippets’ and ‘Tweets’. Our results show that our proposed method achieves a 15.2% improvement over the current state-of-the-art system on the Reddit data set and a 4% improvement on Tweets.

Quick, get me a Dr. BERT: Automatic Grading of Evidence using Transfer Learning
Pradeesh Parameswaran | Andrew Trotman | Veronica Liesaputra | David Eyers
Proceedings of the 19th Annual Workshop of the Australasian Language Technology Association

We describe our methods for automatically grading the level of clinical evidence in medical papers, as part of the ALTA 2021 shared task. We use a combination of transfer learning and a hand-crafted, feature-based classifier. Our system (“orangutanV3”) obtained an accuracy score of 0.4918, which placed third on the leaderboard. From our failure analysis, we find that our classification techniques do not appropriately handle cases where the conclusions across the medical papers are themselves inconclusive. We believe that this shortcoming can be overcome, thus improving the classification accuracy, by incorporating document similarity techniques.

2020

Classifying Judgements using Transfer Learning
Pradeesh Parameswaran | Andrew Trotman | Veronica Liesaputra | David Eyers
Proceedings of the 18th Annual Workshop of the Australasian Language Technology Association

We describe our method for classifying short texts into the APPRAISAL framework, work we conducted as part of the ALTA 2020 shared task. We tackled this problem using transfer learning. Our team, “orangutanV2”, placed equal first in the shared task, with a mean F1-score of 0.1026 on the private data set.

2019

Detecting Target of Sarcasm using Ensemble Methods
Pradeesh Parameswaran | Andrew Trotman | Veronica Liesaputra | David Eyers
Proceedings of the 17th Annual Workshop of the Australasian Language Technology Association

We describe our methods for detecting the target of sarcasm as part of the ALTA 2019 shared task. We use a combination of an ensemble of classifiers and a rule-based system. Our team obtained a Dice–Sørensen Coefficient score of 0.37150, which placed 2nd on the public leaderboard. Although no team beat the baseline score on the private dataset, we present our findings, along with some of the challenges and future improvements that can be used to tackle the problem.

2010

The Noisier the Better: Identifying Multilingual Word Translations Using a Single Monolingual Corpus
Reinhard Rapp | Michael Zock | Andrew Trotman | Yue Xu
Proceedings of the 4th Workshop on Cross Lingual Information Access

A Voting Mechanism for Named Entity Translation in English–Chinese Question Answering
Ling-Xiang Tang | Shlomo Geva | Andrew Trotman | Yue Xu
Proceedings of the 4th Workshop on Cross Lingual Information Access

A Boundary-Oriented Chinese Segmentation Method Using N-Gram Mutual Information
Ling-Xiang Tang | Shlomo Geva | Andrew Trotman | Yue Xu
CIPS-SIGHAN Joint Conference on Chinese Language Processing