Timothy Baldwin

Also published as: Tim Baldwin


2023

pdf
Fair Enough: Standardizing Evaluation and Model Selection for Fairness Research in NLP
Xudong Han | Timothy Baldwin | Trevor Cohn
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics

Modern NLP systems exhibit a range of biases, which a growing literature on model debiasing attempts to correct. However, current progress is hampered by a plurality of definitions of bias, means of quantification, and an often vague relation between debiasing algorithms and theoretical measures of bias. This paper seeks to clarify the current situation and plot a course for meaningful progress in fair learning, with two key contributions: (1) making clear the inter-relations among the current gamut of methods, and their relation to fairness theory; and (2) addressing the practical problem of model selection, which involves a trade-off between fairness and accuracy and has led to systemic issues in fairness research. Putting these together, we make several recommendations to help shape future work.
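
On the model-selection problem the abstract raises, one common way to reduce the fairness-accuracy trade-off to a single selection criterion is distance to the utopia point. The sketch below is purely illustrative: the criterion and the candidate scores are not taken from the paper.

```python
# Illustrative sketch: selecting a checkpoint under a joint
# fairness-accuracy criterion. "Distance to optimum" is one common choice;
# the candidate scores below are made up for illustration.
import math

# (accuracy, fairness) pairs for hypothetical checkpoints,
# both metrics scaled to [0, 1] with higher = better
candidates = {
    "ckpt_a": (0.86, 0.70),
    "ckpt_b": (0.83, 0.88),
    "ckpt_c": (0.79, 0.93),
}

def dto(acc: float, fair: float) -> float:
    """Euclidean distance to the utopia point (1, 1); lower is better."""
    return math.hypot(1.0 - acc, 1.0 - fair)

best = min(candidates, key=lambda k: dto(*candidates[k]))
print(best)  # ckpt_b under this particular trade-off
```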

pdf
NusaX: Multilingual Parallel Sentiment Dataset for 10 Indonesian Local Languages
Genta Indra Winata | Alham Fikri Aji | Samuel Cahyawijaya | Rahmad Mahendra | Fajri Koto | Ade Romadhony | Kemal Kurniawan | David Moeljadi | Radityo Eko Prasojo | Pascale Fung | Timothy Baldwin | Jey Han Lau | Rico Sennrich | Sebastian Ruder
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics

Natural language processing (NLP) has a significant impact on society via technologies such as machine translation and search engines. Despite its success, NLP technology is only widely available for high-resource languages such as English and Chinese, while it remains inaccessible to many languages due to the unavailability of data resources and benchmarks. In this work, we focus on developing resources for languages in Indonesia. Despite Indonesia being the second most linguistically diverse country in the world, most of its languages are categorized as endangered, and some are even extinct. We develop the first-ever parallel resource for 10 low-resource languages in Indonesia. Our resource includes sentiment and machine translation datasets, and bilingual lexicons. We provide extensive analyses and describe the challenges of creating such resources. We hope this work can spark NLP research on Indonesian and other underrepresented languages.

pdf
Promoting Fairness in Classification of Quality of Medical Evidence
Simon Suster | Timothy Baldwin | Karin Verspoor
The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks

Automatically rating the quality of published research is a critical step in medical evidence synthesis. While several methods have been proposed, their algorithmic fairness has been overlooked, even though significant risks may follow when such systems are deployed in biomedical contexts. In this work, we study the fairness of two systems with respect to two sensitive attributes: participant sex and medical area. In some cases, we find important inequalities, leading us to apply various debiasing methods. Upon examining the interplay between the systems' predictive performance, fairness, and the medically critical capabilities of selective classification and calibration, we find that fairness can sometimes be improved through debiasing, but at a cost in other performance measures.

pdf
Unsupervised Paraphrasing of Multiword Expressions
Takashi Wada | Yuji Matsumoto | Timothy Baldwin | Jey Han Lau
Findings of the Association for Computational Linguistics: ACL 2023

We propose an unsupervised approach to paraphrasing multiword expressions (MWEs) in context. Our model employs only monolingual corpus data and pre-trained language models (without fine-tuning), and does not make use of any external resources such as dictionaries. We evaluate our method on the SemEval 2022 idiomatic semantic text similarity task, and show that it outperforms all unsupervised systems and rivals supervised systems.

pdf
Cost-effective Distillation of Large Language Models
Sayantan Dasgupta | Trevor Cohn | Timothy Baldwin
Findings of the Association for Computational Linguistics: ACL 2023

Knowledge distillation (KD) involves training a small “student” model to replicate the strong performance of a high-capacity “teacher” model, enabling efficient deployment in resource-constrained settings. Top-performing methods tend to be task- or architecture-specific and lack generalizability. Several existing approaches require pretraining of the teacher on task-specific datasets, which can be costly for large datasets and unstable for small ones. Here we propose an approach for improving KD through a novel distillation loss that is agnostic to the task and model architecture. We successfully apply our method to the distillation of BERT-base and achieve highly competitive results from the distilled student across a range of GLUE tasks, especially for tasks with smaller datasets.
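
For orientation, below is a minimal sketch of the standard task-agnostic distillation objective (temperature-scaled KL divergence between teacher and student logits, mixed with cross-entropy) that approaches like this build on; it is not the paper's novel loss.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Standard KD objective: KL(teacher || student) at temperature T,
    mixed with ordinary cross-entropy on the gold labels."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradient magnitude is temperature-independent
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# toy usage
s = torch.randn(8, 3, requires_grad=True)  # student logits
t = torch.randn(8, 3)                      # teacher logits
y = torch.randint(0, 3, (8,))              # gold labels
kd_loss(s, t, y).backward()
```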

pdf
NusaCrowd: Open Source Initiative for Indonesian NLP Resources
Samuel Cahyawijaya | Holy Lovenia | Alham Fikri Aji | Genta Winata | Bryan Wilie | Fajri Koto | Rahmad Mahendra | Christian Wibisono | Ade Romadhony | Karissa Vincentio | Jennifer Santoso | David Moeljadi | Cahya Wirawan | Frederikus Hudi | Muhammad Satrio Wicaksono | Ivan Parmonangan | Ika Alfina | Ilham Firdausi Putra | Samsul Rahmadani | Yulianti Oenang | Ali Septiandri | James Jaya | Kaustubh Dhole | Arie Suryani | Rifki Afina Putri | Dan Su | Keith Stevens | Made Nindyatama Nityasya | Muhammad Adilazuarda | Ryan Hadiwijaya | Ryandito Diandaru | Tiezheng Yu | Vito Ghifari | Wenliang Dai | Yan Xu | Dyah Damapuspita | Haryo Wibowo | Cuk Tho | Ichwanul Karo Karo | Tirana Fatyanosa | Ziwei Ji | Graham Neubig | Timothy Baldwin | Sebastian Ruder | Pascale Fung | Herry Sujaini | Sakriani Sakti | Ayu Purwarianti
Findings of the Association for Computational Linguistics: ACL 2023

We present NusaCrowd, a collaborative initiative to collect and unify existing resources for Indonesian languages, including opening access to previously non-public resources. Through this initiative, we have brought together 137 datasets and 118 standardized data loaders. The quality of the datasets has been assessed manually and automatically, and their value is demonstrated through multiple experiments. NusaCrowd’s data collection enables the creation of the first zero-shot benchmarks for natural language understanding and generation in Indonesian and the local languages of Indonesia. Furthermore, NusaCrowd enables the creation of the first multilingual automatic speech recognition benchmark in Indonesian and the local languages of Indonesia. Our work strives to advance natural language processing (NLP) research for languages that are under-represented despite being widely spoken.

2022

pdf
LipKey: A Large-Scale News Dataset for Absent Keyphrases Generation and Abstractive Summarization
Fajri Koto | Timothy Baldwin | Jey Han Lau
Proceedings of the 29th International Conference on Computational Linguistics

Summaries, keyphrases, and titles are different ways of concisely capturing the content of a document. While most previous work has released keyphrase and summarization datasets separately, in this work we introduce LipKey, the largest news corpus with human-written abstractive summaries, absent keyphrases, and titles. We jointly use the three elements in the context of document summarization, both via multi-task training and via training with joint structured inputs. We find that including absent keyphrases and titles as additional context to the source document improves transformer-based summarization models.

pdf
Unsupervised Lexical Substitution with Decontextualised Embeddings
Takashi Wada | Timothy Baldwin | Yuji Matsumoto | Jey Han Lau
Proceedings of the 29th International Conference on Computational Linguistics

We propose a new unsupervised method for lexical substitution using pre-trained language models. Compared to previous approaches that use the generative capability of language models to predict substitutes, our method retrieves substitutes based on the similarity of contextualised and decontextualised word embeddings, i.e. the average contextual representation of a word in multiple contexts. We conduct experiments in English and Italian, and show that our method substantially outperforms strong baselines and establishes a new state-of-the-art without any explicit supervision or fine-tuning. We further show that our method performs particularly well at predicting low-frequency substitutes, and also generates a diverse list of substitute candidates, reducing morphophonetic or morphosyntactic biases induced by article-noun agreement.
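
A minimal sketch of the retrieval idea described above: a decontextualised embedding is computed as the average contextual representation of a word across example sentences. The model name is only an example, and for simplicity the target word is assumed to map to a single subword.

```python
# Sketch: a "decontextualised" embedding as the average contextual
# representation of a word across several sentences. Assumes the target
# word is a single subword token; the model name is illustrative.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased").eval()

def decontextualised(word: str, sentences: list[str]) -> torch.Tensor:
    target_id = tok.convert_tokens_to_ids(word)
    vecs = []
    with torch.no_grad():
        for sent in sentences:
            enc = tok(sent, return_tensors="pt")
            hidden = model(**enc).last_hidden_state[0]  # (seq_len, dim)
            for pos, tid in enumerate(enc["input_ids"][0]):
                if tid == target_id:
                    vecs.append(hidden[pos])
    return torch.stack(vecs).mean(dim=0)

v = decontextualised("bank", ["He sat by the bank of the river.",
                              "The bank raised interest rates."])
```

Substitute candidates can then be ranked by, e.g., cosine similarity between their decontextualised embeddings and the target's contextualised embedding.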

pdf
Noisy Label Regularisation for Textual Regression
Yuxia Wang | Timothy Baldwin | Karin Verspoor
Proceedings of the 29th International Conference on Computational Linguistics

Training with noisy labelled data is known to be detrimental to model performance, especially for high-capacity neural network models in low-resource domains. Our experiments suggest that standard regularisation strategies, such as weight decay and dropout, are ineffective in the face of noisy labels. We propose a simple noisy label detection method that prevents error propagation from the input layer. The approach is based on the observation that the projection of noisy labels is learned through memorisation at advanced stages of learning, and that the Pearson correlation is sensitive to outliers. Extensive experiments over real-world human-disagreement annotations as well as randomly-corrupted and data-augmented labels, across various tasks and domains, demonstrate that our method is effective, regularising noisy labels and improving generalisation performance.
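
The observation about Pearson correlation can be seen in a few lines: a single corrupted label is enough to drag the correlation down substantially, which is what makes high-residual points informative for noisy-label detection. A synthetic illustration (not the paper's algorithm) follows.

```python
# Synthetic demonstration that Pearson correlation is sensitive to a
# single outlier -- the property the detection method exploits.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
gold = rng.uniform(0, 5, size=50)
pred = gold + rng.normal(0, 0.2, size=50)  # a well-fit model's predictions

r_clean, _ = pearsonr(gold, pred)

gold_noisy = gold.copy()
gold_noisy[0] += 20.0                      # one wildly corrupted label
r_noisy, _ = pearsonr(gold_noisy, pred)

print(f"clean r = {r_clean:.3f}, with one outlier r = {r_noisy:.3f}")
# the single corrupted label drags the correlation down substantially
```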

pdf
MultiSpanQA: A Dataset for Multi-Span Question Answering
Haonan Li | Martin Tomko | Maria Vasardani | Timothy Baldwin
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Most existing reading comprehension datasets focus on single-span answers, which can be extracted as a single contiguous span from a given text passage. Multi-span questions, i.e., questions whose answer is a series of multiple discontiguous spans in the text, are common in real life but less studied. In this paper, we present MultiSpanQA, a new dataset that focuses on multi-span questions. Raw questions and contexts are extracted from the Natural Questions dataset. After multi-span re-annotation, MultiSpanQA consists of over 6,000 multi-span questions in the basic version, and over 19,000 examples in the expanded version, which additionally includes unanswerable questions and questions with single-span answers. We introduce new metrics for the purposes of multi-span question answering evaluation, and establish several baselines using advanced models. Finally, we propose a new model which beats all baselines and achieves state-of-the-art results on our dataset.
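
As an illustration of what multi-span evaluation involves, here is one plausible formulation of exact-match F1 over span sets; it is a sketch, not necessarily the metric definition used in the paper.

```python
# One plausible exact-match F1 over multi-span answers, treating predicted
# and gold answers as sets of normalised strings. (Illustrative only.)
def multi_span_f1(pred_spans: set[str], gold_spans: set[str]) -> float:
    pred = {s.strip().lower() for s in pred_spans}
    gold = {s.strip().lower() for s in gold_spans}
    if not pred and not gold:
        return 1.0  # unanswerable question, correctly left unanswered
    tp = len(pred & gold)
    if tp == 0:
        return 0.0
    precision = tp / len(pred)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)

print(multi_span_f1({"Paris", "Lyon"}, {"paris", "marseille"}))  # 0.5
```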

pdf
Optimising Equal Opportunity Fairness in Model Training
Aili Shen | Xudong Han | Trevor Cohn | Timothy Baldwin | Lea Frermann
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Real-world datasets often encode stereotypes and societal biases. Such biases can be implicitly captured by trained models, leading to biased predictions and exacerbating existing societal preconceptions. Existing debiasing methods, such as adversarial training and removing protected information from representations, have been shown to reduce bias. However, a disconnect between fairness criteria and training objectives makes it difficult to reason theoretically about the effectiveness of different techniques. In this work, we propose two novel training objectives which directly optimise for the widely-used criterion of equal opportunity, and show that they are effective in reducing bias while maintaining high performance over two classification tasks.
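
To make the idea concrete, below is one plausible differentiable relaxation of an equal-opportunity penalty: the gap in mean positive-class probability between protected groups, computed over gold-positive instances and added to the cross-entropy loss. This is a hedged sketch, not necessarily either of the paper's proposed objectives.

```python
# Sketch: cross-entropy plus a differentiable equal-opportunity penalty
# (pairwise gaps in per-group soft true-positive rates). Illustrative.
import torch
import torch.nn.functional as F

def eo_regularised_loss(logits, labels, groups, lam=1.0):
    ce = F.cross_entropy(logits, labels)
    probs = F.softmax(logits, dim=-1)[:, 1]  # P(y_hat = 1)
    pos = labels == 1
    rates = []
    for g in groups.unique():
        mask = pos & (groups == g)
        if mask.any():
            rates.append(probs[mask].mean())  # soft TPR for group g
    gaps = [
        (rates[i] - rates[j]).abs()
        for i in range(len(rates))
        for j in range(i + 1, len(rates))
    ]
    eo_gap = torch.stack(gaps).mean() if gaps else logits.new_zeros(())
    return ce + lam * eo_gap

logits = torch.randn(16, 2, requires_grad=True)
labels = torch.randint(0, 2, (16,))
groups = torch.randint(0, 2, (16,))
eo_regularised_loss(logits, labels, groups).backward()
```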

pdf
Improving negation detection with negation-focused pre-training
Thinh Truong | Timothy Baldwin | Trevor Cohn | Karin Verspoor
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Negation is a common linguistic feature that is crucial in many language understanding tasks, yet it remains a hard problem due to the diversity of its expression in different types of text. Recent work shows that state-of-the-art NLP models underperform on samples containing negation across various tasks, and that negation detection models do not transfer well across domains. We propose a new negation-focused pre-training strategy, involving targeted data augmentation and negation masking, to better incorporate negation information into language models. Extensive experiments on common benchmarks show that our proposed approach improves negation detection performance and generalizability over the strong baseline NegBERT (Khandelwal and Sawant, 2020).
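
As a concrete (and purely illustrative) instantiation of negation masking: during masked language model pretraining, negation cues can be masked at a higher rate than other tokens, forcing the model to predict them from context. The cue list and rates below are assumptions, not taken from the paper.

```python
# Illustrative negation masking for MLM pretraining: mask negation cues
# at a much higher rate than ordinary tokens. Cue list and rates assumed.
import random

NEGATION_CUES = {"not", "no", "never", "none", "nothing", "without", "n't"}

def negation_masking(tokens, mask_token="[MASK]", cue_rate=0.8, base_rate=0.15):
    masked = []
    for tok in tokens:
        rate = cue_rate if tok.lower() in NEGATION_CUES else base_rate
        masked.append(mask_token if random.random() < rate else tok)
    return masked

print(negation_masking("The scan showed no evidence of fracture".split()))
```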

pdf
CULG: Commercial Universal Language Generation
Haonan Li | Yameng Huang | Yeyun Gong | Jian Jiao | Ruofei Zhang | Timothy Baldwin | Nan Duan
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track

Pre-trained language models (PLMs) have dramatically improved performance for many natural language processing (NLP) tasks in domains such as finance and healthcare. However, the application of PLMs in the domain of commerce, especially marketing and advertising, remains less studied. In this work, we adapt pre-training methods to the domain of commerce, by proposing CULG, a large-scale commercial universal language generation model which is pre-trained on a corpus drawn from 10 markets across 7 languages. We propose 4 commercial generation tasks and a two-stage training strategy for pre-training, and demonstrate that the proposed strategy yields performance improvements on three generation tasks as compared to single-stage pre-training. Extensive experiments show that our model outperforms other models by a large margin on commercial generation tasks, and we conclude with a discussion on additional applications over other markets, languages, and tasks.

pdf
LED down the rabbit hole: exploring the potential of global attention for biomedical multi-document summarisation
Yulia Otmakhova | Thinh Hung Truong | Timothy Baldwin | Trevor Cohn | Karin Verspoor | Jey Han Lau
Proceedings of the Third Workshop on Scholarly Document Processing

In this paper we report on the experiments performed for our submission to the Multi-document Summarisation for Literature Reviews (MSLR) shared task. In particular, we adapt the PRIMERA model to the biomedical domain by placing global attention on important biomedical entities in several ways. We analyse the outputs of the 23 resulting models, and report patterns related to the presence of additional global attention, the number of training steps, and the input configuration.
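
For readers unfamiliar with the mechanism, the sketch below shows how global attention is placed on selected token positions with a HuggingFace LED model via its global_attention_mask; deciding which entity positions to mark is the substance of the paper, so here they are simply assumed given.

```python
# Sketch: adding global attention on selected tokens with a Longformer
# Encoder-Decoder (LED) model. Entity positions are assumed given here.
import torch
from transformers import LEDForConditionalGeneration, LEDTokenizer

tok = LEDTokenizer.from_pretrained("allenai/led-base-16384")
model = LEDForConditionalGeneration.from_pretrained("allenai/led-base-16384")

docs = "Study 1: aspirin reduced mortality ... Study 2: ..."
enc = tok(docs, return_tensors="pt")

# 0 = local attention, 1 = global attention
global_mask = torch.zeros_like(enc["input_ids"])
global_mask[:, 0] = 1            # <s> token: the conventional default
entity_positions = [3, 4]        # assumed token indices of biomedical entities
global_mask[:, entity_positions] = 1

summary_ids = model.generate(
    enc["input_ids"],
    attention_mask=enc["attention_mask"],
    global_attention_mask=global_mask,
    max_length=64,
)
print(tok.decode(summary_ids[0], skip_special_tokens=True))
```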

pdf
Can Pretrained Language Models Generate Persuasive, Faithful, and Informative Ad Text for Product Descriptions?
Fajri Koto | Jey Han Lau | Timothy Baldwin
Proceedings of the Fifth Workshop on e-Commerce and NLP (ECNLP 5)

For any e-commerce service, persuasive, faithful, and informative product descriptions can attract shoppers and improve sales. While not all sellers are capable of providing such interesting descriptions, a language generation system can be a source of such descriptions at scale, and can potentially assist sellers to improve their product descriptions. Most previous work has addressed this task with statistical approaches (Wang et al., 2017), limited attributes such as titles (Chen et al., 2019; Chan et al., 2020), and a focus on only one product type (Wang et al., 2017; Munigala et al., 2018; Hong et al., 2021). In this paper, we jointly train on image features and 10 text attributes across 23 diverse product types, with two different target text types with different writing styles: bullet points and paragraph descriptions. Our findings suggest that multimodal training with modern pretrained language models can generate fluent and persuasive advertisements, but that these are less faithful and informative, especially out of domain.

pdf
Automatic Explanation Generation For Climate Science Claims
Rui Xing | Shraey Bhatia | Timothy Baldwin | Jey Han Lau
Proceedings of the 20th Annual Workshop of the Australasian Language Technology Association

Climate change is an existential threat to humanity, and the proliferation of unsubstantiated claims relating to climate science is manipulating public perception, motivating the need for fact-checking in climate science. In this work, we draw on recent work that uses retrieval-augmented generation for veracity prediction and explanation generation, framing explanation generation as a query-focused multi-document summarization task. We adapt PRIMERA to the climate science domain by adding additional global attention on claims. Through automatic evaluation and qualitative analysis, we demonstrate that our method is effective at generating explanations.

pdf
Evaluating the Examiner: The Perils of Pearson Correlation for Validating Text Similarity Metrics
Gisela Vallejo | Timothy Baldwin | Lea Frermann
Proceedings of the 20th Annual Workshop of the Australasian Language Technology Association

In recent years, researchers have developed question-answering based approaches to automatically evaluate system summaries, reporting improved validity compared to word overlap-based metrics like ROUGE, in terms of correlation with human ratings of criteria including fluency and hallucination. In this paper, we take a closer look at one particular metric, QuestEval, and ask whether: (1) it can serve as a more general metric for long document similarity assessment; and (2) a single correlation score between metric scores and human ratings, as is currently standard practice, is sufficient for metric validation. We find that correlation scores can be misleading, and that score distributions and outliers should be taken into account. With these caveats in mind, QuestEval can be a promising candidate for long document similarity assessment.

pdf
What does it take to bake a cake? The RecipeRef corpus and anaphora resolution in procedural text
Biaoyan Fang | Timothy Baldwin | Karin Verspoor
Findings of the Association for Computational Linguistics: ACL 2022

Procedural text contains rich anaphoric phenomena, yet has not received much attention in NLP. To fill this gap, we investigate the textual properties of two types of procedural text, recipes and chemical patents, and generalize an anaphora annotation framework developed for the chemical domain for modeling anaphoric phenomena in recipes. We apply this framework to annotate the RecipeRef corpus with both bridging and coreference relations. Through comparison to chemical patents, we show the complexity of anaphora resolution in recipes. We demonstrate empirically that transfer learning from the chemical domain improves resolution of anaphora in recipes, suggesting transferability of general procedural knowledge.

pdf
Does Representational Fairness Imply Empirical Fairness?
Aili Shen | Xudong Han | Trevor Cohn | Timothy Baldwin | Lea Frermann
Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022

NLP technologies can cause unintended harms if learned representations encode sensitive attributes of the author, or predictions systematically vary in quality across groups. Popular debiasing approaches, like adversarial training, remove sensitive information from representations in order to reduce disparate performance; however, the relation between representational fairness and empirical (performance) fairness has not been systematically studied. This paper fills this gap, and proposes a novel debiasing method building on contrastive learning, which encourages a latent space that separates instances based on target label, while mixing instances that share protected attributes. Our results show the effectiveness of our new method and, more importantly, show across a set of diverse debiasing methods that representational fairness does not imply empirical fairness. This work highlights the importance of aligning and understanding the relation between the optimization objective and the final fairness target.

pdf
M3: Multi-level dataset for Multi-document summarisation of Medical studies
Yulia Otmakhova | Karin Verspoor | Timothy Baldwin | Antonio Jimeno Yepes | Jey Han Lau
Findings of the Association for Computational Linguistics: EMNLP 2022

We present M3 (Multi-level dataset for Multi-document summarisation of Medical studies), a benchmark dataset for evaluating the quality of summarisation systems in the biomedical domain. The dataset contains sets of multiple input documents and target summaries of three levels of complexity: documents, sentences, and propositions. The dataset also includes several levels of annotation, including biomedical entities, direction, and strength of relations between them, and the discourse relationships between the input documents (“contradiction” or “agreement”). We showcase usage scenarios of the dataset by testing 10 generic and domain-specific summarisation models in a zero-shot setting, and introduce a probing task based on counterfactuals to test if models are aware of the direction and strength of the conclusions generated from input studies.

pdf
Easy-First Bottom-Up Discourse Parsing via Sequence Labelling
Andrew Shen | Fajri Koto | Jey Han Lau | Timothy Baldwin
Proceedings of the 3rd Workshop on Computational Approaches to Discourse

We propose a novel unconstrained bottom-up approach for rhetorical discourse parsing based on sequence labelling of adjacent pairs of discourse units (DUs), following the framework of Koto et al. (2021). We describe the unique training requirements of an unconstrained parser, and explore two different training procedures: (1) fixed left-to-right; and (2) random order in tree construction. Additionally, we introduce a novel dynamic oracle for unconstrained bottom-up parsing. Our proposed parser achieves competitive results for bottom-up rhetorical discourse parsing.

pdf
Uncertainty Estimation and Reduction of Pre-trained Models for Text Regression
Yuxia Wang | Daniel Beck | Timothy Baldwin | Karin Verspoor
Transactions of the Association for Computational Linguistics, Volume 10

State-of-the-art classification and regression models are often not well calibrated, and cannot reliably provide uncertainty estimates, limiting their utility in safety-critical applications such as clinical decision-making. While recent work has focused on calibration of classifiers, there is almost no work in NLP on calibration in a regression setting. In this paper, we quantify the calibration of pre-trained language models for text regression, both intrinsically and extrinsically. We further apply uncertainty estimates to augment training data in low-resource domains. Our experiments on three regression tasks in both self-training and active-learning settings show that uncertainty estimation can be used to increase overall performance and enhance model generalization.
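
One standard intrinsic uncertainty estimator for neural regression is Monte Carlo dropout; the sketch below shows the general pattern. It is illustrative only: the paper evaluates a range of estimators and calibration methods, and the head architecture here is an assumption.

```python
# Sketch: Monte Carlo dropout for regression uncertainty. Keeping dropout
# active at inference and sampling multiple forward passes yields a mean
# prediction plus a variance usable, e.g., to select confident
# pseudo-labelled examples in self-training. (Illustrative only.)
import torch
import torch.nn as nn

class RegressionHead(nn.Module):
    def __init__(self, dim=768):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, 256), nn.ReLU(), nn.Dropout(0.1), nn.Linear(256, 1)
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

def mc_dropout_predict(model, x, n_samples=20):
    model.train()  # keep dropout active at inference time
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(0), samples.var(0)  # prediction, uncertainty

head = RegressionHead()
feats = torch.randn(4, 768)  # e.g. [CLS] embeddings from a pretrained LM
mean, var = mc_dropout_predict(head, feats)
```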

pdf
The patient is more dead than alive: exploring the current state of the multi-document summarisation of the biomedical literature
Yulia Otmakhova | Karin Verspoor | Timothy Baldwin | Jey Han Lau
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Although multi-document summarisation (MDS) of the biomedical literature is a highly valuable task that has recently attracted substantial interest, evaluation of the quality of biomedical summaries lacks consistency and transparency. In this paper, we examine the summaries generated by two current models in order to understand the deficiencies of existing evaluation approaches in the context of the challenges that arise in the MDS task. Based on this analysis, we propose a new approach to human evaluation and identify several challenges that must be overcome to develop effective biomedical MDS systems.

pdf
One Country, 700+ Languages: NLP Challenges for Underrepresented Languages and Dialects in Indonesia
Alham Fikri Aji | Genta Indra Winata | Fajri Koto | Samuel Cahyawijaya | Ade Romadhony | Rahmad Mahendra | Kemal Kurniawan | David Moeljadi | Radityo Eko Prasojo | Timothy Baldwin | Jey Han Lau | Sebastian Ruder
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

NLP research is impeded by a lack of resources and awareness of the challenges presented by underrepresented languages and dialects. Focusing on the languages spoken in Indonesia, the second most linguistically diverse and the fourth most populous nation of the world, we provide an overview of the current state of NLP research for Indonesia’s 700+ languages. We highlight challenges in Indonesian NLP and how these affect the performance of current NLP systems. Finally, we provide general recommendations to help develop NLP technology not only for languages of Indonesia but also other underrepresented languages.

pdf
Balancing out Bias: Achieving Fairness Through Balanced Training
Xudong Han | Timothy Baldwin | Trevor Cohn
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing

Group bias in natural language processing tasks manifests as disparities in system error rates across texts authored by different demographic groups, typically disadvantaging minority groups. Dataset balancing has been shown to be effective at mitigating bias; however, existing approaches do not directly account for correlations between author demographics and linguistic variables, limiting their effectiveness. To achieve Equal Opportunity fairness, such as equal job opportunity without regard to demographics, this paper introduces a simple but highly effective objective for countering bias using balanced training. We extend the method in the form of a gated model, which incorporates protected attributes as input, and show that it is effective at reducing bias in predictions through demographic input perturbation, outperforming all other bias mitigation techniques when combined with balanced training.
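
The simplest instantiation of balanced training is instance reweighting by the inverse frequency of each (label, protected attribute) combination; the sketch below illustrates that idea, and may differ in detail from the paper's formulation.

```python
# Minimal sketch of balanced training via instance reweighting: weight each
# example by the inverse frequency of its (label, protected-attribute)
# combination, so every demographic-class cell contributes equally.
from collections import Counter

def balanced_weights(labels, groups):
    counts = Counter(zip(labels, groups))
    n_cells = len(counts)
    n = len(labels)
    # each (label, group) cell ends up with total weight n / n_cells
    return [n / (n_cells * counts[(y, g)]) for y, g in zip(labels, groups)]

labels = [1, 1, 1, 0, 0, 1]
groups = ["a", "a", "b", "a", "b", "a"]
w = balanced_weights(labels, groups)
# minority cells such as (1, "b") receive proportionally larger weights
print([round(x, 2) for x in w])  # [0.5, 0.5, 1.5, 1.5, 1.5, 0.5]
```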

pdf
FairLib: A Unified Framework for Assessing and Improving Fairness
Xudong Han | Aili Shen | Yitong Li | Lea Frermann | Timothy Baldwin | Trevor Cohn
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations

This paper presents FairLib, an open-source Python library for assessing and improving model fairness. It provides a systematic framework for quickly accessing benchmark datasets, reproducing existing debiasing baseline models, developing new methods, evaluating models with different metrics, and visualizing their results. Its modularity and extensibility enable the framework to be used for diverse types of inputs, including natural language, images, and audio. We implement 14 debiasing methods, including pre-processing, at-training-time, and post-processing approaches. The built-in metrics cover the most commonly acknowledged fairness criteria and can be further generalized and customized for fairness evaluation.

pdf bib
Cloze Evaluation for Deeper Understanding of Commonsense Stories in Indonesian
Fajri Koto | Timothy Baldwin | Jey Han Lau
Proceedings of the First Workshop on Commonsense Representation and Reasoning (CSRR 2022)

Story comprehension that involves complex causal and temporal relations is a critical task in NLP, but previous studies have focused predominantly on English, leaving open the question of how the findings generalize to other languages, such as Indonesian. In this paper, we follow the Story Cloze Test framework of Mostafazadeh et al. (2016) in evaluating story understanding in Indonesian, by constructing a four-sentence story with one correct ending and one incorrect ending. To investigate commonsense knowledge acquisition in language models, we experimented with: (1) a classification task to predict the correct ending; and (2) a generation task to complete the story with a single sentence. We investigate these tasks in two settings: (i) monolingual training and (ii) zero-shot cross-lingual transfer between Indonesian and English.

pdf
Towards Fair Dataset Distillation for Text Classification
Xudong Han | Aili Shen | Yitong Li | Lea Frermann | Timothy Baldwin | Trevor Cohn
Proceedings of The Third Workshop on Simple and Efficient Natural Language Processing (SustaiNLP)

With the growing prevalence of large-scale language models, their energy footprint and potential to learn and amplify historical biases are two pressing challenges. Dataset distillation (DD) reduces the cost of model training by learning a small number of synthetic samples which encode the information in the original dataset, but its impact on fairness has not been studied. We investigate how DD impacts group bias, with experiments over two language classification tasks, concluding that vanilla DD preserves the bias of the dataset. We then show how existing debiasing methods can be combined with DD to produce models that are fair and accurate, at reduced training cost.

pdf
Systematic Evaluation of Predictive Fairness
Xudong Han | Aili Shen | Trevor Cohn | Timothy Baldwin | Lea Frermann
Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

Mitigating bias when training on biased datasets is an important open problem. Several techniques have been proposed; however, the typical evaluation regime is very limited, considering only very narrow data conditions. For instance, the effect of target class imbalance and stereotyping is under-studied. To address this gap, we examine the performance of various debiasing methods across multiple tasks, spanning binary classification (Twitter sentiment), multi-class classification (profession prediction), and regression (valence prediction). Through extensive experimentation, we find that data conditions have a strong influence on relative model performance, and that general conclusions cannot be drawn about method efficacy when evaluating only on standard datasets, as is current practice in fairness research.

pdf
Not another Negation Benchmark: The NaN-NLI Test Suite for Sub-clausal Negation
Thinh Hung Truong | Yulia Otmakhova | Timothy Baldwin | Trevor Cohn | Jey Han Lau | Karin Verspoor
Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

Negation is poorly captured by current language models, although the extent of this problem is not widely understood. We introduce a natural language inference (NLI) test suite to enable probing the capabilities of NLP methods, with the aim of understanding sub-clausal negation. The test suite contains premise–hypothesis pairs where the premise contains sub-clausal negation and the hypothesis is constructed by making minimal modifications to the premise in order to reflect different possible interpretations. Aside from adopting standard NLI labels, our test suite is systematically constructed under a rigorous linguistic framework. It includes annotation of negation types and constructions grounded in linguistic theory, as well as the operations used to construct hypotheses. This facilitates fine-grained analysis of model performance. We conduct experiments using pre-trained language models to demonstrate that our test suite is more challenging than existing benchmarks focused on negation, and show how our annotation supports a deeper understanding of the current NLI capabilities in terms of negation and quantification.

2021

pdf
On the (In)Effectiveness of Images for Text Classification
Chunpeng Ma | Aili Shen | Hiyori Yoshikawa | Tomoya Iwakura | Daniel Beck | Timothy Baldwin
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume

Images are core components of multi-modal learning in natural language processing (NLP), and results have varied substantially as to whether images improve NLP tasks or not. One confounding effect has been that previous NLP research has generally focused on sophisticated tasks (in varying settings), generally applied to English only. We focus on text classification, in the context of assigning named entity classes to a given Wikipedia page, where images generally complement the text and the Wikipedia page can be in one of a number of different languages. Our experiments across a range of languages show that images complement NLP models (including BERT) trained without external pre-training, but when combined with BERT models pre-trained on large-scale external data, images contribute nothing.

pdf
Top-down Discourse Parsing via Sequence Labelling
Fajri Koto | Jey Han Lau | Timothy Baldwin
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume

We introduce a top-down approach to discourse parsing that is conceptually simpler than its predecessors (Kobayashi et al., 2020; Zhang et al., 2020). By framing the task as a sequence labelling problem where the goal is to iteratively segment a document into individual discourse units, we are able to eliminate the decoder and reduce the search space for splitting points. We explore both traditional recurrent models and modern pre-trained transformer models for the task, and additionally introduce a novel dynamic oracle for top-down parsing. Based on the Full metric, our proposed LSTM model sets a new state-of-the-art for RST parsing.

pdf
ChEMU-Ref: A Corpus for Modeling Anaphora Resolution in the Chemical Domain
Biaoyan Fang | Christian Druckenbrodt | Saber A Akhondi | Jiayuan He | Timothy Baldwin | Karin Verspoor
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume

Chemical patents contain rich coreference and bridging links, which are the target of this research. Specifically, we introduce a novel annotation scheme, based on which we create the ChEMU-Ref dataset from reaction description snippets in English-language chemical patents. We propose a neural approach to anaphora resolution, which we show achieves strong results, especially when jointly trained over coreference and bridging links.

pdf
Diverse Adversaries for Mitigating Bias in Training
Xudong Han | Timothy Baldwin | Trevor Cohn
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume

Adversarial learning can learn fairer and less biased models of language processing than standard training. However, current adversarial techniques only partially mitigate the problem of model bias, and in addition their training procedures are often unstable. In this paper, we propose a novel approach to adversarial learning based on the use of multiple diverse discriminators, whereby discriminators are encouraged to learn hidden representations that are orthogonal to one another. Experimental results show that our method substantially improves over standard adversarial removal methods, in terms of both bias reduction and training stability.
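
Schematically, the approach combines several adversarial discriminators with a penalty that pushes their hidden representations towards mutual orthogonality; the sketch below illustrates that structure (the architecture and training details are assumptions, not the paper's exact setup).

```python
# Sketch: several discriminators over the encoder representation, plus a
# penalty encouraging their hidden layers to be mutually orthogonal so
# they capture complementary protected signal. (Schematic only.)
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    def __init__(self, dim, hidden, n_groups):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh())
        self.out = nn.Linear(hidden, n_groups)

    def forward(self, h):
        z = self.body(h)
        return self.out(z), z

def orthogonality_penalty(hiddens):
    """Mean squared pairwise inner product between discriminators'
    hidden representations; zero iff they are mutually orthogonal."""
    loss = 0.0
    for i in range(len(hiddens)):
        for j in range(i + 1, len(hiddens)):
            loss = loss + (hiddens[i] * hiddens[j]).sum(-1).pow(2).mean()
    return loss

discs = [Discriminator(768, 64, n_groups=2) for _ in range(3)]
h = torch.randn(8, 768)  # encoder output for a batch
logits, zs = zip(*[d(h) for d in discs])
penalty = orthogonality_penalty(zs)
```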

pdf
Decoupling Adversarial Training for Fair NLP
Xudong Han | Timothy Baldwin | Trevor Cohn
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

pdf
Evaluating the Efficacy of Summarization Evaluation across Languages
Fajri Koto | Jey Han Lau | Timothy Baldwin
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

pdf
KFCNet: Knowledge Filtering and Contrastive Learning for Generative Commonsense Reasoning
Haonan Li | Yeyun Gong | Jian Jiao | Ruofei Zhang | Timothy Baldwin | Nan Duan
Findings of the Association for Computational Linguistics: EMNLP 2021

Pre-trained language models have led to substantial gains over a broad range of natural language processing (NLP) tasks, but have been shown to have limitations for natural language generation tasks with high-quality requirements on the output, such as commonsense generation and ad keyword generation. In this work, we present a novel Knowledge Filtering and Contrastive learning Network (KFCNet) which references external knowledge and achieves better generation performance. Specifically, we propose a BERT-based filter model to remove low-quality candidates, and apply contrastive learning separately to each of the encoder and decoder, within a general encoder–decoder architecture. The encoder contrastive module helps to capture global target semantics during encoding, and the decoder contrastive module enhances the utility of retrieved prototypes while learning general features. Extensive experiments on the CommonGen benchmark show that our model outperforms the previous state of the art by a large margin: +6.6 points (42.5 vs. 35.9) for BLEU-4, +3.7 points (33.3 vs. 29.6) for SPICE, and +1.3 points (18.3 vs. 17.0) for CIDEr. We further verify the effectiveness of the proposed contrastive module on ad keyword generation, and show that our model has potential commercial value.

pdf
‘Just What do You Think You’re Doing, Dave?’ A Checklist for Responsible Data Use in NLP
Anna Rogers | Timothy Baldwin | Kobi Leins
Findings of the Association for Computational Linguistics: EMNLP 2021

A key part of the NLP ethics movement is responsible use of data, but exactly what that means or how it can be best achieved remain unclear. This position paper discusses the core legal and ethical principles for collection and sharing of textual data, and the tensions between them. We propose a potential checklist for responsible data (re-)use that could both standardise the peer review of conference submissions, as well as enable a more in-depth view of published research across the community. Our proposal aims to contribute to the development of a consistent standard for data (re-)use, embraced across NLP conferences.

pdf bib
Proceedings of the Seventh Workshop on Noisy User-generated Text (W-NUT 2021)
Wei Xu | Alan Ritter | Tim Baldwin | Afshin Rahimi
Proceedings of the Seventh Workshop on Noisy User-generated Text (W-NUT 2021)

pdf
MultiLexNorm: A Shared Task on Multilingual Lexical Normalization
Rob van der Goot | Alan Ramponi | Arkaitz Zubiaga | Barbara Plank | Benjamin Muller | Iñaki San Vicente Roncal | Nikola Ljubešić | Özlem Çetinoğlu | Rahmad Mahendra | Talha Çolakoğlu | Timothy Baldwin | Tommaso Caselli | Wladimir Sidorenko
Proceedings of the Seventh Workshop on Noisy User-generated Text (W-NUT 2021)

Lexical normalization is the task of transforming an utterance into its standardized form. This task is beneficial for downstream analysis, as it provides a way to harmonize (often spontaneous) linguistic variation. Such variation is typical of social media, on which information is shared in a multitude of ways, including diverse languages and code-switching. Since the seminal work of Han and Baldwin (2011) a decade ago, lexical normalization has attracted attention in English and multiple other languages. However, a common benchmark for comparing systems across languages, with a homogeneous data and evaluation setup, has been lacking. The MultiLexNorm shared task sets out to fill this gap. We provide the largest publicly available multilingual lexical normalization benchmark, including 13 language variants. We propose a homogenized evaluation setup with both intrinsic and extrinsic evaluation. As extrinsic evaluation, we use dependency parsing and part-of-speech tagging with adapted evaluation metrics (a-LAS, a-UAS, and a-POS) to account for alignment discrepancies. The shared task hosted at W-NUT 2021 attracted 9 participants and 18 submissions. The results show that neural normalization systems outperform the previous state-of-the-art system by a large margin. Downstream parsing and part-of-speech tagging performance is positively affected, but to varying degrees, with improvements of up to 1.72 a-LAS, 0.85 a-UAS, and 1.54 a-POS for the winning system.

pdf
Evaluating Hierarchical Document Categorisation
Qian Sun | Aili Shen | Hiyori Yoshikawa | Chunpeng Ma | Daniel Beck | Tomoya Iwakura | Timothy Baldwin
Proceedings of the 19th Annual Workshop of the Australasian Language Technology Association

Hierarchical document categorisation is a special case of multi-label document categorisation, where there is a taxonomic hierarchy among the labels. While various approaches have been proposed for hierarchical document categorisation, there is no standard benchmark dataset, resulting in different methods being evaluated independently and there being no empirical consensus on what methods perform best. In this work, we examine different combinations of neural text encoders and hierarchical methods in an end-to-end framework, and evaluate over three datasets. We find that the performance of hierarchical document categorisation is determined not only by how the hierarchical information is modelled, but also the structure of the label hierarchy and class distribution.

pdf
Fairness-aware Class Imbalanced Learning
Shivashankar Subramanian | Afshin Rahimi | Timothy Baldwin | Trevor Cohn | Lea Frermann
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

Class imbalance is a common challenge in many NLP tasks, and has clear connections to bias, in that bias in training data often leads to higher accuracy for majority groups at the expense of minority groups. However, there has traditionally been a disconnect between research on class-imbalanced learning and mitigating bias, and only recently have the two been looked at through a common lens. In this work we evaluate long-tail learning methods for tweet sentiment and occupation classification, and extend a margin-loss based approach with methods to enforce fairness. We empirically show through controlled experiments that the proposed approaches help mitigate both class imbalance and demographic biases.

pdf
Evaluating Debiasing Techniques for Intersectional Biases
Shivashankar Subramanian | Xudong Han | Timothy Baldwin | Trevor Cohn | Lea Frermann
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

Bias is pervasive in NLP models, motivating the development of automatic debiasing techniques. Evaluation of NLP debiasing methods has largely been limited to binary attributes in isolation, e.g., debiasing with respect to binary gender or race; however, many corpora involve multiple such attributes, possibly with higher cardinality. In this paper we argue that a truly fair model must consider ‘gerrymandering’ groups which comprise not only single attributes, but also intersectional groups. We evaluate a form of bias-constrained model which is new to NLP, as well as an extension of the iterative nullspace projection technique which can handle multiple identities.

pdf
IndoBERTweet: A Pretrained Language Model for Indonesian Twitter with Effective Domain-Specific Vocabulary Initialization
Fajri Koto | Jey Han Lau | Timothy Baldwin
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

We present IndoBERTweet, the first large-scale pretrained model for Indonesian Twitter that is trained by extending a monolingually-trained Indonesian BERT model with additive domain-specific vocabulary. We focus in particular on efficient model adaptation under vocabulary mismatch, and benchmark different ways of initializing the BERT embedding layer for new word types. We find that initializing with the average BERT subword embedding makes pretraining five times faster, and is more effective than proposed methods for vocabulary adaptation in terms of extrinsic evaluation over seven Twitter-based datasets.
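
A minimal sketch of the initialisation described above, using the HuggingFace API: new tokens are added to the vocabulary, the embedding matrix is resized, and each new row is initialised as the mean of the embeddings of the subwords the word maps to under the original tokenizer. The model name and new tokens here are examples only.

```python
# Minimal sketch of average-subword embedding initialisation for new
# domain-specific vocabulary. Model name and new tokens are examples.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

base = "indolem/indobert-base-uncased"  # example base model
tok = AutoTokenizer.from_pretrained(base)
model = AutoModelForMaskedLM.from_pretrained(base)

new_words = ["wkwk", "gaes"]  # example Twitter-specific word types
# record each word's subword decomposition *before* extending the vocab
subword_ids = {w: tok(w, add_special_tokens=False)["input_ids"]
               for w in new_words}

tok.add_tokens(new_words)
model.resize_token_embeddings(len(tok))

emb = model.get_input_embeddings().weight
with torch.no_grad():
    for w in new_words:
        new_id = tok.convert_tokens_to_ids(w)
        # initialise the new row as the mean of its subword embeddings
        emb[new_id] = emb[subword_ids[w]].mean(dim=0)
```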

pdf
Evaluating Document Coherence Modeling
Aili Shen | Meladel Mistica | Bahar Salehi | Hang Li | Timothy Baldwin | Jianzhong Qi
Transactions of the Association for Computational Linguistics, Volume 9

While pretrained language models (LMs) have driven impressive gains over morpho-syntactic and semantic tasks, their ability to model discourse and pragmatic phenomena is less clear. As a step towards a better understanding of their discourse modeling capabilities, we propose a sentence intrusion detection task. We examine the performance of a broad range of pretrained LMs on this detection task for English. Lacking a dataset for the task, we introduce INSteD, a novel intruder sentence detection dataset, containing 170,000+ documents constructed from English Wikipedia and CNN news articles. Our experiments show that pretrained LMs perform impressively in in-domain evaluation, but experience a substantial drop in the cross-domain setting, indicating limited generalization capacity. Further results over a novel linguistic probe dataset show that there is substantial room for improvement, especially in the cross-domain setting.

pdf
Automatic Classification of Neutralization Techniques in the Narrative of Climate Change Scepticism
Shraey Bhatia | Jey Han Lau | Timothy Baldwin
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Neutralisation techniques, e.g. denial of responsibility and denial of victim, are used in the narrative of climate change scepticism to justify lack of action or to promote an alternative view. We first draw on social science to introduce the problem to the NLP community and present the granularity of the coding schema. We then collect manual annotations of neutralisation techniques in text relating to climate change, and experiment with supervised and semi-supervised BERT-based models.

pdf
Discourse Probing of Pretrained Language Models
Fajri Koto | Jey Han Lau | Timothy Baldwin
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Existing work on probing of pretrained language models (LMs) has predominantly focused on sentence-level syntactic tasks. In this paper, we introduce document-level discourse probing to evaluate the ability of pretrained LMs to capture document-level relations. We experiment with 7 pretrained LMs, 4 languages, and 7 discourse probing tasks, and find BART to be overall the best model at capturing discourse — but only in its encoder, with BERT performing surprisingly well as the baseline model. Across the different models, there are substantial differences in which layers best capture discourse information, and large disparities between models.

pdf
Semi-automatic Triage of Requests for Free Legal Assistance
Meladel Mistica | Jey Han Lau | Brayden Merrifield | Kate Fazio | Timothy Baldwin
Proceedings of the Natural Legal Language Processing Workshop 2021

Free legal assistance is critically under-resourced, and many of those who seek legal help have their needs unmet. A major bottleneck in the provision of free legal assistance to those most in need is the determination of the precise nature of the legal problem. This paper describes a collaboration with a major provider of free legal assistance, and the deployment of natural language processing models to assign area-of-law categories to real-world requests for legal assistance. In particular, we focus on an investigation of models to generate efficiencies in the triage process, but also the risks associated with naive use of model predictions, including fairness across different user demographics.

pdf
Automatic Resolution of Domain Name Disputes
Wayan Oger Vihikan | Meladel Mistica | Inbar Levy | Andrew Christie | Timothy Baldwin
Proceedings of the Natural Legal Language Processing Workshop 2021

We introduce the new task of domain name dispute resolution (DNDR), which predicts the outcome of a process for resolving disputes about legal entitlement to a domain name. The ICANN UDRP establishes a mandatory arbitration process for disputes between a trademark owner and a domain name registrant pertaining to a generic Top-Level Domain (gTLD) name (one ending in .COM, .ORG, .NET, etc.). The nature of the problem leads to a very skewed data set, which stems from the fact that a domain name can be registered with extreme ease, at very little expense, and with no need to prove an entitlement to it. In this paper, we describe the task and associated data set. We also present benchmarking results based on a range of models, which show that simple baselines are in general difficult to beat due to the skewed data distribution, but that in the specific case of the respondent having submitted a response, a fine-tuned BERT model offers considerable improvements over a majority-class model.

pdf
A Simple yet Effective Method for Sentence Ordering
Aili Shen | Timothy Baldwin
Proceedings of the 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue

Sentence ordering is the task of arranging a given bag of sentences so as to maximise the coherence of the overall text. In this work, we propose a simple yet effective training method that improves the capacity of models to capture overall text coherence based on training over pairs of sentences/segments. Experimental results show the superiority of our proposed method in in- and cross-domain settings. The utility of our method is also verified over a multi-document summarisation task.

pdf bib
Learning Contextualised Cross-lingual Word Embeddings and Alignments for Extremely Low-Resource Languages Using Parallel Corpora
Takashi Wada | Tomoharu Iwata | Yuji Matsumoto | Timothy Baldwin | Jey Han Lau
Proceedings of the 1st Workshop on Multilingual Representation Learning

We propose a new approach for learning contextualised cross-lingual word embeddings based on a small parallel corpus (e.g. a few hundred sentence pairs). Our method obtains word embeddings via an LSTM encoder-decoder model that simultaneously translates and reconstructs an input sentence. Through sharing model parameters among different languages, our model jointly trains the word embeddings in a common cross-lingual space. We also propose to combine word and subword embeddings to make use of orthographic similarities across different languages. We base our experiments on real-world data from endangered languages, namely Yongning Na, Shipibo-Konibo, and Griko. Our experiments on bilingual lexicon induction and word alignment tasks show that our model outperforms existing methods by a large margin for most language pairs. These results demonstrate that, contrary to common belief, an encoder-decoder translation model is beneficial for learning cross-lingual representations even in extremely low-resource conditions. Furthermore, our model also works well on high-resource conditions, achieving state-of-the-art performance on a German-English word-alignment task.

2020

pdf
Learning from Unlabelled Data for Clinical Semantic Textual Similarity
Yuxia Wang | Karin Verspoor | Timothy Baldwin
Proceedings of the 3rd Clinical Natural Language Processing Workshop

Domain pretraining followed by task fine-tuning has become the standard paradigm for NLP tasks, but requires in-domain labelled data for task fine-tuning. To overcome this, we propose to utilise domain unlabelled data by assigning pseudo labels from a general model. We evaluate the approach on two clinical STS datasets, and achieve r = 0.80 on N2C2-STS. Further investigation reveals that if the data distribution of unlabelled sentence pairs is closer to the test data, we can obtain better performance. By leveraging a large general-purpose STS dataset and small-scale in-domain training data, we obtain further improvements to r = 0.90, a new SOTA.
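
The pseudo-labelling recipe can be sketched in a few lines: score unlabelled in-domain sentence pairs with a general-domain STS model, then mix them into the fine-tuning data. The model name and example pairs below are assumptions for illustration, not the paper's setup.

```python
# Sketch of pseudo-labelling for clinical STS: a general-domain model
# assigns similarity scores to unlabelled in-domain pairs, which can then
# be added to fine-tuning data. Model name and pairs are illustrative.
from sentence_transformers import CrossEncoder

general_model = CrossEncoder("cross-encoder/stsb-roberta-base")

unlabelled_pairs = [
    ("The patient denies chest pain.", "No chest pain reported."),
    ("BP was 120/80 at admission.", "Blood pressure was within normal range."),
]
pseudo_scores = general_model.predict(unlabelled_pairs)

# pair each sentence pair with its pseudo similarity label for fine-tuning
pseudo_labelled = [
    (s1, s2, float(score))
    for (s1, s2), score in zip(unlabelled_pairs, pseudo_scores)
]
```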

pdf
A Multi-pass Sieve for Clinical Concept Normalization
Yuxia Wang | Brian Hur | Karin Verspoor | Timothy Baldwin
Traitement Automatique des Langues, Volume 61, Numéro 2 : TAL et Santé [NLP and Health]

pdf
Evaluating the Utility of Model Configurations and Data Augmentation on Clinical Semantic Textual Similarity
Yuxia Wang | Fei Liu | Karin Verspoor | Timothy Baldwin
Proceedings of the 19th SIGBioMed Workshop on Biomedical Language Processing

In this paper, we apply pre-trained language models to the Semantic Textual Similarity (STS) task, with a specific focus on the clinical domain. In the low-resource setting of clinical STS, these large models tend to be impractical and prone to overfitting. Building on BERT, we study the impact of a number of model design choices, namely different fine-tuning and pooling strategies. We observe that the impact of domain-specific fine-tuning on clinical STS is much less than that in the general domain, likely due to the concept richness of the domain. Based on this, we propose two data augmentation techniques. Experimental results on N2C2-STS demonstrate substantial improvements, validating the utility of the proposed methods.

pdf
Domain Adaptation and Instance Selection for Disease Syndrome Classification over Veterinary Clinical Notes
Brian Hur | Timothy Baldwin | Karin Verspoor | Laura Hardefeldt | James Gilkerson
Proceedings of the 19th SIGBioMed Workshop on Biomedical Language Processing

Identifying the reasons for antibiotic administration in veterinary records is a critical component of understanding antimicrobial usage patterns. This informs antimicrobial stewardship programs designed to fight antimicrobial resistance, a major health crisis affecting both humans and animals in which veterinarians have an important role to play. We propose a document classification approach to determine the reason for administration of a given drug, with particular focus on domain adaptation from one drug to another, and instance selection to minimize annotation effort.

pdf
Liputan6: A Large-scale Indonesian Dataset for Text Summarization
Fajri Koto | Jey Han Lau | Timothy Baldwin
Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing

In this paper, we introduce a large-scale Indonesian summarization dataset. We harvest articles from Liputan6.com, an online news portal, and obtain 215,827 document–summary pairs. We leverage pre-trained language models to develop benchmark extractive and abstractive summarization methods over the dataset with multilingual and monolingual BERT-based models. We include a thorough error analysis by examining machine-generated summaries that have low ROUGE scores, and expose both issues with ROUGE itself, as well as with extractive and abstractive summarization models.

pdf
IndoLEM and IndoBERT: A Benchmark Dataset and Pre-trained Language Model for Indonesian NLP
Fajri Koto | Afshin Rahimi | Jey Han Lau | Timothy Baldwin
Proceedings of the 28th International Conference on Computational Linguistics

Although the Indonesian language is spoken by almost 200 million people and is the 10th most spoken language in the world, it is under-represented in NLP research. Previous work on Indonesian has been hampered by a lack of annotated datasets, a sparsity of language resources, and a lack of resource standardization. In this work, we release the IndoLEM dataset comprising seven tasks for the Indonesian language, spanning morpho-syntax, semantics, and discourse. We additionally release IndoBERT, a new pre-trained language model for Indonesian, and evaluate it over IndoLEM, in addition to benchmarking it against existing resources. Our experiments show that IndoBERT achieves state-of-the-art performance over most of the tasks in IndoLEM.

pdf
Target Word Masking for Location Metonymy Resolution
Haonan Li | Maria Vasardani | Martin Tomko | Timothy Baldwin
Proceedings of the 28th International Conference on Computational Linguistics

Existing metonymy resolution approaches rely on features extracted from external resources like dictionaries and hand-crafted lexical resources. In this paper, we propose an end-to-end word-level classification approach based only on BERT, without dependencies on taggers, parsers, curated dictionaries of place names, or other external resources. We show that our approach achieves the state-of-the-art on 5 datasets, surpassing conventional BERT models and benchmarks by a large margin. We also show that our approach generalises well to unseen data.
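
To illustrate the masking mechanism, the sketch below replaces the target mention with [MASK] and classifies from that position's contextual representation; the classification head and example are illustrative rather than the paper's exact setup.

```python
# Sketch of target word masking for metonymy classification: replace the
# target mention with [MASK] and classify from that position's contextual
# representation. (Schematic; head and example are illustrative.)
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-cased")
encoder = AutoModel.from_pretrained("bert-base-cased")
classifier = nn.Linear(encoder.config.hidden_size, 2)  # literal vs. metonymic

sentence = "Vietnam was a defining moment in U.S. history."
target = "Vietnam"
masked = sentence.replace(target, tok.mask_token, 1)

enc = tok(masked, return_tensors="pt")
mask_pos = (enc["input_ids"][0] == tok.mask_token_id).nonzero(as_tuple=True)[0]
hidden = encoder(**enc).last_hidden_state[0, mask_pos]  # (1, hidden_size)
logits = classifier(hidden)
```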

pdf
WikiUMLS: Aligning UMLS to Wikipedia via Cross-lingual Neural Ranking
Afshin Rahimi | Timothy Baldwin | Karin Verspoor
Proceedings of the 28th International Conference on Computational Linguistics

We present our work on aligning the Unified Medical Language System (UMLS) to Wikipedia, to facilitate manual alignment of the two resources. We propose a cross-lingual neural reranking model to match a UMLS concept with a Wikipedia page, which achieves a recall@1 of 72%, a substantial improvement of 20% over word- and char-level BM25, enabling manual alignment with minimal effort. We release our resources, including ranked Wikipedia pages for 700k UMLS concepts, and WikiUMLS, a dataset for training and evaluation of alignment models between UMLS and Wikipedia collected from Wikidata. This will provide easier access to Wikipedia for health professionals, patients, and NLP systems, including in multilingual settings.

pdf
Information Extraction from Legal Documents: A Study in the Context of Common Law Court Judgements
Meladel Mistica | Geordie Z. Zhang | Hui Chia | Kabir Manandhar Shrestha | Rohit Kumar Gupta | Saket Khandelwal | Jeannie Paterson | Timothy Baldwin | Daniel Beck
Proceedings of the 18th Annual Workshop of the Australasian Language Technology Association

‘Common Law’ judicial systems follow the doctrine of precedent, which means the legal principles articulated in court judgements are binding in subsequent cases in lower courts. For this reason, lawyers must search prior judgements for the legal principles that are relevant to their case. The difficulty for those within the legal profession is that the information that they are looking for may be contained within a few paragraphs or sentences, but those few paragraphs may be buried within a hundred-page document. In this study, we create a schema based on the relevant information that legal professionals seek within judgements and perform text classification based on it, with the aim of not only assisting lawyers in researching cases, but eventually enabling large-scale analysis of legal judgements to find trends in court outcomes over time.

pdf
Popularity Prediction of Online Petitions using a Multimodal Deep Regression Model
Kotaro Kitayama | Shivashankar Subramanian | Timothy Baldwin
Proceedings of the 18th Annual Workshop of the Australasian Language Technology Association

Online petitions offer a mechanism for people to initiate a request for change and gather support from others to demonstrate backing for the cause. In this work, we model petition popularity prediction using both text and image representations across four different languages, together with petition metadata. We evaluate our proposed approach using a dataset of 75k petitions from Avaaz.org, and find strong complementarity between text and images.

pdf
Give Me Convenience and Give Her Death: Who Should Decide What Uses of NLP are Appropriate, and on What Basis?
Kobi Leins | Jey Han Lau | Timothy Baldwin
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

As part of growing NLP capabilities, coupled with an awareness of the ethical dimensions of research, questions have been raised about whether particular datasets and tasks should be deemed off-limits for NLP research. We examine this question with respect to a paper on automatic legal sentencing from EMNLP 2019 which was a source of some debate, in asking whether the paper should have been allowed to be published, who should have been charged with making such a decision, and on what basis. We focus in particular on the role of data statements in ethically assessing research, but also discuss the topic of dual use, and examine the outcomes of similar debates in other scientific disciplines.

pdf
Tangled up in BLEU: Reevaluating the Evaluation of Automatic Machine Translation Evaluation Metrics
Nitika Mathur | Timothy Baldwin | Trevor Cohn
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

Automatic metrics are fundamental for the development and evaluation of machine translation systems. Judging whether, and to what extent, automatic metrics concur with the gold standard of human evaluation is not a straightforward problem. We show that current methods for judging metrics are highly sensitive to the translations used for assessment, particularly the presence of outliers, which often leads to falsely confident conclusions about a metric’s efficacy. Finally, we turn to pairwise system ranking, developing a method for thresholding performance improvement under an automatic metric against human judgements, which allows quantification of type I versus type II errors incurred, i.e., insignificant human differences in system quality that are accepted, and significant human differences that are rejected. Together, these findings suggest improvements to the protocols for metric evaluation and system performance evaluation in machine translation.
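
The thresholding idea lends itself to a compact illustration: given, for each pair of systems, a metric score delta and a flag for whether the human difference was significant, sweeping the threshold trades type I against type II errors. The sketch below assumes this data layout purely for exposition.

def error_counts(pairs, threshold):
    # pairs: iterable of (metric_delta, human_significant) tuples
    type1 = sum(1 for delta, sig in pairs if delta > threshold and not sig)
    type2 = sum(1 for delta, sig in pairs if delta <= threshold and sig)
    return type1, type2

# Invented deltas: larger thresholds accept fewer spurious wins (type I)
# at the cost of rejecting more genuine ones (type II).
pairs = [(0.50, True), (0.40, False), (0.10, True), (0.05, False)]
for t in (0.0, 0.2, 0.45):
    print(t, error_counts(pairs, t))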

pdf bib
Proceedings of the Sixth Workshop on Noisy User-generated Text (W-NUT 2020)
Wei Xu | Alan Ritter | Tim Baldwin | Afshin Rahimi
Proceedings of the Sixth Workshop on Noisy User-generated Text (W-NUT 2020)

pdf
Improved Topic Representations of Medical Documents to Assist COVID-19 Literature Exploration
Yulia Otmakhova | Karin Verspoor | Timothy Baldwin | Simon Šuster
Proceedings of the 1st Workshop on NLP for COVID-19 (Part 2) at EMNLP 2020

Efficient discovery and exploration of biomedical literature has grown in importance in the context of the COVID-19 pandemic, and topic-based methods such as latent Dirichlet allocation (LDA) are a useful tool for this purpose. In this study we compare traditional topic models based on word tokens with topic models based on medical concepts, and propose several ways to improve topic coherence and specificity.

2019

pdf
Deep Ordinal Regression for Pledge Specificity Prediction
Shivashankar Subramanian | Trevor Cohn | Timothy Baldwin
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)

Many pledges are made in the course of an election campaign, forming important corpora for political analysis of campaign strategy and governmental accountability. At present, there are no publicly available annotated datasets of pledges, and most political analyses rely on manual annotations. In this paper we collate a novel dataset of manifestos from eleven Australian federal election cycles, with over 12,000 sentences annotated with specificity (e.g., rhetorical vs detailed pledge) on a fine-grained scale. We propose deep ordinal regression approaches for specificity prediction, under both supervised and semi-supervised settings, and provide empirical results demonstrating the effectiveness of the proposed techniques over several baseline approaches. We analyze the utility of pledge specificity modeling across a spectrum of policy issues in performing ideology prediction, and further provide qualitative analysis in terms of capturing party-specific issue salience across election cycles.
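
As a sketch of what deep ordinal regression over such a specificity scale can look like, the snippet below uses the common "K-1 binary thresholds" decomposition, in which the model predicts P(label > k) for each cut-point. The dimensions and the particular decomposition are assumptions for illustration, not the paper's architecture.

import torch
import torch.nn as nn

class OrdinalHead(nn.Module):
    def __init__(self, in_dim, num_levels):
        super().__init__()
        # K-1 logits, one per cut-point, each modelling P(label > k)
        self.linear = nn.Linear(in_dim, num_levels - 1)

    def forward(self, features):
        return torch.sigmoid(self.linear(features))  # (batch, K-1)

def ordinal_targets(labels, num_levels):
    # label 2 with K=5 becomes [1, 1, 0, 0]: it exceeds cut-points 0 and 1
    cuts = torch.arange(num_levels - 1)
    return (labels.unsqueeze(1) > cuts).float()

head = OrdinalHead(in_dim=128, num_levels=5)
features = torch.randn(8, 128)        # sentence encodings (placeholder)
labels = torch.randint(0, 5, (8,))    # specificity levels 0..4
loss = nn.functional.binary_cross_entropy(head(features),
                                          ordinal_targets(labels, 5))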

pdf bib
Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019)
Wei Xu | Alan Ritter | Tim Baldwin | Afshin Rahimi
Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019)

pdf
Modelling Uncertainty in Collaborative Document Quality Assessment
Aili Shen | Daniel Beck | Bahar Salehi | Jianzhong Qi | Timothy Baldwin
Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019)

In the context of document quality assessment, previous work has mainly focused on predicting the quality of a document relative to a putative gold standard, without paying attention to the subjectivity of this task. To imitate people’s disagreement over inherently subjective tasks such as rating the quality of a Wikipedia article, a document quality assessment system should provide not only a prediction of the article quality but also the uncertainty over its predictions. This motivates us to measure the uncertainty in document quality predictions, in addition to making the label prediction. Experimental results show that both Gaussian processes (GPs) and random forests (RFs) can yield competitive results in predicting the quality of Wikipedia articles, while providing an estimate of uncertainty when there is inconsistency in the quality labels from the Wikipedia contributors. We additionally evaluate our methods in the context of a semi-automated document quality class assignment decision-making process, where there is asymmetric risk associated with overestimates and underestimates of document quality. Our experiments suggest that GPs provide more reliable estimates in this context.
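
As a sketch of how a GP yields the uncertainty estimates discussed here, the snippet below fits a Gaussian process regressor and reads off a predictive mean and standard deviation per document; the features, kernel, and data are placeholders rather than the paper's setup.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X_train = rng.random((100, 10))   # document feature vectors (placeholder)
y_train = rng.random(100)         # quality scores (placeholder)

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X_train, y_train)

mean, std = gp.predict(rng.random((5, 10)), return_std=True)
# `std` quantifies the model's confidence in each prediction, which can feed
# an asymmetric-risk decision rule for assigning quality classes.
print(mean, std)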

pdf
Reevaluating Argument Component Extraction in Low Resource Settings
Anirudh Joshi | Timothy Baldwin | Richard Sinnott | Cecile Paris
Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019)

Argument component extraction is a challenging and complex high-level semantic extraction task. As such, it is both expensive to annotate (meaning training data is limited and low-resource by nature), and hard for current-generation deep learning methods to model. In this paper, we reevaluate the performance of state-of-the-art approaches in both single- and multi-task learning settings using combinations of character-level, GloVe, ELMo, and BERT encodings using standard BiLSTM-CRF encoders. We use evaluation metrics that are more consistent with evaluation practice in named entity recognition to understand how well current baselines address this challenge and compare their performance to lower-level semantic tasks such as CoNLL named entity recognition. We find that performance utilizing various pre-trained representations and training methodologies often leaves a lot to be desired as it currently stands, and suggest future pathways for improvement.

pdf
Contextualization of Morphological Inflection
Ekaterina Vylomova | Ryan Cotterell | Trevor Cohn | Timothy Baldwin | Jason Eisner
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)

Critical to natural language generation is the production of correctly inflected text. In this paper, we isolate the task of predicting a fully inflected sentence from its partially lemmatized version. Unlike traditional morphological inflection or surface realization, our task input does not provide “gold” tags that specify what morphological features to realize on each lemmatized word; rather, such features must be inferred from sentential context. We develop a neural hybrid graphical model that explicitly reconstructs morphological features before predicting the inflected forms, and compare this to a system that directly predicts the inflected forms without relying on any morphological annotation. We experiment on several typologically diverse languages from the Universal Dependencies treebanks, showing the utility of incorporating linguistically-motivated latent variables into NLP models.

pdf
Modelling Tibetan Verbal Morphology
Qianji Di | Ekaterina Vylomova | Tim Baldwin
Proceedings of the 17th Annual Workshop of the Australasian Language Technology Association

pdf
Feature-guided Neural Model Training for Supervised Document Representation Learning
Aili Shen | Bahar Salehi | Jianzhong Qi | Timothy Baldwin
Proceedings of the 17th Annual Workshop of the Australasian Language Technology Association

pdf
Improved Document Modelling with a Neural Discourse Parser
Fajri Koto | Jey Han Lau | Timothy Baldwin
Proceedings of the 17th Annual Workshop of the Australasian Language Technology Association

Despite the success of attention-based neural models for natural language generation and classification tasks, they are unable to capture the discourse structure of larger documents. We hypothesize that explicit discourse representations have utility for NLP tasks over longer documents or document sequences, which sequence-to-sequence models are unable to capture. For abstractive summarization, for instance, conventional neural models simply match source documents and the summary in a latent space without explicit representation of text structure or relations. In this paper, we propose to use neural discourse representations obtained from a rhetorical structure theory (RST) parser to enhance document representations. Specifically, document representations are generated for discourse spans, known as the elementary discourse units (EDUs). We empirically investigate the benefit of the proposed approach on two different tasks: abstractive summarization and popularity prediction of online petitions. We find that the proposed approach leads to substantial improvements in all cases.

pdf
Does an LSTM forget more than a CNN? An empirical study of catastrophic forgetting in NLP
Gaurav Arora | Afshin Rahimi | Timothy Baldwin
Proceedings of the 17th Annual Workshop of the Australasian Language Technology Association

Catastrophic forgetting — whereby a model trained on one task is fine-tuned on a second, and in doing so, suffers a “catastrophic” drop in performance over the first task — is a hurdle in the development of better transfer learning techniques. Despite impressive progress in reducing catastrophic forgetting, we have limited understanding of how different architectures and hyper-parameters affect forgetting in a network. With this study, we aim to understand factors which cause forgetting during sequential training. Our primary finding is that CNNs forget less than LSTMs, and that max-pooling is the underlying operation which helps CNNs alleviate forgetting. We also find that curriculum learning, placing a hard task towards the end of the task sequence, reduces forgetting. Finally, we analyse the effect of fine-tuning contextual embeddings on catastrophic forgetting, and find that using the embeddings as a fixed feature extractor is preferable to fine-tuning in a continual learning setup.
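
The forgetting measure at issue can be made concrete in a few lines: evaluate on task A immediately after training on it, train on task B, then evaluate on A again and take the drop. The train and evaluate callables below are assumed placeholders, not code from the paper.

def forgetting(model, train, evaluate, task_a, task_b):
    # train(model, task) fits in place; evaluate(model, task) returns accuracy
    train(model, task_a)
    acc_before = evaluate(model, task_a)
    train(model, task_b)                # sequential fine-tuning, no rehearsal
    acc_after = evaluate(model, task_a)
    return acc_before - acc_after       # larger value = more forgetting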

pdf
Detecting Chemical Reactions in Patents
Hiyori Yoshikawa | Dat Quoc Nguyen | Zenan Zhai | Christian Druckenbrodt | Camilo Thorne | Saber A. Akhondi | Timothy Baldwin | Karin Verspoor
Proceedings of the 17th Annual Workshop of the Australasian Language Technology Association

Extracting chemical reactions from patents is a crucial task for chemists working on chemical exploration. In this paper we introduce the novel task of detecting the textual spans that describe or refer to chemical reactions within patents. We formulate this task as a paragraph-level sequence tagging problem, where the system is required to return a sequence of paragraphs which contain a description of a reaction. To address this new task, we construct an annotated dataset from an existing proprietary database of chemical reactions manually extracted from patents. We introduce several baseline methods for the task and evaluate them over our dataset. Through error analysis, we discuss what makes the task complex and challenging, and suggest possible directions for future research.

pdf
Target Based Speech Act Classification in Political Campaign Text
Shivashankar Subramanian | Trevor Cohn | Timothy Baldwin
Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM 2019)

We study pragmatics in political campaign text, through analysis of speech acts and the target of each utterance. We propose a new annotation schema incorporating domain-specific speech acts, such as commissive-action, and present a novel annotated corpus of media releases and speech transcripts from the 2016 Australian election cycle. We show how speech acts and target referents can be modeled as sequential classification, and evaluate several techniques, exploiting contextualized word representations, semi-supervised learning, task dependencies and speaker meta-data.

pdf
UniMelb at SemEval-2019 Task 12: Multi-model combination for toponym resolution
Haonan Li | Minghan Wang | Timothy Baldwin | Martin Tomko | Maria Vasardani
Proceedings of the 13th International Workshop on Semantic Evaluation

This paper describes our submission to SemEval-2019 Task 12 on toponym resolution over scientific articles. We train separate NER models for toponym detection over text extracted from tables vs. text from the body of the paper, and train another auxiliary model to eliminate misdetected toponyms. For toponym disambiguation, we use an SVM classifier with hand-engineered features. The best setting achieved a strict micro-F1 score of 80.92% and overlap micro-F1 score of 86.88% in the toponym detection subtask, ranking 2nd out of 8 teams on F1 score. For toponym disambiguation and end-to-end resolution, we officially ranked 2nd and 3rd, respectively.

pdf
Semi-supervised Stochastic Multi-Domain Learning using Variational Inference
Yitong Li | Timothy Baldwin | Trevor Cohn
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

Supervised models of NLP rely on large collections of text which closely resemble the intended testing setting. Unfortunately, matching text is often not available in sufficient quantity, and moreover, within any domain of text, data is often highly heterogeneous. In this paper we propose a method to distill the important domain signal as part of a multi-domain learning system, using a latent variable model in which parts of a neural model are stochastically gated based on the inferred domain. We compare the use of discrete versus continuous latent variables, operating in a domain-supervised or a domain semi-supervised setting, where the domain is known only for a subset of training inputs. We show that our model leads to substantial performance improvements over competitive benchmark domain adaptation methods, including methods using adversarial learning.

pdf
Putting Evaluation in Context: Contextual Embeddings Improve Machine Translation Evaluation
Nitika Mathur | Timothy Baldwin | Trevor Cohn
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

Accurate, automatic evaluation of machine translation is critical for system tuning and for evaluating progress in the field. We propose a simple unsupervised metric, and additional supervised metrics which rely on contextual word embeddings to encode the translation and reference sentences. We find that these models rival or surpass all existing metrics in the WMT 2017 sentence-level and system-level tracks, and that our trained model has a substantially higher correlation with human judgements than all existing metrics on the WMT 2017 to-English sentence-level dataset.

pdf
How Well Do Embedding Models Capture Non-compositionality? A View from Multiword Expressions
Navnita Nandakumar | Timothy Baldwin | Bahar Salehi
Proceedings of the 3rd Workshop on Evaluating Vector Space Representations for NLP

In this paper, we apply various embedding methods to multiword expressions to study how well they capture the nuances of non-compositional data. Our results from a pool of word-, character-, and document-level embeddings suggest that word2vec performs the best, followed by FastText and InferSent. Moreover, we find that recently-proposed contextualised embedding models such as BERT and ELMo are not adept at handling non-compositionality in multiword expressions.
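
A typical way such studies operationalise (non-)compositionality is to compare the vector of the expression treated as a single token against a composition of its parts. The sketch below uses a toy vector lookup, additive composition, and underscore-joined MWE tokens, all of which are assumptions for illustration.

import numpy as np

def compositionality_score(vectors, mwe):
    # vectors: dict from token (including "kick_the_bucket"-style MWE tokens)
    # to numpy array; composition here is simple addition of part vectors
    composed = sum(vectors[part] for part in mwe.split("_"))
    whole = vectors[mwe]
    return float(whole @ composed /
                 (np.linalg.norm(whole) * np.linalg.norm(composed)))

rng = np.random.default_rng(0)
vectors = {w: rng.normal(size=50)
           for w in ("kick", "the", "bucket", "kick_the_bucket")}
print(compositionality_score(vectors, "kick_the_bucket"))  # low = idiomatic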

2018

pdf
Language and the Shifting Sands of Domain, Space and Time (Invited Talk)
Timothy Baldwin
Proceedings of the Fifth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial 2018)

In this talk, I will first present recent work on domain debiasing in the context of language identification, then discuss a new line of work on language variety analysis in the form of dialect map generation. Finally, I will reflect on the interplay between time and space on language variation, and speculate on how these can be captured in a single model.

pdf bib
Proceedings of the 2018 EMNLP Workshop W-NUT: The 4th Workshop on Noisy User-generated Text
Wei Xu | Alan Ritter | Tim Baldwin | Afshin Rahimi
Proceedings of the 2018 EMNLP Workshop W-NUT: The 4th Workshop on Noisy User-generated Text

pdf bib
Twitter Geolocation using Knowledge-Based Methods
Taro Miyazaki | Afshin Rahimi | Trevor Cohn | Timothy Baldwin
Proceedings of the 2018 EMNLP Workshop W-NUT: The 4th Workshop on Noisy User-generated Text

Automatic geolocation of microblog posts from their text content is particularly difficult because many location-indicative terms are rare terms, notably entity names such as locations, people or local organisations. Their low frequency means that key terms observed in testing are often unseen in training, such that standard classifiers are unable to learn weights for them. We propose a method for reasoning over such terms using a knowledge base, through exploiting their relations with other entities. Our technique uses a graph embedding over the knowledge base, which we couple with a text representation to learn a geolocation classifier, trained end-to-end. We show that our method improves over purely text-based methods, which we ascribe to more robust treatment of low-count and out-of-vocabulary entities.

pdf
Preferred Answer Selection in Stack Overflow: Better Text Representations ... and Metadata, Metadata, Metadata
Steven Xu | Andrew Bennett | Doris Hoogeveen | Jey Han Lau | Timothy Baldwin
Proceedings of the 2018 EMNLP Workshop W-NUT: The 4th Workshop on Noisy User-generated Text

Community question answering (cQA) forums provide a rich source of data for facilitating non-factoid question answering over many technical domains. Given this, there is considerable interest in answer retrieval from these kinds of forums. However, this is a difficult task, as the structure of these forums is very rich, and both metadata and text features are important for successful retrieval. While there has recently been a lot of work on solving this problem using deep learning models applied to question/answer text, this work has not looked at how to make use of the rich metadata available in cQA forums. We propose an attention-based model which achieves state-of-the-art results for text-based answer selection alone, and, by making use of complementary metadata, achieves a substantially higher result over two reference datasets novel to this work.

pdf
Deep-speare: A joint neural model of poetic language, meter and rhyme
Jey Han Lau | Trevor Cohn | Timothy Baldwin | Julian Brooke | Adam Hammond
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

In this paper, we propose a joint architecture that captures language, rhyme and meter for sonnet modelling. We assess the quality of generated poems using crowd and expert judgements. The stress and rhyme models perform very well, as generated poems are largely indistinguishable from human-written poems. Expert evaluation, however, reveals that a vanilla language model captures meter implicitly, and that machine-generated poems still underperform in terms of readability and emotion. Our research shows the importance of expert evaluation for poetry generation, and that future research should look beyond rhyme/meter and focus on poetic language.

pdf
Semi-supervised User Geolocation via Graph Convolutional Networks
Afshin Rahimi | Trevor Cohn | Timothy Baldwin
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Social media user geolocation is vital to many applications such as event detection. In this paper, we propose GCN, a multiview geolocation model based on Graph Convolutional Networks, that uses both text and network context. We compare GCN to the state-of-the-art, and to two baselines we propose, and show that our model achieves or is competitive with the state-of-the-art over three benchmark geolocation datasets when sufficient supervision is available. We also evaluate GCN under a minimal supervision scenario, and show it outperforms baselines. We find that highway network gates are essential for controlling the amount of useful neighbourhood expansion in GCN.
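
The highway gating the authors find essential can be sketched in a few lines: each GCN layer's output is mixed with its own input through a learned sigmoid gate, limiting how much neighbourhood information flows in. The single layer below, with a dense adjacency matrix and made-up dimensions, is a minimal sketch rather than the released model.

import torch
import torch.nn as nn

class HighwayGCNLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.weight = nn.Linear(dim, dim)
        self.gate = nn.Linear(dim, dim)

    def forward(self, x, a_hat):
        # a_hat: normalised adjacency, e.g. D^-1/2 (A + I) D^-1/2 (dense here)
        h = torch.relu(self.weight(a_hat @ x))  # standard GCN propagation
        t = torch.sigmoid(self.gate(x))         # highway transform gate
        return t * h + (1 - t) * x              # mix new and old representations

layer = HighwayGCNLayer(dim=16)
x = torch.randn(5, 16)        # 5 node (user) representations
a_hat = torch.eye(5)          # placeholder normalised adjacency
out = layer(x, a_hat)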

pdf
Towards Robust and Privacy-preserving Text Representations
Yitong Li | Timothy Baldwin | Trevor Cohn
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

Written text often provides sufficient clues to identify the author, their gender, age, and other important attributes. Consequently, the authorship of training and evaluation corpora can have unforeseen impacts, including differing model performance for different user groups, as well as privacy implications. In this paper, we propose an approach to explicitly obscure important author characteristics at training time, such that representations learned are invariant to these attributes. Evaluating on two tasks, we show that this leads to increased privacy in the learned representations, as well as more robust models to varying evaluation conditions, including out-of-domain corpora.

pdf
Content-based Popularity Prediction of Online Petitions Using a Deep Regression Model
Shivashankar Subramanian | Timothy Baldwin | Trevor Cohn
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

Online petitions are a cost-effective way for citizens to collectively engage with policy-makers in a democracy. Predicting the popularity of a petition — commonly measured by its signature count — based on its textual content has utility for policymakers as well as those posting the petition. In this work, we model this task using CNN regression with an auxiliary ordinal regression objective. We demonstrate the effectiveness of our proposed approach using UK and US government petition datasets.

pdf
Narrative Modeling with Memory Chains and Semantic Supervision
Fei Liu | Trevor Cohn | Timothy Baldwin
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

Story comprehension requires a deep semantic understanding of the narrative, making it a challenging task. Inspired by previous studies on ROC Story Cloze Test, we propose a novel method, tracking various semantic aspects with external neural memory chains while encouraging each to focus on a particular semantic aspect. Evaluated on the task of story ending prediction, our model demonstrates superior performance to a collection of competitive baselines, setting a new state of the art.

pdf
Encoding Sentiment Information into Word Vectors for Sentiment Analysis
Zhe Ye | Fang Li | Timothy Baldwin
Proceedings of the 27th International Conference on Computational Linguistics

General-purpose pre-trained word embeddings have become a mainstay of natural language processing, and more recently, methods have been proposed to encode external knowledge into word embeddings to benefit specific downstream tasks. The goal of this paper is to encode sentiment knowledge into pre-trained word vectors to improve the performance of sentiment analysis. Our proposed method is based on a convolutional neural network (CNN) and an external sentiment lexicon. Experiments on four popular sentiment analysis datasets show that this method improves the accuracy of sentiment analysis compared to a number of benchmark methods.

pdf
The Company They Keep: Extracting Japanese Neologisms Using Language Patterns
James Breen | Timothy Baldwin | Francis Bond
Proceedings of the 9th Global Wordnet Conference

We describe an investigation into the identification and extraction of unrecorded potential lexical items in Japanese text by detecting text passages containing selected language patterns typically associated with such items. We identified a set of suitable patterns, then tested them with two large collections of text drawn from the WWW and Twitter. Samples of the extracted items were evaluated, and it was demonstrated that the approach has considerable potential for identifying terms for later lexicographic analysis.

pdf
A Comparative Study of Embedding Models in Predicting the Compositionality of Multiword Expressions
Navnita Nandakumar | Bahar Salehi | Timothy Baldwin
Proceedings of the Australasian Language Technology Association Workshop 2018

In this paper, we perform a comparative evaluation of off-the-shelf embedding models over the task of compositionality prediction of multiword expressions ("MWEs"). Our experimental results suggest that character- and document-level models capture knowledge of MWE compositionality and are effective in modelling varying levels of compositionality, with the advantage over word-level models that they do not require token-level identification of MWEs in the training corpus.

pdf
Towards Efficient Machine Translation Evaluation by Modelling Annotators
Nitika Mathur | Timothy Baldwin | Trevor Cohn
Proceedings of the Australasian Language Technology Association Workshop 2018

Accurate evaluation of translation has long been a difficult, yet important problem. Current evaluations use direct assessment (DA), based on crowd sourcing judgements from a large pool of workers, along with quality control checks, and a robust method for combining redundant judgements. In this paper we show that the quality control mechanism is overly conservative, which increases the time and expense of the evaluation. We propose a model that does not rely on a pre-processing step to filter workers and takes into account varying annotator reliabilities. Our model effectively weights each worker's scores based on the inferred precision of the worker, and is much more reliable than the mean of either the raw scores or the standardised scores. We also show that DA does not deliver on the promise of longitudinal evaluation, and propose redesigning the structure of the annotation tasks that can solve this problem.
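
The core intuition, weighting each worker by their inferred precision, reduces to an inverse-variance weighted mean once per-worker variances are in hand. The sketch below assumes those variance estimates are given; the model described here infers them jointly with the scores.

import numpy as np

def weighted_score(scores, worker_vars):
    # inverse-variance weighting: low-variance (precise) workers count more
    precisions = 1.0 / np.asarray(worker_vars)
    return float((precisions * np.asarray(scores)).sum() / precisions.sum())

# The noisy third worker (variance 400) barely moves the aggregate.
print(weighted_score([70.0, 75.0, 40.0], [4.0, 5.0, 400.0]))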

pdf
Hierarchical Structured Model for Fine-to-Coarse Manifesto Text Analysis
Shivashankar Subramanian | Trevor Cohn | Timothy Baldwin
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)

Election manifestos document the intentions, motives, and views of political parties. They are often used for analysing a party’s fine-grained position on a particular issue, as well as for coarse-grained positioning of a party on the left–right spectrum. In this paper we propose a two-stage model for automatically performing both levels of analysis over manifestos. In the first step we employ a hierarchical multi-task structured deep model to predict fine- and coarse-grained positions, and in the second step we perform post-hoc calibration of coarse-grained positions using probabilistic soft logic. We empirically show that the proposed model outperforms state-of-the-art approaches at both granularities using manifestos from twelve countries, written in ten different languages.

pdf
Recurrent Entity Networks with Delayed Memory Update for Targeted Aspect-Based Sentiment Analysis
Fei Liu | Trevor Cohn | Timothy Baldwin
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)

While neural networks have been shown to achieve impressive results for sentence-level sentiment analysis, targeted aspect-based sentiment analysis (TABSA) — extraction of fine-grained opinion polarity w.r.t. a pre-defined set of aspects — remains a difficult task. Motivated by recent advances in memory-augmented models for machine reading, we propose a novel architecture, utilising external “memory chains” with a delayed memory update mechanism to track entities. On a TABSA task, the proposed model demonstrates substantial improvements over state-of-the-art approaches, including those using external knowledge bases.

pdf
What’s in a Domain? Learning Domain-Robust Text Representations using Adversarial Training
Yitong Li | Timothy Baldwin | Trevor Cohn
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)

Most real-world language problems require learning from heterogeneous corpora, raising the problem of learning robust models which generalise well both to in-domain instances similar to those seen in training, and to dissimilar, out-of-domain instances. This requires learning an underlying task, while not learning irrelevant signals and biases specific to individual domains. We propose a novel method to optimise both in- and out-of-domain accuracy based on joint learning of a structured neural model with domain-specific and domain-general components, coupled with adversarial training for domain. Evaluating on multi-domain language identification and multi-domain sentiment analysis, we show substantial improvements over standard domain adaptation techniques, and domain-adversarial training.

pdf
UniMelb at SemEval-2018 Task 12: Generative Implication using LSTMs, Siamese Networks and Semantic Representations with Synonym Fuzzing
Anirudh Joshi | Tim Baldwin | Richard O. Sinnott | Cecile Paris
Proceedings of the 12th International Workshop on Semantic Evaluation

This paper describes a warrant classification system for SemEval 2018 Task 12, that attempts to learn semantic representations of reasons, claims and warrants. The system consists of 3 stacked LSTMs: one for the reason, one for the claim, and one shared Siamese Network for the 2 candidate warrants. Our main contribution is to force the embeddings into a shared feature space using vector operations, semantic similarity classification, Siamese networks, and multi-task learning. In doing so, we learn a form of generative implication, in encoding implication interrelationships between reasons, claims, and the associated correct and incorrect warrants. We augment the limited data in the task further by utilizing WordNet synonym “fuzzing”. When applied to SemEval 2018 Task 12, our system performs well on the development data, and officially ranked 8th among 21 teams.

pdf
Topic Intrusion for Automatic Topic Model Evaluation
Shraey Bhatia | Jey Han Lau | Timothy Baldwin
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

Topic coherence is increasingly being used to evaluate topic models and filter topics for end-user applications. Topic coherence measures how well topic words relate to each other, but offers little insight on the utility of the topics in describing the documents. In this paper, we explore the topic intrusion task — the task of guessing an outlier topic given a document and a few topics — and propose a method to automate it. We improve upon the state-of-the-art substantially, demonstrating its viability as an alternative method for topic model evaluation.

2017

pdf bib
Improving End-to-End Memory Networks with Unified Weight Tying
Fei Liu | Trevor Cohn | Timothy Baldwin
Proceedings of the Australasian Language Technology Association Workshop 2017

pdf
Joint Sentence-Document Model for Manifesto Text Analysis
Shivashankar Subramanian | Trevor Cohn | Timothy Baldwin | Julian Brooke
Proceedings of the Australasian Language Technology Association Workshop 2017

pdf
A Hybrid Model for Quality Assessment of Wikipedia Articles
Aili Shen | Jianzhong Qi | Timothy Baldwin
Proceedings of the Australasian Language Technology Association Workshop 2017

pdf
Automatic Negation and Speculation Detection in Veterinary Clinical Text
Katherine Cheng | Timothy Baldwin | Karin Verspoor
Proceedings of the Australasian Language Technology Association Workshop 2017

pdf
Topically Driven Neural Language Model
Jey Han Lau | Timothy Baldwin | Trevor Cohn
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Language models are typically applied at the sentence level, without access to the broader document context. We present a neural language model that incorporates document context in the form of a topic model-like architecture, thus providing a succinct representation of the broader document context outside of the current sentence. Experiments over a range of datasets demonstrate that our model outperforms a pure sentence-based model in terms of language model perplexity, and leads to topics that are potentially more coherent than those produced by a standard LDA topic model. Our model also has the ability to generate related sentences for a topic, providing another way to interpret topics.

pdf
A Neural Model for User Geolocation and Lexical Dialectology
Afshin Rahimi | Trevor Cohn | Timothy Baldwin
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

We propose a simple yet effective text-based user geolocation model based on a neural network with one hidden layer, which achieves state-of-the-art performance over three Twitter benchmark geolocation datasets, in addition to producing word and phrase embeddings in the hidden layer that we show to be useful for detecting dialectal terms. As part of our analysis of dialectal terms, we release DAREDS, a dataset for evaluating dialect term detection methods.

pdf
Unsupervised Acquisition of Comprehensive Multiword Lexicons using Competition in an n-gram Lattice
Julian Brooke | Jan Šnajder | Timothy Baldwin
Transactions of the Association for Computational Linguistics, Volume 5

We present a new model for acquiring comprehensive multiword lexicons from large corpora based on competition among n-gram candidates. In contrast to the standard approach of simple ranking by association measure, in our model n-grams are arranged in a lattice structure based on subsumption and overlap relationships, with nodes inhibiting other nodes in their vicinity when they are selected as a lexical item. We show how the configuration of such a lattice can be optimized tractably, and demonstrate using annotations of sampled n-grams that our method consistently outperforms alternatives by at least 0.05 F-score across several corpora and languages.

pdf
Robust Training under Linguistic Adversity
Yitong Li | Trevor Cohn | Timothy Baldwin
Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers

Deep neural networks have achieved remarkable results across many language processing tasks; however, they have been shown to be susceptible to overfitting and highly sensitive to noise, including adversarial attacks. In this work, we propose a linguistically-motivated approach for training robust models based on exposing the model to corrupted text examples at training time. We consider several flavours of linguistically plausible corruption, including lexical-semantic and syntactic methods. Empirically, we evaluate our method with a convolutional neural model across a range of sentiment analysis datasets. Compared with a baseline and the dropout method, our method achieves better overall performance.

pdf
Context-Aware Prediction of Derivational Word-forms
Ekaterina Vylomova | Ryan Cotterell | Timothy Baldwin | Trevor Cohn
Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers

Derivational morphology is a fundamental and complex characteristic of language. In this paper we propose a new task of predicting the derivational form of a given base-form lemma that is appropriate for a given context. We present an encoder-decoder style neural network to produce a derived form character-by-character, based on the character-level representation of the base form and the context. We demonstrate that our model is able to generate valid context-sensitive derivations from known base forms, but is less accurate in a lexicon-agnostic setting.

pdf
Improving Evaluation of Document-level Machine Translation Quality Estimation
Yvette Graham | Qingsong Ma | Timothy Baldwin | Qun Liu | Carla Parra | Carolina Scarton
Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers

Meaningful conclusions about the relative performance of NLP systems are only possible if the gold standard employed in a given evaluation is both valid and reliable. In this paper, we explore the validity of human annotations currently employed in the evaluation of document-level quality estimation for machine translation (MT). We demonstrate the degree to which MT system rankings are dependent on weights employed in the construction of the gold standard, before proposing direct human assessment as a valid alternative. Experiments show direct assessment (DA) scores for documents to be highly reliable, achieving a correlation of above 0.9 in a self-replication experiment, in addition to a substantial estimated cost reduction through quality controlled crowd-sourcing. The original gold standard based on post-edits incurs a 10–20 times greater cost than DA.

pdf
Multimodal Topic Labelling
Ionut Sorodoc | Jey Han Lau | Nikolaos Aletras | Timothy Baldwin
Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers

Topics generated by topic models are typically presented as a list of topic terms. Automatic topic labelling is the task of generating a succinct label that summarises the theme or subject of a topic, with the intention of reducing the cognitive load of end-users when interpreting these topics. Traditionally, topic label systems focus on a single label modality, e.g. textual labels. In this work we propose a multimodal approach to topic labelling using a simple feedforward neural network. Given a topic and a candidate image or textual label, our method automatically generates a rating for the label, relative to the topic. Experiments show that this multimodal approach outperforms single-modality topic labelling systems.

pdf
An Automatic Approach for Document-level Topic Model Evaluation
Shraey Bhatia | Jey Han Lau | Timothy Baldwin
Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017)

Topic models jointly learn topics and document-level topic distribution. Extrinsic evaluation of topic models tends to focus exclusively on topic-level evaluation, e.g. by assessing the coherence of topics. We demonstrate that there can be large discrepancies between topic- and document-level model quality, and that basing model evaluation on topic-level analysis can be highly misleading. We propose a method for automatically predicting topic model quality based on analysis of document-level topic allocations, and provide empirical evidence for its robustness.

pdf bib
Decoupling Encoder and Decoder Networks for Abstractive Document Summarization
Ying Xu | Jey Han Lau | Timothy Baldwin | Trevor Cohn
Proceedings of the MultiLing 2017 Workshop on Summarization and Summary Evaluation Across Source Types and Genres

Abstractive document summarization seeks to automatically generate a summary for a document, based on some abstract “understanding” of the original document. State-of-the-art techniques traditionally use attentive encoder–decoder architectures. However, due to the large number of parameters in these models, they require large training datasets and long training times. In this paper, we propose decoupling the encoder and decoder networks, and training them separately. We encode documents using an unsupervised document encoder, and then feed the document vector to a recurrent neural network decoder. With this decoupled architecture, we decrease the number of parameters in the decoder substantially, and shorten its training time. Experiments show that the decoupled model achieves comparable performance with state-of-the-art models for in-domain documents, but less well for out-of-domain documents.

pdf
Semi-Automated Resolution of Inconsistency for a Harmonized Multiword Expression and Dependency Parse Annotation
King Chan | Julian Brooke | Timothy Baldwin
Proceedings of the 13th Workshop on Multiword Expressions (MWE 2017)

This paper presents a methodology for identifying and resolving various kinds of inconsistency in the context of merging dependency and multiword expression (MWE) annotations, to generate a dependency treebank with comprehensive MWE annotations. Candidates for correction are identified using a variety of heuristics, including an entirely novel one which identifies violations of MWE constituency in the dependency tree, and resolved by arbitration with minimal human intervention. Using this technique, we identified and corrected several hundred errors across both parse and MWE annotations, representing changes to a significant percentage (well over 10%) of the MWE instances in the joint corpus.

pdf
Sub-character Neural Language Modelling in Japanese
Viet Nguyen | Julian Brooke | Timothy Baldwin
Proceedings of the First Workshop on Subword and Character Level Models in NLP

In East Asian languages such as Japanese and Chinese, the semantics of a character are (somewhat) reflected in its sub-character elements. This paper examines the effect of using sub-characters for language modeling in Japanese. This is achieved by decomposing characters according to a range of character decomposition datasets, and training a neural language model over variously decomposed character representations. Our results indicate that language modelling can be improved through the inclusion of sub-characters, though this result depends on a good choice of decomposition dataset and the appropriate granularity of decomposition.

pdf bib
Proceedings of the 3rd Workshop on Noisy User-generated Text
Leon Derczynski | Wei Xu | Alan Ritter | Tim Baldwin
Proceedings of the 3rd Workshop on Noisy User-generated Text

pdf
BIBI System Description: Building with CNNs and Breaking with Deep Reinforcement Learning
Yitong Li | Trevor Cohn | Timothy Baldwin
Proceedings of the First Workshop on Building Linguistically Generalizable NLP Systems

This paper describes our submission to the sentiment analysis sub-task of “Build It, Break It: The Language Edition (BIBI)”, on both the builder and breaker sides. As a builder, we use convolutional neural nets, trained on both phrase and sentence data. As a breaker, we use Q-learning to learn minimal change pairs, and apply a token substitution method automatically. We analyse the results to gauge the robustness of NLP systems.

pdf
Capturing Long-range Contextual Dependencies with Memory-enhanced Conditional Random Fields
Fei Liu | Timothy Baldwin | Trevor Cohn
Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

Despite successful applications across a broad range of NLP tasks, conditional random fields (“CRFs”), in particular the linear-chain variant, are only able to model local features. While this has important benefits in terms of inference tractability, it limits the ability of the model to capture long-range dependencies between items. Attempts to extend CRFs to capture long-range dependencies have largely come at the cost of computational complexity and approximate inference. In this work, we propose an extension to CRFs by integrating external memory, taking inspiration from memory networks, thereby allowing CRFs to incorporate information far beyond neighbouring steps. Experiments across two tasks show substantial improvements over strong CRF and LSTM baselines.

pdf
Continuous Representation of Location for Geolocation and Lexical Dialectology using Mixture Density Networks
Afshin Rahimi | Timothy Baldwin | Trevor Cohn
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing

We propose a method for embedding two-dimensional locations in a continuous vector space using a neural network-based model incorporating mixtures of Gaussian distributions, presenting two model variants for text-based geolocation and lexical dialectology. Evaluated over Twitter data, the proposed model outperforms conventional regression-based geolocation and provides a better estimate of uncertainty. We also show the effectiveness of the representation for predicting words from location in lexical dialectology, and evaluate it using the DARE dataset.
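
A mixture density network of this general shape emits, for each input, the weights, means, and scales of K two-dimensional Gaussians, and is trained by minimising the negative log-likelihood of the true location under the mixture. The sketch below is a generic MDN head under assumed sizes and a spherical-covariance simplification, not the released model.

import torch
import torch.nn as nn

class MDNHead(nn.Module):
    def __init__(self, in_dim, k):
        super().__init__()
        self.k = k
        self.pi = nn.Linear(in_dim, k)       # mixture weights
        self.mu = nn.Linear(in_dim, k * 2)   # 2-D means (lat, lon)
        self.sigma = nn.Linear(in_dim, k)    # per-component scale

    def forward(self, h):
        pi = torch.softmax(self.pi(h), dim=-1)
        mu = self.mu(h).view(-1, self.k, 2)
        sigma = torch.exp(self.sigma(h))     # keep scales positive
        return pi, mu, sigma

def mdn_nll(pi, mu, sigma, target):
    # negative log-likelihood of the target location under the mixture
    comp = torch.distributions.Normal(mu, sigma.unsqueeze(-1))
    log_prob = comp.log_prob(target.unsqueeze(1)).sum(-1)  # (batch, K)
    return -torch.logsumexp(torch.log(pi) + log_prob, dim=-1).mean()

head = MDNHead(in_dim=64, k=5)
pi, mu, sigma = head(torch.randn(8, 64))
loss = mdn_nll(pi, mu, sigma, torch.randn(8, 2))  # toy 2-D location targets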

pdf
Further Investigation into Reference Bias in Monolingual Evaluation of Machine Translation
Qingsong Ma | Yvette Graham | Timothy Baldwin | Qun Liu
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing

Monolingual evaluation of Machine Translation (MT) aims to simplify human assessment by requiring assessors to compare the meaning of the MT output with a reference translation, opening up the task to a much larger pool of genuinely qualified evaluators. Monolingual evaluation runs the risk, however, of bias in favour of MT systems that happen to produce translations superficially similar to the reference, and, consistent with this intuition, previous investigations have concluded monolingual assessment to be strongly biased in this respect. On re-examination of past analyses, however, we identify a series of potential analytical errors that raise important questions about the reliability of past conclusions. We subsequently carry out further investigation into reference bias via direct human assessment of MT adequacy using quality-controlled crowd-sourcing. Contrary to both intuition and past conclusions, our results show no significant evidence of reference bias in monolingual evaluation of MT.

pdf
Sequence Effects in Crowdsourced Annotations
Nitika Mathur | Timothy Baldwin | Trevor Cohn
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing

Manual data annotation is a vital component of NLP research. When designing annotation tasks, properties of the annotation interface can unintentionally lead to artefacts in the resulting dataset, biasing the evaluation. In this paper, we explore sequence effects where annotations of an item are affected by the preceding items. Having assigned one label to an instance, the annotator may be less (or more) likely to assign the same label to the next. During rating tasks, seeing a low quality item may affect the score given to the next item either positively or negatively. We see clear evidence of both types of effects using auto-correlation studies over three different crowdsourced datasets. We then recommend a simple way to minimise sequence effects.
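
One way to surface such sequence effects, in the spirit of the auto-correlation studies mentioned here, is to measure the lag-1 autocorrelation over a single annotator's stream of ratings; the data below is invented for illustration.

import numpy as np

def lag1_autocorrelation(scores):
    scores = np.asarray(scores, dtype=float)
    return float(np.corrcoef(scores[:-1], scores[1:])[0, 1])

stream = [80, 78, 75, 40, 45, 50, 90, 88]  # one annotator's ratings in order
print(lag1_autocorrelation(stream))  # positive values suggest assimilation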

pdf
SemEval-2017 Task 3: Community Question Answering
Preslav Nakov | Doris Hoogeveen | Lluís Màrquez | Alessandro Moschitti | Hamdy Mubarak | Timothy Baldwin | Karin Verspoor
Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)

We describe SemEval–2017 Task 3 on Community Question Answering. This year, we reran the four subtasks from SemEval-2016: (A) Question–Comment Similarity, (B) Question–Question Similarity, (C) Question–External Comment Similarity, and (D) Rerank the correct answers for a new question in Arabic, providing all the data from 2015 and 2016 for training, and fresh data for testing. Additionally, we added a new subtask E in order to enable experimentation with Multi-domain Question Duplicate Detection in a larger-scale scenario, using StackExchange subforums. A total of 23 teams participated in the task, and submitted a total of 85 runs (36 primary and 49 contrastive) for subtasks A–D. Unfortunately, no teams participated in subtask E. A variety of approaches and features were used by the participating systems to address the different subtasks. The best systems achieved an official score (MAP) of 88.43, 47.22, 15.46, and 61.16 in subtasks A, B, C, and D, respectively. These scores are better than the baselines, especially for subtasks A–C.

2016

pdf
Evaluating a Topic Modelling Approach to Measuring Corpus Similarity
Richard Fothergill | Paul Cook | Timothy Baldwin
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)

Web corpora are often constructed automatically, and their contents are therefore often not well understood. One technique for assessing the composition of such a web corpus is to empirically measure its similarity to a reference corpus whose composition is known. In this paper we evaluate a number of measures of corpus similarity, including a method based on topic modelling which has not been previously evaluated for this task. To evaluate these methods we use known-similarity corpora that have been previously used for this purpose, as well as a number of newly-constructed known-similarity corpora targeting differences in genre, topic, time, and region. Our findings indicate that, overall, the topic modelling approach did not improve on a chi-square method that had previously been found to work well for measuring corpus similarity.
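
For reference, a chi-square corpus-similarity measure of the kind evaluated here (in the style of Kilgarriff's known-similarity experiments) can be sketched as follows: compare observed word counts in each corpus against their expected counts under a pooled model, summed over the most frequent words. The function name and the top-n cutoff are illustrative choices.

from collections import Counter

def chi_square_dissimilarity(tokens_a, tokens_b, top_n=500):
    fa, fb = Counter(tokens_a), Counter(tokens_b)
    na, nb = sum(fa.values()), sum(fb.values())
    chi2 = 0.0
    for word, _ in (fa + fb).most_common(top_n):
        total = fa[word] + fb[word]
        ea = total * na / (na + nb)   # expected count in corpus A
        eb = total * nb / (na + nb)   # expected count in corpus B
        chi2 += (fa[word] - ea) ** 2 / ea + (fb[word] - eb) ** 2 / eb
    return chi2  # lower = more similar corpora

print(chi_square_dissimilarity("the cat sat on the mat".split(),
                               "the dog sat on the log".split(), top_n=5))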

pdf
Named Entity Recognition for Novel Types by Transfer Learning
Lizhen Qu | Gabriela Ferraro | Liyuan Zhou | Weiwei Hou | Timothy Baldwin
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing

pdf
Learning Robust Representations of Text
Yitong Li | Trevor Cohn | Timothy Baldwin
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing

pdf
UNIMELB at SemEval-2016 Tasks 4A and 4B: An Ensemble of Neural Networks and a Word2Vec Based Model for Sentiment Classification
Steven Xu | HuiZhi Liang | Timothy Baldwin
Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)

pdf
UniMelb at SemEval-2016 Task 3: Identifying Similar Questions by combining a CNN with String Similarity Measures
Timothy Baldwin | Huizhi Liang | Bahar Salehi | Doris Hoogeveen | Yitong Li | Long Duong
Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)

pdf
VectorWeavers at SemEval-2016 Task 10: From Incremental Meaning to Semantic Unit (phrase by phrase)
Andreas Scherbakov | Ekaterina Vylomova | Fei Liu | Timothy Baldwin
Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)

pdf
Melbourne at SemEval 2016 Task 11: Classifying Type-level Word Complexity using Random Forests with Corpus and Word List Features
Julian Brooke | Alexandra Uitdenbogerd | Timothy Baldwin
Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)

pdf
Determining the Multiword Expression Inventory of a Surprise Language
Bahar Salehi | Paul Cook | Timothy Baldwin
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers

Much previous research on multiword expressions (MWEs) has focused on the token- and type-level tasks of MWE identification and extraction, respectively. Such studies typically target known prevalent MWE types in a given language. This paper describes the first attempt to learn the MWE inventory of a “surprise” language for which we have no explicit prior knowledge of MWE patterns, certainly no annotated MWE data, and not even a parallel corpus. Our proposed model is trained on a treebank with MWE relations of a source language, and can be applied to the monolingual corpus of the surprise language to identify its MWE construction types.

pdf
Automatic Labelling of Topics with Neural Embeddings
Shraey Bhatia | Jey Han Lau | Timothy Baldwin
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers

Topics generated by topic models are typically represented as a list of terms. To reduce the cognitive overhead of interpreting these topics for end-users, we propose labelling a topic with a succinct phrase that summarises its theme or idea. Using Wikipedia document titles as label candidates, we compute neural embeddings for documents and words to select the most relevant labels for topics. Compared to a state-of-the-art topic labelling system, our methodology is simpler, more efficient, and finds better topic labels.

pdf
Is all that Glitters in Machine Translation Quality Estimation really Gold?
Yvette Graham | Timothy Baldwin | Meghan Dowling | Maria Eskevich | Teresa Lynn | Lamia Tounsi
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers

Human-targeted metrics provide a compromise between human evaluation of machine translation, where high inter-annotator agreement is difficult to achieve, and fully automatic metrics, such as BLEU or TER, that lack the validity of human assessment. Human-targeted translation edit rate (HTER) is by far the most widely employed human-targeted metric in machine translation, commonly employed, for example, as a gold standard in evaluation of quality estimation. Original experiments justifying the design of HTER, as opposed to other possible formulations, were limited to a small sample of translations and a single language pair, however, and this motivates our re-evaluation of a range of human-targeted metrics on a substantially larger scale. Results show significantly stronger correlation with human judgment for HBLEU over HTER for two of the nine language pairs we include and no significant difference between correlations achieved by HTER and HBLEU for the remaining language pairs. Finally, we evaluate a range of quality estimation systems employing HTER and direct assessment (DA) of translation adequacy as gold labels, resulting in a divergence in system rankings, and propose employment of DA for future quality estimation evaluations.

pdf
The Sensitivity of Topic Coherence Evaluation to Topic Cardinality
Jey Han Lau | Timothy Baldwin
Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

pdf
LexSemTm: A Semantic Dataset Based on All-words Unsupervised Sense Distribution Learning
Andrew Bennett | Timothy Baldwin | Jey Han Lau | Diana McCarthy | Francis Bond
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

pdf
Take and Took, Gaggle and Goose, Book and Read: Evaluating the Utility of Vector Differences for Lexical Relation Learning
Ekaterina Vylomova | Laura Rimell | Trevor Cohn | Timothy Baldwin
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

pdf
Bootstrapped Text-level Named Entity Recognition for Literature
Julian Brooke | Adam Hammond | Timothy Baldwin
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

pdf
pigeo: A Python Geotagging Tool
Afshin Rahimi | Trevor Cohn | Timothy Baldwin
Proceedings of ACL-2016 System Demonstrations

pdf
An Empirical Evaluation of doc2vec with Practical Insights into Document Embedding Generation
Jey Han Lau | Timothy Baldwin
Proceedings of the 1st Workshop on Representation Learning for NLP

pdf bib
Multiword Expressions at the Grammar-Lexicon Interface
Timothy Baldwin
Proceedings of the Workshop on Grammar and Lexicon: interactions and interfaces (GramLex)

In this talk, I will outline a range of challenges that multiword expressions present for (lexicalist) precision grammar engineering, and different strategies for accommodating those challenges, in an attempt to strike the right balance between generalisation and over- and under-generation.

pdf bib
Proceedings of the 2nd Workshop on Noisy User-generated Text (WNUT)
Bo Han | Alan Ritter | Leon Derczynski | Wei Xu | Tim Baldwin
Proceedings of the 2nd Workshop on Noisy User-generated Text (WNUT)

pdf
Twitter Geolocation Prediction Shared Task of the 2016 Workshop on Noisy User-generated Text
Bo Han | Afshin Rahimi | Leon Derczynski | Timothy Baldwin
Proceedings of the 2nd Workshop on Noisy User-generated Text (WNUT)

This paper presents the shared task on English Twitter geolocation prediction at WNUT 2016. We discuss the details of the task settings, data preparation and participant systems. The derived dataset and the performance figures from each system provide baselines for future research in this area.

2015

pdf
The Impact of Multiword Expression Compositionality on Machine Translation Evaluation
Bahar Salehi | Nitika Mathur | Paul Cook | Timothy Baldwin
Proceedings of the 11th Workshop on Multiword Expressions

pdf bib
Proceedings of the First Workshop on Linking Computational Models of Lexical, Sentential and Discourse-level Semantics
Michael Roth | Annie Louis | Bonnie Webber | Tim Baldwin
Proceedings of the First Workshop on Linking Computational Models of Lexical, Sentential and Discourse-level Semantics

pdf
Shared Tasks of the 2015 Workshop on Noisy User-generated Text: Twitter Lexical Normalization and Named Entity Recognition
Timothy Baldwin | Marie-Catherine de Marneffe | Bo Han | Young-Bum Kim | Alan Ritter | Wei Xu
Proceedings of the Workshop on Noisy User-generated Text

pdf
A Word Embedding Approach to Predicting the Compositionality of Multiword Expressions
Bahar Salehi | Paul Cook | Timothy Baldwin
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

pdf
Accurate Evaluation of Segment-level Machine Translation Metrics
Yvette Graham | Timothy Baldwin | Nitika Mathur
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

pdf
Exploiting Text and Network Context for Geolocation of Social Media Users
Afshin Rahimi | Duy Vu | Trevor Cohn | Timothy Baldwin
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

pdf
Collective Document Classification with Implicit Inter-document Semantic Relationships
Clint Burford | Steven Bird | Timothy Baldwin
Proceedings of the Fourth Joint Conference on Lexical and Computational Semantics

pdf
RoseMerry: A Baseline Message-level Sentiment Classification System
Huizhi Liang | Richard Fothergill | Timothy Baldwin
Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015)

pdf
Twitter User Geolocation Using a Unified Text and Network Prediction Model
Afshin Rahimi | Trevor Cohn | Timothy Baldwin
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)

pdf
Big Data Small Data, In Domain Out-of Domain, Known Word Unknown Word: The Impact of Word Representations on Sequence Labelling Tasks
Lizhen Qu | Gabriela Ferraro | Liyuan Zhou | Weiwei Hou | Nathan Schneider | Timothy Baldwin
Proceedings of the Nineteenth Conference on Computational Natural Language Learning

pdf
Domain Adaption of Named Entity Recognition to Support Credit Risk Assessment
Julio Cesar Salinas Alvarado | Karin Verspoor | Timothy Baldwin
Proceedings of the Australasian Language Technology Association Workshop 2015

pdf
Understanding engagement with insurgents through retweet rhetoric
Joel Nothman | Atif Ahmad | Christoph Breidbach | David Malet | Timothy Baldwin
Proceedings of the Australasian Language Technology Association Workshop 2015

2014

pdf
Is Machine Translation Getting Better over Time?
Yvette Graham | Timothy Baldwin | Alistair Moffat | Justin Zobel
Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics

pdf
Using Distributional Similarity of Multi-way Translations to Predict Multiword Expression Compositionality
Bahar Salehi | Paul Cook | Timothy Baldwin
Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics

pdf
Machine Reading Tea Leaves: Automatically Evaluating Topic Coherence and Topic Model Quality
Jey Han Lau | David Newman | Timothy Baldwin
Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics

pdf
One Sense per Tweeter ... and Other Lexical Semantic Tales of Twitter
Spandana Gella | Paul Cook | Timothy Baldwin
Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, volume 2: Short Papers

pdf
Testing for Significance of Increased Correlation with Human Judgment
Yvette Graham | Timothy Baldwin
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)

pdf
Detecting Non-compositional MWE Components using Wiktionary
Bahar Salehi | Paul Cook | Timothy Baldwin
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)

pdf
Accurate Language Identification of Twitter Messages
Marco Lui | Timothy Baldwin
Proceedings of the 5th Workshop on Language Analysis for Social Media (LASM)

pdf
Randomized Significance Tests in Machine Translation
Yvette Graham | Nitika Mathur | Timothy Baldwin
Proceedings of the Ninth Workshop on Statistical Machine Translation

pdf
Exploring Methods and Resources for Discriminating Similar Languages
Marco Lui | Ned Letcher | Oliver Adams | Long Duong | Paul Cook | Timothy Baldwin
Proceedings of the First Workshop on Applying NLP Tools to Similar Languages, Varieties and Dialects

pdf
Learning Word Sense Distributions, Detecting Unattested Senses and Identifying Novel Senses Using Topic Models
Jey Han Lau | Paul Cook | Diana McCarthy | Spandana Gella | Timothy Baldwin
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

pdf
Automatic Detection of Multilingual Dictionaries on the Web
Gintarė Grigonytė | Timothy Baldwin
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

pdf
Novel Word-sense Identification
Paul Cook | Jey Han Lau | Diana McCarthy | Timothy Baldwin
Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers

pdf
Automatic Detection and Language Identification of Multilingual Documents
Marco Lui | Jey Han Lau | Timothy Baldwin
Transactions of the Association for Computational Linguistics, Volume 2

Language identification is the task of automatically detecting the language(s) present in a document based on the content of the document. In this work, we address the problem of detecting documents that contain text from more than one language (multilingual documents). We introduce a method that is able to detect that a document is multilingual, identify the languages present, and estimate their relative proportions. We demonstrate the effectiveness of our method over synthetic data, as well as real-world multilingual documents collected from the web.
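
As a toy illustration of the proportion-estimation idea, the sketch below scores each segment of a document against simple per-language character trigram profiles and reports the languages present with their relative proportions; the paper's actual models are far richer and do not assume pre-segmented input.

    from collections import Counter

    # Tiny per-language training samples (toy stand-ins for real profiles).
    samples = {"en": "the cat sat on the mat with the hat",
               "id": "kucing itu duduk di atas tikar"}
    profiles = {lang: Counter(text[i:i + 3] for i in range(len(text) - 2))
                for lang, text in samples.items()}

    def guess(segment):
        # Pick the language whose trigram profile best covers the segment.
        grams = [segment[i:i + 3] for i in range(len(segment) - 2)]
        return max(profiles, key=lambda l: sum(profiles[l][g] for g in grams))

    doc = ["the cat sat", "kucing itu duduk", "on the mat"]
    langs = [guess(seg) for seg in doc]
    props = {l: langs.count(l) / len(langs) for l in sorted(set(langs))}
    print(langs, props)  # languages present and their relative proportions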

2013

pdf bib
Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing
David Yarowsky | Timothy Baldwin | Anna Korhonen | Karen Livescu | Steven Bethard
Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing

pdf
Continuous Measurement Scales in Human Evaluation of Machine Translation
Yvette Graham | Timothy Baldwin | Alistair Moffat | Justin Zobel
Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse

pdf bib
Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 1: Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity
Mona Diab | Tim Baldwin | Marco Baroni
Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 1: Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity

pdf
UniMelb_NLP-CORE: Integrating predictions from multiple domains and feature sets for estimating semantic textual similarity
Spandana Gella | Bahar Salehi | Marco Lui | Karl Grieser | Paul Cook | Timothy Baldwin
Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 1: Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity

pdf
Umelb: Cross-lingual Textual Entailment with Word Alignment and String Similarity Features
Yvette Graham | Bahar Salehi | Timothy Baldwin
Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013)

pdf
unimelb: Topic Modelling-based Word Sense Induction for Web Snippet Clustering
Jey Han Lau | Paul Cook | Timothy Baldwin
Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013)

pdf
unimelb: Topic Modelling-based Word Sense Induction
Jey Han Lau | Paul Cook | Timothy Baldwin
Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013)

pdf
Crowd-Sourcing of Human Judgments of Machine Translation Fluency
Yvette Graham | Timothy Baldwin | Alistair Moffat | Justin Zobel
Proceedings of the Australasian Language Technology Association Workshop 2013 (ALTA 2013)

pdf
Automatic Climate Classification of Environmental Science Literature
Jared Willett | David Martinez | J. Angus Webb | Timothy Baldwin
Proceedings of the Australasian Language Technology Association Workshop 2013 (ALTA 2013)

pdf
How Noisy Social Media Text, How Diffrnt Social Media Sources?
Timothy Baldwin | Paul Cook | Marco Lui | Andrew MacKinlay | Li Wang
Proceedings of the Sixth International Joint Conference on Natural Language Processing

pdf
Unsupervised Word Class Induction for Under-resourced Languages: A Case Study on Indonesian
Meladel Mistica | Jey Han Lau | Timothy Baldwin
Proceedings of the Sixth International Joint Conference on Natural Language Processing

pdf bib
A Stacking-based Approach to Twitter User Geolocation Prediction
Bo Han | Paul Cook | Timothy Baldwin
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics: System Demonstrations

2012

pdf
Automatically Constructing a Normalisation Dictionary for Microblogs
Bo Han | Paul Cook | Timothy Baldwin
Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning

pdf
Social Media: Friend or Foe of Natural Language Processing?
Timothy Baldwin
Proceedings of the 26th Pacific Asia Conference on Language, Information, and Computation

pdf
Extracting Keywords from Multi-party Live Chats
Su Nam Kim | Timothy Baldwin
Proceedings of the 26th Pacific Asia Conference on Language, Information, and Computation

pdf
Classifying Dialogue Acts in Multi-party Live Chats
Su Nam Kim | Lawrence Cavedon | Timothy Baldwin
Proceedings of the 26th Pacific Asia Conference on Language, Information, and Computation

pdf
Deep Lexical Acquisition of Type Properties in Low-resource Languages: A Case Study in Wambaya
Jeremy Nicholson | Rachel Nordlinger | Timothy Baldwin
Proceedings of the 26th Pacific Asia Conference on Language, Information, and Computation

pdf
Evaluating a Morphological Analyser of Inuktitut
Jeremy Nicholson | Trevor Cohn | Timothy Baldwin
Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

pdf
Unsupervised Estimation of Word Usage Similarity
Marco Lui | Timothy Baldwin | Diana McCarthy
Proceedings of the Australasian Language Technology Association Workshop 2012

pdf
Segmentation and Translation of Japanese Multi-word Loanwords
James Breen | Timothy Baldwin | Francis Bond
Proceedings of the Australasian Language Technology Association Workshop 2012

pdf
Measurement of Progress in Machine Translation
Yvette Graham | Timothy Baldwin | Aaron Harwood | Alistair Moffat | Justin Zobel
Proceedings of the Australasian Language Technology Association Workshop 2012

pdf
Classification of Study Region in Environmental Science Abstracts
Jared Willett | Timothy Baldwin | David Martinez | Angus Webb
Proceedings of the Australasian Language Technology Association Workshop 2012

pdf
langid.py: An Off-the-shelf Language Identification Tool
Marco Lui | Timothy Baldwin
Proceedings of the ACL 2012 System Demonstrations

pdf
Combining resources for MWE-token classification
Richard Fothergill | Timothy Baldwin
*SEM 2012: The First Joint Conference on Lexical and Computational Semantics – Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012)

pdf
The Effects of Semantic Annotations on Precision Parse Ranking
Andrew MacKinlay | Rebecca Dridan | Diana McCarthy | Timothy Baldwin
*SEM 2012: The First Joint Conference on Lexical and Computational Semantics – Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012)

pdf
Geolocation Prediction in Social Media Data by Finding Location Indicative Words
Bo Han | Paul Cook | Timothy Baldwin
Proceedings of COLING 2012

pdf
On-line Trend Analysis with Topic Models: #twitter Trends Detection Topic Model Online
Jey Han Lau | Nigel Collier | Timothy Baldwin
Proceedings of COLING 2012

pdf
Bayesian Text Segmentation for Index Term Identification and Keyphrase Extraction
David Newman | Nagendra Koilada | Jey Han Lau | Timothy Baldwin
Proceedings of COLING 2012

pdf
The Utility of Discourse Structure in Identifying Resolved Threads in Technical User Forums
Li Wang | Su Nam Kim | Timothy Baldwin
Proceedings of COLING 2012

pdf
Word Sense Induction for Novel Sense Detection
Jey Han Lau | Paul Cook | Diana McCarthy | David Newman | Timothy Baldwin
Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics

pdf
A Support Platform for Event Detection using Social Intelligence
Timothy Baldwin | Paul Cook | Bo Han | Aaron Harwood | Shanika Karunasekera | Masud Moshtaghi
Proceedings of the Demonstrations at the 13th Conference of the European Chapter of the Association for Computational Linguistics

2011

pdf
Word classes in Indonesian: A linguistic reality or a convenient fallacy in natural language processing?
Meladel Mistica | Timothy Baldwin | I Wayan Arka
Proceedings of the 25th Pacific Asia Conference on Language, Information and Computation

pdf
In Situ Text Summarisation for Museum Visitors
Timothy Baldwin | Patrick Ye | Fabian Bohnert | Ingrid Zukerman
Proceedings of the 25th Pacific Asia Conference on Language, Information and Computation

pdf
Treeblazing: Using External Treebanks to Filter Parse Forests for Parse Selection and Treebanking
Andrew MacKinlay | Rebecca Dridan | Dan Flickinger | Stephan Oepen | Timothy Baldwin
Proceedings of 5th International Joint Conference on Natural Language Processing

pdf
Cross-domain Feature Selection for Language Identification
Marco Lui | Timothy Baldwin
Proceedings of 5th International Joint Conference on Natural Language Processing

pdf
Fleshing it out: A Supervised Approach to MWE-token and MWE-type Classification
Richard Fothergill | Timothy Baldwin
Proceedings of 5th International Joint Conference on Natural Language Processing

pdf bib
MWEs and Topic Modelling: Enhancing Machine Learning with Linguistics
Timothy Baldwin
Proceedings of the Workshop on Multiword Expressions: from Parsing and Generation to the Real World

pdf bib
Predicting Thread Discourse Structure over Technical Web Forums
Li Wang | Marco Lui | Su Nam Kim | Joakim Nivre | Timothy Baldwin
Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing

pdf
Predicting Thread Linking Structure by Lexical Chaining
Li Wang | Diana McCarthy | Timothy Baldwin
Proceedings of the Australasian Language Technology Association Workshop 2011

pdf
Lexical Normalisation of Short Text Messages: Makn Sens a #twitter
Bo Han | Timothy Baldwin
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies

pdf
Collective Classification of Congressional Floor-Debate Transcripts
Clinton Burfoot | Steven Bird | Timothy Baldwin
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies

pdf
Automatic Labelling of Topic Models
Jey Han Lau | Karl Grieser | David Newman | Timothy Baldwin
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies

pdf
Relation Guided Bootstrapping of Semantic Lexicons
Tara McIntosh | Lars Yencken | James R. Curran | Timothy Baldwin
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies

2010

pdf
Unsupervised Parse Selection for HPSG
Rebecca Dridan | Timothy Baldwin
Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing

pdf
Classifying Dialogue Acts in One-on-One Live Chats
Su Nam Kim | Lawrence Cavedon | Timothy Baldwin
Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing

pdf
Multilingual Language Identification: ALTW 2010 Shared Task Data
Timothy Baldwin | Marco Lui
Proceedings of the Australasian Language Technology Association Workshop 2010

pdf
Thread-level Analysis over Technical User Forum Data
Li Wang | Su Nam Kim | Timothy Baldwin
Proceedings of the Australasian Language Technology Association Workshop 2010

pdf
Classifying User Forum Participants: Separating the Gurus from the Hacks, and Other Tales of the Internet
Marco Lui | Timothy Baldwin
Proceedings of the Australasian Language Technology Association Workshop 2010

pdf
Evaluating N-gram based Evaluation Metrics for Automatic Keyphrase Extraction
Su Nam Kim | Timothy Baldwin | Min-Yen Kan
Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010)

pdf
Best Topic Word Selection for Topic Labelling
Jey Han Lau | David Newman | Sarvnaz Karimi | Timothy Baldwin
Coling 2010: Posters

pdf
PanLex and LEXTRACT: Translating all Words of all Languages of the World
Timothy Baldwin | Jonathan Pool | Susan Colowick
Coling 2010: Demonstrations

pdf bib
Chart Mining-based Lexical Acquisition with Precision Grammars
Yi Zhang | Timothy Baldwin | Valia Kordoni | David Martinez | Jeremy Nicholson
Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics

pdf
Automatic Evaluation of Topic Coherence
David Newman | Jey Han Lau | Karl Grieser | Timothy Baldwin
Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics

pdf
Language Identification: The Long and the Short of the Matter
Timothy Baldwin | Marco Lui
Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics

pdf
Intelligent Linux Information Access by Data Mining: the ILIAD Project
Timothy Baldwin | David Martinez | Richard Penman | Su Nam Kim | Marco Lui | Li Wang | Andrew MacKinlay
Proceedings of the NAACL HLT 2010 Workshop on Computational Linguistics in a World of Social Media

pdf
Tagging and Linking Web Forum Posts
Su Nam Kim | Li Wang | Timothy Baldwin
Proceedings of the Fourteenth Conference on Computational Natural Language Learning

pdf
SemEval-2010 Task 5: Automatic Keyphrase Extraction from Scientific Articles
Su Nam Kim | Olena Medelyan | Min-Yen Kan | Timothy Baldwin
Proceedings of the 5th International Workshop on Semantic Evaluation

2009

pdf
Web and Corpus Methods for Malay Count Classifier Prediction
Jeremy Nicholson | Timothy Baldwin
Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Short Papers

pdf
Recognising the Predicate-argument Structure of Tagalog
Meladel Mistica | Timothy Baldwin
Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Short Papers

pdf
Automatic Satire Detection: Are You Having a Laugh?
Clint Burfoot | Timothy Baldwin
Proceedings of the ACL-IJCNLP 2009 Conference Short Papers

pdf
Corpus-based Extraction of Japanese Compound Verbs
James Breen | Timothy Baldwin
Proceedings of the Australasian Language Technology Association Workshop 2009

pdf
Double Double, Morphology and Trouble: Looking into Reduplication in Indonesian
Meladel Mistica | I Wayan Arka | Timothy Baldwin | Avery Andrews
Proceedings of the Australasian Language Technology Association Workshop 2009

pdf
Extracting Domain-Specific Words - A Statistical Approach
Su Nam Kim | Timothy Baldwin | Min-Yen Kan
Proceedings of the Australasian Language Technology Association Workshop 2009

pdf bib
Prepositions in Applications: A Survey and Introduction to the Special Issue
Timothy Baldwin | Valia Kordoni | Aline Villavicencio
Computational Linguistics, Volume 35, Number 2, June 2009 - Special Issue on Prepositions

pdf
Obituaries: Hozumi Tanaka
Timothy Baldwin | Takenobu Tokunaga | Jun’ichi Tsujii
Computational Linguistics, Volume 35, Number 4, December 2009

pdf bib
Proceedings of the EACL 2009 Workshop on the Interaction between Linguistics and Computational Linguistics: Virtuous, Vicious or Vacuous?
Timothy Baldwin | Valia Kordoni
Proceedings of the EACL 2009 Workshop on the Interaction between Linguistics and Computational Linguistics: Virtuous, Vicious or Vacuous?

pdf
Biomedical Event Annotation with CRFs and Precision Grammars
Andrew MacKinlay | David Martinez | Timothy Baldwin
Proceedings of the BioNLP 2009 Workshop Companion Volume for Shared Task

2008

pdf
Applying Discourse Analysis and Data Mining Methods to Spoken OSCE Assessments
Meladel Mistica | Timothy Baldwin | Marisa Cordella | Simon Musgrave
Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008)

pdf
Measuring and Predicting Orthographic Associations: Modelling the Similarity of Japanese Kanji
Lars Yencken | Timothy Baldwin
Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008)

pdf
Benchmarking Noun Compound Interpretation
Su Nam Kim | Timothy Baldwin
Proceedings of the Third International Joint Conference on Natural Language Processing: Volume-I

pdf
MRD-based Word Sense Disambiguation: Further Extending Lesk
Timothy Baldwin | Su Nam Kim | Francis Bond | Sanae Fujita | David Martinez | Takaaki Tanaka
Proceedings of the Third International Joint Conference on Natural Language Processing: Volume-II

pdf
Evaluating and Extending the Coverage of HPSG Grammars: A Case Study for German
Jeremy Nicholson | Valia Kordoni | Yi Zhang | Timothy Baldwin | Rebecca Dridan
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)

In this work, we examine and attempt to extend the coverage of a German HPSG grammar. We use the grammar to parse a corpus of newspaper text and evaluate the proportion of sentences which have a correct attested parse, and analyse the cause of errors in terms of lexical or constructional gaps which prevent parsing. Then, using a maximum entropy model, we evaluate prediction of lexical types in the HPSG type hierarchy for unseen lexemes. By automatically adding entries to the lexicon, we observe that we can increase coverage without substantially decreasing precision.
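
A hedged sketch of the lexical type prediction step: a maximum entropy (here, multinomial logistic regression) model over simple morphological features of an unseen lexeme. The feature templates, example lexemes and type names below are invented for illustration and do not reflect the grammar's actual type hierarchy.

    from sklearn.feature_extraction import DictVectorizer
    from sklearn.linear_model import LogisticRegression

    def feats(lexeme):
        # Toy morphological features; real systems use richer templates.
        return {"suffix2": lexeme[-2:], "suffix3": lexeme[-3:],
                "capitalised": lexeme[0].isupper()}

    train = [("Haus", "noun-neut"), ("Hund", "noun-masc"), ("Katze", "noun-fem"),
             ("laufen", "verb-intrans"), ("kaufen", "verb-trans"),
             ("geben", "verb-ditrans")]  # (lexeme, hypothetical lexical type)
    words, types = zip(*train)
    vec = DictVectorizer()
    clf = LogisticRegression(max_iter=1000).fit(
        vec.fit_transform([feats(w) for w in words]), types)

    # Predict a lexical type for an unseen lexeme, to add it to the lexicon.
    print(clf.predict(vec.transform([feats("singen")])))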

pdf
Automatic Event Reference Identification
Olivia March | Timothy Baldwin
Proceedings of the Australasian Language Technology Association Workshop 2008

pdf
Learning Count Classifier Preferences of Malay Nouns
Jeremy Nicholson | Timothy Baldwin
Proceedings of the Australasian Language Technology Association Workshop 2008

pdf
Improving Parsing and PP Attachment Performance with Sense Information
Eneko Agirre | Timothy Baldwin | David Martinez
Proceedings of ACL-08: HLT

2007

pdf
MELB-KB: Nominal Classification as Noun Compound Interpretation
Su Nam Kim | Timothy Baldwin
Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007)

pdf
MELB-MKB: Lexical Substitution system based on Relatives in Context
David Martinez | Su Nam Kim | Timothy Baldwin
Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007)

pdf
MELB-YB: Preposition Sense Disambiguation Using Rich Semantic Features
Patrick Ye | Timothy Baldwin
Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007)

pdf
UBC-UMB: Combining unsupervised and supervised systems for all-words WSD
David Martinez | Timothy Baldwin | Eneko Agirre | Oier Lopez de Lacalle
Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007)

pdf
Dynamic Path Prediction and Recommendation in a Museum Environment
Karl Grieser | Timothy Baldwin | Steven Bird
Proceedings of the Workshop on Language Technology for Cultural Heritage Data (LaTeCH 2007)

pdf bib
ACL 2007 Workshop on Deep Linguistic Processing
Timothy Baldwin | Mark Dras | Julia Hockenmaier | Tracy Holloway King | Gertjan van Noord
ACL 2007 Workshop on Deep Linguistic Processing

pdf
The Corpus and the Lexicon: Standardising Deep Lexical Acquisition Evaluation
Yi Zhang | Timothy Baldwin | Valia Kordoni
ACL 2007 Workshop on Deep Linguistic Processing

pdf bib
Landmark Classification for Route Directions
Aidan Furlan | Timothy Baldwin | Alex Klippel
Proceedings of the Fourth ACL-SIGSEM Workshop on Prepositions

pdf
The Impact of Deep Linguistic Processing on Parsing Technology
Timothy Baldwin | Mark Dras | Julia Hockenmaier | Tracy Holloway King | Gertjan van Noord
Proceedings of the Tenth International Conference on Parsing Technologies

pdf bib
Scalable Deep Linguistic Processing: Mind the Lexical Gap
Timothy Baldwin
Proceedings of the 21st Pacific Asia Conference on Language, Information and Computation

pdf
Word Sense Disambiguation Incorporating Lexical and Structural Semantic Information
Takaaki Tanaka | Francis Bond | Timothy Baldwin | Sanae Fujita | Chikara Hashimoto
Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)

pdf
Extending Sense Collocations in Interpreting Noun Compounds
Su Nam Kim | Meladel Mistica | Timothy Baldwin
Proceedings of the Australasian Language Technology Workshop 2007

pdf
Dictionary Alignment for Context-sensitive Word Glossing
Willy Yap | Timothy Baldwin
Proceedings of the Australasian Language Technology Workshop 2007

2006

pdf
Interpreting Semantic Relations in Noun Compounds via Verb Semantics
Su Nam Kim | Timothy Baldwin
Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions

pdf
Reconsidering Language Identification for Written Language Resources
Baden Hughes | Timothy Baldwin | Steven Bird | Jeremy Nicholson | Andrew MacKinlay
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)

The task of identifying the language in which a given document (ranging from a sentence to thousands of pages) is written has been relatively well studied over several decades. Automated approaches to written language identification are used widely throughout research and industrial contexts, over both oral and written source materials. Despite this widespread acceptance, a review of previous research in written language identification reveals a number of questions which remain open and ripe for further investigation.

pdf
Open Source Corpus Analysis Tools for Malay
Timothy Baldwin | Su’ad Awab
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)

Tokenisers, lemmatisers and POS taggers are vital to the linguistic and digital furtherment of any language. In this paper, we present an open source toolkit for Malay incorporating a word and sentence tokeniser, a lemmatiser and a partial POS tagger, based on heavy reuse of pre-existing language resources. We outline the software architecture of each component, and present an evaluation of each over a 26K word sample of Malay text.

pdf
Die Morphologie (f): Targeted Lexical Acquisition for Languages other than English
Jeremy Nicholson | Timothy Baldwin | Phil Blunsom
Proceedings of the Australasian Language Technology Workshop 2006

pdf
Verb Sense Disambiguation Using Selectional Preferences Extracted with a State-of-the-art Semantic Role Labeler
Patrick Ye | Timothy Baldwin
Proceedings of the Australasian Language Technology Workshop 2006

pdf
Analysis and Prediction of User Behaviour in a Museum Environment
Karl Grieser | Timothy Baldwin | Steven Bird
Proceedings of the Australasian Language Technology Workshop 2006

pdf bib
Proceedings of the Workshop on Frontiers in Linguistically Annotated Corpora 2006
Timothy Baldwin | Francis Bond | Adam Meyers | Shigeko Nariyama
Proceedings of the Workshop on Frontiers in Linguistically Annotated Corpora 2006

pdf bib
Compositionality and Multiword Expressions: Six of One, Half a Dozen of the Other?
Timothy Baldwin
Proceedings of the Workshop on Multiword Expressions: Identifying and Exploiting Underlying Properties

pdf
Interpretation of Compound Nominalisations using Corpus and Web Statistics
Jeremy Nicholson | Timothy Baldwin
Proceedings of the Workshop on Multiword Expressions: Identifying and Exploiting Underlying Properties

pdf
Multilingual Deep Lexical Acquisition for HPSGs via Supertagging
Phil Blunsom | Timothy Baldwin
Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing

pdf
Automatic Identification of English Verb Particle Constructions using Linguistic Features
Su Nam Kim | Timothy Baldwin
Proceedings of the Third ACL-SIGSEM Workshop on Prepositions

2005

pdf bib
Proceedings of the ACL-SIGLEX Workshop on Deep Lexical Acquisition
Timothy Baldwin | Anna Korhonen | Aline Villavicencio
Proceedings of the ACL-SIGLEX Workshop on Deep Lexical Acquisition

pdf
Bootstrapping Deep Lexical Resources: Resources for Courses
Timothy Baldwin
Proceedings of the ACL-SIGLEX Workshop on Deep Lexical Acquisition

pdf bib
Proceedings of the Australasian Language Technology Workshop 2005
Timothy Baldwin | James Curran | Menno van Zaanen
Proceedings of the Australasian Language Technology Workshop 2005

pdf
POS Tagging with a More Informative Tagset
Andrew MacKinlay | Timothy Baldwin
Proceedings of the Australasian Language Technology Workshop 2005

pdf
Efficient Grapheme-phoneme Alignment for Japanese
Lars Yencken | Timothy Baldwin
Proceedings of the Australasian Language Technology Workshop 2005

pdf
Statistical Interpretation of Compound Nominalisations
Jeremy Nicholson | Timothy Baldwin
Proceedings of the Australasian Language Technology Workshop 2005

pdf
Semantic Role Labelling of Prepositional Phrases
Patrick Ye | Timothy Baldwin
Second International Joint Conference on Natural Language Processing: Full Papers

pdf
Automatic Interpretation of Noun Compounds Using WordNet Similarity
Su Nam Kim | Timothy Baldwin
Second International Joint Conference on Natural Language Processing: Full Papers

2004

pdf
Evaluating the FOKS Error Model
Slaven Bilac | Timothy Baldwin | Hozumi Tanaka
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04)

pdf
Road-testing the English Resource Grammar Over the British National Corpus
Timothy Baldwin | Emily M. Bender | Dan Flickinger | Ara Kim | Stephan Oepen
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04)

pdf
A Multilingual Database of Idioms
Aline Villavicencio | Timothy Baldwin | Benjamin Waldron
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04)

pdf
Translation by Machine of Complex Nominals: Getting it Right
Timothy Baldwin | Takaaki Tanaka
Proceedings of the Workshop on Multiword Expressions: Integrating Processing

pdf
Making sense of Japanese relative clause constructions
Timothy Baldwin
Proceedings of the 2nd Workshop on Text Meaning and Interpretation

pdf
Automatic Discovery of Telic and Agentive Roles from Corpus Data
Ichiro Yamada | Timothy Baldwin
Proceedings of the 18th Pacific Asia Conference on Language, Information and Computation

2003

pdf
A Plethora of Methods for Learning English Countability
Timothy Baldwin | Francis Bond
Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing

pdf
Noun-Noun Compound Machine Translation: A Feasibility Study on Shallow Processing
Takaaki Tanaka | Timothy Baldwin
Proceedings of the ACL 2003 Workshop on Multiword Expressions: Analysis, Acquisition and Treatment

pdf
A Statistical Approach to the Semantics of Verb-Particles
Colin Bannard | Timothy Baldwin | Alex Lascarides
Proceedings of the ACL 2003 Workshop on Multiword Expressions: Analysis, Acquisition and Treatment

pdf
An Empirical Model of Multiword Expression Decomposability
Timothy Baldwin | Colin Bannard | Takaaki Tanaka | Dominic Widdows
Proceedings of the ACL 2003 Workshop on Multiword Expressions: Analysis, Acquisition and Treatment

pdf
Learning the Countability of English Nouns from Corpus Data
Timothy Baldwin | Francis Bond
Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics

pdf
The Ins and Outs of Dutch noun countability classification
Timothy Baldwin | Leonoor van der Beek
Proceedings of the Australasian Language Technology Workshop 2003

pdf
Translation selection for Japanese-English noun-noun compounds
Takaaki Tanaka | Timothy Baldwin
Proceedings of Machine Translation Summit IX: Papers

We present a method for compositionally translating Japanese NN compounds into English, using a word-level transfer dictionary and target language monolingual corpus. The method interpolates over fully-specified and partial translation data, based on corpus evidence. In evaluation, we demonstrate that interpolation over the two data types is superior to using either one, and show that our method performs at an F-score of 0.68 over translation-aligned inputs and 0.66 over a random sample of 500 NN compounds.
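
The interpolation idea can be sketched as below, with invented probabilities and a hypothetical mixing weight lam; in the paper, the interpolation is grounded in corpus evidence rather than fixed by hand.

    # Hypothetical scores for translating a Japanese NN compound (e.g. 株式市場).
    full = {"stock market": 0.7}              # whole-compound (fully-specified) evidence
    partial = {"stock market": 0.4,           # word-level transfer dictionary plus
               "share market": 0.3,           # target-corpus frequency evidence
               "stock marketplace": 0.05}

    lam = 0.6  # interpolation weight (illustrative; derived from corpus evidence)
    candidates = set(full) | set(partial)
    scores = {c: lam * full.get(c, 0.0) + (1 - lam) * partial.get(c, 0.0)
              for c in candidates}
    print(max(scores, key=scores.get))  # -> "stock market"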

2002

pdf bib
Extracting the Unextractable: A Case Study on Verb-particles
Timothy Baldwin | Aline Villavicencio
COLING-02: The 6th Conference on Natural Language Learning 2002 (CoNLL-2002)

pdf bib
Alternation-based lexicon reconstruction
Timothy Baldwin | Francis Bond
Proceedings of the 9th Conference on Theoretical and Methodological Issues in Machine Translation of Natural Languages: Papers

bib
Translation memories
Timothy Baldwin
Proceedings of the 9th Conference on Theoretical and Methodological Issues in Machine Translation of Natural Languages: Tutorials

pdf
Multiword expressions: linguistic precision and reusability
Ann Copestake | Fabre Lambeau | Aline Villavicencio | Francis Bond | Timothy Baldwin | Ivan A. Sag | Dan Flickinger
Proceedings of the Third International Conference on Language Resources and Evaluation (LREC’02)

pdf
Enhanced Japanese Electronic Dictionary Look-up
Timothy Baldwin | Slaven Bilac | Ryo Okumura | Takenobu Tokunaga | Hozumi Tanaka
Proceedings of the Third International Conference on Language Resources and Evaluation (LREC’02)

pdf
Bringing the Dictionary to the User: The FOKS System
Slaven Bilac | Timothy Baldwin | Hozumi Tanaka
COLING 2002: The 19th International Conference on Computational Linguistics

2001

pdf
Low-cost, High-performance Translation Retrieval: Dumber is Better
Timothy Baldwin
Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics

pdf
The Japanese Translation Task: Lexical and Structural Perspectives
Timothy Baldwin | Atsushi Okazaki | Takenobu Tokunaga | Hozumi Tanaka
Proceedings of SENSEVAL-2 Second International Workshop on Evaluating Word Sense Disambiguation Systems

2000

pdf
The Effects of Word Order and Segmentation on Translation Retrieval Performance
Timothy Baldwin | Hozumi Tanaka
COLING 2000 Volume 1: The 18th International Conference on Computational Linguistics

pdf bib
Verb Alternations and Japanese: How, What and Where
Timothy Baldwin | Hozumi Tanaka
Proceedings of the 14th Pacific Asia Conference on Language, Information and Computation

1999

pdf
Argument status in Japanese verb sense disambiguation
Timothy Baldwin | Hozumi Tanaka
Proceedings of the 8th Conference on Theoretical and Methodological Issues in Machine Translation of Natural Languages

pdf
A valency dictionary architecture for Machine Translation
Timothy Baldwin | Francis Bond | Ben Hutchinson
Proceedings of the 8th Conference on Theoretical and Methodological Issues in Machine Translation of Natural Languages

pdf bib
The applications of unsupervised learning to Japanese grapheme-phoneme alignment
Timothy Baldwin | Hozumi Tanaka
Unsupervised Learning in Natural Language Processing
