Proceedings of the Australasian Language Technology Association Workshop 2018

Sunghwan Mac Kim, Xiuzhen (Jenny) Zhang (Editors)


Anthology ID: U18-1
Month: December
Year: 2018
Address: Dunedin, New Zealand
Venue: ALTA
URL: https://preview.aclanthology.org/build-pipeline-with-new-library/U18-1/
PDF: https://preview.aclanthology.org/build-pipeline-with-new-library/U18-1.pdf

Proceedings of the Australasian Language Technology Association Workshop 2018
Sunghwan Mac Kim | Xiuzhen (Jenny) Zhang

Improved Neural Machine Translation using Side Information
Cong Duy Vu Hoang | Gholamreza Haffari | Trevor Cohn

In this work, we investigate whether side information is helpful in neural machine translation (NMT). We study various kinds of side information, including topical information and personal traits, and propose different ways of incorporating them into existing NMT models. Our experimental results show the benefits of side information in improving the NMT models.
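
The abstract does not specify the architecture; as a purely illustrative sketch, one simple way to inject side information into an encoder-decoder NMT system is to embed the side label (e.g. a topic or trait id) and concatenate it with each decoder input token embedding. The PyTorch module below is a minimal, hypothetical example, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SideInfoDecoderInput(nn.Module):
    """Toy sketch: augment decoder token embeddings with an embedded side label."""
    def __init__(self, vocab_size, side_vocab_size, emb_dim=256, side_dim=32):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, emb_dim)
        self.side_emb = nn.Embedding(side_vocab_size, side_dim)
        self.proj = nn.Linear(emb_dim + side_dim, emb_dim)

    def forward(self, tokens, side_label):
        # tokens: (batch, tgt_len); side_label: (batch,)
        tok = self.tok_emb(tokens)                      # (batch, tgt_len, emb_dim)
        side = self.side_emb(side_label).unsqueeze(1)   # (batch, 1, side_dim)
        side = side.expand(-1, tokens.size(1), -1)      # broadcast over time steps
        return self.proj(torch.cat([tok, side], dim=-1))  # fused decoder input
```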

Text-dependent Forensic Voice Comparison: Likelihood Ratio Estimation with the Hidden Markov Model (HMM) and Gaussian Mixture Model
Satoru Tsuge | Shunichi Ishihara

Among the more typical forensic voice comparison (FVC) approaches, the acoustic-phonetic statistical approach is suitable for text-dependent FVC, but it does not fully exploit the available time-varying information of speech in its modelling. The automatic approach, on the other hand, essentially deals with text-independent cases, which means temporal information is not explicitly incorporated in the modelling. Text-dependent likelihood-ratio (LR)-based FVC studies, in particular those that adopt the automatic approach, are few. This preliminary LR-based FVC study compares two statistical models, the Hidden Markov Model (HMM) and the Gaussian Mixture Model (GMM), for the calculation of forensic LRs using the same speech data. FVC experiments were carried out using Japanese short words of different lengths under a forensically realistic but challenging condition: only two speech tokens for model training and LR estimation. Log-likelihood-ratio cost (Cllr) was used as the assessment metric. The study demonstrates that the HMM system consistently outperforms the GMM system in terms of average Cllr values. However, words longer than three morae are needed if the advantage of the HMM is to become evident. With a seven-mora word, for example, the HMM outperformed the GMM by 0.073 in Cllr.
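
The assessment metric, the log-likelihood-ratio cost (Cllr), has a standard closed form; a minimal NumPy sketch (not the authors' code) that computes it from same-speaker and different-speaker likelihood ratios looks like this:

```python
import numpy as np

def cllr(lr_same, lr_diff):
    """Log-likelihood-ratio cost (Cllr) from same- and different-speaker LRs."""
    lr_same = np.asarray(lr_same, dtype=float)
    lr_diff = np.asarray(lr_diff, dtype=float)
    return 0.5 * (np.mean(np.log2(1.0 + 1.0 / lr_same)) +
                  np.mean(np.log2(1.0 + lr_diff)))

# Example: well-calibrated LRs (large for same-speaker pairs, small for
# different-speaker pairs) yield a low Cllr.
print(cllr(lr_same=[20.0, 8.0, 50.0], lr_diff=[0.05, 0.2, 0.01]))
```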

Development of Natural Language Processing Tools for Cook Islands Māori
Rolando Coto Solano | Sally Akevai Nicholas | Samantha Wray

This paper presents three ongoing projects for NLP in Cook Islands Māori: Untrained Forced Alignment (approx. 9% error when detecting the center of words), speech-to-text (37% WER in the best trained models) and POS tagging (92% accuracy for the best performing model). Included as part of these projects are new resources filling a gap for Australasian languages, including gold-standard POS-tagged written corpora, transcribed speech corpora, and corpora time-aligned down to the level of phonemes. These are part of efforts to accelerate the documentation of Cook Islands Māori and to increase its vitality amongst its users.
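
Word error rate (WER), the metric quoted for the speech-to-text models, is the word-level edit distance between a hypothesis and a reference, normalised by the reference length. A small, self-contained sketch (illustrative, not the project's evaluation code):

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + insertions + deletions) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(word_error_rate("kia orana kotou katoatoa", "kia orana katoa"))
```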

Unsupervised Mining of Analogical Frames by Constraint Satisfaction
Lance De Vine | Shlomo Geva | Peter Bruza

It has been demonstrated that vector-based representations of words trained on large text corpora encode linguistic regularities that may be exploited via the use of vector space arithmetic. This capability has been extensively explored and is generally measured via tasks which involve the automated completion of linguistic proportional analogies. The question remains, however, as to what extent it is possible to induce relations from word embeddings in a principled and systematic way, without the provision of exemplars or seed terms. In this paper we propose an extensible and efficient framework for inducing relations via the use of constraint satisfaction. The method is efficient, unsupervised and can be customized in various ways. We provide both quantitative and qualitative analysis of the results.
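
For context, the proportional-analogy completion that such work builds on is usually implemented as vector arithmetic followed by a cosine-similarity search. The sketch below shows only that baseline operation (not the paper's constraint-satisfaction method), assuming `embeddings` is a dict from words to NumPy vectors:

```python
import numpy as np

def complete_analogy(a, b, c, embeddings):
    """Return the word w maximising cos(v_b - v_a + v_c, v_w): 'a is to b as c is to ?'."""
    target = embeddings[b] - embeddings[a] + embeddings[c]
    target = target / np.linalg.norm(target)
    best_word, best_score = None, -np.inf
    for word, vec in embeddings.items():
        if word in (a, b, c):
            continue  # exclude the query terms themselves
        score = np.dot(vec, target) / np.linalg.norm(vec)
        if score > best_score:
            best_word, best_score = word, score
    return best_word
```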

Specifying Conceptual Models Using Restricted Natural Language
Bayzid Ashik Hossain | Rolf Schwitter

The key activity in designing an information system is conceptual modelling, which elicits and describes the general knowledge required to build the system. In this paper we propose a novel approach to conceptual modelling in which domain experts can specify and construct a model using a restricted form of natural language. A restricted natural language is a subset of a natural language that has well-defined computational properties and can therefore be translated unambiguously into a formal notation. We argue that a restricted natural language is suitable for writing precise and consistent specifications that lead to executable conceptual models. Using a restricted natural language allows domain experts to describe a scenario in the terminology of the application domain without the need to formally encode this scenario. The resulting textual specification can then be automatically translated into the language of the desired conceptual modelling framework.
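
As a toy illustration of the idea only (not the authors' grammar or target notation), a single restricted-language sentence pattern can be mapped deterministically to a formal constraint; the pattern and output notation below are hypothetical:

```python
import re

# One restricted-language sentence type (illustrative only):
#   "Every <A> is a/an <B>."  ->  subclass constraint  forall x: A(x) -> B(x)
PATTERN = re.compile(r"^Every (\w+) is an? (\w+)\.$")

def translate(sentence):
    m = PATTERN.match(sentence)
    if not m:
        raise ValueError("Sentence is outside the restricted language.")
    sub, sup = m.group(1).capitalize(), m.group(2).capitalize()
    return f"forall x: {sub}(x) -> {sup}(x)"

print(translate("Every lecturer is an employee."))
# forall x: Lecturer(x) -> Employee(x)
```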

Extracting structured data from invoices
Xavier Holt | Andrew Chisholm

Business documents encode a wealth of information in a format tailored to human consumption, i.e. aesthetically dispersed natural language text, graphics and tables. We address the task of extracting key fields (e.g. the amount due on an invoice) from a wide variety of potentially unseen document formats. In contrast to traditional template-driven extraction systems, we introduce a content-driven machine-learning approach which is both robust to noise and generalises to unseen document formats. In a comparison of our approach with alternative invoice extraction systems, we observe an absolute accuracy gain of 20% across compared fields, and a 25%–94% reduction in extraction latency.
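
A content-driven extractor of this kind can be approximated, very roughly, as token-level classification over lexical and layout features. The scikit-learn sketch below is a hypothetical illustration (the feature set, coordinates and labels are invented), not the system described in the paper:

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def token_features(token, x, y):
    """Lexical + layout features for one OCR token (illustrative feature set)."""
    stripped = token.strip("$€£").replace(",", "").replace(".", "")
    return {
        "lower": token.lower(),
        "looks_numeric": stripped.isdigit(),
        "has_currency_symbol": any(s in token for s in "$€£"),
        "x_bin": int(x * 10),   # coarse horizontal position on the page
        "y_bin": int(y * 10),   # coarse vertical position on the page
    }

# Tiny toy training set: (token, x, y) -> label ("amount_due" vs "other").
X = [token_features("Total:", 0.1, 0.9), token_features("$1,250.00", 0.8, 0.9),
     token_features("Invoice", 0.1, 0.05), token_features("42.00", 0.8, 0.5)]
y = ["other", "amount_due", "other", "other"]

model = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
model.fit(X, y)
print(model.predict([token_features("$99.00", 0.82, 0.91)]))
```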

Exploring Textual and Speech information in Dialogue Act Classification with Speaker Domain Adaptation
Xuanli He | Quan Tran | William Havard | Laurent Besacier | Ingrid Zukerman | Gholamreza Haffari

In spite of the recent success of Dialogue Act (DA) classification, the majority of prior work focuses on text-based classification with oracle transcriptions, i.e. human transcriptions, instead of Automatic Speech Recognition (ASR) transcriptions. In spoken dialogue systems, however, the agent would only have access to noisy ASR transcriptions, which may further suffer performance degradation due to domain shift. In this paper, we explore the effectiveness of using both acoustic and textual signals, either oracle or ASR transcriptions, and investigate speaker domain adaptation for DA classification. Our multimodal model proves to be superior to the unimodal models, particularly when the oracle transcriptions are not available. We also propose an effective method for speaker domain adaptation, which achieves competitive results.
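
One common way to combine the two signals is late fusion of utterance-level text and acoustic representations; the PyTorch sketch below illustrates that general pattern only and is not the authors' model (dimensions and names are placeholders):

```python
import torch
import torch.nn as nn

class LateFusionDAClassifier(nn.Module):
    """Toy sketch: fuse utterance-level text and acoustic vectors for DA tagging."""
    def __init__(self, text_dim, audio_dim, n_classes, hidden=128):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, hidden)
        self.audio_proj = nn.Linear(audio_dim, hidden)
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, text_vec, audio_vec):
        # Project each modality, concatenate, then classify the dialogue act.
        h = torch.cat([torch.relu(self.text_proj(text_vec)),
                       torch.relu(self.audio_proj(audio_vec))], dim=-1)
        return self.classifier(h)   # unnormalised class scores (logits)
```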

Cluster Labeling by Word Embeddings and WordNet's Hypernymy
Hanieh Poostchi | Massimo Piccardi

Cluster labeling is the assignment of representative labels to clusters obtained from the organization of a document collection. Once assigned, the labels can play an important role in applications such as navigation, search and document classification. However, finding appropriately descriptive labels is still a challenging task. In this paper, we propose various approaches for assigning labels to word clusters by leveraging word embeddings and the synonymity and hypernymy relations in the WordNet lexical ontology. Experiments carried out using the WebAP document dataset have shown that one of the approaches stands out in the comparison and is capable of selecting labels that are reasonably aligned with those chosen by a pool of four human annotators.
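
A rough sketch of one such labelling strategy, assuming `embeddings` maps words to NumPy vectors and the NLTK WordNet data is installed: collect hypernyms of the cluster's words as candidate labels and keep the candidate closest to the cluster centroid. This is illustrative, not necessarily the best-performing approach from the paper.

```python
import numpy as np
from nltk.corpus import wordnet as wn   # requires the NLTK WordNet corpus

def label_cluster(cluster_words, embeddings):
    """Pick a WordNet hypernym of the cluster's words closest to the cluster centroid."""
    centroid = np.mean([embeddings[w] for w in cluster_words if w in embeddings], axis=0)
    candidates = set()
    for word in cluster_words:
        for synset in wn.synsets(word):
            for hyper in synset.hypernyms():
                candidates.add(hyper.lemma_names()[0].replace("_", " "))

    def score(label):
        if label not in embeddings:
            return -np.inf
        v = embeddings[label]
        return np.dot(v, centroid) / (np.linalg.norm(v) * np.linalg.norm(centroid))

    return max(candidates, key=score) if candidates else None
```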

A Comparative Study of Embedding Models in Predicting the Compositionality of Multiword Expressions
Navnita Nandakumar | Bahar Salehi | Timothy Baldwin

In this paper, we perform a comparative evaluation of off-the-shelf embedding models on the task of compositionality prediction of multiword expressions ("MWEs"). Our experimental results suggest that character- and document-level models capture knowledge of MWE compositionality and are effective in modelling varying levels of compositionality, with the advantage over word-level models that they do not require token-level identification of MWEs in the training corpus.
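
Compositionality prediction with embeddings is typically operationalised as the similarity between the MWE's own vector and a combination of its components' vectors; a minimal sketch of that scoring (illustrative, not the paper's exact formulation):

```python
import numpy as np

def compositionality_score(mwe_vec, component_vecs):
    """Cosine similarity between the MWE's vector and the sum of its components' vectors."""
    combined = np.sum(component_vecs, axis=0)
    return float(np.dot(mwe_vec, combined) /
                 (np.linalg.norm(mwe_vec) * np.linalg.norm(combined)))

# Given suitable embeddings, a compositional MWE ("swimming pool") should score
# higher than an idiomatic one ("couch potato").
```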

Towards Efficient Machine Translation Evaluation by Modelling Annotators
Nitika Mathur | Timothy Baldwin | Trevor Cohn

Accurate evaluation of translation has long been a difficult yet important problem. Current evaluations use direct assessment (DA), based on crowdsourced judgements from a large pool of workers, along with quality-control checks and a robust method for combining redundant judgements. In this paper we show that the quality-control mechanism is overly conservative, which increases the time and expense of the evaluation. We propose a model that does not rely on a pre-processing step to filter workers and that takes into account varying annotator reliabilities. Our model effectively weights each worker's scores by the inferred precision of the worker, and is much more reliable than the mean of either the raw scores or the standardised scores. We also show that DA does not deliver on the promise of longitudinal evaluation, and propose a redesign of the annotation task structure to address this problem.
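
The core aggregation idea can be sketched as a precision-weighted mean of the redundant judgements. The snippet below illustrates only that weighting step; in the paper the worker precisions are inferred by the model, whereas here they are simply given:

```python
import numpy as np

def precision_weighted_mean(scores, worker_ids, worker_precision):
    """Combine redundant judgements, weighting each by its worker's precision."""
    scores = np.asarray(scores, dtype=float)
    weights = np.array([worker_precision[w] for w in worker_ids], dtype=float)
    return float(np.sum(weights * scores) / np.sum(weights))

# Three judgements of one translation; the unreliable worker "c" counts for less.
print(precision_weighted_mean([80, 85, 30], ["a", "b", "c"],
                              {"a": 4.0, "b": 3.5, "c": 0.5}))
```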

Overview of the 2018 ALTA Shared Task: Classifying Patent Applications
Diego Mollá | Dilesha Seneviratne

We present an overview of the 2018 ALTA shared task. This is the ninth in the series of shared tasks organised by ALTA since 2010. The task was to classify Australian patent applications into the sections defined by the International Patent Classification (IPC), using data made available by IP Australia. We introduce the task, describe the data and present the results of the participating teams. Some of the participating teams outperformed the state of the art.

Classifying Patent Applications with Ensemble Methods
Fernando Benites | Shervin Malmasi | Marcos Zampieri

We present methods for the automatic classification of patent applications using an annotated dataset provided by the organizers of the ALTA 2018 shared task, Classifying Patent Applications. The goal of the task is to use computational methods to categorize patent applications according to a coarse-grained taxonomy of eight classes based on the International Patent Classification (IPC). We tested a variety of approaches for this task, and the best results, a micro-averaged F1 score of 0.778, were achieved by SVM ensembles using a combination of words and characters as features. Our team, BMZ, was ranked first among 14 teams in the competition.
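
A combination of word and character n-gram features with a linear SVM can be assembled in a few lines of scikit-learn. The sketch below shows that general setup only, with toy documents and labels and a single SVM rather than the full ensemble:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline, make_union
from sklearn.svm import LinearSVC

# Word- and character-n-gram views of each patent text, fed to a linear SVM.
features = make_union(
    TfidfVectorizer(analyzer="word", ngram_range=(1, 2)),
    TfidfVectorizer(analyzer="char", ngram_range=(2, 5)),
)
clf = make_pipeline(features, LinearSVC())

docs = ["A method for purifying water using a ceramic membrane.",
        "An electronic circuit for switching power supplies."]
labels = ["C", "H"]   # IPC sections (illustrative toy labels)
clf.fit(docs, labels)
print(clf.predict(["A semiconductor device with reduced leakage current."]))
```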

Universal Language Model Fine-tuning for Patent Classification
Jason Hepburn

This paper describes the methods used for the 2018 ALTA Shared Task. The task this year was to automatically classify Australian patents into their main International Patent Classification section. Our final submission used a Support Vector Machine (SVM) and Universal Language Model Fine-tuning (ULMFiT). Our system achieved the best results in the student category.