Daniel Beck
Also published as: Daniel Emilio Beck
2026
FLUKE: A Linguistically-Driven and Task-Agnostic Framework for Robustness Evaluation
Yulia Otmakhova | Thinh Hung Truong | Rahmad Mahendra | Zenan Zhai | Rongxin Zhu | Daniel Beck | Jey Han Lau
Findings of the Association for Computational Linguistics: EACL 2026
We present FLUKE (Framework for LingUistically-driven and tasK-agnostic robustness Evaluation), a framework for assessing model robustness through systematic minimal variations of test data. FLUKE introduces controlled variations across linguistic levels, from orthography to dialect and style, and leverages large language models (LLMs) with human validation to generate modifications. We demonstrate FLUKE’s utility by evaluating both fine-tuned models and LLMs across six diverse NLP tasks (four classification and two generation tasks), and reveal that (1) the impact of linguistic variations is highly task-dependent, with some tests being critical for certain tasks but irrelevant for others; (2) LLMs still exhibit significant brittleness to certain linguistic variations, with reasoning LLMs surprisingly showing less robustness on some tasks than base models, and scaling improving robustness only for surface-level modifications; (3) models are overall more brittle to natural, fluent modifications such as syntax or style changes (and especially to negation) than to corruption-style tests such as letter flipping; (4) the ability of a model to use a linguistic feature in generation does not correlate with its robustness to that feature on downstream tasks. These findings highlight the importance of systematic robustness testing for understanding model behaviors.
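As a rough illustration of the evaluation loop, a minimal Python sketch follows. The helper names are hypothetical; in the actual framework, the modified inputs are generated by LLMs and validated by humans rather than hand-written.

```python
# Minimal sketch of a FLUKE-style robustness check (hypothetical API):
# measure how often a model's prediction flips under a minimal variation.
def robustness_drop(model_fn, pairs):
    """Fraction of examples whose prediction changes under modification.

    pairs: list of (original_text, modified_text) tuples, where the
    modification alters one linguistic property (e.g. adds a negation).
    """
    flips = sum(model_fn(orig) != model_fn(mod) for orig, mod in pairs)
    return flips / len(pairs)

# Toy sentiment model and a hand-written negation test case.
def toy_model(text):
    return "neg" if "not" in text.split() else "pos"

pairs = [("the food was good", "the food was not good")]
print(robustness_drop(toy_model, pairs))  # 1.0: the toy model flips
```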
2024
Intervention extraction in preclinical animal studies of Alzheimer’s Disease: Enhancing regex performance with language model-based filtering
Yiyuan Pu | Kaitlyn Hair | Daniel Beck | Mike Conway | Malcolm MacLeod | Karin Verspoor
Proceedings of the 23rd Workshop on Biomedical Natural Language Processing
We explore different information extraction tools for annotating interventions to support automated systematic reviews of preclinical AD animal studies. We compare two PICO (Population, Intervention, Comparison, and Outcome) extraction tools and two prompting-based learning strategies based on Large Language Models (LLMs). Motivated by the high recall of a dictionary-based approach, we define a two-stage method that removes false positives obtained from regexes with a pre-trained LM. With ChatGPT-based filtering using three-shot prompting, our approach removes almost two-thirds of the false positives produced by the dictionary approach alone, while outperforming knowledge-free instructional prompting.
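A minimal sketch of the two-stage idea, assuming a toy term dictionary and a generic `ask_llm` callable standing in for the ChatGPT call; the prompt wording below is illustrative, not the paper's exact prompt.

```python
import re

# Stage 1: high-recall regex pass over a dictionary of intervention terms.
# Stage 2: an LLM used as a yes/no filter to discard false positives.
INTERVENTION_TERMS = ["donepezil", "memantine", "exercise"]  # toy dictionary
PATTERN = re.compile(r"\b(" + "|".join(INTERVENTION_TERMS) + r")\b", re.I)

def extract_interventions(sentence, ask_llm):
    candidates = [m.group(0) for m in PATTERN.finditer(sentence)]
    kept = []
    for term in candidates:
        # In the paper's setup, three few-shot examples would be
        # prepended to this prompt.
        prompt = (f'In the sentence "{sentence}", is "{term}" used as a '
                  f"treatment intervention? Answer yes or no.")
        if ask_llm(prompt).strip().lower().startswith("yes"):
            kept.append(term)
    return kept
```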
2023
Predicting Empathic Accuracy from User-Designer Interviews
Steven Nguyen | Daniel Beck | Katja Holtta-Otto
Proceedings of the 21st Annual Workshop of the Australasian Language Technology Association
Measuring empathy as a natural language processing task has often been limited to a subjective measure of how well individuals respond to each other in emotive situations. Cognitive empathy, or an individual’s ability to accurately assess another individual’s thoughts, remains a more novel task. In this paper, we explore natural language processing techniques to measure cognitive empathy using paired sentence data from design interviews. Our findings show that an unsupervised approach based on similarity of vectors from a Large Language Model is surprisingly promising, while adding supervision does not necessarily improve the performance. An analysis of the results highlights potential reasons for this behaviour and gives directions for future work in this space.
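The unsupervised approach can be illustrated as cosine similarity between sentence embeddings; `embed` below is a stand-in for any sentence encoder, not the paper's exact model.

```python
import numpy as np

# Sketch: score cognitive empathy as the cosine similarity between the
# embedding of what the user actually said/thought and the embedding of
# what the designer believed they thought.
def empathic_accuracy_score(user_sentence, designer_guess, embed):
    u, d = embed(user_sentence), embed(designer_guess)
    return float(np.dot(u, d) / (np.linalg.norm(u) * np.linalg.norm(d)))
```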
Team:PULSAR at ProbSum 2023: PULSAR: Pre-training with Extracted Healthcare Terms for Summarising Patients’ Problems and Data Augmentation with Black-box Large Language Models
Hao Li | Yuping Wu | Viktor Schlegel | Riza Batista-Navarro | Thanh-Tung Nguyen | Abhinav Ramesh Kashyap | Xiao-Jun Zeng | Daniel Beck | Stefan Winkler | Goran Nenadic
Proceedings of the 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks
Medical progress notes play a crucial role in documenting a patient’s hospital journey, including his or her condition, treatment plan, and any updates for healthcare providers. Automatic summarisation of a patient’s problems in the form of a “problem list” can aid stakeholders in understanding a patient’s condition, reducing workload and cognitive bias. BioNLP 2023 Shared Task 1A focusses on generating a list of diagnoses and problems from the provider’s progress notes during hospitalisation. In this paper, we introduce our proposed approach to this task, which integrates two complementary components. One component employs large language models (LLMs) for data augmentation; the other is an abstractive summarisation LLM with a novel pre-training objective for generating the patients’ problems summarised as a list. Our approach was ranked second among all submissions to the shared task. The performance of our model on the development and test datasets shows that our approach is more robust on unseen data, with an improvement of up to 3.1 points over a baseline model of the same size.
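The black-box augmentation component can be sketched roughly as prompting an LLM for synthetic (note, problem list) pairs; the prompt wording and the `ask_llm` helper are illustrative assumptions, not the paper's exact setup.

```python
# Sketch of black-box data augmentation: ask an LLM to write a new
# progress note for a given problem list, yielding an extra training pair.
def augment(problem_list, ask_llm):
    prompt = ("Write a short hospital progress note for a patient whose "
              "active problems are: " + "; ".join(problem_list))
    synthetic_note = ask_llm(prompt)
    return synthetic_note, problem_list  # (input, target) training pair
```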
Performance Prediction via Bayesian Matrix Factorisation for Multilingual Natural Language Processing Tasks
Viktoria Schram | Daniel Beck | Trevor Cohn
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics
Performance prediction for Natural Language Processing (NLP) seeks to reduce the experimental burden resulting from the myriad of different evaluation scenarios, e.g., the combination of languages used in multilingual transfer. In this work, we explore the framework of Bayesian matrix factorisation for performance prediction, as many experimental settings in NLP can be naturally represented in matrix format. Our approach outperforms the state-of-the-art in several NLP benchmarks, including machine translation and cross-lingual entity linking. Furthermore, it also avoids hyperparameter tuning and is able to provide uncertainty estimates over predictions.
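To make the setup concrete, here is a plain (non-Bayesian) matrix factorisation sketch in NumPy; the paper's Bayesian treatment additionally provides uncertainty estimates, which this point-estimate version omits.

```python
import numpy as np

# Sketch: rows are models/settings, columns are e.g. language pairs, entries
# are scores, most of them unobserved. Factorise to fill in the blanks.
def factorise(S, mask, rank=2, steps=2000, lr=0.01, reg=0.1, seed=0):
    rng = np.random.default_rng(seed)
    n, m = S.shape
    U = rng.normal(0, 0.1, (n, rank))
    V = rng.normal(0, 0.1, (m, rank))
    for _ in range(steps):
        err = mask * (U @ V.T - S)        # error on observed cells only
        U -= lr * (err @ V + reg * U)     # gradient steps with L2 penalty
        V -= lr * (err.T @ U + reg * V)
    return U @ V.T                        # predictions for all cells

S = np.array([[0.6, 0.7, 0.0],            # 0.0 marks a missing score
              [0.5, 0.0, 0.4]])
mask = (S > 0).astype(float)
print(factorise(S, mask).round(2))
```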
Modeling Emotion Dynamics in Song Lyrics with State Space Models
Yingjin Song | Daniel Beck
Transactions of the Association for Computational Linguistics, Volume 11
Most previous work in music emotion recognition assumes a single or a few song-level labels for the whole song. While it is known that different emotions can vary in intensity within a song, annotated data for this setup is scarce and difficult to obtain. In this work, we propose a method to predict emotion dynamics in song lyrics without song-level supervision. We frame each song as a time series and employ a State Space Model (SSM), combining a sentence-level emotion predictor with an Expectation-Maximization (EM) procedure to generate the full emotion dynamics. Our experiments show that applying our method consistently improves the performance of sentence-level baselines without requiring any annotated songs, making it ideal for limited training data scenarios. Further analysis through case studies shows the benefits of our method while also indicating the limitations and pointing to future directions.
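The state-space idea can be illustrated with a scalar Kalman filter and smoother over noisy sentence-level emotion scores; unlike the paper, which learns the noise parameters with EM, this sketch fixes them by hand.

```python
import numpy as np

# Sketch: treat noisy per-sentence emotion scores as emissions of a latent
# random walk and recover the smooth emotion trajectory.
def kalman_smooth(y, q=0.01, r=0.25):
    """q = process noise, r = emission noise (fixed here; learned via EM
    in the paper's setup)."""
    n = len(y)
    m, p = np.zeros(n), np.zeros(n)       # filtered mean / variance
    m[0], p[0] = y[0], r
    for t in range(1, n):                 # forward pass (filter)
        p_pred = p[t - 1] + q
        k = p_pred / (p_pred + r)         # Kalman gain
        m[t] = m[t - 1] + k * (y[t] - m[t - 1])
        p[t] = (1 - k) * p_pred
    s = m.copy()
    for t in range(n - 2, -1, -1):        # backward pass (RTS smoother)
        g = p[t] / (p[t] + q)
        s[t] = m[t] + g * (s[t + 1] - m[t])
    return s

noisy = np.array([0.1, 0.9, 0.2, 0.8, 0.85, 0.9])  # per-sentence scores
print(kalman_smooth(noisy).round(2))
```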
MMT’s Submission for the WMT 2023 Quality Estimation Shared Task
Yulong Wu | Viktor Schlegel | Daniel Beck | Riza Batista-Navarro
Proceedings of the Eighth Conference on Machine Translation
This paper presents our submission to the WMT 2023 Quality Estimation (QE) shared task 1 (sentence-level subtask). We propose a straightforward training data augmentation approach aimed at improving the correlation between QE model predictions and human quality assessments. Using eleven data augmentation approaches and six distinct language pairs, we systematically create augmented training sets by individually applying each method to the original training set of each language pair. We assess the effectiveness of each augmentation method by the performance gap, measured on the development set, between the model before and after training on the augmented dataset. Experimental results reveal that synonym replacement via the Paraphrase Database (PPDB) yields the most substantial performance boost for English-German, English-Marathi and English-Gujarati, while for the remaining language pairs, methods such as contextual word-embedding-based word insertion, back translation, and direct paraphrasing prove more effective. Training on a larger and more diverse set of samples confers further performance improvements for certain language pairs, albeit marginal ones, and the effect is not universal. At submission time, for each language pair except English-German, we selected the model trained on the augmented dataset constructed with the respective most effective method to generate predictions for the test set. Despite not being highly competitive, our system consistently surpasses the baseline on most language pairs and secures a third-place ranking for English-Marathi.
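The best-performing augmentation for several pairs, synonym replacement, reduces to a simple substitution loop; the toy dictionary below stands in for PPDB, and each augmented source sentence keeps its original quality label.

```python
import random

# Sketch of synonym-replacement augmentation (toy synonym table in place
# of PPDB): produce a lexically varied copy of each source sentence.
SYNONYMS = {"quick": ["fast", "rapid"], "error": ["mistake", "fault"]}

def synonym_replace(sentence, rng=random.Random(0)):
    words = sentence.split()
    out = [rng.choice(SYNONYMS[w]) if w in SYNONYMS else w for w in words]
    return " ".join(out)

print(synonym_replace("a quick fix for the error"))
```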
2022
Uncertainty Estimation and Reduction of Pre-trained Models for Text Regression
Yuxia Wang | Daniel Beck | Timothy Baldwin | Karin Verspoor
Transactions of the Association for Computational Linguistics, Volume 10
State-of-the-art classification and regression models are often not well calibrated, and cannot reliably provide uncertainty estimates, limiting their utility in safety-critical applications such as clinical decision-making. While recent work has focused on calibration of classifiers, there is almost no work in NLP on calibration in a regression setting. In this paper, we quantify the calibration of pre-trained language models for text regression, both intrinsically and extrinsically. We further apply uncertainty estimates to augment training data in low-resource domains. Our experiments on three regression tasks in both self-training and active-learning settings show that uncertainty estimation can be used to increase overall performance and enhance model generalization.
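One standard way to obtain such uncertainty estimates from a pre-trained model is Monte Carlo dropout, sketched below; this is an illustrative estimator, not necessarily the exact one used in the paper.

```python
import torch

# Sketch: keep dropout active at test time and read the spread of several
# stochastic forward passes as a per-example uncertainty estimate.
def mc_dropout_predict(model, x, n_samples=20):
    model.train()                        # leave dropout layers switched on
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(0), preds.std(0)   # point estimate, uncertainty
```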
2021
Evaluating Hierarchical Document Categorisation
Qian Sun | Aili Shen | Hiyori Yoshikawa | Chunpeng Ma | Daniel Beck | Tomoya Iwakura | Timothy Baldwin
Proceedings of the 19th Annual Workshop of the Australasian Language Technology Association
Hierarchical document categorisation is a special case of multi-label document categorisation, where there is a taxonomic hierarchy among the labels. While various approaches have been proposed for hierarchical document categorisation, there is no standard benchmark dataset, resulting in different methods being evaluated independently and there being no empirical consensus on what methods perform best. In this work, we examine different combinations of neural text encoders and hierarchical methods in an end-to-end framework, and evaluate over three datasets. We find that the performance of hierarchical document categorisation is determined not only by how the hierarchical information is modelled, but also the structure of the label hierarchy and class distribution.
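One mechanical detail such an evaluation must handle is label closure: a predicted node implies all of its ancestors in the taxonomy. A sketch with a toy hierarchy (the `PARENT` table is illustrative):

```python
# Expand a set of predicted labels upward through the taxonomy before
# computing multi-label metrics.
PARENT = {"sports.football": "sports", "sports": None, "politics": None}

def expand_with_ancestors(labels):
    closed = set()
    for label in labels:
        while label is not None:
            closed.add(label)
            label = PARENT[label]
    return closed

print(expand_with_ancestors({"sports.football"}))  # adds "sports"
```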
On the (In)Effectiveness of Images for Text Classification
Chunpeng Ma | Aili Shen | Hiyori Yoshikawa | Tomoya Iwakura | Daniel Beck | Timothy Baldwin
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume
Images are core components of multi-modal learning in natural language processing (NLP), and results have varied substantially as to whether images improve NLP tasks or not. One confounding effect has been that previous NLP research has generally focused on sophisticated tasks (in varying settings), generally applied to English only. We focus on text classification, in the context of assigning named entity classes to a given Wikipedia page, where images generally complement the text and the Wikipedia page can be in one of a number of different languages. Our experiments across a range of languages show that images complement NLP models (including BERT) trained without external pre-training, but when combined with BERT models pre-trained on large-scale external data, images contribute nothing.
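A minimal sketch of the kind of fusion examined, concatenating a text vector and an image vector before a classification head; the dimensions and architecture here are illustrative, not the paper's exact setup.

```python
import torch
import torch.nn as nn

# Sketch of simple late fusion for text + image classification.
class TextImageClassifier(nn.Module):
    def __init__(self, text_dim=768, image_dim=2048, n_classes=9):
        super().__init__()
        self.head = nn.Linear(text_dim + image_dim, n_classes)

    def forward(self, text_vec, image_vec):
        # Concatenate the two modality vectors and classify.
        return self.head(torch.cat([text_vec, image_vec], dim=-1))
```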
Generating Diverse Descriptions from Semantic Graphs
Jiuzhou Han | Daniel Beck | Trevor Cohn
Proceedings of the 14th International Conference on Natural Language Generation
Text generation from semantic graphs is traditionally performed with deterministic methods, which generate a unique description given an input graph. However, the generation problem admits a range of acceptable textual outputs, exhibiting lexical, syntactic and semantic variation. To address this disconnect, we present two main contributions. First, we propose a stochastic graph-to-text model, incorporating a latent variable in an encoder-decoder model, and its use in an ensemble. Second, to assess the diversity of the generated sentences, we propose a new automatic evaluation metric which jointly evaluates output diversity and quality in a multi-reference setting. We evaluate the models on WebNLG datasets in English and Russian, and show that an ensemble of stochastic models produces diverse sets of generated sentences while retaining similar quality to state-of-the-art models.
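The diversity side of the evaluation can be approximated as average pairwise dissimilarity over the generated set; the token-overlap proxy below is an illustrative simplification, since the paper's metric also jointly scores quality against multiple references.

```python
from itertools import combinations

# Sketch: diversity of a set of generated descriptions as mean pairwise
# dissimilarity (1 - token-overlap F1).
def overlap_f1(a, b):
    a, b = set(a.split()), set(b.split())
    if not a or not b:
        return 0.0
    p, r = len(a & b) / len(b), len(a & b) / len(a)
    return 2 * p * r / (p + r) if p + r else 0.0

def diversity(outputs):
    pairs = list(combinations(outputs, 2))
    return sum(1 - overlap_f1(x, y) for x, y in pairs) / len(pairs)

print(diversity(["a red car", "a crimson automobile", "a red car parked"]))
```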
2020
Proceedings of the 18th Annual Workshop of the Australasian Language Technology Association
Maria Kim | Daniel Beck | Meladel Mistica
Proceedings of the 18th Annual Workshop of the Australasian Language Technology Association
Information Extraction from Legal Documents: A Study in the Context of Common Law Court Judgements
Meladel Mistica | Geordie Z. Zhang | Hui Chia | Kabir Manandhar Shrestha | Rohit Kumar Gupta | Saket Khandelwal | Jeannie Paterson | Timothy Baldwin | Daniel Beck
Proceedings of the 18th Annual Workshop of the Australasian Language Technology Association
‘Common Law’ judicial systems follow the doctrine of precedent, which means the legal principles articulated in court judgements are binding in subsequent cases in lower courts. For this reason, lawyers must search prior judgements for the legal principles that are relevant to their case. The difficulty for those within the legal profession is that the information that they are looking for may be contained within a few paragraphs or sentences, but those few paragraphs may be buried within a hundred-page document. In this study, we create a schema based on the relevant information that legal professionals seek within judgements and perform text classification based on it, with the aim of not only assisting lawyers in researching cases, but eventually enabling large-scale analysis of legal judgements to find trends in court outcomes over time.
Co-authors
- Trevor Cohn 11
- Lucia Specia 10
- Timothy Baldwin 5
- Kashif Shah 5
- Aili Shen 3
- Riza Theresa Batista-Navarro 2
- Frédéric Blain 2
- Gholamreza Haffari 2
- Tomoya Iwakura 2
- Chunpeng Ma 2
- Meladel Mistica 2
- Gustavo Paetzold 2
- Viktor Schlegel 2
- Karin Verspoor 2
- Hiyori Yoshikawa 2
- Ahmet Aker 1
- Fethi Bougares 1
- Bill Byrne 1
- Hui Chia 1
- Mike Conway 1
- Andres Duque 1
- Marina Fomicheva 1
- Rohit Kumar Gupta 1
- Kaitlyn Hair 1
- Jiuzhou Han 1
- Christian Hardmeier 1
- Katja Holtta-Otto 1
- Gonzalo Iglesias 1
- Saket Khandelwal 1
- Maria Kim 1
- Jey Han Lau 1
- Hao Li 1
- Varvara Logacheva 1
- Malcolm MacLeod 1
- Rahmad Mahendra 1
- Goran Nenadic 1
- Steven Nguyen 1
- Thanh-Tung Nguyen 1
- Julia Otmakhova 1
- Jeannie Paterson 1
- Yiyuan Pu 1
- Jianzhong Qi 1
- Abhinav Ramesh Kashyap 1
- Bahar Salehi 1
- Carolina Scarton 1
- Viktoria Schram 1
- Ulrich Schäfer 1
- Jurica Seva 1
- Kabir Manandhar Shrestha 1
- Karin Sim Smith 1
- Yingjin Song 1
- Qian Sun 1
- Thinh Hung Truong 1
- Andreas Vlachos 1
- Aurelien Waite 1
- Yuxia Wang 1
- Dalin Wang 1
- Stefan Winkler 1
- Yuping Wu 1
- Yulong Wu 1
- Xiao-Jun Zeng 1
- Zenan Zhai 1
- Geordie Z. Zhang 1
- Rongxin Zhu 1
- Adrià de Gispert 1