Matthias Samwald


2022

Dataset Debt in Biomedical Language Modeling
Jason Fries | Natasha Seelam | Gabriel Altay | Leon Weber | Myungsun Kang | Debajyoti Datta | Ruisi Su | Samuele Garda | Bo Wang | Simon Ott | Matthias Samwald | Wojciech Kusa
Proceedings of BigScience Episode #5 -- Workshop on Challenges & Perspectives in Creating Large Language Models

Large-scale language modeling and natural language prompting have demonstrated exciting capabilities for few- and zero-shot learning in NLP. However, translating these successes to specialized domains such as biomedicine remains challenging, due in part to biomedical NLP’s significant dataset debt – the technical costs associated with data that are not consistently documented or easily incorporated into popular machine learning frameworks at scale. To assess this debt, we crowdsourced curation of datasheets for 167 biomedical datasets. We find that only 13% of datasets are available via programmatic access and 30% lack any documentation on licensing and permitted reuse. Our dataset catalog is available at: https://tinyurl.com/bigbio22.
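As a hedged illustration of what "programmatic access" means in practice, the sketch below loads a dataset through the Hugging Face `datasets` library; the dataset identifier is a placeholder for illustration, not a confirmed entry in the catalog.

```python
# Minimal sketch of programmatic dataset access via the Hugging Face
# `datasets` library. The dataset identifier below is hypothetical and
# only illustrates the loading pattern, not an actual catalog entry.
from datasets import load_dataset

corpus = load_dataset("bigbio/example_corpus")  # hypothetical identifier
print(corpus)              # available splits and their sizes
print(corpus["train"][0])  # first training example
```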

A global analysis of metrics used for measuring performance in natural language processing
Kathrin Blagec | Georg Dorffner | Milad Moradi | Simon Ott | Matthias Samwald
Proceedings of NLP Power! The First Workshop on Efficient Benchmarking in NLP

Measuring the performance of natural language processing models is challenging. Traditionally used metrics, such as BLEU and ROUGE, originally devised for machine translation and summarization, have been shown to suffer from low correlation with human judgment and a lack of transferability to other tasks and languages. In the past 15 years, a wide range of alternative metrics have been proposed. However, it is unclear to what extent this has had an impact on NLP benchmarking efforts. Here we provide the first large-scale cross-sectional analysis of metrics used for measuring performance in natural language processing. We curated, mapped, and systematized more than 3500 machine learning model performance results from the open repository ‘Papers with Code’ to enable a global and comprehensive analysis. Our results suggest that the large majority of natural language processing metrics currently used have properties that may result in an inadequate reflection of a model’s performance. Furthermore, we found that ambiguities and inconsistencies in the reporting of metrics may lead to difficulties in interpreting and comparing model performance, impairing transparency and reproducibility in NLP research.
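As a hedged illustration (not taken from the paper), the sketch below shows one common source of such reporting ambiguity: two widely used BLEU implementations applied to the same sentence pair report scores on different scales and with different defaults, so a bare "BLEU" number is hard to interpret or compare.

```python
# Illustrative sketch: the same hypothesis/reference pair can yield
# different "BLEU" numbers depending on the implementation and its defaults.
import sacrebleu
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

hypothesis = "the cat sat on the mat"
reference = "there is a cat on the mat"

# sacreBLEU: standardized tokenization, corpus-level score on a 0-100 scale
sacre = sacrebleu.corpus_bleu([hypothesis], [[reference]]).score

# NLTK: sentence-level score on a 0-1 scale, pre-tokenized input,
# smoothing needed for short segments
nltk_bleu = sentence_bleu(
    [reference.split()], hypothesis.split(),
    smoothing_function=SmoothingFunction().method1,
)

print(f"sacreBLEU: {sacre:.2f} (0-100 scale)")
print(f"NLTK BLEU: {nltk_bleu:.4f} (0-1 scale)")
```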

2021

Evaluating the Robustness of Neural Language Models to Input Perturbations
Milad Moradi | Matthias Samwald
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

High-performance neural language models have obtained state-of-the-art results on a wide range of Natural Language Processing (NLP) tasks. However, results for common benchmark datasets often do not reflect model reliability and robustness when applied to noisy, real-world data. In this study, we design and implement various types of character-level and word-level perturbation methods to simulate realistic scenarios in which input texts may be slightly noisy or different from the data distribution on which NLP systems were trained. Conducting comprehensive experiments on different NLP tasks, we investigate the ability of high-performance language models such as BERT, XLNet, RoBERTa, and ELMo to handle different types of input perturbations. The results suggest that language models are sensitive to input perturbations and that their performance can decrease even when small changes are introduced. We highlight that models need to be further improved and that current benchmarks do not reflect model robustness well. We argue that evaluations on perturbed inputs should routinely complement widely-used benchmarks in order to yield a more realistic understanding of NLP systems’ robustness.
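The sketch below gives a hedged sense of what such character-level and word-level perturbations might look like in code; the specific operations and rates are illustrative, not the exact ones implemented in the paper.

```python
# Hedged sketch of character-level and word-level input perturbations
# of the kind described above; operations and rates are illustrative.
import random

def char_swap(text: str, rate: float = 0.05) -> str:
    """Randomly swap adjacent characters to simulate typos."""
    chars = list(text)
    for i in range(len(chars) - 1):
        if random.random() < rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def word_drop(text: str, rate: float = 0.1) -> str:
    """Randomly delete words to simulate noisy or truncated input."""
    words = text.split()
    kept = [w for w in words if random.random() >= rate]
    return " ".join(kept) if kept else text

sentence = "Neural language models obtain state-of-the-art results on many NLP tasks."
print(char_swap(sentence))
print(word_drop(sentence))
```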

2019

Using hyperbolic large-margin classifiers for biological link prediction
Asan Agibetov | Georg Dorffner | Matthias Samwald
Proceedings of the 5th Workshop on Semantic Deep Learning (SemDeep-5)