Samuele Garda
2022
Dataset Debt in Biomedical Language Modeling
Jason Fries | Natasha Seelam | Gabriel Altay | Leon Weber | Myungsun Kang | Debajyoti Datta | Ruisi Su | Samuele Garda | Bo Wang | Simon Ott | Matthias Samwald | Wojciech Kusa
Proceedings of BigScience Episode #5 -- Workshop on Challenges & Perspectives in Creating Large Language Models
Large-scale language modeling and natural language prompting have demonstrated exciting capabilities for few and zero shot learning in NLP. However, translating these successes to specialized domains such as biomedicine remains challenging, due in part to biomedical NLP’s significant dataset debt – the technical costs associated with data that are not consistently documented or easily incorporated into popular machine learning frameworks at scale. To assess this debt, we crowdsourced curation of datasheets for 167 biomedical datasets. We find that only 13% of datasets are available via programmatic access and 30% lack any documentation on licensing and permitted reuse. Our dataset catalog is available at: https://tinyurl.com/bigbio22.
2021
Extend, don’t rebuild: Phrasing conditional graph modification as autoregressive sequence labelling
Leon Weber | Jannes Münchmeyer | Samuele Garda | Ulf Leser
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Deriving and modifying graphs from natural language text has become a versatile basis technology for information extraction with applications in many subfields, such as semantic parsing or knowledge graph construction. A recent work used this technique for modifying scene graphs (He et al. 2020), by first encoding the original graph and then generating the modified one based on this encoding. In this work, we show that we can considerably increase performance on this problem by phrasing it as graph extension instead of graph generation. We propose the first model for the resulting graph extension problem based on autoregressive sequence labelling. On three scene graph modification data sets, this formulation leads to improvements in accuracy over the state-of-the-art between 13 and 24 percentage points. Furthermore, we introduce a novel data set from the biomedical domain which has much larger linguistic variability and more complex graphs than the scene graph modification data sets. For this data set, the state-of-the-art fails to generalize, while our model can produce meaningful predictions.
Co-authors
- Leon Weber 2
- Jannes Münchmeyer 1
- Ulf Leser 1
- Jason Fries 1
- Natasha Seelam 1