Bela Bohlender


2025

CLEAR-Command: Coordinated Listening, Extraction, and Allocation for Emergency Response with Large Language Models
Achref Doula | Bela Bohlender | Max Mühlhäuser | Alejandro Sanchez Guinea
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (System Demonstrations)

Effective communication is vital in emergency response scenarios where clarity and speed can save lives. Traditional systems often struggle under the chaotic conditions of real-world emergencies, leading to breakdowns in communication and task management. This paper introduces CLEAR-Command, a system that leverages Large Language Models (LLMs) to enhance emergency communications. CLEAR stands for Coordinated Listening, Extraction, and Allocation in Response. CLEAR-Command automates the transcription, summarization, and task extraction from live radio communications of emergency first responders, using the OpenAI Whisper API for transcription and gpt-4o for summarization and task extraction. Our system provides a dynamic overview of task allocations and their execution status, significantly improving the accuracy of task identification and the clarity of communication. We evaluated our system through an expert pre-study with 4 experts and a user study with 13 participants. The expert pre-study identified gpt-4o as providing the most accurate task extraction, while the user study showed that CLEAR-Command significantly outperforms traditional radio communication in terms of clarity, trust, and correctness of task extraction. Our demo is hosted under this link, and all project details are presented in our Gitlab page.
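
As a rough illustration of the pipeline the abstract describes (Whisper transcription followed by gpt-4o summarization and task extraction), a minimal sketch using the OpenAI Python SDK might look like the following; the file name, prompt wording, and output format are assumptions for illustration, not the paper's actual prompts or data flow.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Step 1: transcribe a recorded radio snippet with the Whisper API
# ("radio_clip.wav" is a placeholder file name)
with open("radio_clip.wav", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

# Step 2: ask gpt-4o to summarize the traffic and extract tasks
# (the prompt here is illustrative, not the system's actual prompt)
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "Summarize this emergency radio traffic and list each "
                    "task as: responder -> task -> status."},
        {"role": "user", "content": transcript.text},
    ],
)
print(response.choices[0].message.content)
```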

2022

AdapterHub Playground: Simple and Flexible Few-Shot Learning with Adapters
Tilman Beck | Bela Bohlender | Christina Viehmann | Vincent Hane | Yanik Adamson | Jaber Khuri | Jonas Brossmann | Jonas Pfeiffer | Iryna Gurevych
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: System Demonstrations

The open-access dissemination of pretrained language models through online repositories has led to a democratization of state-of-the-art natural language processing (NLP) research. This also allows people outside of NLP to use such models and adapt them to specific use cases. However, a certain amount of technical proficiency is still required, which is an entry barrier for users who want to apply these models to a certain task but lack the necessary knowledge or resources. In this work, we aim to overcome this gap by providing a tool which allows researchers to leverage pretrained models without writing a single line of code. Built upon the parameter-efficient adapter modules for transfer learning, our AdapterHub Playground provides an intuitive interface, allowing the usage of adapters for prediction, training and analysis of textual data for a variety of NLP tasks. We present the tool’s architecture and demonstrate its advantages with prototypical use cases, where we show that predictive performance can easily be increased in a few-shot learning scenario. Finally, we evaluate its usability in a user study. We provide the code and a live interface at https://adapter-hub.github.io/playground.
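
The Playground itself is a no-code web interface, but it builds on AdapterHub's adapter modules; a minimal sketch of the underlying library usage, assuming the `adapters` package and an illustrative sentiment adapter id from the Hub, could look like this.

```python
from adapters import AutoAdapterModel
from transformers import AutoTokenizer

# Load a base model and attach a pretrained task adapter
# (the adapter id below is an illustrative AdapterHub sentiment adapter)
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoAdapterModel.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-sst2")
model.set_active_adapters(adapter_name)

# Run a prediction with the adapter's classification head
inputs = tokenizer("The interface is intuitive and easy to use.",
                   return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.argmax(dim=-1))  # predicted class index
```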