Bettina Speckmann


2022

Story Trees: Representing Documents using Topological Persistence
Pantea Haghighatkhah | Antske Fokkens | Pia Sommerauer | Bettina Speckmann | Kevin Verbeek
Proceedings of the Thirteenth Language Resources and Evaluation Conference

Topological Data Analysis (TDA) focuses on the inherent shape of (spatial) data. As such, it may provide useful methods to explore spatial representations of linguistic data (embeddings), which have become central in NLP. In this paper we aim to introduce TDA to researchers in language technology. We use TDA to represent document structure as so-called story trees. Story trees are hierarchical representations created from semantic vector representations of sentences via persistent homology. They can be used to identify and clearly visualize prominent components of a story line. We showcase their potential by using story trees to create extractive summaries for news stories.
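As a rough illustration of the underlying machinery (not the authors' exact pipeline), the following Python sketch computes the 0-dimensional persistent homology of a set of sentence embeddings with a union-find sweep over pairwise distances; the resulting merge events define the kind of hierarchy from which a story tree can be built. The names zero_dim_persistence and sentence_vectors are illustrative assumptions, and the placeholder random vectors stand in for real sentence embeddings.

# Hypothetical sketch: 0-dimensional persistent homology over sentence
# embeddings via a Kruskal-style union-find sweep. Illustrative only; the
# paper's story-tree construction may differ in details.
import itertools
import numpy as np


def zero_dim_persistence(embeddings):
    """Return merge events (distance, component_a, component_b) in order.

    Each event records the scale at which two connected components of the
    distance-based filtration merge; the sequence defines a dendrogram.
    """
    n = len(embeddings)
    parent = list(range(n))

    def find(x):
        # Find the component root with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # All pairwise distances, sorted: the filtration order.
    edges = sorted(
        (np.linalg.norm(embeddings[i] - embeddings[j]), i, j)
        for i, j in itertools.combinations(range(n), 2)
    )

    merges = []
    for dist, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                      # two components merge at this scale
            parent[ri] = rj
            merges.append((dist, ri, rj))
    return merges


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sentence_vectors = rng.normal(size=(6, 16))   # placeholder embeddings
    for dist, a, b in zero_dim_persistence(sentence_vectors):
        print(f"components {a} and {b} merge at distance {dist:.3f}")

The merge events, read from smallest to largest distance, are exactly the hierarchical structure that a dendrogram (and, in the paper's setting, a story tree) encodes.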

Better Hit the Nail on the Head than Beat around the Bush: Removing Protected Attributes with a Single Projection
Pantea Haghighatkhah | Antske Fokkens | Pia Sommerauer | Bettina Speckmann | Kevin Verbeek
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing

Bias elimination and recent probing studies attempt to remove specific information from embedding spaces. Here it is important to remove as much of the target information as possible, while preserving any other information present. INLP is a popular recent method which removes specific information through iterative nullspace projections. Multiple iterations, however, increase the risk that information other than the target is negatively affected. We introduce two methods that find a single targeted projection: Mean Projection (MP, more efficient) and Tukey Median Projection (TMP, with theoretical guarantees). Our comparison between MP and INLP shows that (1) one MP projection removes linear separability based on the target and (2) MP has less impact on the overall space. Further analysis shows that applying random projections after MP leads to the same overall effects on the embedding space as the multiple projections of INLP. Applying one targeted (MP) projection hence is methodologically cleaner than applying multiple (INLP) projections that introduce random effects.
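For illustration, here is a minimal Python sketch of a single mean-difference projection in the spirit of MP, assuming the method projects embeddings onto the hyperplane orthogonal to the difference of the two class means for the protected attribute; the names mean_projection, X, and y are placeholders, and the paper should be consulted for the exact formulation and for the TMP variant.

# Hypothetical sketch of a single mean-difference projection (MP-style).
# Assumption: the removed direction is the unit vector between the two
# class means of the protected attribute.
import numpy as np


def mean_projection(X, y):
    """Project out the direction spanned by the difference of class means.

    X: (n, d) embedding matrix; y: binary labels for the protected attribute.
    Returns embeddings with that single direction removed.
    """
    v = X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0)
    v = v / np.linalg.norm(v)             # unit vector between class means
    return X - np.outer(X @ v, v)         # equivalent to X @ (I - v v^T)


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(100, 8))
    y = rng.integers(0, 2, size=100)
    X_clean = mean_projection(X, y)
    v = X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0)
    v /= np.linalg.norm(v)
    # After projection, the class means coincide along the removed direction.
    print(np.abs((X_clean[y == 1].mean(0) - X_clean[y == 0].mean(0)) @ v))

Because only one direction is removed, the rest of the embedding space is left untouched, which is the contrast with iterative approaches like INLP that the abstract highlights.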