Abstract
We investigate how sentence-level transformers can be modified into effective sequence labelers at the token level without any direct supervision. Existing approaches to zero-shot sequence labeling do not perform well when applied to transformer-based architectures. As transformers contain multiple layers of multi-head self-attention, information in the sentence gets distributed across many tokens, negatively affecting zero-shot token-level performance. We find that a soft attention module which explicitly encourages sharpness of attention weights can significantly outperform existing methods.
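The abstract describes a soft attention module over token representations, so that a classifier trained only with sentence-level labels also produces token-level scores. As a rough illustration only, the sketch below shows one way such a module could look in PyTorch; the module structure, the sigmoid token scores, and the min/max "sharpness" regulariser are assumptions in the spirit of earlier soft-attention zero-shot labelers, not the authors' implementation (see the linked repository bujol12/bert-seq-interpretability for that).

```python
# Minimal sketch (assumption: PyTorch + token representations from any transformer encoder).
# Names, losses and hyperparameters are illustrative, not taken from the paper.
import torch
import torch.nn as nn


class SoftAttentionLabeler(nn.Module):
    """Sentence classifier whose per-token attention scores double as zero-shot token labels."""

    def __init__(self, hidden_size: int, num_sentence_labels: int = 2):
        super().__init__()
        # Per-token scorer: one unnormalised "evidence" score per token.
        self.token_scorer = nn.Sequential(
            nn.Linear(hidden_size, hidden_size),
            nn.Tanh(),
            nn.Linear(hidden_size, 1),
        )
        self.sentence_head = nn.Linear(hidden_size, num_sentence_labels)

    def forward(self, token_reprs, attention_mask):
        # token_reprs: (batch, seq_len, hidden), e.g. the encoder's last hidden states.
        scores = self.token_scorer(token_reprs).squeeze(-1)       # (batch, seq_len)
        scores = scores.masked_fill(attention_mask == 0, -1e4)    # ignore padding
        token_probs = torch.sigmoid(scores)                       # per-token scores in [0, 1]
        attn = torch.softmax(scores, dim=-1)                      # normalised attention weights
        sentence_repr = torch.bmm(attn.unsqueeze(1), token_reprs).squeeze(1)
        sentence_logits = self.sentence_head(sentence_repr)
        return sentence_logits, token_probs, attn


def sharpness_regulariser(token_probs, attention_mask, sentence_label):
    # One common way to push attention towards sharp, label-consistent weights
    # (an assumption here, in the spirit of earlier soft-attention labelers):
    # the smallest token score should be near 0, the largest near the binary sentence label.
    masked_min = token_probs.masked_fill(attention_mask == 0, 1.0).min(dim=-1).values
    masked_max = token_probs.masked_fill(attention_mask == 0, 0.0).max(dim=-1).values
    return (masked_min ** 2 + (masked_max - sentence_label.float()) ** 2).mean()
```

Training would use only the sentence-level cross-entropy loss plus the regulariser; at inference, the per-token scores can be read off as zero-shot token labels.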
- Anthology ID: 2021.repl4nlp-1.20
- Volume: Proceedings of the 6th Workshop on Representation Learning for NLP (RepL4NLP-2021)
- Month: August
- Year: 2021
- Address: Online
- Venue: RepL4NLP
- Publisher: Association for Computational Linguistics
- Pages: 195–205
- URL: https://aclanthology.org/2021.repl4nlp-1.20
- DOI: 10.18653/v1/2021.repl4nlp-1.20
- Cite (ACL): Kamil Bujel, Helen Yannakoudakis, and Marek Rei. 2021. Zero-shot Sequence Labeling for Transformer-based Sentence Classifiers. In Proceedings of the 6th Workshop on Representation Learning for NLP (RepL4NLP-2021), pages 195–205, Online. Association for Computational Linguistics.
- Cite (Informal): Zero-shot Sequence Labeling for Transformer-based Sentence Classifiers (Bujel et al., RepL4NLP 2021)
- PDF: https://preview.aclanthology.org/paclic-22-ingestion/2021.repl4nlp-1.20.pdf
- Code: bujol12/bert-seq-interpretability
- Data: FCE