Attention Understands Semantic Relations
Anastasia Chizhikova | Sanzhar Murzakhmetov | Oleg Serikov | Tatiana Shavrina | Mikhail Burtsev
Proceedings of the Thirteenth Language Resources and Evaluation Conference (LREC 2022)
Today, natural language processing relies heavily on pre-trained large language models. Even though such models are criticized for their poor interpretability, they still yield state-of-the-art solutions for a wide range of tasks. While many probing studies have been conducted to measure models' awareness of grammatical knowledge, semantic probing is less common. In this work, we introduce a probing pipeline to study how semantic relations are represented in transformer language models. We show that in this task, attention scores are nearly as expressive as the layers' output activations, despite their lesser ability to represent surface cues. This supports the hypothesis that attention mechanisms capture not only syntactic relational information but also semantic relations.
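A minimal sketch of an attention-based probing setup, in the spirit of the pipeline described above but not the authors' exact implementation: per-head attention scores between two word positions are extracted from a pre-trained transformer and fed to a linear probe that predicts a semantic relation label. The model name, token positions, toy sentences, and relation labels below are illustrative assumptions.

```python
# Probing sketch: attention scores between two positions as features
# for a linear classifier over semantic relation labels.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)
model.eval()

def attention_features(sentence, head_idx, tail_idx):
    """Attention between two token positions, one feature per (layer, head, direction)."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    feats = []
    # outputs.attentions: one tensor of shape (1, num_heads, seq_len, seq_len) per layer
    for layer_attn in outputs.attentions:
        attn = layer_attn[0]                       # drop the batch dimension
        feats.append(attn[:, head_idx, tail_idx])  # attention from head word to tail word
        feats.append(attn[:, tail_idx, head_idx])  # attention from tail word to head word
    return torch.cat(feats).numpy()

# Toy examples (positions count [CLS] as 0 and assume single-wordpiece tokens).
examples = [
    ("The cat chased the mouse.", 2, 5, "agent-patient"),
    ("The dog bit the boy.", 2, 5, "agent-patient"),
    ("The cup is on the table.", 2, 6, "location"),
    ("The book is on the desk.", 2, 6, "location"),
]
X = [attention_features(s, h, t) for s, h, t, _ in examples]
y = [label for *_, label in examples]

# Fit a linear probe; in a real study this would be evaluated on held-out data.
probe = LogisticRegression(max_iter=1000).fit(X, y)
print("train accuracy:", probe.score(X, y))
```

Replacing the attention features with the layers' output activations at the same positions gives the comparison point mentioned in the abstract: if a linear probe on attention scores performs close to one on hidden states, the attention maps themselves carry much of the relational signal.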