Are Neural Networks Extracting Linguistic Properties or Memorizing Training Data? An Observation with a Multilingual Probe for Predicting Tense

Bingzhi Li, Guillaume Wisniewski


Abstract
We evaluate the ability of BERT embeddings to represent tense information, taking French and Chinese as a case study. In French, tense information is expressed by verb morphology and can be captured from simple surface cues. In contrast, tense interpretation in Chinese is driven by abstract lexical, syntactic, and even pragmatic information. We show that while French tenses can easily be predicted from sentence representations, results drop sharply for Chinese, suggesting that BERT is more likely to memorize shallow patterns from its training data than to uncover abstract linguistic properties.
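The abstract describes a probing setup: sentence embeddings from a frozen BERT encoder are fed to a simple classifier trained to predict tense. Below is a minimal sketch of such a diagnostic probe; the model name (bert-base-multilingual-cased), the mean-pooling choice, and the toy French sentences are illustrative assumptions, not the authors' exact configuration (see the linked code repository for that).

```python
# Minimal probing sketch: freeze a multilingual BERT encoder, extract one
# vector per sentence, and fit a linear classifier on tense labels.
# Model name, pooling, and data below are assumptions for illustration only.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
encoder = AutoModel.from_pretrained("bert-base-multilingual-cased")
encoder.eval()  # the encoder stays frozen; only the probe is trained

def embed(sentences):
    """Mean-pool the last hidden layer into one vector per sentence."""
    batch = tokenizer(sentences, padding=True, truncation=True,
                      return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state       # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)          # (B, T, 1)
    return ((hidden * mask).sum(1) / mask.sum(1)).numpy() # (B, H)

# Toy training examples; a real probe would use a labeled corpus.
train_sents = ["Il mangeait une pomme.", "Il mangera une pomme."]
train_labels = ["past", "future"]

probe = LogisticRegression(max_iter=1000)
probe.fit(embed(train_sents), train_labels)
print(probe.predict(embed(["Elle chantera demain."])))
```

The probe's accuracy is then read as a measure of how much tense information the frozen representations encode, which is why a large French/Chinese gap is informative: the classifier is identical, so the difference must come from the embeddings.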
Anthology ID:
2021.eacl-main.269
Volume:
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume
Month:
April
Year:
2021
Address:
Online
Editors:
Paola Merlo, Jörg Tiedemann, Reut Tsarfaty
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
3080–3089
URL:
https://aclanthology.org/2021.eacl-main.269
DOI:
10.18653/v1/2021.eacl-main.269
Cite (ACL):
Bingzhi Li and Guillaume Wisniewski. 2021. Are Neural Networks Extracting Linguistic Properties or Memorizing Training Data? An Observation with a Multilingual Probe for Predicting Tense. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 3080–3089, Online. Association for Computational Linguistics.
Cite (Informal):
Are Neural Networks Extracting Linguistic Properties or Memorizing Training Data? An Observation with a Multilingual Probe for Predicting Tense (Li & Wisniewski, EACL 2021)
PDF:
https://aclanthology.org/2021.eacl-main.269.pdf
Code:
bingzhilee/tense_representation_bert