“LazImpa”: Lazy and Impatient neural agents learn to communicate efficiently

Mathieu Rita, Rahma Chaabouni, Emmanuel Dupoux


Abstract
Previous work has shown that artificial neural agents naturally develop surprisingly inefficient codes. This is illustrated by the fact that in a referential game, where a speaker and a listener neural network optimize accurate transmission over a discrete channel, the emergent messages fail to achieve an optimal length. Furthermore, frequent messages tend to be longer than infrequent ones, a pattern contrary to the Zipf Law of Abbreviation (ZLA) observed in all natural languages. Here, we show that near-optimal and ZLA-compatible messages can emerge, but only if both the speaker and the listener are modified. We hence introduce a new communication system, “LazImpa”, where the speaker is made increasingly lazy, i.e., avoids long messages, and the listener impatient, i.e., seeks to guess the intended content as soon as possible.
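The abstract describes two pressures: a lazy speaker penalized for long messages, and an impatient listener that tries to guess the target after every symbol. The following is a minimal illustrative sketch of how such a combined objective could look; the specific loss forms (a linear length penalty, a cross-entropy averaged over message prefixes) and the weight `alpha` are assumptions for illustration, not the authors' actual implementation.

```python
import math

def lazy_speaker_penalty(message_length, alpha=0.1):
    """Lazy speaker: a cost growing with message length, discouraging
    long messages (assumed linear penalty; the paper's schedule may differ)."""
    return alpha * message_length

def impatient_listener_loss(step_probs_of_target):
    """Impatient listener: the listener outputs a guess after every symbol.
    The loss averages the cross-entropy of the target over all prefixes,
    so confident early guesses are rewarded (assumed formulation)."""
    n = len(step_probs_of_target)
    return sum(-math.log(p) for p in step_probs_of_target) / n

def lazimpa_loss(step_probs_of_target, alpha=0.1):
    """Combined objective: per-prefix listener loss plus length penalty."""
    length = len(step_probs_of_target)
    return impatient_listener_loss(step_probs_of_target) + lazy_speaker_penalty(length, alpha)
```

Under this sketch, a short message that the listener decodes confidently from its first symbols incurs a lower loss than a longer message carrying the same information, which is the pressure that would push frequent inputs toward short codes, consistent with ZLA.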
Anthology ID:
2020.conll-1.26
Volume:
Proceedings of the 24th Conference on Computational Natural Language Learning
Month:
November
Year:
2020
Address:
Online
Venue:
CoNLL
SIG:
SIGNLL
Publisher:
Association for Computational Linguistics
Pages:
335–343
URL:
https://aclanthology.org/2020.conll-1.26
DOI:
10.18653/v1/2020.conll-1.26
Cite (ACL):
Mathieu Rita, Rahma Chaabouni, and Emmanuel Dupoux. 2020. “LazImpa”: Lazy and Impatient neural agents learn to communicate efficiently. In Proceedings of the 24th Conference on Computational Natural Language Learning, pages 335–343, Online. Association for Computational Linguistics.
Cite (Informal):
“LazImpa”: Lazy and Impatient neural agents learn to communicate efficiently (Rita et al., CoNLL 2020)
PDF:
https://preview.aclanthology.org/ingestion-script-update/2020.conll-1.26.pdf
Optional supplementary material:
2020.conll-1.26.OptionalSupplementaryMaterial.zip