Natural Language Does Not Emerge ‘Naturally’ in Multi-Agent Dialog

Satwik Kottur, José Moura, Stefan Lee, Dhruv Batra


Abstract
A number of recent works have proposed techniques for end-to-end learning of communication protocols among cooperative multi-agent populations, and have simultaneously found the emergence of grounded human-interpretable language in the protocols developed by the agents, learned without any human supervision! In this paper, using a Task & Talk reference game between two agents as a testbed, we present a sequence of ‘negative’ results culminating in a ‘positive’ one – showing that while most agent-invented languages are effective (i.e. achieve near-perfect task rewards), they are decidedly not interpretable or compositional. In essence, we find that natural language does not emerge ‘naturally’, despite the semblance of ease of natural-language-emergence that one may gather from recent literature. We discuss how it is possible to coax the invented languages to become more and more human-like and compositional by increasing restrictions on how two agents may communicate.
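Based only on the abstract's description, the following is a minimal illustrative sketch (in Python) of a Task & Talk-style reference game: one agent privately observes an attributed object, the other is assigned a pair of attributes to discover through dialog, and reward depends on reporting both correctly. The attribute names, value counts, and reward scheme here are assumptions for illustration, not the paper's exact configuration; see the released code (batra-mlp-lab/lang-emerge, linked below) for the actual setup.

# Illustrative sketch of a Task & Talk-style reference game.
# Attribute names, value counts, and the reward scheme are assumptions,
# not the paper's exact configuration.
import random

ATTRIBUTES = {
    "color": ["red", "green", "blue", "purple"],
    "shape": ["square", "triangle", "circle", "star"],
    "style": ["dotted", "solid", "filled", "dashed"],
}

def sample_episode():
    """A-bot privately sees an object; Q-bot is given a task:
    a pair of attributes it must discover via dialog."""
    obj = {name: random.choice(values) for name, values in ATTRIBUTES.items()}
    task = random.sample(list(ATTRIBUTES), 2)
    return obj, task

def reward(prediction, obj, task):
    """+1 only if both requested attribute values are reported correctly."""
    return 1.0 if prediction == [obj[a] for a in task] else 0.0

if __name__ == "__main__":
    obj, task = sample_episode()
    # An oracle that already knows the object earns full reward;
    # the agents in the paper must earn it through a learned dialog protocol.
    print(obj, task, reward([obj[a] for a in task], obj, task))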
Anthology ID:
D17-1321
Volume:
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
Month:
September
Year:
2017
Address:
Copenhagen, Denmark
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
2962–2967
URL:
https://aclanthology.org/D17-1321
DOI:
10.18653/v1/D17-1321
Cite (ACL):
Satwik Kottur, José Moura, Stefan Lee, and Dhruv Batra. 2017. Natural Language Does Not Emerge ‘Naturally’ in Multi-Agent Dialog. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2962–2967, Copenhagen, Denmark. Association for Computational Linguistics.
Cite (Informal):
Natural Language Does Not Emerge ‘Naturally’ in Multi-Agent Dialog (Kottur et al., EMNLP 2017)
PDF:
https://preview.aclanthology.org/update-css-js/D17-1321.pdf
Attachment:
 D17-1321.Attachment.zip
Code:
batra-mlp-lab/lang-emerge