The ability to communicate using language is a signature characteristic of intelligence. Language provides a structured platform for agents not only to collaborate with each other and accomplish certain goals, but also to represent and store information in a compressed manner. Most importantly, language allows us to build infinitely many new concepts by composing known concepts. These qualities are shared by both the natural languages used in human-human communication and the programming languages used in human-machine communication. The study of the evolution of language can hence give us insights into intelligent machines that can communicate.
Our goal in this paper is to develop and understand an emergent language, i.e., a language that emerges when two neural network agents try to communicate with each other. Clark argued that supervised approaches in which a single agent learns statistical relationships among symbols do not capture the functional relationships between the symbols, i.e., the use of symbols leading to an action or an outcome. Krishna et al. argued the same viewpoint in the context of images. We therefore turn to recent works in emergent language [5, 42, 74, 45, 30, 46], which show that a communication protocol can be developed or learned by two or more cooperative agents trying to solve a task. The choice of the task is quintessential, since a language derives meaning from its use. We choose a task where two agents, a speaker and a listener, play a referential game, a type of signaling game first proposed by Lewis. The speaker agent receives a target image and sends a message to the listener. The message consists of discrete symbols, or words, capturing different parts of the image. The listener receives another view of the target image, along with one or more distractor images. The goal of the speaker and listener agents is to maximize the agreement between the message and the target image. Fig. 1 illustrates the overview of the proposed referential game.
In computer vision, a number of attempts have been made to represent images as visual words, with a focus on low-level feature descriptors such as SIFT, SURF, etc. Recent works in deep learning have attempted to describe the entire image with a fixed number of discrete symbols [58, 59, 63]; however, we postulate that large images contain a lot of redundant information, and a good visual representation should focus on only the "interesting" parts of the image. To discover what constitutes the interesting parts of an image, we take inspiration from the works on mid-level patches [70, 37, 18], i.e., patches that are both representative and discriminative [70, 28]. This means they can be discovered in a large number of images (and are hence representative), but at the same time they should be discriminative enough to set an image apart from the other images in the dataset. Hence, the speaker agent in our paper computes a symbolic representation in terms of these mid-level patches, as opposed to the entire image.
To summarize, we propose PatchGame, a referential game formulation in which, given an image, the speaker sends a discrete signal in terms of mid-level patches, and the listener embeds these symbols to match them with another view of the same image in the presence of distractors. Compared to previous works [30, 45, 22], we make the following key changes:
Agents in some of the prior works [30, 45, 22] have access to a pre-trained network, such as AlexNet or VGG, for extracting features from images. In this work, the agents instead rely on training on a large-scale image dataset, and on the invariances introduced by various image augmentations, to learn the language in a self-supervised way.
We propose a novel patch-based architecture for the speaker agent, which comprises two modules: (1) PatchSymbol, a multi-layer perceptron (MLP) that operates at the patch level and converts a given image patch into a sequence of discrete symbols, and (2) PatchRank, a ConvNet that looks at the complete image and ranks the importance of patches in a differentiable manner.
We introduce a novel transformer-based architecture for the listener agent, consisting of two modules: (1) a language module that projects the message received from the speaker to a latent space, and (2) a vision module that projects the image into the latent space. We use a contrastive loss in this latent space to train both the speaker and the listener agents simultaneously.
We propose new protocols to evaluate each of the speaker's and the listener's modules.
We assess the success of PatchGame via qualitative and quantitative evaluations of each of the proposed components, and by demonstrating some practical applications. First, we show that the speaker's PatchRank model does indicate important patches in the image. We use the top patches indicated by this model to classify ImageNet images with a pre-trained Vision Transformer, and show that we can retain over 60% top-1 accuracy with just half of the image patches. Second, the listener's vision model (ResNet-18) can achieve up to 30.3% top-1 accuracy just by using k-NN classification. This outperforms other state-of-the-art unsupervised approaches [63, 28] that learn discrete representations of images by 9%. Finally, we also analyze the symbols learned by our model and the impact of several hyperparameters used in our experiments.
2 Related Work
Referential games. Prior to the advent of deep learning, significant research in the field of emergent communication had shown that a communication protocol can be developed or learned by agents playing a language game [5, 42, 74, 77, 4, 3, 2, 39]. However, the agents employed in these works were typically situated in a synthetic world and made several assumptions about it, such as the availability of disentangled representations of objects with discrete properties. More recent works [36, 23, 15, 75, 47, 73, 45, 46] have employed deep learning methods to develop a discrete language for communication between agents. Lazaridou et al. used neural network agents represented by an MLP to communicate concepts about real-world pictures. They used a fixed-size message composed from a large vocabulary for their communication. Evtimova et al., Bouchacourt and Baroni, and Havrylov and Titov relax this assumption and allow communication via variable-length sequences. Havrylov and Titov allow the speaker agent to use an LSTM to construct a variable-length message. Lazaridou et al. and Havrylov and Titov show that even when agents are allowed to use variable-length sequences, they tend to utilize the maximum possible length to achieve the best performance (in terms of communication success).
The idea of using a Gumbel-softmax distribution [34, 51] with the straight-through trick for learning a language in a multi-agent environment was concurrently proposed by Mordatch and Abbeel and by Havrylov and Titov. They show that this technique achieves more stable and faster training than the reinforcement learning used in several other approaches.
Evaluating emergent communication.
Evaluating the emergent language turns out to be an equally challenging research problem. Existing approaches use the successful completion of the task, or the correlation between the learned language and semantic labels, as evaluation metrics. Lowe et al. and Keresztury and Bruni show that simple task success might not be a good or sufficient metric for evaluating the success of a game. They discuss heuristics and advocate measuring both positive signaling and positive listening independently to evaluate agents' communication. Andreas provides a way of evaluating compositional structure in learned representations.
In parallel to the works on emergent communication, there is a large body of research on learning discrete representations of images without labels, using some form of autoencoding or reconstruction [28, 58, 65, 62, 20, 57, 29, 63]. The focus of VQ-VAE and VQ-VAE-2 is to learn a discrete bottleneck using vector quantization. Once an image can be represented with these discrete symbols, a powerful generative model such as PixelCNN [58, 66] or a transformer [76, 61] is learned on top of the symbols to sample new images. PatchVAE achieves the same using Gumbel-softmax and imposes additional structure in the bottleneck of VAEs. We argue that, because of the mismatch between the objectives of reconstruction and visual recognition, models trained using reconstruction-based losses do not capture meaningful representations in their symbols.
Self-supervised learning in vision.
Self-supervised learning (SSL) methods such as [11, 81, 12, 10, 13, 55, 32, 26] have shown impressive results in recent years on the downstream tasks of classification and object detection. Even though the bottleneck in these methods is continuous (not discrete symbols), they have been shown to capture semantic and spatial information about the contents of the image. Unlike in SSL methods, the neural networks representing the two agents in our case do not share any weights. Also, note that the continuous representations learned by SSL techniques are fundamentally different from the symbolic representations used in language. Indeed, we show that a k-nearest-neighbor classifier built on the continuous representations learned by SSL methods can perform better than one built on a bag of words (or symbols). However, to the best of our knowledge, our work is one of the first attempts to make representations learned in a self-supervised way more communication- or language-oriented.
Comparison with Mihai and Hare [53, 52]. Mihai and Hare [52, 53] extend Havrylov and Titov by training the speaker and listener agents end-to-end without using pre-trained networks. However, these prior works [52, 53, 30] use a top-down approach to generate a discrete representation (or a sentence) for an image, i.e., they compute an overall continuous embedding of the image and then generate one symbol of the sentence at a time using an LSTM. The computational cost of LSTMs is prohibitive when the length of a sentence is large, which is needed to describe complex images. Transformers, on the other hand, require constant time for variable-length sentences at the cost of increased memory (in the listener agent). However, generating variable-length sentences with a transformer-based speaker agent is non-trivial. To solve this, we propose a bottom-up approach: we first generate symbols for image patches and then combine them to form a sentence. This approach allows for computationally efficient end-to-end training. Further, it allows the speaker to compose symbols corresponding to different parts of the image, instead of deducing them from a pooled 1D representation of the image.
We first introduce notation and the referential game played by the agents in our work. We then provide details of the architectures of the different neural network agents, as well as the loss function, and highlight the important differences between this work and prior literature. Code and pre-trained models to reproduce the results are provided.
3.1 Referential Game Setup
Fig. 1 shows the overview of our referential game setup. Given a dataset of images $\mathcal{D} = \{x_i\}$, we formulate a referential game played between two agents, a speaker $S$ and a listener $L$, as follows. As in the setting of Grill et al., we generate two "random views" for every image. A random view is generated by taking a crop from a randomly resized image and adding one or more of these augmentations: color jitter, horizontal flip, Gaussian blur, and/or solarization. This prevents the neural networks from learning a trivial solution and encourages the emergent language to capture the invariances induced by the augmentations. Given a batch of images, we refer to the two sets of views as $\{x^1_i\}$ and $\{x^2_i\}$. In each training iteration, one set of views is presented to the speaker agent and the other set is shown to the listener agent.
The speaker encodes each image $x^1_i$ independently into a variable-length message $m_i$. Each message is represented by a sequence of one-hot encoded symbols with a maximum possible length $T$ and a fixed-size vocabulary $V$. The space of all possible messages sent by the speaker is of the order $O(|V|^T)$. The input to the listener is the batch of messages from the speaker and the second set of random views of the batch of images. The listener consists of a language module to encode messages and a vision module to encode images. The goal of the listener is to match each message to its corresponding image.
Specifically, for a batch of $N$ (message, image) pairs, the speaker and listener are jointly trained to maximize the cosine similarities of the $N$ actual pairs while minimizing the similarities of the incorrect pairs. For a message $m_i$, the image $x^2_i$ (an augmented view of $x^1_i$) acts as the target image while all other images act as distractors. Vice versa, for the image $x^2_i$, the message $m_i$ (encoded by the speaker) acts as the target message while all other messages act as distractors. Let $u_i$ and $v_i$ denote the listener's embeddings of message $m_i$ and image $x^2_i$ respectively, and let $\mathrm{sim}(\cdot,\cdot)$ denote cosine similarity. We use the following symmetric contrastive loss function, sometimes referred to as the InfoNCE loss in previous metric-learning works [72, 59]:

$$\mathcal{L} = -\frac{1}{2N}\sum_{i=1}^{N}\left[\log\frac{\exp(\mathrm{sim}(u_i, v_i)/\tau)}{\sum_{j=1}^{N}\exp(\mathrm{sim}(u_i, v_j)/\tau)} + \log\frac{\exp(\mathrm{sim}(u_i, v_i)/\tau)}{\sum_{j=1}^{N}\exp(\mathrm{sim}(u_j, v_i)/\tau)}\right]$$

where $\tau$ is a constant temperature hyperparameter.
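The symmetric loss above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: in the actual model, `msg_emb` and `img_emb` would come from the listener's language and vision modules, and the temperature value here is a placeholder.

```python
import numpy as np

def info_nce(msg_emb, img_emb, tau=0.07):
    """Symmetric InfoNCE loss over a batch of (message, image) pairs.

    msg_emb, img_emb: (N, d) embeddings; row i of each side comes from
    the same underlying image. tau is the constant temperature.
    """
    # L2-normalize so that dot products are cosine similarities.
    m = msg_emb / np.linalg.norm(msg_emb, axis=1, keepdims=True)
    v = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    logits = m @ v.T / tau                       # (N, N) similarities / tau
    n = len(logits)

    def xent(l):
        # Cross-entropy with the matching pair (the diagonal) as the target.
        l = l - l.max(axis=1, keepdims=True)     # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[np.arange(n), np.arange(n)].mean()

    # Average the message->image and image->message directions.
    return 0.5 * (xent(logits) + xent(logits.T))
```

A correctly aligned batch yields a lower loss than a mismatched one, which is exactly the signal both agents are trained on.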
The game setting used in our work is inspired by Lazaridou et al. and Havrylov and Titov, but there are important differences. Both our speaker and listener agents are trained from scratch. This makes the game setting more challenging, since the agents cannot use pre-trained models, which have been shown to encode semantic and/or syntactic information present in natural language or images. Our training paradigm, where we show different views of the same image to the speaker and listener, is inspired by the recent success of self-supervised learning in computer vision. Empirically, we observe that this leads to more stable training and prevents the neural networks from learning degenerate solutions. However, in contrast with such self-supervised approaches, our goal is to learn a discrete emergent language as opposed to continuous semantic representations. We discuss the architectures of the two agents in the following sections.
3.2 Speaker agent architecture
A desirable characteristic of the speaker agent is that it should encode the "important" components of an image with a variable-length sequence of discrete symbols. Previous works [22, 9, 30] have achieved this by first converting the image into a continuous deterministic latent vector, then using an LSTM network to generate a sequence of hidden states, and sampling from this sequence until a special end-of-sequence token (or the maximum length) is reached. As observed by [46, 30], in order to achieve the minimum loss, the model ends up always using the maximum allowable length. In our experiments as well, we observed that an LSTM makes training slow and does not achieve the objective of encoding images as variable-length sequences. We propose to leverage two separate modules in the speaker agent to circumvent this problem: the first module, PatchSymbol ($S_{\text{sym}}$), is a 2-layer MLP that computes patch-level embeddings for the image; the second module, PatchRank ($S_{\text{rank}}$), is a small ConvNet that computes the rank, or importance, of each patch in the image.
PatchSymbol, $S_{\text{sym}}$. The idea of encoding an image at the patch level is inspired by the works on discovering mid-level patches [18, 37, 70] in images. We use a simple 2-hidden-layer MLP to encode each $c \times p \times p$-dimensional image patch into $k$ vectors of log-probabilities. Here $c$ is the number of (color) channels in the input image or patch, $p$ is the spatial dimension of a square patch, $|V|$ is the size of the vocabulary used to represent a single symbol, and $k$ is the number of symbols used to encode each patch. Hence an image of size $c \times H \times W$ can be encoded using $HW/p^2$ patches, each consisting of $k$ symbols. The vectors of log-probabilities allow us to sample from a categorical distribution over $|V|$ categories, with a continuous relaxation obtained via the Gumbel-softmax trick [34, 51]. For a given vector $z$ of log-probabilities, we draw i.i.d. samples $g_1, \ldots, g_{|V|}$ from the Gumbel(0, 1) distribution and get a differentiable approximation of $\arg\max$ as follows:

$$y_i = \frac{\exp((z_i + g_i)/\tau)}{\sum_{j=1}^{|V|}\exp((z_j + g_j)/\tau)}$$
where the temperature $\tau$ controls how close the approximation is to $\arg\max$. The final output of the network for the entire image is a set of $n$ one-hot encoded $|V|$-dimensional symbols, where $n = kHW/p^2$. In all our experiments, we fix the values of $k$ and $|V|$.
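The sampling step above can be sketched as follows for a single symbol slot of one patch. This is plain NumPy with gradients omitted; in training, the straight-through estimator would use the soft sample `y` in the backward pass while the forward pass uses the one-hot sample.

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax(logits, tau=1.0):
    """Soft sample from a categorical distribution over |V| symbols.

    logits: (V,) log-probabilities z for one symbol slot of one patch.
    tau: temperature; as tau -> 0 the sample approaches a one-hot argmax.
    """
    # Gumbel(0, 1) noise via inverse transform sampling.
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = (logits + g) / tau
    y = np.exp(y - y.max())          # subtract max for numerical stability
    return y / y.sum()               # normalized soft sample

def hard_sample(logits, tau=1.0):
    """One-hot (straight-through forward-pass) version of the sample."""
    y = gumbel_softmax(logits, tau)
    return np.eye(len(y))[y.argmax()]
```

The soft sample is a valid probability vector, while the hard sample is the one-hot symbol actually placed in the message.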
PatchRank, $S_{\text{rank}}$. An image might have many redundant patches encoded using the same symbols. The goal of the $S_{\text{rank}}$ network is to assign an importance score to each patch. Since the importance of a patch depends on its context and not on the patch alone, we use a small ResNet-9 to compute an importance weight for each of the patches. One possible way to use these importance weights is to simply normalize them between $0$ and $1$ and repeat the Gumbel-softmax trick to sample the important patches; the listener network would then see only the message consisting of "important" patches. However, we empirically observed that a simple min-max or L2 normalization allows the network to assign high weights to every patch and effectively send the entire maximum-length sequence to the listener. Instead, we propose to use the differentiable ranking algorithm of Blondel et al. to convert the importance weights into soft ranks in $O(n \log n)$ time. This method works by constructing differentiable operators as projections onto the convex hull of permutations. Once we have the vector of soft ranks, we normalize the ranks and sample binary values, again using a special case of the Gumbel-softmax trick for Bernoulli distributions [34, 51].
Here the importance weights are obtained by applying the ResNet-9 to the image, and the ranks correspond to the optimal permutation; we refer the reader to Blondel et al. for a detailed description of the soft-sorting algorithm. The final message encoded by the speaker agent is then obtained by masking the patch symbols from $S_{\text{sym}}$ with the binary values sampled from the normalized ranks, so that only the symbols of highly ranked patches are sent to the listener.
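The rank-and-mask step can be sketched as follows. For brevity, plain argsort ranks stand in for the differentiable soft-rank operator of Blondel et al., and the binary (logistic-noise) relaxation is hard-thresholded; all names here are illustrative, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def rank_and_mask(symbols, weights):
    """Mask patch symbols by importance, a sketch of the PatchRank step.

    symbols: (n, V) one-hot patch symbols from PatchSymbol.
    weights: (n,) importance scores from the ranking network.
    """
    n = len(weights)
    ranks = np.empty(n)
    ranks[np.argsort(weights)] = np.arange(n)   # rank 0 = least important
    p = ranks / (n - 1)                         # normalize ranks to [0, 1]
    # Binary Gumbel trick: adding logistic noise to the log-odds and
    # thresholding at 0 samples keep ~ Bernoulli(p).
    logits = np.log(p + 1e-9) - np.log(1 - p + 1e-9)
    noise = rng.logistic(size=n)
    keep = (logits + noise > 0).astype(float)
    return symbols * keep[:, None]              # zero out dropped patches
```

High-ranked patches keep their symbols almost surely, low-ranked ones are almost surely dropped, and the intermediate ones are kept stochastically, which is what lets gradients flow through the relaxed version during training.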
3.3 Listener agent architecture
As discussed in Section 3.1, the listener agent consists of a language module and a vision module. We implement the language module using a small transformer encoder. We prepend a [CLS] token at the beginning of each message sequence received from the speaker [17, 29], and use the final embedding of this token to compute the loss described in Eq. 3. We implement the vision module using a small vision transformer, and follow a similar procedure as in the language module to obtain the final image embedding. Both modules use a similar transformer encoder architecture (with no weight sharing), with the same hidden size, number of layers, and number of attention heads. Following [81, 11, 13], we add a high-dimensional projection head after the last layer before computing the loss function.
The weights of the speaker and listener agents are optimized jointly during training. We use a 2-layer MLP for PatchSymbol, a ResNet-9 for PatchRank, a ResNet-18 for the listener's vision module, and a small transformer encoder (3 heads, 12 layers) for the listener's language module. All experiments are conducted on the training set of ImageNet, which has approximately 1.28 million images from 1000 classes. We create training and validation splits from the training set by leaving aside 5% of the images for validation. After obtaining the final set of hyperparameters, we retrain on the entire training set for 100 epochs. We use stochastic gradient descent (SGD) with momentum and cosine learning rate scheduling. To train the stochastic components of the speaker, we use the straight-through trick. We reiterate that the speaker and listener do not share weights, and the only supervision used is the InfoNCE loss defined in Eq. 3. Please refer to the appendix and the code attached in the supplementary material for more details.
We evaluate the success of communication in the referential game and the impact of various hyperparameters on this success. Following the work of Lowe et al., we evaluate the emergent communication in two primary ways. In Sections 4.1-4.3, we measure positive signaling, which means that the speaker sends messages relevant to its observation. In Section 4.4, we measure positive listening, which indicates that the messages influence the listener's behavior.
4.1 Positive Signaling - Visualizing Patch Ranks from PatchRank
We first visualize the output of the PatchRank module as a heatmap overlaid on the original image in Fig. 3. The most important patches are colored towards 'red' and the least important ones towards 'blue'. The figure shows that PatchRank can capture important and discriminative parts of the images. In the case of images of various animals and plants, the model assigns the highest importance to discriminative body parts. For inanimate objects such as an abacus or a revolver, the model is able to distinguish between the foreground and the background. Note that although approaches such as Grad-CAM can provide pixel-level importance heatmaps, they require extensive supervision; our method, on the other hand, is self-supervised. We also show some failure cases, where discriminative patches cover the majority of the image, in the rightmost two columns of Fig. 3.
4.2 Positive Signaling - Image Classification with a subset of patches provided by PatchRank
Recently proposed Vision Transformers (ViT) by Dosovitskiy et al. have gained popularity because of their simplicity and performance. These models treat an image as a sequence of patches, use an MLP to convert each patch into an embedding, and finally use this set of patch embeddings to perform classification. We first note that both the inference time and the memory consumption of a ViT depend largely on the length of the sequence, because of the self-attention operation. Secondly, the performance of a Vision Transformer drops if, instead of using all patches, we use only a subset of patches during inference. This provides us with a simple way to evaluate the PatchRank module: we artificially constrain the number of patches available to the ViT during inference and measure its Top-1 accuracy using only the $n$ allowed patches, where the selection of the patches is done by the PatchRank model.
We consider two different pre-trained ViT models, one trained on images divided into 144 patches and a second trained on images divided into 196 patches. In Fig. 4(a) and 4(b), we show the Top-1 accuracy obtained by the pre-trained ViT models using the most important patches predicted by PatchRank at different values of $n$. At the start of training (with random weights), the performance of the ViT drops almost linearly as we lower the patch count $n$. At the end of training, we observe that the performance of the ViT does not drop as drastically.
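The evaluation above amounts to keeping only the top-$n$ patch tokens before they enter the transformer. A minimal sketch (the embeddings and scores are placeholders; in the real protocol the scores come from PatchRank and the kept tokens feed a pre-trained ViT):

```python
import numpy as np

def top_n_patches(patches, importance, n):
    """Select the n highest-scoring patch tokens for a ViT-style classifier.

    patches: (num_patches, d) patch embeddings.
    importance: (num_patches,) scores, assumed to come from PatchRank.
    Positional information is assumed to be already added to the
    embeddings, so we only need to preserve the order of kept patches.
    """
    idx = np.argsort(importance)[-n:]   # indices of the n most important patches
    idx = np.sort(idx)                  # restore the original spatial order
    return patches[idx]
```

Because self-attention cost is quadratic in sequence length, halving the number of patches this way roughly quarters the attention cost at inference time.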
4.3 Positive Signaling - Visualizing symbols
As mentioned earlier, we use a fixed vocabulary size $|V|$ and patch size $p$ in our base model. This means $S_{\text{sym}}$ has to map each patch to one of the $|V|$ available symbols. A natural way to analyze what the symbols encode is to visualize their corresponding patches. While $|V|$ is far too small a vocabulary to describe all possible patches, we observe some interesting patterns in Fig. 5. Many of the symbols adhere to specific concepts repeatedly. We observed that symbols have less preference for color, and more preference for texture and shape. We discovered several symbols corresponding to textures such as grass, branches, and wood. We also noticed many symbols firing for patches corresponding to text, eyes, and even faces. There can be multiple symbols representing a single concept, e.g., symbols 91 and 123 both fire in the case of eyes.
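The visualization in Fig. 5 amounts to inverting the speaker's mapping: for each symbol, collect all patches whose logits argmax to it. A minimal sketch (the logits and patch identifiers are placeholders):

```python
import numpy as np

def patches_per_symbol(symbol_logits, patch_ids):
    """Group patches by their most likely symbol for visualization.

    symbol_logits: (n, V) per-patch logits from the speaker.
    patch_ids: length-n patch identifiers (e.g., (image, location) indices).
    Returns {symbol: [patch ids]} so that the patches that "fire" for each
    symbol can be displayed side by side.
    """
    assignments = symbol_logits.argmax(axis=1)
    groups = {}
    for pid, s in zip(patch_ids, assignments):
        groups.setdefault(int(s), []).append(pid)
    return groups
```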
| Method | Top-1 (%) | Top-5 (%) |
| --- | --- | --- |
| MoCo-v2 (R18) | 36.8 (± 0.2) | 60.3 |
| VQ-VAE2 | 17.2 (± 0.6) | 30.5 |
| PatchVAE (R18, ) | 16.4 (± 0.6) | 28.5 |
| PatchVAE (R18, ) | 21.3 (± 0.5) | 36.2 |
| Ours (R18, ) | 27.6 (± 0.6) | 46.2 |
| Ours (R18, ) | 30.3 (± 0.5) | 49.9 |
Downstream mean Average Precision for Pascal VOC:

| Method | mAP (%) |
| --- | --- |
| MoCo-v2 (R18) | 65.8 |
| PatchVAE ( ) | 52.2 |
| PatchVAE ( ) | 54.2 |
4.4 Positive Listening
Next, we evaluate the vision module of the listener. We follow the protocol employed by various approaches in the self-supervised learning literature. We take the features from the final pooling layer of the vision module (a ResNet-18 in our case) and run k-NN classification on the validation dataset. Table 2 shows the Top-1 and Top-5 accuracy (%) obtained on ImageNet using the listener's vision module and the baseline approaches. Although our method outperforms VQ-VAE-2 and PatchVAE (methods that learn a discrete image representation), we observe that there is still a gap between the representations learned by these models and the representations learned by continuous latent models such as MoCo. Note that, because of resource constraints, all results reported in the table are obtained by training a ResNet-18 for only 100 epochs; the results for both MoCo-v2 and our approach continue to improve if we train beyond 100 epochs (as also noted by He et al.). Further, we use the listener's vision module as a pre-trained network for the Pascal VOC dataset. These results are also shown in Table 2. Again, the results are not competitive with self-supervised counterparts such as MoCo-v2, but we outperform models with a discrete bottleneck such as PatchVAE. We find that the convergence of models with discrete bottlenecks (such as ours and PatchVAE) is slow; improving the training efficiency of this class of models is an interesting future direction.
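The k-NN protocol can be sketched as follows: freeze the listener's vision module, embed the training and validation images, and classify each validation image by a similarity-weighted vote among its k nearest training features. The features and the value of k below are placeholders, not the paper's exact settings.

```python
import numpy as np

def knn_predict(train_feats, train_labels, test_feats, k=20):
    """Similarity-weighted k-NN classification on frozen features.

    train_feats: (N, d) features, train_labels: (N,) int class labels,
    test_feats: (M, d) features. In the actual protocol, features would
    come from the final pooling layer of the listener's ResNet-18.
    """
    # L2-normalize so that dot products are cosine similarities.
    tr = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
    te = test_feats / np.linalg.norm(test_feats, axis=1, keepdims=True)
    sim = te @ tr.T                              # (M, N) similarity matrix
    preds = []
    for row in sim:
        nn = np.argsort(row)[-k:]                # k nearest training samples
        votes = np.bincount(train_labels[nn], weights=row[nn])
        preds.append(int(votes.argmax()))        # similarity-weighted vote
    return np.array(preds)
```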
4.5 Ablation study
A communication iteration is successful if the listener is able to match the message sent by the speaker to the correct target image. In our experiments, we use an effective batch size of 512 (split over 4 GPUs), so the chance accuracy of the speaker is $1/512 \approx 0.2\%$. We measure the Top-1 accuracy for each image of the batch and average it over the validation data at the end of each epoch. In Fig. 6(a), we observe that too high or too low a learning rate can be detrimental to the success of communication. We fix the learning rate at 0.0001, which empirically shows the best validation performance. We train our models with different vocabulary sizes and different message lengths for 100 epochs. From Fig. 6(b) and 6(c), we observe that having either a larger vocabulary or a larger message length allows the model to reach high accuracy faster. This is intuitive: the larger the space spanned by the messages, the easier it is for the listener to distinguish between the images. A larger message length increases the number of possible messages exponentially, at the cost of much larger computation, since self-attention [76, 17] is an $O(T^2)$ operation. A larger vocabulary increases both the span of messages and the computation cost linearly.
In this work, we have shown that two cooperative neural network agents can develop a variable-length communication protocol to solve a given task on a large-scale image dataset with only self-supervision. To the best of our knowledge, this is the first work to develop emergent communication via mid-level patches without using any pre-trained models or external labels. We have introduced a novel mid-level-patch-based architecture for the speaker agent, in order to represent only the "interesting" parts of an image with discrete symbols. The listener agent learns to align the messages and images by projecting them with language and image encoders into a common embedding space. The speaker and listener agents are trained jointly end-to-end to capture the invariances induced by various data augmentations in order to solve a contrastive task. We propose a number of quantitative and qualitative measures to analyze and evaluate the emerged language. We also show two major applications of the developed approach: (1) extracting representative and discriminative parts of the image, and (2) transfer learning for image classification tasks.
Limitations and Future Work. There are a few limitations to our approach. First, one of the applications discussed in Section 4.2 is using fewer patches for inference in a ViT. Although using fewer patches reduces the memory cost of the ViT, the overhead of running another neural network to predict the important patches means our gain is minimal. In the future, we would like to explore even faster architectures to have a bigger impact on classification speed. Second, our method only works with fixed-size square patches. Discovering arbitrarily sized mid-level patches is a challenging task that we would like to address in future work. Third, the language that emerges from our current model is not grounded in natural language and requires human intervention for interpretation. Going forward, we would like to ground this emergent language in natural language.
Broader Impact. Like most self-supervised approaches, our approach is data hungry and is trained using a large amount of unlabeled data collected from the Internet. Our model in its current form is prone to learning and amplifying the biases present in the dataset, especially if the dataset in question is not carefully curated. While data collection and labeling were discussed in the original ImageNet paper, the community has only recently focused on imbalance and privacy violations in existing image datasets. A recent study shows that 997 out of the 1000 ImageNet categories are not 'people' categories. To compare against previous methods and allow future benchmarking, we provide results on the 2012 version of the dataset, with 1.28 million images in the training set and 50,000 images in the validation set.
Acknowledgements. We thank Matt Gwilliam, Max Ehrlich, and Pulkit Kumar for reviewing early drafts of the paper and helpful comments. This project was partially funded by DARPA SAIL-ON (W911NF2020009), and Amazon Research Award to AS.
- Jacob Andreas. Measuring compositionality in representation learning. arXiv preprint arXiv:1902.07181, 2019.
- Jacob Andreas and Dan Klein. Analogs of linguistic structure in deep representations. arXiv preprint arXiv:1707.08139, 2017.
- Jacob Andreas, Dan Klein, and Sergey Levine. Learning with latent language. arXiv preprint arXiv:1711.00482, 2017.
- Andrea Baronchelli, Maddalena Felici, Vittorio Loreto, Emanuele Caglioti, and Luc Steels. Sharp transition towards shared vocabularies in multi-agent systems. Journal of Statistical Mechanics: Theory and Experiment, 2006(06):P06014, 2006.
- John Batali. Computational simulations of the emergence of grammar. Approach to the Evolution of Language, pages 405–426, 1998.
- Herbert Bay, Tinne Tuytelaars, and Luc Van Gool. SURF: Speeded up robust features. In European Conference on Computer Vision, pages 404–417. Springer, 2006.
- Yoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.
- Mathieu Blondel, Olivier Teboul, Quentin Berthet, and Josip Djolonga. Fast differentiable sorting and ranking. In International Conference on Machine Learning, pages 950–959. PMLR, 2020.
- Diane Bouchacourt and Marco Baroni. Miss Tools and Mr Fruit: Emergent communication in agents learning about object affordances. arXiv preprint arXiv:1905.11871, 2019.
- Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, and Armand Joulin. Unsupervised learning of visual features by contrasting cluster assignments. arXiv preprint arXiv:2006.09882, 2020.
- Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. arXiv preprint arXiv:2104.14294, 2021.
- Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International Conference on Machine Learning, pages 1597–1607. PMLR, 2020.
- Xinlei Chen and Kaiming He. Exploring simple Siamese representation learning. arXiv preprint arXiv:2011.10566, 2020.
- Herbert H. Clark. Using Language. Cambridge University Press, 1996.
- Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, and Dhruv Batra. Learning cooperative visual dialog agents with deep reinforcement learning. In Proceedings of the IEEE International Conference on Computer Vision, pages 2951–2960, 2017.
- Deng et al.  Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255. IEEE, 2009.
- Devlin et al.  Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
- Doersch et al.  Carl Doersch, Saurabh Singh, Abhinav Gupta, Josef Sivic, and Alexei Efros. What makes paris look like paris? ACM Transactions on Graphics, 31(4), 2012.
- Dosovitskiy et al.  Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
- Esser et al.  Patrick Esser, Robin Rombach, and Björn Ommer. Taming transformers for high-resolution image synthesis. arXiv preprint arXiv:2012.09841, 2020.
- Everingham et al.  M. Everingham, S. M. A. Eslami, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The pascal visual object classes challenge: A retrospective. International Journal of Computer Vision, 111(1):98–136, January 2015.
- Evtimova et al.  Katrina Evtimova, Andrew Drozdov, Douwe Kiela, and Kyunghyun Cho. Emergent communication in a multi-modal, multi-step referential game. arXiv preprint arXiv:1705.10369, 2017.
- Foerster et al.  Jakob N Foerster, Yannis M Assael, Nando De Freitas, and Shimon Whiteson. Learning to communicate with deep multi-agent reinforcement learning. arXiv preprint arXiv:1605.06676, 2016.
- Fu et al.  Ruigang Fu, Qingyong Hu, Xiaohu Dong, Yulan Guo, Yinghui Gao, and Biao Li. Axiom-based grad-cam: Towards accurate visualization and explanation of cnns. arXiv preprint arXiv:2008.02312, 2020.
- Goyal et al.  Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch sgd: Training imagenet in 1 hour. arXiv preprint arXiv:1706.02677, 2017.
- Grill et al.  Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre H Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Daniel Guo, Mohammad Gheshlaghi Azar, et al. Bootstrap your own latent: A new approach to self-supervised learning. arXiv preprint arXiv:2006.07733, 2020.
- Gumbel  Emil Julius Gumbel. Statistical theory of extreme values and some practical applications: a series of lectures, volume 33. US Government Printing Office, 1954.
- Gupta et al.  Kamal Gupta, Saurabh Singh, and Abhinav Shrivastava. Patchvae: Learning local latent codes for recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4746–4755, 2020.
- Gupta et al.  Kamal Gupta, Justin Lazarow, Alessandro Achille, Larry S Davis, Vijay Mahadevan, and Abhinav Shrivastava. Layouttransformer: Layout generation and completion with self-attention. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1004–1014, 2021.
- Havrylov and Titov  Serhii Havrylov and Ivan Titov. Emergence of language with multi-agent games: Learning to communicate with sequences of symbols. arXiv preprint arXiv:1705.11192, 2017.
- He et al.  Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
- He et al.  Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9729–9738, 2020.
- Hochreiter and Schmidhuber  Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.
- Jang et al.  Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with gumbel-softmax. arXiv preprint arXiv:1611.01144, 2016.
- Jones  Karen Sparck Jones. A statistical interpretation of term specificity and its application in retrieval. Journal of documentation, 1972.
- Jorge et al.  Emilio Jorge, Mikael Kågebäck, Fredrik D Johansson, and Emil Gustavsson. Learning to play guess who? and inventing a grounded language as a consequence. arXiv preprint arXiv:1611.03218, 2016.
- Juneja et al.  Mayank Juneja, Andrea Vedaldi, CV Jawahar, and Andrew Zisserman. Blocks that shout: Distinctive parts for scene classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 923–930, 2013.
- Jurie and Triggs  Frederic Jurie and Bill Triggs. Creating efficient codebooks for visual recognition. In Tenth IEEE International Conference on Computer Vision (ICCV’05) Volume 1, volume 1, pages 604–610. IEEE, 2005.
- Kazemzadeh et al.  Sahar Kazemzadeh, Vicente Ordonez, Mark Matten, and Tamara Berg. Referitgame: Referring to objects in photographs of natural scenes. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 787–798, 2014.
- Keresztury and Bruni  Bence Keresztury and Elia Bruni. Compositional properties of emergent languages in deep learning. arXiv preprint arXiv:2001.08618, 2020.
- Kingma and Welling  Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
- Kirby  Simon Kirby. Natural language from artificial life. Artificial life, 8(2):185–215, 2002.
- Krishna et al.  Ranjay Krishna, Ines Chami, Michael Bernstein, and Li Fei-Fei. Referring relationships. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6867–6876, 2018.
- Krizhevsky et al.  Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 25:1097–1105, 2012.
- Lazaridou et al.  Angeliki Lazaridou, Alexander Peysakhovich, and Marco Baroni. Multi-agent cooperation and the emergence of (natural) language. arXiv preprint arXiv:1612.07182, 2016.
- Lazaridou et al.  Angeliki Lazaridou, Karl Moritz Hermann, Karl Tuyls, and Stephen Clark. Emergence of linguistic communication from referential games with symbolic and pixel input. arXiv preprint arXiv:1804.03984, 2018.
- Lee et al.  Jason Lee, Kyunghyun Cho, Jason Weston, and Douwe Kiela. Emergent translation in multi-agent communication. arXiv preprint arXiv:1710.06922, 2017.
- Lewis  David Lewis. Convention: A philosophical study. John Wiley & Sons, 1969.
- Lowe  David G Lowe. Distinctive image features from scale-invariant keypoints. International journal of computer vision, 60(2):91–110, 2004.
- Lowe et al.  Ryan Lowe, Jakob Foerster, Y-Lan Boureau, Joelle Pineau, and Yann Dauphin. On the pitfalls of measuring emergent communication. arXiv preprint arXiv:1903.05168, 2019.
- Maddison et al.  Chris J Maddison, Andriy Mnih, and Yee Whye Teh. The concrete distribution: A continuous relaxation of discrete random variables. arXiv preprint arXiv:1611.00712, 2016.
- Mihai and Hare  Daniela Mihai and Jonathon Hare. Avoiding hashing and encouraging visual semantics in referential emergent language games. arXiv preprint arXiv:1911.05546, 2019.
- Mihai and Hare  Daniela Mihai and Jonathon Hare. The emergence of visual semantics through communication games. arXiv preprint arXiv:2101.10253, 2021.
- Mikolov et al.  Tomas Mikolov, Armand Joulin, and Marco Baroni. A roadmap towards machine intelligence. In International Conference on Intelligent Text Processing and Computational Linguistics, pages 29–61. Springer, 2016.
- Misra and Maaten  Ishan Misra and Laurens van der Maaten. Self-supervised learning of pretext-invariant representations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6707–6717, 2020.
- Mordatch and Abbeel  Igor Mordatch and Pieter Abbeel. Emergence of grounded compositional language in multi-agent populations. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018.
- Nash et al.  Charlie Nash, Jacob Menick, Sander Dieleman, and Peter W Battaglia. Generating images with sparse representations. arXiv preprint arXiv:2103.03841, 2021.
- Oord et al.  Aaron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu. Neural discrete representation learning. arXiv preprint arXiv:1711.00937, 2017.
- Oord et al.  Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018.
- Pinker  Steven Pinker. The language instinct: How the mind creates language. Penguin UK, 2003.
- Radford et al.  Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. 2018.
- Ramesh et al.  Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. Zero-shot text-to-image generation. arXiv preprint arXiv:2102.12092, 2021.
- Razavi et al.  Ali Razavi, Aaron van den Oord, and Oriol Vinyals. Generating diverse high-fidelity images with vq-vae-2. arXiv preprint arXiv:1906.00446, 2019.
- Rennie et al.  Jason D Rennie, Lawrence Shih, Jaime Teevan, and David R Karger. Tackling the poor assumptions of naive bayes text classifiers. In Proceedings of the 20th International Conference on Machine Learning (ICML-03), pages 616–623, 2003.
- Rolfe  Jason Tyler Rolfe. Discrete variational autoencoders. arXiv preprint arXiv:1609.02200, 2016.
- Salimans et al.  Tim Salimans, Andrej Karpathy, Xi Chen, and Diederik P Kingma. Pixelcnn++: Improving the pixelcnn with discretized logistic mixture likelihood and other modifications. arXiv preprint arXiv:1701.05517, 2017.
- Selvaraju et al.  Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-cam: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE international conference on computer vision, pages 618–626, 2017.
- Simonyan and Zisserman  Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
- Singh et al.  Saurabh Singh, Abhinav Gupta, and Alexei A Efros. Unsupervised discovery of mid-level discriminative patches. In European Conference on Computer Vision, pages 73–86. Springer, 2012.
- Sivic and Zisserman  Josef Sivic and Andrew Zisserman. Video google: A text retrieval approach to object matching in videos. In IEEE International Conference on Computer Vision, 2003.
- Sohn  Kihyuk Sohn. Improved deep metric learning with multi-class n-pair loss objective. In Proceedings of the 30th International Conference on Neural Information Processing Systems, pages 1857–1865, 2016.
- Spike et al.  Matthew Spike, Kevin Stadler, Simon Kirby, and Kenny Smith. Minimal requirements for the emergence of learned signaling. Cognitive science, 41(3):623–658, 2017.
- Steels  Luc Steels. What triggers the emergence of grammar? Society for the Study of Artificial Intelligence and Simulation of Behaviour, 2005.
- Sukhbaatar et al.  Sainbayar Sukhbaatar, Arthur Szlam, and Rob Fergus. Learning multiagent communication with backpropagation. arXiv preprint arXiv:1605.07736, 2016.
- Vaswani et al.  Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. arXiv preprint arXiv:1706.03762, 2017.
- Wagner et al.  Kyle Wagner, James A Reggia, Juan Uriagereka, and Gerald S Wilkinson. Progress in the simulation of emergent communication and language. Adaptive Behavior, 11(1):37–69, 2003.
- Wittgenstein  Ludwig Wittgenstein. Philosophical investigations. John Wiley & Sons, 1953.
- Yang et al.  Kaiyu Yang, Klint Qinami, Li Fei-Fei, Jia Deng, and Olga Russakovsky. Towards fairer datasets: Filtering and balancing the distribution of the people subtree in the imagenet hierarchy. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pages 547–558, 2020.
- Yang et al.  Kaiyu Yang, Jacqueline Yau, Li Fei-Fei, Jia Deng, and Olga Russakovsky. A study of face obfuscation in imagenet. arXiv preprint arXiv:2103.06191, 2021.
- Zbontar et al.  Jure Zbontar, Li Jing, Ishan Misra, Yann LeCun, and Stéphane Deny. Barlow twins: Self-supervised learning via redundancy reduction. arXiv preprint arXiv:2103.03230, 2021.
- Zhang et al.  Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In CVPR, 2018.
Appendix A Training details
The speaker and listener agents are trained jointly. All experiments are conducted on the training set of ImageNet , which contains approximately 1.28 million images from 1000 classes. We create training and validation splits from this set by holding out 5% of the images for validation. After obtaining the final set of hyper-parameters, we retrain on the entire training set for 100 epochs. We use Stochastic Gradient Descent (SGD) with momentum and a cosine learning rate schedule, with a linear learning rate warmup for the first 30 epochs. To train the stochastic component of the Speaker, we use the straight-through trick . We also use a cosine schedule to anneal the Gumbel-Softmax temperature, decreasing it from 5.0 to 1.0 over the first 50 epochs and fixing it at 1.0 for the remaining epochs. Please find the code and models (see the README file) attached in the supplementary material.
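The schedules above can be sketched as follows. This is a minimal sketch: the function names and the base learning rate are our own placeholders, while the warmup length, annealing length, and temperature range are the values stated in the text.

```python
import math

def gumbel_temperature(epoch, t_start=5.0, t_end=1.0, anneal_epochs=50):
    """Cosine annealing of the Gumbel-Softmax temperature from t_start
    down to t_end over anneal_epochs, then held constant at t_end."""
    if epoch >= anneal_epochs:
        return t_end
    cos = 0.5 * (1.0 + math.cos(math.pi * epoch / anneal_epochs))
    return t_end + (t_start - t_end) * cos

def learning_rate(epoch, base_lr=0.1, warmup_epochs=30, total_epochs=100):
    """Linear warmup for the first warmup_epochs, then cosine decay
    to zero over the remaining epochs. base_lr is a hypothetical value."""
    if epoch < warmup_epochs:
        return base_lr * (epoch + 1) / warmup_epochs
    progress = (epoch - warmup_epochs) / (total_epochs - warmup_epochs)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```

In training, the temperature returned for the current epoch would be passed to the Gumbel-Softmax relaxation, and the learning rate set on the SGD optimizer at the start of each epoch.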
Appendix B More Ablations
We perform an additional ablation study in which we train the model on ImageNet for 20 epochs with different augmentations removed and observe which ones have the most impact on the downstream kNN classification task. The results are shown in Table 4.
In our experiments, we observed that a larger batch size leads to both more stable training and improved downstream performance, which is intuitive since more in-batch negatives make the contrastive loss a more accurate estimate. In all our experiments, we use the maximum batch size that fits in 12GB of GPU memory (128 images per GPU). The contrastive loss is computed over the aggregate batch across 4 GPUs (an effective batch size of 512 images). For larger batch sizes, we scale the learning rate linearly . Table 4 below shows the downstream classification performance of the model at different batch sizes (trained for 20 epochs).
Augmentation removed           Top-1 (%)
Remove color jitter            23.2
Remove random resized crops    22.9

Batch Size    Top-1 (%)
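The linear scaling rule  used above can be expressed as a one-liner. This is a sketch: the reference batch size of 256 and the base learning rate are assumptions for illustration, not values stated in the text.

```python
def scaled_lr(base_lr, batch_size, base_batch_size=256):
    """Linear scaling rule: scale the learning rate proportionally to
    the ratio of the effective batch size to a reference batch size."""
    return base_lr * batch_size / base_batch_size
```

For example, doubling the effective batch size from 256 to 512 doubles the learning rate.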
Appendix C Visualizing more symbols
Fig. 7 shows additional examples of symbol ids and their possible meanings. We observe that most symbol ids fire for consistent patterns. An interesting observation is that the model does not focus much on color, but rather on the textures and shapes in the images. One limitation of our approach is the fixed-size grid used in the architecture, which restricts the patch size to or . This restriction is important for efficient training; however, it results in some symbols capturing only partial concepts, such as part of a face or text. In future work, we seek to address this limitation.
Appendix D Topographic Similarity
We analyse the topographic similarity between the learned messages and images from the validation set as follows. We take 10 random images from each category and compute the pairwise Jaccard Similarity (JS) between all image pairs for that category. JS is a simple intersection over union (IoU) of the sets of symbols in the messages corresponding to two images, and indicates how similar the images are with respect to the messages generated by the Speaker (note that even though the Speaker generates a sequence of symbols, we treat it as a set for this analysis). For each pair, we also compute the Learned Perceptual Image Patch Similarity (LPIPS) metric  using an off-the-shelf VGG model. We draw a scatter plot of the two similarity metrics, shown in Fig. 8. We observe that some categories, such as ‘toucan’, ‘filing cabinet’, and ‘crab’, have very high correlation, while some categories show almost no correlation between LPIPS and Jaccard similarity. The mean and median correlations across all categories are 0.25 and 0.21, respectively.
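The Jaccard similarity used in this analysis can be sketched as below (the function name is ours; as in the text, messages are treated as sets of symbol ids, ignoring order and repetition):

```python
def jaccard_similarity(message_a, message_b):
    """Jaccard similarity (intersection over union) between the sets of
    symbols in two messages; order and repetitions are ignored."""
    a, b = set(message_a), set(message_b)
    if not a and not b:
        return 1.0  # two empty messages are defined as identical
    return len(a & b) / len(a | b)
```

For example, messages `[1, 2, 3]` and `[2, 3, 4]` share two of four distinct symbols, giving a similarity of 0.5.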
Appendix E Variable length messages
We analyze the number of symbols and the number of unique symbols appearing in each message, as illustrated in Fig. 9. Fig. 9(a) shows that each message contains multiple unique symbols, without any particular symbol excessively recurring within a message. No image uses more than 16 unique symbols, even though the maximum number of symbols that can be used by an image is 49 (from the vocabulary of 128 symbols). Fig. 9(b) shows the distribution obtained when we consider all symbols (not just unique ones) for a message. The distribution looks Gaussian, which follows from a design choice in our architecture: since, for a given image, we sample the patch ranks from a normalized and sorted list of ranks, only about half of the symbols end up getting selected, and hence the peak of the distribution is at 24-25 (half of 49).
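A toy simulation illustrates this design choice. Assuming, as an idealization of the learned ranks described above, that each of the 49 patches receives an i.i.d. uniform rank and is kept when its rank exceeds 0.5, the message length follows a Binomial(49, 0.5) distribution peaked near 24-25:

```python
import random

def simulate_message_lengths(num_images=10000, num_patches=49, seed=0):
    """Sample a uniform rank per patch and keep patches whose rank
    exceeds 0.5; returns the resulting message length per image."""
    rng = random.Random(seed)
    lengths = []
    for _ in range(num_images):
        ranks = [rng.random() for _ in range(num_patches)]
        lengths.append(sum(r > 0.5 for r in ranks))
    return lengths
```

The empirical mean of these lengths is close to 24.5, matching the peak observed in Fig. 9(b); the real speaker's learned ranks are of course not independent, so the actual distribution only approximately follows this shape.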
Appendix F Relationship between length of messages and images
We qualitatively analyze the relationship between images and the length of the message representation generated by the Speaker. We sort all images in the validation set by message length and show some images with very long and very short messages in Fig. 10. We observe that, on average, images with longer messages appear more visually complex, with a lot of clutter or a variety of objects, whereas images with shorter messages usually contain a single object.
Appendix G Visualizing saliency maps and PatchRank
We qualitatively compare the patch ranks generated by our method with the corresponding saliency maps generated using a standard technique. Since the saliency maps highlight important regions of an image, the patches deemed important by our method should overlap with these salient regions. Note that the saliency method uses a supervised classification model and the class label to generate the pixelwise importance heatmap, whereas our method generates these maps in an unsupervised manner.
Following this hypothesis, we extract saliency maps for the ImageNet validation set with the XGradCAM  approach, using the ResNet-50  model with true category labels, obtained from the official code repository. Fig. 11 illustrates the saliency maps and patch ranks obtained for a few random ImageNet validation images. We observe that the important patches ranked by our model have a high correspondence with the salient regions of the images.
Appendix H Analyzing the symbol distribution
We investigate the distribution of the patch symbols generated by our method. For each image in the ImageNet validation set, we compute the list of symbols generated by . We then calculate how frequently each symbol appears across the images in the validation set, as illustrated in Fig. 12. Even though the distribution is not uniform, we observe that all symbols occur often, suggesting that the generated symbols are not redundant. This complements our observation in Section 4.3 (Fig. 5 of the manuscript) that some symbols correspond to very common textures and patterns, such as grass or lines, while other symbols capture more specific concepts, such as faces or eyes.
Appendix I Positive Signaling - Visual Bag of Words Classification with
Representing an image as a visual bag of words is a well-known technique in image retrieval . Since our task is also to represent images with words, we devise the following way to evaluate the PatchSymbol module or . For each image in the ImageNet training and validation sets, we compute the message, or list of symbols, generated by (and filtered according to ). We then compute a feature vector for each image using the weighted frequency of the symbols in the image, i.e., tf-idf . We then train a simple complement naive Bayes classifier  on top of these feature vectors. For benchmarking, we choose two prior works that encode images to symbols and have shown strong results in generative modeling. VQ-VAE-2, or Vector Quantized Variational AutoEncoder, was initially proposed by Oord et al.  and later improved by Razavi et al.  for both conditional and unconditional large-scale image generation. We retrain the model for 100 epochs at the resolution used in the paper using 's repository. PatchVAE by Gupta et al.  proposes a structured variant of Variational AutoEncoders . We use the authors' code and retrain their models for a bottleneck (corresponding to patch size) and a bottleneck (corresponding to patch size), each for 100 epochs.
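The tf-idf featurization over symbol messages can be sketched as below. This is a minimal sketch using the smoothed-idf formulation (the same one scikit-learn defaults to); the exact weighting in our pipeline may differ, while the vocabulary size of 128 follows the text.

```python
import math
from collections import Counter

def tfidf_features(messages, vocab_size=128):
    """Compute a tf-idf vector per message. Each message is a list of
    symbol ids; tf is the raw symbol count within a message, and idf
    uses the smoothed formulation log((1 + N) / (1 + df)) + 1."""
    n = len(messages)
    df = Counter()  # document frequency: how many messages use each symbol
    for msg in messages:
        df.update(set(msg))
    idf = [math.log((1 + n) / (1 + df[s])) + 1.0 for s in range(vocab_size)]
    feats = []
    for msg in messages:
        tf = Counter(msg)
        feats.append([tf[s] * idf[s] for s in range(vocab_size)])
    return feats
```

The resulting vectors could then be fed to a complement naive Bayes classifier, e.g. scikit-learn's `ComplementNB`.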
Table 5 shows the Top-1 and Top-5 classification accuracies (%) on ImageNet. The patch symbols generated by our method outperform those generated by VQ-VAE-2 and PatchVAE on the downstream classification task by 45% and 25% (relative), respectively. Note that this classification accuracy is far below what can be obtained with standard techniques; however, the goal of this evaluation is to demonstrate that the symbols do capture meaningful information.
Method           Top-1 (%)        Top-5 (%)
VQ-VAE2          0.88 (± 0.03)    5.02
PatchVAE  ()     1.02 (± 0.07)    4.48
PatchVAE  ()     1.00 (± 0.08)    5.04
Ours ()          1.28 (± 0.06)    6.10
Ours ()          1.20 (± 0.06)    5.94