I Introduction
The recent article by A. Mitrokhin, P. Sutor, C. Fermüller, and Y. Aloimonos, “Learning Sensorimotor Control with Neuromorphic Sensors: Toward Hyperdimensional Active Perception”, which appeared in Science Robotics vol. 4 issue 30 (2019), presents a case for using a computational framework called hyperdimensional computing, also known as Vector Symbolic Architectures (VSAs), to fuse the motoric abilities of a robot with its perception system. The idea of computing with random vectors as basic objects is also known as Holographic Reduced Representations [47], Multiply-Add-Permute [14], Binary Spatter Codes [21], Binary Sparse Distributed Codes [50], Matrix Binding of Additive Terms [13], and the Semantic Pointer Architecture [7]. All these frameworks are essentially equivalent. Given the present very high level of attention to autonomous AI-empowered systems from industry and society, we hope and believe that the application of VSAs in robotics will receive appropriately increasing attention from the community of AI/robotics researchers and practitioners. Our own experience with VSAs has shown that, because they differ considerably from conventional computing paradigms, developing the intuition and understanding required for practical applications needs to be supported by extended exposure to the details and interpretation of VSAs. We therefore write this commentary to help readers of the original article comprehend and exploit the very different perspective on computation that is both required and enabled by VSAs. Hopefully this will inspire more exciting applications of VSAs to robotics.

The commentary is organized as follows. Section II presents a compact summary of the main contributions of the original article. Section III provides a brief historical excursus into VSAs as well as their current state with respect to application areas. Peculiarities of representing data in VSAs are discussed in Section IV. Section V concludes the commentary by discussing the information capacity of VSAs.
II A digest of the original article
The main focus of the original article is addressing the problem of active perception, where an agent’s knowledge emerges as the result of an interplay between the agent’s actions and the sensory input arising from or causing those actions. It is proposed to use the VSA computational framework to jointly represent the sensory data and the agent’s motor/action information taken to generate these data. The authors of the article convey the message that VSAs are beneficial to the area of robotics as a means for implementing active perception. The article delivers two major technical observations:

VSA-based representation of different sensor modalities enables the formation and flexible manipulation of memory sequences (time series) of the sensory data, where parts of the representation can be easily modified by either inserting or deleting some sensory data;

A joint representation of sensory data and motor/action information using VSAs enables a more streamlined interface to conventional machine learning architectures and results in faster and more resource-efficient learning with comparable accuracy.
VSAs operate with high-dimensional vectors (HD vectors), typically with several thousand elements. They are also referred to in the original article as HBV representations or distributed representations. The foundation of the solution described in the article is a variant of VSAs based on dense binary HD vectors, originally introduced by P. Kanerva [22]. In this variant, elements of HD vectors take only the binary values “0” or “1”. Computations in VSAs are based on compositions of three simple arithmetic operations on HD vectors: binding, implemented as element-wise exclusive OR (XOR) and denoted as \(\oplus\); bundling, implemented as the consensus sum/majority rule; and permutation of the elements. VSAs also require a similarity metric. The similarity between two binary HD vectors is characterized by the normalized Hamming distance (denoted as \(d_h\)). It measures the proportion of element positions in which the two compared HD vectors (\(\mathbf{a}\) and \(\mathbf{b}\)) have different element values:

\[ d_h(\mathbf{a}, \mathbf{b}) = \frac{1}{D}\sum_{i=1}^{D} a_i \oplus b_i, \tag{1} \]

where \(D\) denotes the dimensionality of the HD vectors.
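These three operations and the distance in (1) take only a few lines to realize. The following is a minimal sketch in Python with NumPy; the dimensionality, the random seed, and the use of a cyclic shift as the permutation are our own illustrative choices, not prescriptions from the original article.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # dimensionality of the HD vectors

def random_hd():
    """A random dense binary HD vector: each element is 0 or 1 with probability 0.5."""
    return rng.integers(0, 2, D, dtype=np.uint8)

def bind(a, b):
    """Binding: element-wise XOR (its own inverse)."""
    return a ^ b

def bundle(vectors):
    """Bundling: element-wise majority rule (consensus sum), ties broken randomly."""
    s = np.sum(vectors, axis=0)
    out = (2 * s > len(vectors)).astype(np.uint8)
    ties = 2 * s == len(vectors)
    out[ties] = rng.integers(0, 2, int(ties.sum()))
    return out

def permute(a, k=1):
    """Permutation of the elements, here a cyclic shift by k positions."""
    return np.roll(a, k)

def dist(a, b):
    """Normalized Hamming distance, Eq. (1)."""
    return np.mean(a != b)

a, b, c = random_hd(), random_hd(), random_hd()
print(dist(a, b))                    # ~0.5: random HD vectors are quasi-orthogonal
print(dist(bind(bind(a, b), b), a))  # 0.0: unbinding recovers the bound operand
print(dist(bundle([a, b, c]), a))    # ~0.25: the bundle stays similar to each component
```

The last line illustrates the key property exploited throughout the original article: the bundle is measurably closer to each of its components than to an unrelated random vector.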
The authors of the original article identify these arithmetic operations with semantics specific to the article’s focus. The binding operation was used in two contexts: to construct the representation of an unordered set of items and to construct the representation of the assignment of a value to a variable. The permutation operation was used to construct the representation of the position of an item in an ordered sequence. Finally, the bundling operation was used to construct the representation of a composite data structure, which is essentially a set of assignments of values to variables. It is important to understand that this identification of VSA operations with semantics is for expository convenience in the original article and is not an essential part of VSAs. We present an extended discussion of some different usages of the same fundamental arithmetic operations in Section IV.
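To make the value-variable (role-filler) semantics concrete, the sketch below, assuming NumPy and entirely hypothetical variable and value names, encodes a small record as a bundle of bindings and recovers one value by unbinding followed by a nearest-neighbour “clean-up” against the item memory.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 10_000

def hd():
    return rng.integers(0, 2, D, dtype=np.uint8)

def majority(vs):
    """Element-wise majority rule; ties broken randomly."""
    s = np.sum(vs, axis=0)
    out = (2 * s > len(vs)).astype(np.uint8)
    ties = 2 * s == len(vs)
    out[ties] = rng.integers(0, 2, int(ties.sum()))
    return out

def dist(a, b):
    return np.mean(a != b)

# Atomic HD vectors for hypothetical variables (roles) and values (fillers)
roles = {n: hd() for n in ["shape", "color", "size"]}
values = {v: hd() for v in ["circle", "red", "small", "square", "blue", "large"]}

# The record is a bundle of variable-value bindings (XOR): a composite
# data structure held in a single HD vector
record = majority([roles["shape"] ^ values["circle"],
                   roles["color"] ^ values["red"],
                   roles["size"] ^ values["small"]])

# Query "what is the color?": unbind the variable, then clean up the noisy
# result by nearest neighbour in the item memory of values
noisy = record ^ roles["color"]
print(min(values, key=lambda v: dist(noisy, values[v])))  # -> red
```

The unbound result is noisy (the other bound pairs act as crosstalk), but at this dimensionality the correct value is recovered reliably.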
In essence, the solution proposed by the authors consists of the following steps:

Representation of an entire visual frame as one HD vector, using a chain of binding operations on HD vectors corresponding to pixel intensities. The pixel intensity vectors are permuted to encode the pixels’ positions in the frame.

A procedure for generating a set of HD vectors to represent the range of pixel intensities. The similarities between the generated HD vectors preserve the underlying similarities between the pixel intensity values. This part is not highlighted by the authors as a specific contribution of the article, since it is based on a technique they previously proposed.

Representation of a visual sequence as a memory HD vector. The frames represented as HD vectors produced at step 1 are assigned to HD vectors representing the time ticks using the binding operation. This produces an HD vector that represents the assignment of a value (the HD vector of a visual frame) to a variable (the HD vector corresponding to a time tick). The whole visual sequence is represented by an HD vector storing the set of all HD vectors representing value-variable pairs in the sequence. This HD vector is calculated by applying the bundling operation to all HD vectors for value-variable pairs.

A pipeline for learning sensorimotor relationships, that is, the correspondence of the sensory data (visual sequence) to the motor/action producing it. This is achieved by producing a joint HD vector of a visual frame and the value of the robot’s velocity under which the frame was observed. This joint representation was used in a hetero-associative memory mode, which allows obtaining a prediction of the target velocity for a given visual frame.
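The steps above can be sketched end to end. In the toy example below (Python with NumPy; the frame vectors and the velocity codebook are hypothetical stand-ins for the article’s encoded frames and velocities), the hetero-associative memory is a single bundle of frame-velocity bindings, and the velocity for a stored frame is predicted by unbinding followed by clean-up.

```python
import numpy as np

rng = np.random.default_rng(2)
D = 10_000

def hd():
    return rng.integers(0, 2, D, dtype=np.uint8)

def majority(vs):
    """Element-wise majority rule; ties broken randomly."""
    s = np.sum(vs, axis=0)
    out = (2 * s > len(vs)).astype(np.uint8)
    ties = 2 * s == len(vs)
    out[ties] = rng.integers(0, 2, int(ties.sum()))
    return out

def dist(a, b):
    return np.mean(a != b)

# Stand-ins for the frame HD vectors of step 1 and a codebook of velocity HD vectors
frames = [hd() for _ in range(5)]
velocities = {v: hd() for v in ["slow", "medium", "fast"]}
observed = ["slow", "slow", "medium", "fast", "fast"]  # velocity under each frame

# Hetero-associative memory: a single bundle of frame-velocity bindings
memory = majority([f ^ velocities[v] for f, v in zip(frames, observed)])

# Predict the velocity under which frame 3 was observed: unbind and clean up
query = memory ^ frames[3]
print(min(velocities, key=lambda v: dist(query, velocities[v])))  # -> fast
```

This is the single-pass feedforward mode discussed below: no iterative weight optimization is involved, only bundling at storage time and unbinding plus clean-up at query time.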
In fact, it is even possible to interpret and implement VSA operations in terms of standard artificial neural networks (ANNs). However, this interpretation risks raising unhelpful expectations. For example, unlike typical ANNs, an ANN implementing the VSA operations (i.e., binding, bundling, and permutation) has fixed connection weights and consequently no need for a process to incrementally optimize those weights. “Learning” in a circuit composed from the VSA operations is often implemented by bundling (effectively, a single step of addition), but may also be implemented as an update of an auto-associative memory or Sparse Distributed Memory [20], or even an update of a non-neural content-addressable memory. Consequently, it is probably more helpful to think of the VSA operations as signal processing transformations that work on very high-dimensional signals in order to manipulate the discrete data structures represented by vectors.
As an intermediate result in the original article, the authors briefly report on an experiment where a fully connected ANN effectively implemented the final step of decoding the distributed representations, showing that one can get comparable accuracy while avoiding the complexity of training a convolutional ANN. The authors further highlighted as their main finding that matching accuracy could be achieved with pure single-pass feedforward operations by querying the hetero-associative memory storing HD vectors of scenes bound with their velocities. While this is definitely a valid approach, we emphasize the importance of a scenario where conventional ANNs (e.g., fully connected ones) are trained by an iterative process using HD vectors as the input representation. Our own experiments with distributed representations of texts, using them as input to conventional machine learning classifiers for solving natural language processing tasks [3], have shown that it is possible to achieve substantial speedups of the training and operation phases as well as a significant reduction of memory consumption.

One of the contributions highlighted by the authors of the original article is the mapping of sensory data from a neuromorphic dynamic vision sensor camera into a VSA representation. In essence, they represent the averaged event statistics over a predefined time interval as pixel intensities. The resulting frame is then treated as a one-channel visual frame.
Finally, along the way to their technical solution, the authors present theoretical findings on the capacity of the memory in HD vectors. Specifically, the authors computed the expected normalized Hamming distance for an HD vector storing the result of the bundling operation (see Section V for details).
III Vector Symbolic Architectures: historical notes and the current state
VSAs is an umbrella term for a family of computational frameworks using high-dimensional distributed representations [16] and relying on the algebraic properties provided by a set of operators (bundling, binding, and permutation) on the high-dimensional representation space. VSAs are intimately related to tensor product binding, introduced by P. Smolensky [66]. His method uses the tensor product as the binding operator, which results in the dimensionality of the resultant vector being the product of the dimensionalities of the operand vectors. The key points demonstrated were that it is possible to represent complex composite data structures (usually thought of as symbolic data structures) in a vector space, and that it is possible to define transformations on that vector space which manipulate the represented composite structures without needing to decompose them.

VSA binding operations can be interpreted as forming the tensor product of the operands and then projecting that result back into a vector space with the same dimensionality as each of the operand vectors. Consequently, the dimensionality of the representational space remains constant, whereas the dimensionality of tensor product binding representations increases exponentially with the number of terms being bound. As a family, VSAs are committed to using a vector space of fixed dimensionality for the representation and manipulation of composite data structures by exploiting the algebraic properties of a small number of operators on that vector space. Specific VSA frameworks were introduced by T. Plate [47], P. Kanerva [21], R. Gayler [14], D. Rachkovskij [53], and others [2, 13, 34] over the 1990s and 2000s. Interested readers are referred to [62], where eight different VSA frameworks are compared and taxonomized. The inspirations of these authors vary, but precursors of these works can be found in, at least, convolutional models of human memory [42, 40] and models of associative memories inspired by holography [72, 5, 64].
Starting off with the neurophysiological inspiration that cognitive functions in biological brains are achieved through highly parallel activations of a large number of neurons, which through the learning process form statistically persistent patterns, VSAs offer a computational model where everything (items, entities, concepts, symbols, scalars, etc.) is represented by HD vectors, which act as a distributed representation. VSAs provide an extreme form of distributed representation in that it is holographic and stochastic. The holographic property means that the information content of the representation is spread over all the dimensions of the high-dimensional space: any subset of the dimensions can be used as a noisy approximation to the full set of dimensions. The representations have to be interpreted stochastically, in that the relative proximities of vectors (as measured by the angles between them) are the important properties, while the value of any specific dimension makes only a small contribution to the direction of a vector and is consequently nearly irrelevant in isolation. This is a major departure from conventional computing, where each component of a representation has a different specific meaning and the exact value of each component is generally important.
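The holographic property is easy to check numerically: distances estimated from a small random subset of dimensions approximate the distances computed on the full vectors. Below is a small demonstration assuming NumPy; the 20% noise level and the 5% subset are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
D = 10_000

a = rng.integers(0, 2, D, dtype=np.uint8)
b = a.copy()
b[rng.choice(D, 2000, replace=False)] ^= 1   # noisy copy of a: 20% bit flips
c = rng.integers(0, 2, D, dtype=np.uint8)    # unrelated random vector

# Compare distances on all dimensions with estimates from a random 5% subset
idx = rng.choice(D, 500, replace=False)
print(np.mean(a != b), np.mean(a[idx] != b[idx]))  # 0.2 and ~0.2
print(np.mean(a != c), np.mean(a[idx] != c[idx]))  # ~0.5 and ~0.5
```

Any such subset acts as a noisy but usable proxy for the whole vector, which is exactly what makes the representation robust to losing or corrupting individual dimensions.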
Compared to standard ANNs and more recent neural techniques such as Deep Learning, the number of people working on VSAs, and, therefore, the number of publications, has been exceptionally low. One possible explanation for this disparity is that VSAs are very different from other computational approaches, so it can take a considerable investment of effort for a researcher to become familiar enough to appreciate the potential advantages. Compounding this, VSAs can be implemented by ANNs and have often been presented as a type of neural network, but because VSAs are so different from standard ANNs, the prior knowledge brought from standard ANNs may actually hinder understanding VSAs. Another, possibly cynical, explanation is that VSAs are hard relative to standard ANNs precisely because they do not rely on optimization of a large number of connection weights. When a VSA system works on some task, it does so because of good design of the system, not because vast amounts of data and computational resources were thrown at an optimization process.
However, the level of VSA activity has been increasing over the years. Partly, this is due to the accumulation of interesting use cases. Researchers have demonstrated the utility of VSAs in different applications (see the end of this section), which then serves as encouragement and inspiration for future researchers.
In recent years the level of interest in VSAs has increased dramatically. The main driver of this has been the realization that VSAs may be well suited to the next generation of computing hardware. Digital computing hardware has been constantly shrinking in size, but is now reaching the scale where individual computational elements will be inherently unreliable. Also, although hardware can be made massively parallel, the software capability to effectively use that parallelism is badly lagging. Because of their holographic and stochastic properties, VSAs may be ideally suited as a basis for computation on massively parallel, unreliable hardware [55]. Consequently, there has been a dramatic upsurge of research on hardware implementations of VSAs [38, 37, 63].
Due to their nature, VSAs can be interpreted as a framework allowing the manipulation of discrete structures with analog computing. Indeed, as we will see in Section IV, mapping discrete structures to HD vectors is not a very complicated problem, though it has its caveats. Moreover, the use of high dimensionality makes the representations very robust against errors (e.g., bit flips) appearing during the computation. This has a counterintuitive effect, well-studied in stochastic computing [1] (a related, six-decades-old computational paradigm): certain computational tasks can be implemented more efficiently than in conventional computing on exact digital devices. Indeed, intuition fails easily at this claim. How can computing with vectors of several hundred dimensions be more efficient than computing with just a few bits? A short answer is that the robustness to noise allows working on approximate analog hardware, which intrinsically introduces noise into the results of computations. This noise could be devastating for computations assuming precise values of individual positions, which is not the case in VSAs. In particular, there are recent developments in hardware [23] that suggest designs based on memristive technologies, tailored to VSA operations, using only a fraction of the energy needed by conventional computer architectures. Another prominent recent work [11] demonstrates how representations of a particular VSA framework, Frequency Domain Holographic Reduced Representations [48], can be mapped to spike timing codes, which opens possibilities for implementing VSAs on neuromorphic hardware.

In general, despite being relatively little known outside a small community of researchers, VSAs have a broad range of possible applications. Since the whole area has always been close to cognitive science, many VSA use cases span different subareas of AI, such as cognitive architectures [6], analogical reasoning [60, 52, 8, 36], and word embeddings [19, 61, 71] and n-gram statistics [18] for natural language processing. Recently, there have also been numerous applications of VSAs to machine learning [57, 58, 28, 56]. AI, however, is not the only application area for VSAs. There are other areas of computer science where VSAs could be useful, for example, for implementing conventional data structures such as finite-state machines [45, 73]. Randomized algorithms are another such area, as it was recently shown that a data structure for approximate membership query known as the Bloom filter [69] is a subclass of VSAs [31]. There are even examples of applying VSAs to such areas as communications [17, 26] and workflow scheduling [65].
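The correspondence with Bloom filters noted in [31] is straightforward to sketch: atomic HD vectors are sparse, bundling is element-wise OR, and membership is tested on the active bits. The snippet below is our own minimal illustration, not the construction from [31]; the keys, the parameter K, and the key-dependent seeding scheme are hypothetical choices.

```python
import numpy as np

D = 10_000   # filter length
K = 10       # active bits per key, playing the role of K hash functions

def sparse_hd(key):
    """Sparse binary atomic HD vector for a key; the K active positions
    correspond to the K hash values of a classic Bloom filter."""
    r = np.random.default_rng(abs(hash(key)) % 2**32)  # key-dependent positions
    v = np.zeros(D, dtype=np.uint8)
    v[r.choice(D, K, replace=False)] = 1
    return v

# Bundling via element-wise OR builds the filter from the members' HD vectors
filt = np.zeros(D, dtype=np.uint8)
for member in ["lion", "tiger", "bear"]:
    filt |= sparse_hd(member)

def maybe_member(key):
    """Membership test: all active bits of the key's HD vector must be set.
    False positives are possible (as in any Bloom filter); false negatives are not."""
    v = sparse_hd(key)
    return bool(np.all(filt[v == 1] == 1))

print(maybe_member("tiger"), maybe_member("wolf"))  # True False (almost surely)
```

Seen this way, a Bloom filter is simply a bundle of sparse atomic HD vectors queried with a thresholded similarity test.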
When it comes to applications of VSAs in the area of robotics, an early work [35] used VSAs to program a robot’s reactive behavior. Nevertheless, even today, the number of applications of VSAs in robotics is fairly limited. At the same time, the combination of the two enabling factors above, energy-efficient hardware and broad functionality, makes VSAs very promising for robotic applications. We see confirmation of this promise in the amount of recent activity. For example, a very interesting recent work [44] appeared almost at the same time as the article [41]. The work [44] provides an introduction to VSAs for robotics and overviews the use of VSAs for different robotic tasks. In particular, these tasks were:

recognition of objects from multiple viewpoints, which is important for mobile robot localization by recognizing known landmarks;

recognition of objects for manipulation, and other robotics tasks;

learning and recall of reactive behavior [43];

sequence processing for place recognition by implementing a SLAM algorithm with VSAs.
Thus, given the broad spectrum of possible applications, we completely agree with the authors that VSAs hold promise for robotics, machine learning, AI, and computer science in general.
IV Data representation in VSAs
IV-A Atomic representations
When designing a VSA-based system for solving a problem, it is common to define a set of the most basic items/entities/concepts/symbols/scalars for the given problem and assign them HD vectors, which are referred to as atomic HD vectors. The process of assigning atomic HD vectors is often referred to as mapping (sometimes the terms projection and embedding are also used). In the early days of VSAs, most works focused on symbolic problems. In the case of working with symbols or characters, one can easily imagine many tasks where a reasonable assumption would be that the symbols are not related at all and, therefore, their atomic HD vectors can be generated at random. A recent example of using this assumption to create a useful VSA algorithm is a method for mapping conventional n-gram statistics into an HD vector [18]. On the other hand, it was also realized that there are many problems where assigning atomic HD vectors randomly does not lead to any useful behavior of the designed system. Therefore, in general, for a given problem we need to choose atomic representations of items such that the similarity between the representations corresponds to the properties that we care about and want to drive the system’s dynamics.
In practice, numerical values are a data type present in many problems, e.g., in the area of machine learning. Arguably, representing numerical values with random HD vectors is not necessarily the best option; therefore, it is worth considering similarity-preserving HD vectors. As mentioned in [41], one way to do this is to directly embed the linear relationship between numerical values in HD vectors, which is, indeed, a known approach for initializing similarity-preserving atomic HD vectors (see, e.g., [71, 54] and Appendix I in [56]). It is also worth mentioning that a number of other mapping techniques [51] for representing numerical values as HD vectors have been proposed. These mappings (including the linear one) were shown to perform well on some classification problems [32, 56]. Another work [49] presented numerous similarity-preserving techniques (encoders) for mapping different data types to sparse binary HD vectors. These techniques can even be generalized to the case of dense binary HD vectors. There is also a promising mapping presented in [59] for forming real-valued HD vectors from numerical vectors.
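As an illustration of the linear mapping mentioned above, the sketch below (our own minimal version, assuming NumPy; 256 levels chosen to match pixel intensities) builds level HD vectors by flipping progressively larger slices of a fixed random set of D/2 positions, so that the normalized Hamming distance grows linearly with the difference between levels:

```python
import numpy as np

rng = np.random.default_rng(5)
D = 10_000
LEVELS = 256  # e.g., the range of pixel intensities

# Linear similarity-preserving mapping: start from a random HD vector for the
# lowest level and flip progressively larger slices of a fixed set of D/2 positions
flip_order = rng.permutation(D)[: D // 2]
base = rng.integers(0, 2, D, dtype=np.uint8)
levels = []
for i in range(LEVELS):
    v = base.copy()
    n = round(i * (D // 2) / (LEVELS - 1))  # how many positions to flip for level i
    v[flip_order[:n]] ^= 1
    levels.append(v)

def dist(a, b):
    return np.mean(a != b)

print(dist(levels[0], levels[255]))  # 0.5: the extreme levels are quasi-orthogonal
print(dist(levels[0], levels[128]))  # ~0.25: distance grows linearly with |i - j|
```

Because the flipped positions are nested, the distance between any two levels is exactly proportional to the difference of their indices, which is the similarity structure one wants for scalar values.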
On the other hand, if there is data associated with a problem, an alternative to such mappings is to obtain atomic HD vectors from the available data via, e.g., an optimization process. For example, in [41] the optimization-based mapping of [68] was used. Fig. 3 in [41] shows the heat map of the normalized Hamming distance structure for the representations of intensities obtained using the optimization-based mapping. It is interesting that a similar distance structure can be obtained using simpler methods, such as a nonlinear mapping [51]. The corresponding heat map is presented in Fig. 1. As in Fig. 3 in [41], distances increase away from the diagonal, but the difference from the optimization-based mapping is that the nonlinear mapping is more “aggressive” in increasing the distances between neighboring intensities. This observation does not negate the importance of optimization-based representations. It rather shows that similar distance structures can be obtained in different ways.
Last, it is worth mentioning that the optimization-based method [68] for obtaining similarity-preserving HD vectors can be contrasted with the Random Indexing method [19]. Random Indexing also implicitly (i.e., without constructing the co-occurrence matrix) uses co-occurrence statistics to form similarity-preserving HD vectors from available data. The difference, however, is that Random Indexing is optimization-free, as it forms representations in a single pass over the data and thus can operate online, while the optimization required by [68] calls for offline processing. It is an interesting research question whether these methods arrive at similar representations in terms of inter-item distance structure, or whether the optimization process brings extra benefits by refining that structure.
IV-B Composite representations
The atomic representations per se are hardly sufficient to solve a problem. Instead, they are used as building blocks in forming composite representations. Besides atomic HD vectors, a set of operations for manipulating HD vectors is also used when constructing composite representations. During this process, as a designer you are free to choose between different ways of both forming atomic HD vectors and combining them via the known operations. The main guiding principle, however, is that your way of constructing composite representations should result in a similarity structure that supports the properties necessary for the problem at hand. Nevertheless, no choice is inherently right or wrong: the choice of representation only has to be compatible with how it will be used, much like the choice of data structures in conventional programming.
As emphasized in [44], in VSAs there is no structured (with respect to methodology) way of designing systems. It is also true that the area lacks well-defined design patterns. On the other hand, one could argue that there are best practices which are commonly used when building a solution and which could be seen as a set of available design choices. For example, for the task of testing membership in a set, there is a well-known data structure called the Bloom filter, which has been shown to be a special case of VSAs [31] where the bundling operation is implemented via OR and the atomic HD vectors are sparse. In [41], an alternative way of representing a set, through a chain of binding operations involving all atomic HD vectors, was used. When using this design choice it is important to keep in mind that if similar HD vectors represent different members of the set, these HD vectors will largely cancel each other when XOR is used for binding. Representation of ordered pairs or ordered sequences via the binding and permutation operations is another notable example of VSA best practices [22]. For instance, it has proven useful for representing sequential information when forming VSA-based word embeddings [61]. As mentioned in the article, an interesting property of an HD vector representing an ordered sequence is that a single permutation applied to this vector shifts the whole sequence either forward or backward. We believe that this interesting phenomenon is a potentially powerful design choice. An example of its use is shifting the HD vector of an ordered sequence by permutation to search for the best alignment of two sequences [30].

The article also proposed a new design choice for forming a composite representation of a scene model for dynamic vision sensors. Dynamic vision sensors typically send a stream of visual “events” corresponding to a series of fixations rather than representing a scene as a raster image. However, until today, it is not very clear how to represent such streams in the context of VSAs. Transforming the stream into an image, which is then represented as an HD vector, mitigates this issue, since a scene becomes a “bag of structured events”. The proposed design choice differs from existing choices [24, 12, 70] for images: [24] focuses on storing images in HD vectors, while [12, 70, 39] proposed choices that preserve local similarity for nearby locations in an image. In contrast to [12, 70, 39], the proposed design choice would produce very different representations even when the intensities of only a single pixel are very different (i.e., their corresponding HD vectors are quasi-orthogonal). One could imagine that such a choice would not be translation invariant, which is a very important property in computer vision applications. Translation-invariant mapping of images is a challenging problem; only a few works indirectly and partially address it for VSAs. The work [44] proposes to map all possible rotations of an object, with a given step in rotation degrees, into a single composite HD vector representing the object. Similar ideas can also be found in the Map Seeking Circuit approach [4]. The ideas used in Map Seeking Circuits might possibly be implemented with VSAs using the mappings from [70, 39] to form a composite HD vector for, e.g., several copies of the same object at different spatial locations. Although, as mentioned above, the mapping technique presented in the original article is just one possible design choice, one should be aware of its possible limitations in computer vision tasks.

Moreover, the list of design choices is not exhaustive, and new mechanisms are being discovered. For example, probing a component from an HD vector representing an ordered pair is a relatively simple task. This task becomes much more complex (the complexity grows exponentially) in the case of an ordered sequence, or when a set is represented via a chain of binding operations. However, a very recent work [25] proposed an elegant mechanism (i.e., a design choice) called the Resonator Circuit to address this problem. Other interesting examples of newly discovered design choices are the works [15, 33], where new ways of using and implementing the binding operation were proposed. In [33], fractional binding was used to represent continuous spaces, while [15] presented a new binding operation for Holographic Reduced Representations [47]. To conclude, it should be admitted that there is unlikely to be one correct way of designing a VSA-based solution to a problem. Therefore, when solving any problem with VSAs, the resulting solution is a choice, not the choice.
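As a small illustration of the sequence-by-permutation design choice discussed above, the sketch below (Python with NumPy; the item alphabet is hypothetical and the permutation is a cyclic shift) encodes a sequence by permuting each item according to its position, recovers the item at a given position by undoing the permutation, and shows that a single permutation of the composite vector shifts the whole sequence at once:

```python
import numpy as np

rng = np.random.default_rng(6)
D = 10_000

def hd():
    return rng.integers(0, 2, D, dtype=np.uint8)

def majority(vs):
    """Element-wise majority rule; ties broken randomly."""
    s = np.sum(vs, axis=0)
    out = (2 * s > len(vs)).astype(np.uint8)
    ties = 2 * s == len(vs)
    out[ties] = rng.integers(0, 2, int(ties.sum()))
    return out

def dist(a, b):
    return np.mean(a != b)

def rho(v, k=1):
    """Permutation implemented as a cyclic shift by k positions."""
    return np.roll(v, k)

items = {c: hd() for c in "abcde"}

def encode(seq, offset=0):
    """Position i of the sequence is marked by applying the permutation i times."""
    return majority([rho(items[c], i + offset) for i, c in enumerate(seq)])

s = encode("abc")

# Recover the item at position 1: undo its permutation, then clean up
probe = rho(s, -1)
print(min(items, key=lambda c: dist(probe, items[c])))  # -> b

# A single permutation of the composite vector shifts every position at once
print(dist(rho(s, 1), encode("abc", offset=1)))  # -> 0.0 (m = 3, so no ties)
```

The exact match in the last line holds because the permutation acts element-wise and therefore commutes with the element-wise majority rule.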
IV-C Composite representations for active perception
One of the motivations for the article is the so-called “active perception” and the fact that actions and perceptions are often stored separately. An intuitive way of achieving “active perception” would be to focus on the perception that is “conditional” on the action taken. In the context of VSAs, this translates to representing the sensory information (i.e., perception), or its change, as a binding with a representation of the action, thus forming a composite HD vector that acts as a sensorimotor representation. For instance, when considering the VSA representation of frames of role-filler pairs [21], roles can represent arbitrarily complex structures (unlike classic AI, where roles are atomic symbols), e.g., perceptions, while fillers can represent actions; hence, the whole representation can be seen as a sensorimotor program. Thus, it is fair to say that the notion of sensorimotor representations in the form of HD vectors has been highlighted in the VSA literature from the beginning. For example, Appendix I in [46] deals with the representation of numbers in the context of arithmetic tables, where numbers and operands can be seen as perception while the result of the operation can be seen as an associated action. As additional results supporting the idea of forming sensorimotor representations via binding HD vectors for sensory and motor/action information, it is worth mentioning the works [35, 27, 9]. In [35], VSAs were used to program a robot’s behavior. In [27], VSAs were used to form scene representations for experiments with honey bees; it was shown that VSAs can mimic the process of learning concepts through reinforcement. The work [9] proposed a VSA-based approach for learning behaviors based on observing and associating sensory and motor/action information.

V Information capacity of VSAs
The original equation in section “Theoretical limits on capacity of HBVs” [41] for the expected normalized Hamming distance after the bundling operation is:
(2) 
where is the number of component HD vectors involved in the bundling operation and is the probability of a bit flip in either a component HD vector or in the result of the bundling operation. However, when using the consensus sum (majority rule)
should be odd (when
is even it is implicit since ties are broken randomly). In this case, is not an integer, therefore, indices in (2) should be written as:(3) 
Note that is a binomial coefficient and can be denoted as then (3) can be rewritten as:
(4) 
Moreover, we know that:
(5) 
Since is odd we can also write (5) as:
(6) 
Because binomial coefficients are symmetric (e.g., $\binom{n-1}{k} = \binom{n-1}{n-1-k}$) we can state that $\sum_{k=0}^{\frac{n-3}{2}}\binom{n-1}{k} = \sum_{k=\frac{n+1}{2}}^{n-1}\binom{n-1}{k}$. This allows us to modify (6) to:

(7) $\sum_{k=\frac{n+1}{2}}^{n-1}\binom{n-1}{k} = \frac{1}{2}\left(2^{n-1} - \binom{n-1}{\frac{n-1}{2}}\right)$
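The binomial identities in (5)–(7) are easy to confirm numerically for small odd $n$; a minimal check (the helper name `upper_tail` is ours):

```python
import math

def upper_tail(n: int) -> int:
    """Sum of binomial(n-1, k) for k from (n+1)/2 to n-1, the left side of (7)."""
    return sum(math.comb(n - 1, k) for k in range((n + 1) // 2, n))

for n in (3, 5, 9, 21):  # n odd, as required by the majority rule
    central = math.comb(n - 1, (n - 1) // 2)
    # identity (5): the full row of Pascal's triangle sums to 2^(n-1)
    assert sum(math.comb(n - 1, k) for k in range(n)) == 2 ** (n - 1)
    # identities (6)-(7): two equal tails plus the central coefficient
    assert 2 * upper_tail(n) + central == 2 ** (n - 1)
```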
From (7) we get an alternative expression for the left term in (4):

(8) $(1-p)\sum_{k=\frac{n+1}{2}}^{n-1}\frac{\binom{n-1}{k}}{2^{n-1}} = (1-p)\left(\frac{1}{2} - \frac{\binom{n-1}{\frac{n-1}{2}}}{2^{n}}\right)$
Note that when $p=0$, (4) is equivalent to (8). This corresponds to the case when there is no external noise (i.e., no bit flips) present in the HD vectors. This case was considered in [21], where the right-hand side of (8) was presented.
Moreover, due to the symmetry of binomial coefficients we know that $\sum_{k=0}^{\frac{n-1}{2}}\binom{n-1}{k} = \sum_{k=\frac{n-1}{2}}^{n-1}\binom{n-1}{k}$; therefore, when combining this with (5), we can express the right term in (4) through the left term as:

(9) $p\sum_{k=0}^{\frac{n-1}{2}}\frac{\binom{n-1}{k}}{2^{n-1}} = p\left(1 - \sum_{k=\frac{n+1}{2}}^{n-1}\frac{\binom{n-1}{k}}{2^{n-1}}\right) = p\left(\frac{1}{2} + \frac{\binom{n-1}{\frac{n-1}{2}}}{2^{n}}\right)$
Thus, using the simplified expressions from (8) and (9), (4) can be simplified to:

(10) $\Delta h = (1-p)\left(\frac{1}{2} - \frac{\binom{n-1}{\frac{n-1}{2}}}{2^{n}}\right) + p\left(\frac{1}{2} + \frac{\binom{n-1}{\frac{n-1}{2}}}{2^{n}}\right)$
which eventually can be written as:

(11) $\Delta h = \frac{1}{2} - (1-2p)\,\frac{\binom{n-1}{\frac{n-1}{2}}}{2^{n}}$
Note that (11) is also valid in the case (shown in equation (5) in [55]) when $p$ characterizes the bit-flip probability in the resultant HD vector (after bundling) while the components in the item memory are noise-free. For visualization purposes, Fig. 2 presents the analytical solution in (11) together with the results of simulations for several different values of $p$.
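Under our reading of the derivation, the closed form in (11) and the two-term sum form in (4) can be cross-checked numerically; the sketch below (function names are ours) asserts their agreement for a range of odd $n$ and bit-flip probabilities $p$.

```python
import math

def delta_h_sum(n: int, p: float) -> float:
    """Expected normalized Hamming distance via the two-term sum form of (4)."""
    norm = 2 ** (n - 1)
    left = sum(math.comb(n - 1, k) for k in range((n + 1) // 2, n)) / norm
    right = sum(math.comb(n - 1, k) for k in range((n - 1) // 2 + 1)) / norm
    return (1 - p) * left + p * right

def delta_h_closed(n: int, p: float) -> float:
    """Closed-form expression of (11)."""
    return 0.5 - (1 - 2 * p) * math.comb(n - 1, (n - 1) // 2) / 2 ** n

# The two expressions agree to floating-point precision.
for n in (3, 5, 15, 101):
    for p in (0.0, 0.05, 0.25, 0.5):
        assert abs(delta_h_sum(n, p) - delta_h_closed(n, p)) < 1e-12
```

At $p = 0.5$ both expressions collapse to $0.5$: a fully noisy vector carries no information about the bundle.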
Importantly, (11) is not sufficient to calculate the information capacity of HD vectors, as it only provides the expected normalized Hamming distance. The capacity also depends on such parameters as the dimensionality of the HD vectors and the size of the item memory, which stores the component (atomic) HD vectors. Early results on the capacity were given in [46, 48]. Some ideas for the case of binary/bipolar HD vectors were also presented in [29, 13]. Arguably the most comprehensive analysis of the capacity of different VSA frameworks, and even of some classes of recurrent artificial neural networks, has recently been presented in [10]. Additionally, [67] elaborates on methods for recovering information from composite HD vectors. Lastly, it is worth noting that the result of the bundling operation need not be used as the system’s memory: the resultant HD vectors could, in turn, be stored in some associative memory. For example, Sparse Distributed Memory [20] is a natural candidate for that.
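To illustrate why the expected distance matters for capacity, the sketch below bundles a few vectors from a small item memory and recovers them by a nearest-neighbor Hamming search; the symbol names and sizes are arbitrary illustrative choices, not anything from the original article.

```python
import random

DIM = 10_000  # dimensionality of the HD vectors
rng = random.Random(0)

def hamming(a, b):
    """Normalized Hamming distance between two binary HD vectors."""
    return sum(x != y for x, y in zip(a, b)) / DIM

# Item memory: a code book of random binary HD vectors for atomic symbols.
item_memory = {name: [rng.randint(0, 1) for _ in range(DIM)]
               for name in ("up", "down", "left", "right", "stop")}

# Bundle an odd number of components with the majority rule (no ties).
components = ("up", "left", "stop")
bundle = [1 if sum(col) >= 2 else 0
          for col in zip(*(item_memory[name] for name in components))]

# Recovery: every stored component is markedly closer to the bundle
# (around 0.25 for n = 3) than any unrelated vector (around 0.5).
for name, vec in sorted(item_memory.items(),
                        key=lambda kv: hamming(bundle, kv[1])):
    print(f"{name:5s} {hamming(bundle, vec):.3f}")
```

The gap between the two distance levels is what a nearest-neighbor lookup in the item memory exploits; as more vectors are bundled, the gap shrinks toward zero, which is the capacity limitation discussed above.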
References
 [1] (2018) Computing with Randomness. IEEE Spectrum 55 (3), pp. 46–51. Cited by: §III.
 [2] (2009) Geometric Analogue of Holographic Reduced Representation. Journal of Mathematical Psychology 53 (), pp. 389–398. Cited by: §III.
 [3] (2020) HyperEmbed: Tradeoffs Between Resources and Performance in NLP Tasks with Hyperdimensional Computing enabled Embedding of ngram Statistics. arXiv:2003.01821, pp. 1–17. Cited by: §II.
 [4] (2002) MapSeeking Circuits in Visual Cognition. Stanford University Press. Cited by: §IVB.
 [5] (1973) Convolution and Correlation Algebras. Kybernetik 13 (2), pp. 113–122 (en). External Links: ISSN 03401200, 14320770, Link, Document Cited by: §III.
 [6] (2012) A Largescale Model of the Functioning Brain. Science 338 (6111), pp. 1202–1205. Cited by: §III.
 [7] (2013) How to Build a Brain. Oxford University Press. Cited by: §I.
 [8] (2013) Analogical Mapping and Inference with Binary Spatter Codes and Sparse Distributed Memory. In International Joint Conference on Neural Networks (IJCNN), pp. 1–8. Cited by: §III.
 [9] (2015) Vector Space Architecture for Emergent Interoperability of Systems by Learning from Demonstration. Biologically Inspired Cognitive Architectures 11 (), pp. 53–64. Cited by: §IVC.
 [10] (2018) A Theory of Sequence Indexing and Working Memory in Recurrent Neural Networks. Neural Computation 30 (), pp. 1449–1513. Cited by: §V.
 [11] (2019) Robust Computation with Rhythmic Spike Patterns. Proceedings of the National Academy of Sciences 116 (36), pp. 18050–18059. Cited by: §III.
 [12] (2016) Positional Binding with Distributed Representations. In International Conference on Image, Vision and Computing (ICIVC), pp. 108–113. Cited by: §IVB.
 [13] (2013) Representing Objects, Relations, and Sequences. Neural Computation 25 (8), pp. 2038–2078. Cited by: §I, §III, §V.
 [14] (1998) Multiplicative Binding, Representation Operators & Analogy. In D. Gentner, K. J. Holyoak, B. N. Kokinov (Eds.), Advances in Analogy Research: Integration of Theory and Data from the Cognitive, Computational, and Neural Sciences, pp. 1–4. Cited by: §I, §III.
 [15] (2019) VectorDerived Transformation Binding: An Improved Binding Operation for Deep SymbolLike Processing in Neural Networks. Neural Computation 31 (5), pp. 849–869. Cited by: §IVB.
 [16] (1986) Distributed Representations. In Parallel Distributed Processing. Explorations in the Microstructure of Cognition. Volume 1. Foundations, pp. 77–109. Cited by: §III.
 [17] (2012) Collective Communication for Dense Sensing Environments. Journal of Ambient Intelligence and Smart Environments 4 (2), pp. 123–134. Cited by: §III.
 [18] (2016) Language Geometry Using Random Indexing. In Quantum Interaction (QI), pp. 265–274. Cited by: §III, §IVA.
 [19] (2000) Random Indexing of Text Samples for Latent Semantic Analysis. In Annual Meeting of the Cognitive Science Society (CogSci), pp. 1036. Cited by: §III, §IVA.
 [20] (1988) Sparse Distributed Memory. The MIT Press. Cited by: §II, §V.
 [21] (1997) Fully Distributed Representation. In Real World Computing Symposium (RWC), pp. 358–365. Cited by: §I, §III, §IVC, §V.
 [22] (2009) Hyperdimensional Computing: An Introduction to Computing in Distributed Representation with HighDimensional Random Vectors. Cognitive Computation 1 (2), pp. 139–159. Cited by: §II, §IVB.
 [23] (2019) InMemory Hyperdimensional Computing. arXiv:1906.01548 (), pp. 1–14. Cited by: §III.
 [24] (2013) Encoding Structure in Holographic Reduced Representations. Canadian Journal of Experimental Psychology 67 (2), pp. 79–93. Cited by: §IVB.
 [25] (2019) Resonator Circuits for Factoring Highdimensional Vectors. arXiv:1906.11684, pp. 1–50. Cited by: §IVB.
 [26] (2018) HDM: HyperDimensional Modulation for Robust LowPower Communications. In IEEE International Conference on Communications (ICC), pp. 1–6. Cited by: §III.
 [27] (2015) Imitation of Honey Bees’ Concept Learning Processes using Vector Symbolic Architectures. Biologically Inspired Cognitive Architectures 14 (), pp. 57–72. Cited by: §IVC.
 [28] (2018) Hyperdimensional Computing in Industrial Systems: The UseCase of Distributed Fault Isolation in a Power Plant. IEEE Access 6 (), pp. 30766–30777. Cited by: §III.
 [29] (2017) Holographic Graph Neuron: A Bioinspired Architecture for Pattern Processing. IEEE Transactions on Neural Networks and Learning Systems 28 (6), pp. 1250–1262. Cited by: §V.
 [30] (2014) On Bidirectional Transitions between Localist and Distributed Representations: The Case of Common Substrings Search Using Vector Symbolic Architecture. Procedia Computer Science 41 (), pp. 104–113. Cited by: §IVB.
 [31] (2019) Autoscaling Bloom Filter: Controlling Tradeoff Between True and False Positives. Neural Computing and Applications, pp. 1–10. Cited by: §III, §IVB.
 [32] (2018) Classification and Recall with Binary Hyperdimensional Computing: Tradeoffs in Choice of Density and Mapping Characteristic. IEEE Transactions on Neural Networks and Learning Systems 29 (12), pp. 5880–5898. Cited by: §IVA.
 [33] (2019) A Neural Representation of Continuous Space using Fractional Binding. In The 41st Annual Meeting of the Cognitive Science Society (CogSci), pp. 2038–2043. Cited by: §IVB.
 [34] (2015) HighDimensional Computing with Sparse Vectors. In IEEE Biomedical Circuits and Systems Conference (BioCAS), pp. 1–4. Cited by: §III.
 [35] (2013) Learning Behavior Hierarchies via HighDimensional Sensor Projection. In The TwentySeventh AAAI Conference on Artificial Intelligence (AAAI), pp. 1–4. Cited by: §III, §IVC.
 [36] (2014) Bracketing the Beetle: How Wittgenstein’s Understanding of Language Can Guide Our Practice in AGI and Cognitive Science. In Artificial General Intelligence (AGI), Lecture Notes in Computer Science, Vol. 8598, pp. 73–84. Cited by: §III.
 [37] (2017) DeviceArchitecture CoDesign for Hyperdimensional Computing with 3D Vertical Resistive Switching Random Access Memory (3D VRRAM). In International Symposium on VLSI Technology, Systems and Application (VLSITSA), pp. 1–2. Cited by: §III.
 [38] (2016) Hyperdimensional Computing with 3D VRRAM InMemory Kernels: DeviceArchitecture CoDesign for EnergyEfficient, ErrorResilient Language Recognition. In IEEE International Electron Devices Meeting (IEDM), pp. 1–4. Cited by: §III.
 [39] (2019) Representing Spatial Relations with Fractional Binding. In The 41st Annual Meeting of the Cognitive Science Society (CogSci), pp. 2214–2220. Cited by: §IVB.
 [40] (1982) A Composite Holographic Associative Recall Model. Psychological Review 89 (6), pp. 627–661 (en). External Links: ISSN 0033295X, Link, Document Cited by: §III.
 [41] (2019) Learning Sensorimotor Control with Neuromorphic Sensors: Toward Hyperdimensional Active Perception. Science Robotics 4 (30), pp. 1–10. Cited by: Commentaries on “Learning Sensorimotor Control with Neuromorphic Sensors: Toward Hyperdimensional Active Perception” [Science Robotics Vol. 4 Issue 30 (2019) 1–10], §III, §IVA, §IVA, §IVB, §V.
 [42] (1982) A Theory for the Storage and Retrieval of Item and Associative Information. Psychological Review 89 (6), pp. 609–626 (en). External Links: Document Cited by: §III.
 [43] (2016) Learning Vector Symbolic Architectures for Reactive Robot Behaviours. In International Conference on Intelligent Robots and Systems (IROS), pp. 1–3. Cited by: 3rd item.
 [44] (2019) An Introduction to Hyperdimensional Computing for Robotics. KI  Künstliche Intelligenz 33 (4), pp. 319–330. Cited by: §III, §IVB.
 [45] (2017) Associative Synthesis of Finite State Automata Model of a Controlled Object with Hyperdimensional Computing. In Annual Conference of the IEEE Industrial Electronics Society (IECON), pp. 3276–3281. Cited by: §III.
 [46] (1994) Distributed Representations and Nested Compositional Structure. University of Toronto, PhD Thesis. Cited by: §IVC, §V.
 [47] (1995) Holographic Reduced Representations. IEEE Transactions on Neural Networks 6 (3), pp. 623–641. Cited by: §I, §III, §IVB.
 [48] (2003) Holographic Reduced Representations: Distributed Representation for Cognitive Structures. Stanford: Center for the Study of Language and Information (CSLI). Cited by: §III, §V.
 [49] (2016) Encoding Data for HTM systems. arXiv:1602.05925 (), pp. 1–11. Cited by: §IVA.
 [50] (2001) Binding and Normalization of Binary Sparse Distributed Representations by ContextDependent Thinning. Neural Computation 13 (2), pp. 411–452. Cited by: §I.
 [51] (2005) Sparse Binary Distributed Encoding of Scalars. Journal of Automation and Information Sciences 37 (6), pp. 12–23. Cited by: Fig. 1, §IVA, §IVA.
 [52] (2012) Similaritybased Retrieval with StructureSensitive Sparse Binary Distributed Representations. Computational Intelligence 28 (1), pp. 106–129. Cited by: §III.
 [53] (2001) Representation and Processing of Structures with Binary Sparse Distributed Codes. IEEE Transactions on Knowledge and Data Engineering 3 (2), pp. 261–276. Cited by: §III.
 [54] (2016) Hyperdimensional Biosignal Processing: A Case Study for EMGbased Hand Gesture Recognition. In IEEE International Conference on Rebooting Computing (ICRC), pp. 1–8. Cited by: §IVA.
 [55] (2017) Highdimensional Computing as a Nanoscalable Paradigm. IEEE Transactions on Circuits and Systems I: Regular Papers 64 (9), pp. 2508–2521. Cited by: §III, §V.
 [56] (2019) Efficient Biosignal Processing Using Hyperdimensional Computing: Network Templates for Combined Learning and Classification of ExG Signals. Proceedings of the IEEE 107 (1), pp. 123–143. Cited by: §III, §IVA.
 [57] (2014) Modeling Dependencies in Multiple Parallel Data Streams with Hyperdimensional Computing. IEEE Signal Processing Letter 21 (7), pp. 899–903. Cited by: §III.
 [58] (2016) Sequence Prediction with Sparse Distributed Hyperdimensional Coding Applied to the Analysis of Mobile Phone Use Patterns. IEEE Transactions on Neural Networks and Learning Systems 27 (9), pp. 1878–1889. Cited by: §III.
 [59] (2015) Generating Hyperdimensional Distributed Representations from Continuous Valued Multivariate Sensory Input. In Annual Meeting of the Cognitive Science Society (CogSci), pp. 1943–1948. Cited by: §IVA.
 [60] (2011) A Neural Model of Rule Generation in Inductive Reasoning. Topics in Cognitive Science 3 (1), pp. 140–153. Cited by: §III.
 [61] (2015) Encoding Sequential Information in Semantic Space Models. Comparing Holographic Reduced Representation and Random Permutation. Computational Intelligence and Neuroscience (), pp. 1–18. Cited by: §III, §IVB.
 [62] (2020) A Comparison of Vector Symbolic Architectures. arXiv:2001.11797, pp. 1–9. Cited by: §III.
 [63] (2019) Hardware Optimizations of Dense Binary Hyperdimensional Computing: Rematerialization of Hypervectors, Binarized Bundling, and Combinational Associative Memory. ACM Journal on Emerging Technologies in Computing Systems 15 (4), pp. 1–25. Cited by: §III.
 [64] (1987) Some Algebraic Relations between Involutions, Convolutions, and Correlations, with Applications to Holographic Memories. Biological Cybernetics 56 (56), pp. 367–374 (en). External Links: ISSN 03401200, 14320770, Link, Document Cited by: §III.
 [65] (2019) Constructing Distributed Timecritical Applications using Cognitive Enabled Services. Future Generation Computer Systems 100 (), pp. 70–85. Cited by: §III.
 [66] (1990) Tensor Product Variable Binding and the Representation of Symbolic Structures in Connectionist Systems. Artificial Intelligence 46 (), pp. 159–216. Cited by: §III.
 [67] (2018) Representing Sets as Summed Semantic Vectors. Biologically Inspired Cognitive Architectures 25 (), pp. 113–118. Cited by: §V.
 [68] (2018) A Computational Theory for Lifelong Learning of Semantics. In International Conference on Artificial General Intelligence (AGI), pp. 217–226. Cited by: §IVA, §IVA.
 [69] (2012) Theory and Practice of Bloom Filters for Distributed Systems. IEEE Communications Surveys and Tutorials 14 (1), pp. 131–155. Cited by: §III.
 [70] (2016) A Neural Architecture for Representing and Reasoning about Spatial Relationships. In International Conference on Learning Representations (ICLR), pp. 1–4. Cited by: §IVB.
 [71] (2015) Reasoning with Vectors: A Continuous Model for Fast Robust Inference. Logic Journal of the IGPL 23 (2), pp. 141–173. Cited by: §III, §IVA.
 [72] (1969) NonHolographic Associative Memory. Nature 222 (5197), pp. 960–962 (en). External Links: ISSN 00280836, 14764687, Link, Document Cited by: §III.
 [73] (2018) The Hyperdimensional Stack Machine. In Cognitive Computing, pp. 1–2. Cited by: §III.