1 Related Work
Quantum computing has been applied to robotics as a tool to speed up classical algorithms, or within frameworks that remain essentially classical [petschnigg_quantum_2019]. Our approach is quite different: starting from the very properties of quantum systems, we study how to exploit them in a novel, simpler framework. This approach has proven effective in modeling quantum perception and cognition, and we argue that it could be extremely useful in Robotics, also providing a way to translate the theoretical models of quantum cognition into practical robotics applications. In contrast to the aforementioned research in Quantum Robotics, this approach can be useful even when purely simulated, because its merits stem from the properties of quantum systems rather than from merely computational advantages.
Since the early intuitions by Amann [amann_gestalt_1993], quantum cognition research has studied the links between perception and quantum dynamics [conte_testing_2008, conte_mental_2009, manousakis_quantum_2009, paraan_more_2014]. A relevant example is the work by Manousakis [manousakis_quantum_2009], which proposed a QL model to describe the probability distributions of perceptual dominance in subjects experiencing binocular rivalry.
A preliminary model inspired by the work of Manousakis [manousakis_quantum_2009] has been proposed to assess the feasibility of a QL perception model for a robot with limited sensing capabilities [lanza_preliminary_2020]. The reason behind this choice is the great descriptive potential that a quantum formalism inherently provides [khrennikov_ubiquitous_2010, busemeyer_quantum_2012, asano_quantum_2015, haven_palgrave_2017, conte_algebraic_2018].
Indeed, following Caves et al. [caves_quantum_2002], quantum probability theory can be understood within the Bayesian approach, with probabilities quantifying the degree of belief about a certain state. In this case, maximal information about a question does not imply complete knowledge, i.e., it does not allow us to predict which state will be measured (which answer will be given), but provides only each state's probability of being measured (the degree of belief about the possible answers).
This interpretation of the measurement as a query given a certain belief (i.e., the quantum system state) can be extremely useful for decision making [busemeyer_what_2015].
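As a concrete illustration of this Bayesian reading of measurement, the following sketch (plain NumPy, with a hypothetical belief state and angle chosen purely for illustration) shows that even a fully known qubit state yields only a probability for each possible answer:

```python
import numpy as np

# Hypothetical "belief" state between |0> and |1> (theta chosen arbitrarily).
theta = np.pi / 3
state = np.array([np.cos(theta / 2), np.sin(theta / 2)])  # cos(t/2)|0> + sin(t/2)|1>

# Born rule: each answer to the "query" (measurement) gets a degree of belief.
probs = state ** 2  # amplitudes are real here, so |c|^2 == c^2
print(probs)        # probabilities of the two possible answers
assert np.isclose(probs.sum(), 1.0)
```

Maximal information (the exact state) still yields only `probs`, never a certain answer, matching the interpretation above.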
We argue that this interpretation could be adapted, with significant results, to robot perception and cognition models as well. A QL model provides a way to deal with uncertain perceptual knowledge and decision making without an explicit representation. We posit that this is a more elegant, more compact approach because the state is not a mere state vector, as in current approaches: representing it as a quantum state, we may be capable of leveraging the properties related to measurement and uncertainty in quantum mechanics. Moreover, this approach discloses new perspectives for further investigation. Starting from a QL representation, a wide range of quantum cognition models discussed in the existing literature can be applied to Robotics [busemeyer_quantum_2012, asano_quantum_2015].
The preliminary QL perception model proposed in [lanza_preliminary_2020] dealt with one sensory input channel, performing a time integration of the input to discriminate between two states. The goal of this study is to generalize this first single-qubit model to a multi-qubit approach. Moreover, exploiting state superposition and a change of basis in the Hilbert space, we can significantly extend the range of considered states. Indeed, we can virtually deal with any possible state in the Hilbert space (it is noteworthy that only one query at a time is possible, due to the quantum state collapse after any measurement [nielsen_quantum_2010]). To keep the analysis simpler, we do not consider time integration in this paper, although the model can easily be extended to time windows, as illustrated for the single-qubit model in [lanza_preliminary_2020].
2 Technical Approach
We consider a set of sensors, each one returning a lower- and upper-bounded discrete scalar (Fig. 1). Readings are domain-wise normalized, so the model receives a real value between 0 and 1 for each i-th sensor. At each reading update, the model receives an input vector. For example, considering a camera-like sensor able to provide only the three average RGB values of the image, we can decompose it into three sensors, each of them with an interface normalizing the readings into the [0, 1] interval. In this case, the input vector is represented in three dimensions, as shown in Fig. 2. Every qubit encodes the sensory information of the corresponding i-th sensor. Following [lanza_preliminary_2020], we encode sensory data with a unitary operator that applies a rotation to the qubit state. The main differences with respect to the previous model are the multi-qubit generalization, the lack of temporal integration (a single reading per update in the model proposed here), and the extension to continuous inputs (the previous study assumed each input being either 0 or 1). Therefore, the information is encoded in the polar angle of the Bloch sphere representation of the qubit, as shown in Fig. 2.
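The encoding step can be sketched in plain NumPy as follows. The mapping from a normalized reading x in [0, 1] to the rotation angle is an assumption here (theta = pi * x, so that x = 0 gives |0⟩ and x = 1 gives |1⟩); the paper's exact parametrization may differ:

```python
import numpy as np

def ry(theta):
    """Single-qubit rotation about the y axis of the Bloch sphere."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def encode(x):
    """Encode a normalized reading x in [0, 1] as a qubit state.
    Assumed mapping theta = pi * x: x=0 -> |0>, x=1 -> |1>."""
    ket0 = np.array([1.0, 0.0])
    return ry(np.pi * x) @ ket0

print(encode(0.0))  # the |0> state
print(encode(0.5))  # an equal superposition of |0> and |1>
```

With this mapping, intermediate readings land on intermediate superpositions along a great circle of the Bloch sphere.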
Many indirect methods are available to exploit the information encoded in qubits [hangos_state_2011]. Here, we consider only state measurements, exploiting the interpretation by Caves et al. [caves_quantum_2002] stated in Section 1. Measuring the quantum system leads to the collapse of its state into one of its basis states, namely the set of states composed by all the ordered combinations of the basis states of each single qubit. The probability for the collapse to produce a certain state as a measurement outcome is given by the current state superposition. For example, considering the input vector shown in Fig. 2, the overall system state produced by the application of the rotation operators is a superposition of the eight 3-qubit basis states, |ψ⟩ = Σ_{k=0}^{7} c_k |k⟩.
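A minimal sketch of this construction, assuming the per-qubit encoding theta = pi * x_i introduced above (and one particular qubit-ordering convention for the tensor product, which is itself a choice; see the Qiskit remark below):

```python
import numpy as np

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def system_state(x):
    """Overall 3-qubit state for normalized inputs x = (x1, x2, x3).
    Assumed encoding angle theta_i = pi * x_i for each qubit."""
    ket0 = np.array([1.0, 0.0])
    qubits = [ry(np.pi * xi) @ ket0 for xi in x]
    state = np.array([1.0])
    for q in qubits:
        state = np.kron(state, q)  # tensor product of the single-qubit states
    return state  # 8 real coefficients, one per basis state |000>..|111>

psi = system_state([0.2, 0.5, 0.8])
print(psi.shape)         # (8,)
print(np.sum(psi ** 2))  # ~1.0: the superposition is normalized
```

The eight entries of `psi` are exactly the coefficients c_k whose squares give the measurement probabilities discussed next.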
As illustrated in Fig. 3, the probability of measuring a certain state is the square of the vector coefficient corresponding to that state. For example, for the k-th state |k⟩ we have a coefficient c_k, and the probability of measuring it is |c_k|² [nielsen_quantum_2010]. (We use the IBMQ Qiskit [ibm_quantum_experience_qiskit_2020] notation rather than the one usually used in quantum mechanics: in Dirac notation the first qubit is conventionally written leftmost, whereas Qiskit orders qubits with the first qubit rightmost, following the usual bit-string notation.) These coefficients are related to the input vector through the applied rotation operators [lanza_preliminary_2020]. Since every superposition state is itself a state in the Hilbert space and has a physical meaning of its own, we can operate a basis change. This allows us to define a "query operator" that addresses a specific target perceived state, applying to the system the inverse of the rotation that encodes the target. This enables us to directly associate the all-zeros state with the target state. However, by changing the basis we lose the correspondence of the other states with the related sensors. Nevertheless, if we measure the state after applying the query operator, we know that obtaining a state containing "two zeros" means measuring a state closer to the target than one with just "one zero", while the all-ones state is the opposite of the target.
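The query operator can be sketched as a per-qubit counter-rotation. Again, the angle mapping theta = pi * value for both input and target is an assumption of this sketch, not necessarily the paper's exact parametrization:

```python
import numpy as np

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def query_probs(x, target):
    """Outcome probabilities after the query: each qubit is encoded with its
    input angle, then counter-rotated by its target angle, so the target
    maps onto the all-zeros state. Assumed angles theta = pi * value."""
    ket0 = np.array([1.0, 0.0])
    state = np.array([1.0])
    for xi, ti in zip(x, target):
        q = ry(-np.pi * ti) @ ry(np.pi * xi) @ ket0  # encode, then inverse-rotate
        state = np.kron(state, q)
    return state ** 2  # Born rule (amplitudes are real here)

# An input equal to the target collapses (with certainty) to the all-zeros state.
p = query_probs([0.3, 0.6, 0.9], [0.3, 0.6, 0.9])
print(np.argmax(p))  # 0, i.e. the |000> outcome
```

Outcomes with more zeros thus signal inputs closer to the queried target, as described above.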
3 Experiments and Results
We defined a case study considering an ideal camera-like sensor providing the average RGB values of each recorded image. The image has been conceptually decomposed into three scalar sensors (Fig. 1). The RGB scalar values can be thought of as scalar readings coming from different sensors, even ones based on different physical transduction mechanisms. However, for representation's sake, we opted for the RGB average values because they allow for a compact, color-related vector representation.
We implemented a 3-qubit model (2³ = 8 basis states) relying on the IBMQ Qiskit framework [ibm_quantum_experience_qiskit_2020]. To collect data about the probable outcomes, for each tested input we simulated repeated measurements. Tab. 1 reports some detailed operative examples, either in the canonical basis or after a specific query, i.e., applying the query operator. In the first case, the basis states maintain a precise meaning, hence the corresponding colors are reported. In the second case, the Euclidean distance between the input and the target vector is added (implemented using NumPy's norm function on the difference between the input and the target RGB vectors, not their normalized counterparts). To exhaustively explore the model's behavior over all the possible inputs, we sampled the RGB space with a sampling step of 5 and tested the model on every resulting input (the simulation took several days, but the average computational time of a single measurement simulation on a Core i5 10210U is on the order of seconds; tests are available in [lanza_quantum-robot_2020] via notebooks). We have not applied any query operator in these tests, since the behavior would not change: applying a query operator changes only the basis states, not the behavior of the system with respect to them.
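Simulating repeated measurements amounts to sampling basis-state indices from the Born-rule distribution. A minimal NumPy sketch (the shot count and the uniform distribution below are placeholders, not the values used in the experiments):

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

def simulate_measurements(probs, shots):
    """Simulate `shots` measurements of a system whose outcome
    probabilities are `probs`; return per-basis-state counts."""
    outcomes = rng.choice(len(probs), size=shots, p=probs)
    return np.bincount(outcomes, minlength=len(probs))

# Placeholder: a uniform 8-state distribution measured 1000 times.
counts = simulate_measurements(np.full(8, 1 / 8), 1000)
print(counts.sum())  # 1000
```

Dividing `counts` by the number of shots gives the empirical outcome frequencies reported in tables like Tab. 1.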
4 Experimental Insights
The model behaves as expected. The confidence curve shown in Fig. 5 has a sinusoidal shape, as observed in [lanza_preliminary_2020], which gives a nonlinear, yet definite, correspondence between stored information and measurements. As illustrated in Fig. 5, the more "zeros" the measured state has, the more the input is similar to the target. Even if Fig. 5 refers to the canonical basis, this behavior can be easily generalized, using any state as a target by applying the corresponding query operator. We have to keep in mind that the all-ones outcome is obtained only for "extremely" different input-target combinations, e.g., an input near one corner of the RGB cube and a target near the opposite one. Targeting states in the whole RGB cube is more likely to give readings which are, at most, "one zero" measurements, as seen in the results reported in Tab. 1.
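The sinusoidal shape of the confidence curve follows directly from the assumed encoding: under theta = pi * x, the probability of reading |0⟩ on a single qubit is cos²(pi x / 2), which can be checked numerically:

```python
import numpy as np

# Single-qubit confidence curve under the assumed encoding theta = pi * x:
# P(|0>) = cos(pi * x / 2) ** 2, a sinusoidal fall-off from 1 to 0.
x = np.linspace(0.0, 1.0, 5)
p0 = np.cos(np.pi * x / 2) ** 2
print(np.round(p0, 3))  # monotone, sinusoidal decrease from 1 to 0
```

For multi-qubit outcomes the per-qubit probabilities multiply, so states with more zeros are most likely when every component of the input is close to the corresponding target component.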
It is noteworthy that, in our case study, colors are just a graphical tool, not an actual concept. The answers are not to be considered precise statements about the color perceived by the system, but rather a decision/classification process based on incomplete data. The probabilities indicate the degree of confidence the model has in answering a certain query in a certain way, based upon the previously collected knowledge. In this study, the knowledge relates only to a single instant, but for extended time windows (as in [lanza_preliminary_2020]) it also takes into account the robot's previous sensory history.