Automated procedures are indispensable in modern science. Computers rapidly perform large calculations and help us visualize results, while specialized robots carry out preparatory laboratory work with staggering precision. But what is the true limit of the utility of machines for science? To what extent can a machine help us understand experimental results? Could it – the machine – discover new useful experimental tools? The impressive recent results in the field of machine learning, from reliably analyzing photographs NIPS2012_4824 to beating the world champion in the game of Go Silver-Hassabis2016, support optimism in this regard. Researchers from various fields now employ machine learning algorithms Jordan255, and the success of machine learning applied to physics CarrasquillaMelko2017 ; VanNieuwenburgLiuHuber2017 ; SchmidtLipson2009 ; carleo2016solving in particular is already noteworthy. Machine learning has claimed its place as a new data analysis tool in the physicist's toolbox zdeborova2017machine. However, the true limit of machines lies beyond data analysis, in the domain of the broader theme of artificial intelligence (AI), which is much less explored in this context. In an AI picture, we consider devices (generally called intelligent agents) that interact with an environment (the laboratory) and learn from previous experience RusselNorvig2003. To broach the broad questions raised above, in this work we address the specific yet central question of whether an intelligent machine can propose novel and elucidating quantum experiments. We answer in the affirmative in the context of photonic quantum experiments, although our techniques are more generally applicable. We design a learning agent which interacts with (simulations of) optical tables and learns to generate novel and interesting experiments. More concretely, we phrase the task of developing new optical experiments in a reinforcement learning (RL) framework barto1998reinforcement, a vital component of modern AI 2015_QL_Nature ; Silver-Hassabis2016.
The usefulness of automated design of quantum experiments has been shown in Ref. PhysRevLett.Mario. There, the algorithm MELVIN starts from a toolbox of experimentally available optical elements and randomly creates simulations of setups; a setup is reported if it results in a high-dimensional multipartite entangled state. The algorithm uses handcrafted rules to simplify setups and is capable of learning by extending its toolbox with previously successful experimental setups. Several of these experimental proposals have been implemented successfully in the lab malik2016multi ; SchledererKrennFicklerMalikZeilinger2016cyclic ; babazadeh2017high ; erhard2017experimental, and have led to the discovery of new quantum techniques KrennHochrainerLahiriZeilinger2016 ; krenn2017quantum.
Inspired by the success of the MELVIN algorithm in finding specific optical setups, here we investigate the broader potential of learning machines and artificial intelligence in designing quantum experiments 2012_Briegel. Specifically, we are interested in their potential to contribute to novel research, beyond rapidly identifying solutions to fully specified problems. To investigate this question, we employ a more general model of a learning agent, formulated within the projective simulation (PS) framework for artificial intelligence 2012_Briegel, which we apply to the concrete test-bed of Ref. PhysRevLett.Mario. In the process of generating specified optical experiments, the learning agent builds up a memory network of correlations between different optical components – a feature it later exploits when asked to generate targeted experiments efficiently. While learning, it also develops notions (technically speaking, composite clips 2012_Briegel) for components that “work well” in combination. In effect, the learning agent autonomously discovers sub-setups (or gadgets) that are useful outside of the given task of searching for particular high-dimensional multiphoton entanglement. (The discovery of such devices is in fact a byproduct stemming from the more involved structure of the learning model.)
We concentrate on the investigation of multipartite entanglement in high dimensions PhysRevA.89.012105 ; HuberDeVicenteJulio2013 ; HuberPerarnau-LlobetDeVicenteJulio2013 and give two examples where learning agents can help. Understanding multipartite entanglement remains one of the outstanding challenges of quantum information science, not only because of the fundamental conceptual role of entanglement in quantum mechanics, but also because of the many applications of entangled states in quantum communication and computation RevModPhys.81.865 ; RevModPhys.80.517 ; RevModPhys.84.777 ; PhysRevA.69.062311 ; PhysRevLett.87.117901 ; scarani2009security. In the first example, the agent is tasked with finding the simplest setup that produces a quantum state with a desired set of properties. Such efficient setups are important because they can be robustly implemented in the lab. The second task is to generate as many different experiments that create states with the desired properties as possible. Having many different setups available is important for understanding the structure of the state space reachable in experiments and for exploring the different possibilities that are experimentally accessible. In both tasks the desired property is chosen to be a certain type of entanglement in the generated states; more precisely, we target entangled states that are both high-dimensional and involve more than two particles. The orbital angular momentum (OAM) of photons allen1992orbital ; Mair2001 ; molina2007twisted ; krenn2017orbital can be used for investigations into high-dimensional vaziri2002experimental ; dada2011experimental ; agnew2011tomography ; krenn2014generation ; zhang2016engineering or multiphoton entanglement hiesmayr2016observation ; wang2015quantum and, more recently, both simultaneously malik2016multi.
OAM setups have been used as test-beds for other new techniques for quantum optics experiments PhysRevLett.Mario ; KrennHochrainerLahiriZeilinger2016 , and they are the system of choice in this work as well.
Our learning setup is summarized in the scheme of Fig. 1(a) (part of this figure is adapted from Ref. web:robot; the image of the optical table is part of the experiment (see main text) of Ref. erhard2017experimental, by Manuel Erhard, University of Vienna). The agent (our learning algorithm) interacts with a virtual environment – the simulation of an optical table. The agent has access to a set of optical elements (its toolbox) with which it generates experiments: it sequentially places chosen elements on the (simulated) table; after each placement, the quantum state generated by the corresponding setup, i.e., the configuration of optical elements, is analyzed. Depending on the state and the chosen task, the agent either receives a reward or not. The agent then observes the current table configuration and places another element; thereafter, the procedure is iterated. We are interested in finite experiments, so the maximal number of optical elements in one experiment is limited. This also reflects practical restrictions: due to the accumulation of imperfections, e.g., through misalignment and interferometric instability, we expect a decreasing overall fidelity of the resulting state or signal in longer experiments. The experiment ends successfully when the reward is given, or unsuccessfully when the maximal number of elements on the table is reached without obtaining the reward. Over time, the agent learns which element to place next, given a table.
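This interaction cycle can be sketched as follows. This is a minimal Python sketch; the element names, the cap on the number of elements, and the toy analyzer are illustrative stand-ins, not the actual simulation used in this work:

```python
import random

MAX_ELEMENTS = 6  # assumed cap on optical elements per experiment

def run_episode(agent_choose, analyzer, toolbox, max_elements=MAX_ELEMENTS):
    """One simulated experiment: place elements until reward or cap."""
    table = []  # current configuration of optical elements
    for _ in range(max_elements):
        element = agent_choose(tuple(table), toolbox)  # percept -> action
        table.append(element)
        if analyzer(table):          # analyzer checks the generated state
            return table, True       # rewarded: interesting state found
    return table, False              # cap reached without reward

# Toy stand-ins: a random agent, and a reward iff the table contains
# both a 'BS' and a 'Holo' (purely illustrative criterion).
toolbox = ['BS', 'Refl', 'Holo', 'DP']
analyzer = lambda t: 'BS' in t and 'Holo' in t
random.seed(0)
agent = lambda percept, tb: random.choice(tb)
table, rewarded = run_episode(agent, analyzer, toolbox)
```

In the actual task, the analyzer computes the quantum state produced by the table and checks it against the rewarding criterion described below.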
Initially, we fix the analyzer in Fig. 1(a) to reward the successful generation of a state out of a certain class of high-dimensional multipartite entangled states. Conceptually, this choice of the reward function explicitly specifies our criterion for an experimental setup (and the states that are created) to be interesting, but we will come back to a more general scenario momentarily. We characterize interesting states by their Schmidt-Rank Vector (SRV) HuberDeVicenteJulio2013 ; HuberPerarnau-LlobetDeVicenteJulio2013 – the numerical vector containing the rank of the reduced density matrix of each of the subsystems – together with the requirement that the states are maximally entangled in the OAM basis PhysRevLett.Mario ; malik2016multi.
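For concreteness, the SRV of a pure three-photon state can be computed numerically. The sketch below is our own illustration (not code from the paper): for a pure state, the rank of each single-subsystem reduced density matrix equals the rank of the corresponding matricization of the state tensor.

```python
import numpy as np

def schmidt_rank_vector(psi, dims):
    """SRV: rank of the reduced density matrix of each subsystem of a pure state."""
    psi = np.asarray(psi, dtype=complex).reshape(dims)
    srv = []
    for k in range(len(dims)):
        # matricize: subsystem k versus the rest; the rank of this matrix
        # equals the rank of the reduced density matrix of subsystem k
        mat = np.moveaxis(psi, k, 0).reshape(dims[k], -1)
        srv.append(int(np.linalg.matrix_rank(mat)))
    return srv

# Example: the GHZ-like qutrit state (|000> + |111> + |222>)/sqrt(3) has SRV (3,3,3)
d = 3
ghz = np.zeros((d, d, d), dtype=complex)
for i in range(d):
    ghz[i, i, i] = 1 / np.sqrt(3)
```

A state such as (|000> + |111> + |220>)/sqrt(3) instead yields SRV (3,3,2): the third photon effectively lives in a two-dimensional subspace.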
The initial state of the simulated experiment is generated by a double spontaneous parametric down-conversion (SPDC) process in two nonlinear crystals. As in Refs. PhysRevLett.Mario ; malik2016multi, we ignore higher-order OAM terms from down-conversion, since their amplitudes are significantly smaller than those of the low-order terms. With this approximation, the initial state can be written as a tensor product of two pairs of OAM-entangled photons malik2016multi, where the indices specify the four arms of the optical setup. The toolbox contains a basic set of elements PhysRevLett.Mario, including beam splitters (BS), mirrors (Refl), shift-parametrized holograms (Holo), and Dove prisms (DP). Taking into account that each distinct element can be placed in any one (or, in the case of the BS, any two) of the four arms, this fixes the total number of different choices of elements available to the agent. Since none of the optical elements in our toolbox creates entanglement in the OAM degree of freedom, we use a measurement in one of the arms, together with post-selection, to “trigger” a tripartite state in the remaining three arms.
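The initial state can be illustrated numerically. This is a sketch under the assumption that each down-converted pair is a uniform superposition of three opposite-OAM mode pairs, l in {-1, 0, 1}; the actual mode content depends on the SPDC process:

```python
import numpy as np

d = 3
modes = [-1, 0, 1]                       # assumed OAM window per photon
idx = {l: i for i, l in enumerate(modes)}

# One SPDC pair: photons with opposite OAM, (|-1,1> + |0,0> + |1,-1>)/sqrt(3)
pair = np.zeros((d, d), dtype=complex)
for l in modes:
    pair[idx[l], idx[-l]] = 1 / np.sqrt(3)

# Double SPDC: tensor product of two independent pairs -> four photons, four arms
psi = np.einsum('ab,cd->abcd', pair, pair)
```

Post-selecting on a measurement outcome for one of the four photons then leaves a (generally unnormalized) three-photon state in the remaining arms.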
Our learning agent is based on the projective simulation (PS) model for AI 2012_Briegel. PS is a physics-motivated framework which can be used to construct RL agents. PS was shown to perform well in standard RL problems 2013_Mautner_PSII ; Grid-world-PS ; bjerland2015projective ; melnikov2017projective and in advanced robotics applications simon2016, and it is also amenable to quantum enhancements paparo2014quantum ; dunjko2015quantum ; friis2015coherent. The main component of the PS agent is its memory network (shown in Fig. 1(b)), comprising units of episodic memory called clips. Here, clips include remembered percepts (in our case, observed optical tables) and actions (corresponding to the placing of an optical element). Each percept clip s is connected to every action clip a via a weighted directed edge (s, a), which represents the possibility of taking action a in situation s with probability p(a|s), see Fig. 1(b). The process of learning is manifested in the creation of new clips and in the adjustment of these probabilities. Intuitively, the probabilities of percept-action transitions that eventually lead to a reward are enhanced, leading to a higher likelihood of rewarding behaviour in the future (see Supporting Information for details).
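To make this concrete, the following is a minimal Python sketch of a two-layer PS agent. The initial edge weight of 1, linear normalization of transition probabilities, and a damping-plus-reward update are standard choices in basic PS 2012_Briegel ; 2013_Mautner_PSII; the specific parameters and percept encoding used in this work are described in the Supporting Information, and the names here are illustrative.

```python
import random
from collections import defaultdict

class PSAgent:
    """Minimal two-layer projective simulation agent (percept -> action clips)."""
    def __init__(self, actions, gamma=0.0, reward_scale=1.0):
        self.actions = list(actions)
        self.h = defaultdict(lambda: 1.0)   # edge weights (h-values) start at 1
        self.gamma = gamma                  # forgetting / damping parameter
        self.reward_scale = reward_scale
        self.last_edge = None

    def probabilities(self, percept):
        weights = [self.h[(percept, a)] for a in self.actions]
        total = sum(weights)
        return [w / total for w in weights]

    def act(self, percept):
        probs = self.probabilities(percept)
        action = random.choices(self.actions, weights=probs)[0]
        self.last_edge = (percept, action)
        return action

    def learn(self, reward):
        # damp all h-values toward 1, then strengthen the edge just used
        for edge in list(self.h):
            self.h[edge] -= self.gamma * (self.h[edge] - 1.0)
        if self.last_edge is not None:
            self.h[self.last_edge] += self.reward_scale * reward

random.seed(1)
agent = PSAgent(['BS', 'Refl', 'Holo', 'DP'])
action = agent.act('empty table')   # sample an action for this percept
agent.learn(1.0)                    # reward strengthens the used edge
```

After a single rewarded step, the edge just used carries h-value 2 while all others remain at 1, so the rewarded action becomes the most likely one for that percept.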
ii.1 Designing short experiments
As mentioned, a maximum number of elements placed in one experiment is introduced to limit the influence of experimental imperfections. For the same reason, it is valuable to identify the shortest experiments that produce high-dimensional tripartite entanglement characterized by a desired SRV. In order to relate our work to existing experiments malik2016multi, we task the agent with designing a setup that creates a state with SRV (3,3,2). To further emphasize the benefits of learning, we then investigate whether the agent can use its knowledge, attained through the learning setting described previously, to discover a more complex quantum state. Thus, after a fixed number of simulated experiments, the task of finding a (3,3,2) state is followed by the task of finding a state with a more complex SRV. As always, a reward is issued whenever a target state is found, and the table is reset. Fig. 2(a) shows the success probability throughout the learning process: the PS agent first learns to construct a (3,3,2) state and then very quickly (compared to the by-itself simpler task of finding a (3,3,2) state) learns to design a setup that creates the more complex state. Our results suggest that either the knowledge of constructing a (3,3,2) state is highly beneficial in the second phase, or the more complex state is simply easier to generate. To resolve this dichotomy, we simulated a learning process in which the PS agent is required to construct the more complex state within the same number of experiments, without having learned to build a (3,3,2) state first. The results of these simulations are shown in Fig. 2(a) as dashed lines (lower and upper edge of the frame). It is apparent that the agent without previous training on (3,3,2) states does not show any significant progress in constructing the more complex experiment. Furthermore, the PS agent constantly and autonomously improves by constructing shorter and shorter experiments (cf. Fig. S1(a) in Supporting Information).
By the end of the first phase, the PS agent almost always constructs experiments of the minimal length needed to produce a state from the target SRV class. Already during the learning phase, the PS agent finds a shortest-length setup in half of the cases. Experimental setups useful for the second phase are almost exclusively so-called parity sorters (see Fig. 4). Other setups, not equivalent to parity sorters, do not seem to be beneficial in finding the more complex state. As we will show later, the PS agent tends to use parity sorters more frequently while exploring the space of different SRV classes. This is particularly surprising since the parity sorter itself was originally designed for a different task.
ii.2 Designing new experiments
The connection between high-dimensional entangled states and the structure of the optical tables that generate them is not well understood PhysRevLett.Mario. Having a database of such experiments would allow us to deepen our understanding of the structure of the set of entangled states that can be accessed with optical tables. In particular, such a database could be further analyzed to identify useful sub-setups, or gadgets PhysRevLett.Mario – certain few-element combinations that appear frequently in larger setups and are useful for generating complex states. With our second task, we challenged the agent to generate such a database by finding high-dimensional entangled states. As a bonus, we found that the agent does this post-processing for us implicitly, at run-time. The outcome of the post-processing is encoded in the structure of the memory network of the PS agent: the sub-setups that were particularly useful in solving the task are clearly visible, or embodied, in the agent’s memory.
To find as many different high-dimensional three-photon entangled states as possible, we reward the agent for every new implementation of an interesting experiment. To avoid trivial extensions of such implementations, a reward is given only if the obtained SRV has not been reached before within the same experiment. Fig. 2(b) displays the total number of new, interesting experiments designed by the basic PS agent (solid blue) and by the PS agent with action composition 2012_Briegel (dashed blue). Action composition allows the agent to construct new composite actions from useful optical setups (i.e., placing multiple elements in a fixed configuration), thereby autonomously enhancing its toolbox (see Supporting Information for details). It is a central ingredient for an AI to exhibit even a primitive notion of creativity Briegel2012 and was also used in Ref. PhysRevLett.Mario to augment automated random search. For comparison, we provide the total number of interesting experiments obtained by automated random search with and without action composition (solid and dashed red curves). Action composition also allows for additional insight into the agent’s behavior and provides useful information about quantum optical setups in general, as we will see below. We found that the PS model discovers significantly more interesting experiments than both automated random search and automated random search with action composition, see Fig. 2(b).
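As a heavily simplified illustration of action composition (the actual PS mechanism of Refs. 2012_Briegel ; 2013_Mautner_PSII creates composite clips within the memory network; here a rewarded element sequence is simply promoted to a macro-action):

```python
def compose_actions(episode, rewarded, toolbox, min_len=2):
    """After a rewarded episode, promote the element sequence used into a new
    composite action: a macro that places several elements in one step."""
    if rewarded and len(episode) >= min_len:
        macro = tuple(episode)          # e.g. a two-element interferometer fragment
        if macro not in toolbox:
            toolbox.append(macro)       # macro becomes available as a single action
    return toolbox

toolbox = ['BS', 'Refl', 'Holo', 'DP']
compose_actions(['BS', 'Refl'], rewarded=True, toolbox=toolbox)
```

Once promoted, a useful multi-element gadget can be placed in a single step, which is how frequently rewarded sub-setups can come to dominate the agent's behavior.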
ii.3 Ingredients for successful learning
In general, successful learning relies on a structure hidden in the task environment (or dataset). The results presented thus far show that PS is highly successful in the task of designing new interesting experiments, and here we elucidate why this should be the case. The following analysis also sheds light on other settings where we can be confident that RL techniques can be applied as well.
First, the space of optical setups can be illustrated by a graph, as in Fig. 3(c), where building an optical experiment corresponds to a walk on the directed graph. Note that the optical setup creating a given state is not unique: two or more different setups can generate the same quantum state. Consequently, this graph does not have a tree structure but rather resembles a maze. Navigating a maze, in turn, constitutes one of the classic textbook RL problems barto1998reinforcement ; Grid-world-PS ; mirowski2016learning ; hierarchical2016maze. Second, our empirical analysis suggests that experiments generating high-dimensional multipartite entanglement tend to have structural similarities PhysRevLett.Mario (see Fig. 3(a)-(b), which partially display the exploration space). The figure shows regions where the density of interesting experiments (large colored nodes) is high and others where it is low – interesting experiments appear to be clustered (cf. Fig. S2). In turn, RL is particularly useful when one needs to handle situations that are similar to those previously encountered – once one maze (optical experiment) is learned, similar mazes (experiments) are tackled more easily, as we have seen above. In other words, whenever the experimental task has a maze-type underlying structure, which is often the case, PS can likely help – and, critically, without any a priori information about the structure itself Grid-world-PS ; 2016_Makmal_ML. In fact, PS gathers information about the underlying structure throughout the learning process. This information can then be extracted by an external user or potentially be utilized further by the agent itself.
ii.4 The potential of learning from experiments
Thus far we have established that a machine can indeed design new quantum experiments when the task is precisely specified (via the rewarding rule). Intuitively, this could be considered the limit of what a machine can do for us, as machines are specified by our programs. However, it falls short of what, for instance, a human researcher can achieve. How could we, even in principle, design a machine to do something (interesting) that we have not specified it to do? To develop an intuition for the type of behavior we could hope for, consider, for the moment, what we might expect a human, say a good PhD student, to do in situations similar to those studied thus far.
To begin with, a human cannot go through all conceivable optical setups to find those that are interesting. Arguably, she would try to identify prominent sub-setups and techniques that help in solving the problem. Furthermore, she would learn that such techniques are probably useful beyond the specified tasks and may provide new insights in other contexts. Could a machine, even in principle, have such insight? Arguably, traces of this can be found already in our comparatively simple learning agent. By analyzing the memory network of the agent (ranking clips according to the sum of the weights of their incident edges), specifically the composite actions it learned to generate, we can extract the sub-setups that were particularly useful in the endeavor of finding many different interesting experiments in Fig. 2(b).
For example, PS composes and then extensively uses a combination of elements corresponding to the optical interferometer displayed in Fig. 4(a), which is usually used to sort OAM modes of different parity Leach-Courtial2002 – in essence, the agent (re)discovered a parity sorter. This interferometer has already been identified as an essential part of many quantum experiments that create high-dimensional multipartite entanglement PhysRevLett.Mario, especially those involving more than two photons (cf. Fig. 2(a) and Ref. malik2016multi).
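The sorting behavior of such an interferometer can be checked with a toy single-photon calculation. The sketch below assumes a symmetric beam-splitter convention (an i phase on reflection) and a Dove prism that imprints a phase (-1)^l on OAM mode l; the actual conventions of the simulated tables may differ, but the parity-dependent routing is generic:

```python
import numpy as np

def parity_sorter_output(l):
    """Send OAM mode l into port 0 of a Mach-Zehnder interferometer with a
    Dove prism in one arm; return the two output-port probabilities."""
    amp = np.array([1.0 + 0j, 0.0 + 0j])            # photon enters port 0
    bs = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)  # symmetric BS convention
    amp = bs @ amp                                  # first beam splitter
    amp[1] *= (-1) ** l                             # Dove prism: phase (-1)^l
    amp = bs @ amp                                  # second beam splitter
    return np.abs(amp) ** 2                         # output-port probabilities
```

With these conventions, even modes exit deterministically through one port and odd modes through the other, which is exactly the parity-sorting functionality exploited by the agent.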
One of the PS agent’s assignments was to discover as many different interesting experiments as possible. In the process of learning, even if a parity sorter is often (implicitly) rewarded, over time it no longer is, since in this scenario only novel experiments are rewarded. This, again implicitly, drives the agent to “invent” new, different-looking configurations that are similarly useful.
Indeed, in many instances the most rewarded action is no longer the original parity sorter in the form of a Mach-Zehnder interferometer (cf. Fig. 4(a)) but a nonlocal version thereof (see Fig. 4(b)). As it turns out, the two setups are equivalent in the Klyshko wave front picture klyshko1988simple ; aspden2014experimental, in which the time direction of two of the photons in Fig. 4(b) is inverted and these photons are considered to be reflected at their origin – the nonlinear crystals. This transformation reduces the four-photon experiment in Fig. 4(b) to the two-photon experiment shown in Fig. 4(c).
Such a nonlocal interferometer has only recently been analyzed, and it motivated the work in Ref. KrennHochrainerLahiriZeilinger2016. Furthermore, by slightly changing the analyzer to reward only interesting experiments that produce states with an SRV other than the most common one, sub-setups other than the OAM parity sorter become apparent. For example, the PS agent discovered and exploited a technique to increase the dimensionality of the OAM entanglement by shifting the OAM mode number in one path and subsequently mixing it with an additional path, as displayed in Fig. 4(d). Moreover, one can observe that this technique is frequently combined with a local OAM parity sorter. This setup allows the creation of high-dimensional entangled states beyond the dimensionality of the initial state.
All of the observed setups are, in fact, modern quantum optical gadgets (designed by humans) that have either already found applications in state-of-the-art quantum experiments malik2016multi or could be used as individual tools in future experiments that create high-dimensional entanglement starting from lower-dimensional entanglement.
One of the fastest growing trends in recent times is the development of “smart” technologies. Such technologies are not only permeating our everyday lives in the form of “smart” phones, “smart” watches, and, in some places, even “smart” self-driving cars, but are also expected to induce the next industrial revolution Schwab2017. Hence, it should not come as a surprise when “smart laboratories” emerge. Already now, modern labs are automated to a large extent King2009, removing the need for human involvement in tedious (or hazardous) tasks.
In this work we broach the question of the potential of automated labs, trying to understand to what extent machines could not only help in research but perhaps even genuinely perform it. Our approach highlights two aspects of learning machines, both of which will be assets in the quantum experiments of the future. First, we have improved upon the original observation that search algorithms can aid in finding special optical setups PhysRevLett.Mario by using more sophisticated learning agents. This yields confidence that even more involved techniques from AI research (e.g., generalization melnikov2017projective, meta-learning 2016_Makmal_ML, etc., in the context of the PS framework) may yield ever-improving methods for the autonomous design of experiments. In a complementary direction, we have shown that the structure of learning models commonly applied in AI research (even the modest basic PS reinforcement learning machinery augmented with action-clip composition 2012_Briegel ; 2013_Mautner_PSII) may allow machines to tackle problems they were not directly instructed or trained to solve. This supports the expectation that AI methodologies will genuinely contribute to research and, very optimistically, to the discovery of new physics.
A.A.M., H.P.N., V.D., M.T., and H.J.B. were supported by the Austrian Science Fund (FWF) through Grants No. SFB FoQuS F4012 and DK-ALM: W1259-N27, the Templeton World Charity Foundation through Grant No. TWCF0078/AB46, and by the Ministerium für Wissenschaft, Forschung, und Kunst Baden-Württemberg (AZ: 33-7533.-30-10/41/1). M.K. and A.Z. were supported by the Austrian Academy of Sciences (ÖAW), by the European Research Council (SIQS Grant No. 600645 EU-FP7-ICT), and the Austrian Science Fund (FWF) with SFB FoQuS F40 and FWF project CoQuS No. W1210-N16.
- (1) Krizhevsky, A., Sutskever, I. & Hinton, G. E. Imagenet classification with deep convolutional neural networks. In Pereira, F., Burges, C. J. C., Bottou, L. & Weinberger, K. Q. (eds.) Adv. Neural Inf. Process. Syst., vol. 25, 1097–1105 (Curran Associates, 2012).
- (2) Silver, D. et al. Mastering the game of Go with deep neural networks and tree search. Nature 529, 484–489 (2016).
- (3) Jordan, M. I. & Mitchell, T. M. Machine learning: Trends, perspectives, and prospects. Science 349, 255–260 (2015).
- (4) Carrasquilla, J. & Melko, R. G. Machine learning phases of matter. Nat. Phys. 13, 431–434 (2017).
- (5) van Nieuwenburg, E. P. L., Liu, Y.-H. & Huber, S. D. Learning phase transitions by confusion. Nat. Phys. 13, 435–439 (2017).
- (6) Schmidt, M. & Lipson, H. Distilling free-form natural laws from experimental data. Science 324, 81–85 (2009).
- (7) Carleo, G. & Troyer, M. Solving the quantum many-body problem with artificial neural networks. Science 355, 602–606 (2017).
- (8) Zdeborová, L. Machine learning: New tool in the box. Nat. Phys. 13, 420–421 (2017).
- (9) Russell, S. & Norvig, P. Artificial Intelligence: A Modern Approach (Prentice Hall, New Jersey, 2010), 3rd edn.
- (10) Sutton, R. S. & Barto, A. G. Reinforcement Learning: An Introduction (MIT press, Cambridge, 1998).
- (11) Mnih, V. et al. Human-level control through deep reinforcement learning. Nature 518, 529–533 (2015).
- (12) Krenn, M., Malik, M., Fickler, R., Lapkiewicz, R. & Zeilinger, A. Automated search for new quantum experiments. Phys. Rev. Lett. 116, 090405 (2016).
- (13) Malik, M. et al. Multi-photon entanglement in high dimensions. Nat. Photon. 10, 248–252 (2016).
- (14) Schlederer, F., Krenn, M., Fickler, R., Malik, M. & Zeilinger, A. Cyclic transformation of orbital angular momentum modes. New J. Phys. 18, 043019 (2016).
- (15) Babazadeh, A. et al. High-dimensional single-photon quantum gates: Concepts and experiments. Phys. Rev. Lett. 119, 180510 (2017).
- (16) Erhard, M., Malik, M., Krenn, M. & Zeilinger, A. Experimental GHZ entanglement beyond qubits. arXiv:1708.03881 (2017).
- (17) Krenn, M., Hochrainer, A., Lahiri, M. & Zeilinger, A. Entanglement by path identity. Phys. Rev. Lett. 118, 080401 (2017).
- (18) Krenn, M., Gu, X. & Zeilinger, A. Quantum experiments and graphs: Multiparty states as coherent superpositions of perfect matchings. Phys. Rev. Lett. 119, 240403 (2017).
- (19) Briegel, H. J. & De las Cuevas, G. Projective simulation for artificial intelligence. Sci. Rep. 2, 400 (2012).
- (20) Lawrence, J. Rotational covariance and Greenberger-Horne-Zeilinger theorems for three or more particles of any dimension. Phys. Rev. A 89, 012105 (2014).
- (21) Huber, M. & de Vicente, J. I. Structure of multidimensional entanglement in multipartite systems. Phys. Rev. Lett. 110, 030501 (2013).
- (22) Huber, M., Perarnau-Llobet, M. & de Vicente, J. I. Entropy vector formalism and the structure of multidimensional entanglement in multipartite systems. Phys. Rev. A 88, 042328 (2013).
- (23) Horodecki, R., Horodecki, P., Horodecki, M. & Horodecki, K. Quantum entanglement. Rev. Mod. Phys. 81, 865–942 (2009).
- (24) Amico, L., Fazio, R., Osterloh, A. & Vedral, V. Entanglement in many-body systems. Rev. Mod. Phys. 80, 517–576 (2008).
- (25) Pan, J.-W. et al. Multiphoton entanglement and interferometry. Rev. Mod. Phys. 84, 777–838 (2012).
- (26) Hein, M., Eisert, J. & Briegel, H. J. Multiparty entanglement in graph states. Phys. Rev. A 69, 062311 (2004).
- (27) Scarani, V. & Gisin, N. Quantum communication between partners and Bell’s inequalities. Phys. Rev. Lett. 87, 117901 (2001).
- (28) Scarani, V. et al. The security of practical quantum key distribution. Rev. Mod. Phys. 81, 1301–1350 (2009).
- (29) Allen, L., Beijersbergen, M. W., Spreeuw, R. & Woerdman, J. Orbital angular momentum of light and the transformation of Laguerre–Gaussian laser modes. Phys. Rev. A 45, 8185 (1992).
- (30) Mair, A., Vaziri, A., Weihs, G. & Zeilinger, A. Entanglement of the orbital angular momentum states of photons. Nature 412, 313–316 (2001).
- (31) Molina-Terriza, G., Torres, J. P. & Torner, L. Twisted photons. Nat. Phys. 3, 305–310 (2007).
- (32) Krenn, M., Malik, M., Erhard, M. & Zeilinger, A. Orbital angular momentum of photons and the entanglement of Laguerre–Gaussian modes. Phil. Trans. R. Soc. A 375, 20150442 (2017).
- (33) Vaziri, A., Weihs, G. & Zeilinger, A. Experimental two-photon, three-dimensional entanglement for quantum communication. Phys. Rev. Lett. 89, 240401 (2002).
- (34) Dada, A. C., Leach, J., Buller, G. S., Padgett, M. J. & Andersson, E. Experimental high-dimensional two-photon entanglement and violations of generalized Bell inequalities. Nat. Phys. 7, 677–680 (2011).
- (35) Agnew, M., Leach, J., McLaren, M., Roux, F. S. & Boyd, R. W. Tomography of the quantum state of photons entangled in high dimensions. Phys. Rev. A 84, 062101 (2011).
- (36) Krenn, M. et al. Generation and confirmation of a (100 × 100)-dimensional entangled quantum system. Proc. Natl. Acad. Sci. 111, 6243–6247 (2014).
- (37) Zhang, Y. et al. Engineering two-photon high-dimensional states through quantum interference. Sci. Adv. 2, e1501165 (2016).
- (38) Hiesmayr, B., de Dood, M. & Löffler, W. Observation of four-photon orbital angular momentum entanglement. Phys. Rev. Lett. 116, 073601 (2016).
- (39) Wang, X.-L. et al. Quantum teleportation of multiple degrees of freedom of a single photon. Nature 518, 516–519 (2015).
- (40) Squarehipster. openclipart, Licensed under the CC0 1.0 Universal, openclipart.org/detail/266420/Request, date accessed: January 4, 2018.
- (41) Mautner, J., Makmal, A., Manzano, D., Tiersch, M. & Briegel, H. J. Projective simulation for classical learning agents: a comprehensive investigation. New Generat. Comput. 33, 69–114 (2015).
- (42) Melnikov, A. A., Makmal, A. & Briegel, H. J. Projective simulation applied to the grid-world and the mountain-car problem. arXiv:1405.5459 (2014).
- (43) Bjerland, Ø. F. Projective Simulation compared to reinforcement learning. Master’s thesis, Dept. Comput. Sci., Univ. Bergen, Bergen, Norway (2015).
- (44) Melnikov, A. A., Makmal, A., Dunjko, V. & Briegel, H. J. Projective simulation with generalization. Sci. Rep. 7, 14430 (2017).
- (45) Hangl, S., Ugur, E., Szedmak, S. & Piater, J. Robotic playing for hierarchical complex skill learning. In Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., 2799–2804 (2016).
- (46) Paparo, G. D., Dunjko, V., Makmal, A., Martin-Delgado, M. A. & Briegel, H. J. Quantum speed-up for active learning agents. Phys. Rev. X 4, 031002 (2014).
- (47) Dunjko, V., Friis, N. & Briegel, H. J. Quantum-enhanced deliberation of learning agents using trapped ions. New J. Phys. 17, 023006 (2015).
- (48) Friis, N., Melnikov, A. A., Kirchmair, G. & Briegel, H. J. Coherent controlization using superconducting qubits. Sci. Rep. 5, 18036 (2015).
- (49) Briegel, H. J. On creative machines and the physical origins of freedom. Sci. Rep. 2, 522 (2012).
- (50) Mirowski, P. et al. Learning to navigate in complex environments. arXiv:1611.03673 (2016).
- (51) Mannucci, T. & van Kampen, E.-J. A hierarchical maze navigation algorithm with reinforcement learning and mapping. In Proc. IEEE Symposium Series on Computational Intelligence, 1–8 (2016).
- (52) Makmal, A., Melnikov, A. A., Dunjko, V. & Briegel, H. J. Meta-learning within projective simulation. IEEE Access 4, 2110–2122 (2016).
- (53) Leach, J., Padgett, M. J., Barnett, S. M., Franke-Arnold, S. & Courtial, J. Measuring the orbital angular momentum of a single photon. Phys. Rev. Lett. 88, 257901 (2002).
- (54) Klyshko, D. N. A simple method of preparing pure states of an optical field, of implementing the Einstein–Podolsky–Rosen experiment, and of demonstrating the complementarity principle. Sov. Phys. Usp. 31, 74 (1988).
- (55) Aspden, R. S., Tasca, D. S., Forbes, A., Boyd, R. W. & Padgett, M. J. Experimental demonstration of Klyshko’s advanced-wave picture using a coincidence-count based, camera-enabled imaging system. J. Mod. Opt. 61, 547–551 (2014).
- (56) Schwab, K. The Fourth Industrial Revolution (Penguin, London, 2017).
- (57) King, R. D. et al. The automation of science. Science 324, 85–89 (2009).
- (58) Hangl, S. Evaluation and extensions of generalization in the projective simulation model. Bachelor’s thesis, Univ. Innsbruck, Innsbruck, Austria (2015).
1 Projective simulation
In the following, we describe the PS model and specify the structure of the learning agent used in this paper. For a detailed exposition of the PS model, we refer the reader to Refs. 2012_Briegel ; 2013_Mautner_PSII . The PS model is based on information processing in a specific memory system, called episodic and compositional memory, which is represented as a weighted network of clips. Clips are the units of episodic memory: remembered percepts and actions, or short sequences thereof 2012_Briegel . The PS agent perceives a state of the environment, deliberates using its network of clips and outputs an action, as shown schematically in Fig. 1. The weighted network of clips is a graph with clips as vertices and directed edges that represent the possible (stochastic) transitions between clips. In this paper, the clips represent the percepts and actions, and directed edges connect each percept to every action, as illustrated in Fig. 1(b). An edge connects a percept $s_i$, $i \in \{1, \dots, N^{(t)}\}$, to an action $a_j$, $j \in \{1, \dots, M^{(t)}\}$, where $N^{(t)}$ and $M^{(t)}$ are the total numbers of percepts and actions at time step $t$, respectively (each time step corresponds to one interaction cycle shown in Fig. 1(a)). Each edge has a time-dependent weight $h^{(t)}(s_i, a_j)$, which defines the probability of a transition from percept $s_i$ to action $a_j$. This transition probability is proportional to the weights, also called $h$-values, and is given by the following expression:

$p^{(t)}(a_j \mid s_i) = \frac{h^{(t)}(s_i, a_j)}{\sum_{k=1}^{M^{(t)}} h^{(t)}(s_i, a_k)}$.   (1)
Initially, all $h$-values are set to $1$, hence the probability of performing any action $a_j$ is equal to $1/M^{(t)}$, that is, uniform. In other words, the initial behaviour of the agent is fully random.
As random behaviour is rarely optimal, changes in the probabilities are necessary. These changes should result in an improvement of the (average) probability of success, i.e. in the chances of receiving a nonnegative reward $\lambda^{(t)}$. Learning in the PS model is realized by making use of feedback from the environment to update the weights $h^{(t)}(s_i, a_j)$. The matrix $h^{(t)}$ with entries $h^{(t)}(s_i, a_j)$, the so-called $h$-matrix, is updated after each interaction with the environment according to the following learning rule:

$h^{(t+1)} = h^{(t)} - \gamma \left( h^{(t)} - \mathbb{1} \right) + \lambda^{(t)} g^{(t)}$,   (2)
where $\mathbb{1}$ is an "all-ones" matrix of a suitable size and $\gamma \in [0, 1]$ is the damping parameter of the PS model, responsible for forgetting. The $g$-matrix, or so-called glow matrix, allows the model to internally redistribute rewards such that decisions that were made in the past are rewarded less than more recent decisions. The $g$-matrix is updated in parallel with the $h$-matrix. Initially, all $g$-values are set to zero, but if the edge $(s_i, a_j)$ was traversed during the last decision-making process, the corresponding glow value $g^{(t)}(s_i, a_j)$ is set to $1$. In order to determine the extent to which previous actions should be internally rewarded, the $g$-matrix is also updated after each interaction with the environment:

$g^{(t+1)} = (1 - \eta)\, g^{(t)}$,   (3)
where $\eta \in [0, 1]$ is the so-called glow parameter of the PS model. While finding the optimal values of the $\gamma$ and $\eta$ parameters is, in general, a difficult problem, these parameters can be learned by the PS model itself 2016_Makmal_ML , albeit at the expense of longer learning times. In this paper, the $\gamma$ and $\eta$ parameters were set to time-independent values without any extensive search for optimal values, which could, in principle, further improve our results.
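To make the interaction cycle and update rules concrete, here is a minimal Python sketch of a two-layer PS agent. All names are illustrative (not taken from any published code); dictionaries stand in for the sparse $h$- and $g$-matrices, with unseen edges implicitly at their initial values $h = 1$, $g = 0$:

```python
import random

class PSAgent:
    """Minimal sketch of a two-layer projective-simulation agent."""

    def __init__(self, n_actions, gamma=0.001, eta=0.1):
        self.n_actions = n_actions
        self.gamma = gamma  # damping ("forgetting") parameter
        self.eta = eta      # glow parameter
        self.h = {}         # h-values: (percept, action) -> weight
        self.g = {}         # glow values: (percept, action) -> glow

    def act(self, percept):
        # h-values default to 1, so an unseen percept yields a uniform policy (Eq. 1)
        weights = [self.h.get((percept, a), 1.0) for a in range(self.n_actions)]
        action = random.choices(range(self.n_actions), weights=weights)[0]
        # mark the traversed edge with glow value 1
        self.g[(percept, action)] = 1.0
        return action

    def learn(self, reward):
        # Eq. 2: damp all h-values towards 1, then add the glow-weighted reward
        for edge in set(self.h) | set(self.g):
            h = self.h.get(edge, 1.0)
            self.h[edge] = h - self.gamma * (h - 1.0) + self.g.get(edge, 0.0) * reward
        # Eq. 3: glow decays after every interaction
        for edge in self.g:
            self.g[edge] *= 1.0 - self.eta
```

For example, with damping switched off (`gamma=0`), a single rewarded interaction raises the traversed edge's $h$-value from 1 to 2, and its glow decays from 1 to $1-\eta$.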
Beyond the basic PS model described above, two additional mechanisms were used that dynamically alter the structure of the PS network: action composition and clip deletion.
1.1 Action composition
When action composition 2012_Briegel is used, new actions can be created as follows: if a sequence of optical elements (actions) results in a rewarded state, then this sequence is added as a single, composite action to the agent's toolbox. Having such additional actions available allows the agent to access the rewarded experiments in a single decision step, without reproducing the placement of optical elements one by one. Moreover, composite actions can be placed at any time during an experiment, which effectively increases the likelihood that an experiment contains the composite action as a sub-setup. Although an entire sequence of $n$ elements can be represented as a single action, this action still has length $n$. That is, the agent is not allowed to place such an element if doing so would exceed the maximum number of optical elements allowed on the table.
Because of the growing size of the action space, rewards coming from the analyzer are rescaled proportionally to the size of the toolbox $M^{(t)}$,

$\lambda^{(t)} = \lambda\, \frac{M^{(t)}}{M^{(0)}}$,

where $M^{(0)}$ is the size of the initial toolbox. This reward rescaling compensates for an otherwise smaller change in transition probabilities in Eq. 2. Although action composition leads to only a modest improvement in the assigned tasks (see Fig. 2(b)), it is vital for the creative features exhibited by the agent, which we discussed in the Results section.
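The composition and rescaling steps above can be sketched as follows (element names and data layout are illustrative assumptions, not the paper's implementation):

```python
def compose_action(toolbox, rewarded_sequence, max_length):
    """Add a rewarded element sequence to the toolbox as one composite action."""
    composite = tuple(rewarded_sequence)  # e.g. ('BS_ab', 'Holo_b_1')
    if composite not in toolbox and len(composite) <= max_length:
        toolbox.append(composite)
    return toolbox

def rescaled_reward(base_reward, toolbox_size, initial_toolbox_size):
    # compensate for the diluted probability update as the action space grows
    return base_reward * toolbox_size / initial_toolbox_size

# hypothetical toolbox of (single-element) actions
toolbox = [("BS_ab",), ("DP_a",), ("Holo_b_1",)]
initial_size = len(toolbox)
compose_action(toolbox, ["BS_ab", "Holo_b_1"], max_length=6)
reward = rescaled_reward(1.0, len(toolbox), initial_size)
```

After one rewarded two-element sequence, the toolbox grows from 3 to 4 actions and subsequent rewards are scaled by 4/3.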
1.2 Clip deletion
Clip deletion enables the agent to handle a large percept and action space by deleting clips that are used only infrequently. Since the underlying mechanisms for percept and action creation are different, the deletion mechanisms are also treated differently.
Let us start by describing the percept deletion mechanism. If all percepts were kept, the number of possible table configurations would scale, to leading order, exponentially in the maximum length of an experiment; even the simplest situation that we study comprises billions of table configurations. In order to avoid storing all configurations of the percept space, we delete percepts that correspond to unsuccessful experiments, since they will not be of any relevance in the future. If, after placing the maximum number of elements on the optical table, no positive reinforcement was obtained, all clips and edges that were created during this experiment are deleted. This deletion mechanism allows us to maintain a compact PS network, with a small average number of percept clips even in the most complicated scenario that we considered.
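A minimal sketch of this pruning step, assuming the edge-dictionary representation used in the earlier agent sketch (hypothetical names):

```python
def prune_experiment(h, g, episode_percepts, rewarded):
    """Delete clips and edges created during a failed experiment.

    h, g map (percept, action) edges to h- and glow values;
    episode_percepts is the set of percepts created in this episode.
    """
    if rewarded:
        return h, g  # keep everything from a successful experiment
    h = {edge: v for edge, v in h.items() if edge[0] not in episode_percepts}
    g = {edge: v for edge, v in g.items() if edge[0] not in episode_percepts}
    return h, g
```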
Now, let us consider action deletion. Due to action composition, the number of actions grows with the number of rewarded experiments. Without deleting action clips, the number of actions can grow so large that the basic elements become unlikely to be chosen. In order to keep the balance between the exploitation of new successful sequences and the exploration of new actions, we assign to each composite action clip $a_j$ a probability $p_j$ of being deleted together with all corresponding edges. These probabilities are given by Eq. 6, in terms of the weights of the in-going edges of the action clip.
Action deletion is triggered just after a reward is obtained and is deactivated again until a new reward is encountered. The intuition behind Eq. 6 is the following. If the in-going edges of action $a_j$ have, to a large extent, comparatively small weights (i.e. either the action has not received many rewards, or the $h$-values have been damped significantly since the last rewards were received), then the deletion probability can be approximated by Eq. 7. Initially, after a new action has been created, the relevant weight sum tends to be small, since the sum of in-bound $h$-values is approximately equal to the number of in-going edges (because a composite action is initialized with all $h$-values being $1$). This means that the probability of a new action being deleted, as defined by Eq. 7, is initially high. In order to compensate for the fast deletion of new actions, we assign to each action an immunity time simon2015evaluation : a newly created action cannot be deleted until a fixed number of further rewards have been given to any sequence of actions. Together with the deletion probabilities, this immunity time bounds the total number of composite actions retained in memory.
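The interplay of deletion probability and immunity can be sketched as follows. The exact deletion probability is given by Eq. 6; here we substitute a generic placeholder that is high for weakly reinforced actions, and all names are illustrative:

```python
import math
import random

def maybe_delete_actions(actions, h, immunity, rewards_since_creation):
    """Return the composite actions that survive one deletion round.

    h maps (percept, action) edges to weights; rewards_since_creation
    counts rewards handed out since each action was created.
    """
    survivors = []
    for a in actions:
        in_weights = [v for (p, act), v in h.items() if act == a]
        strength = sum(in_weights) / max(len(in_weights), 1)
        p_delete = math.exp(-strength)  # generic placeholder for Eq. 6
        # immune actions are never deleted; others survive with prob. 1 - p_delete
        if rewards_since_creation[a] < immunity or random.random() > p_delete:
            survivors.append(a)
    return survivors
```

With this form, an action whose average in-going weight is near 1 (a freshly composed action) faces a deletion probability near $e^{-1} \approx 0.37$ per round unless it is still inside its immunity window.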
We used the same action composition and deletion mechanisms for automated random search. However, the number of actions learned by automated random search with action composition is always equal to zero, because automated random search has no other learning component.
2 Details of learning performance
Here we complement the results shown in Fig. 2(a) with an additional analysis of the agent's performance. Fig. 5(a) verifies that the number of experiments the PS agent has to generate before finding another interesting state continuously decreases with the number of states that have already been found. Finding a state for the very first time requires the largest average number of experiments; this number decreases each time the agent finds a state. Moreover, in Fig. 5(a) one can observe that the length of experiments decreases with the number of states already found: the PS agent keeps finding, on average, shorter and shorter implementations that produce the desired states.
Fig. 2(b) demonstrates that the PS agent finds many more interesting experiments than random search, by counting the number of these experiments after a fixed number of time steps. Here we show that the rate at which interesting experiments are found grows over time. The fraction of interesting experiments at different times is shown in Fig. 5(b)-(c). The fraction of interesting experiments is calculated as the ratio of the number of new, interesting experiments to the number of all experiments performed at time step $t$. Initially, the average fraction of interesting experiments is the same for the PS agents and automated random search. But, in contrast to automated random search (red), the rate at which new experiments are discovered by the PS agents (blue) keeps increasing for a much longer time (Fig. 5(b)-(c)). Automated random search, as expected, always obtains approximately the same fraction of interesting experiments, independent of the number of experiments that have already been simulated. In fact, the fraction of interesting experiments found by automated random search slowly decreases, since random search may re-encounter interesting experiments that have already been found. One can see that all red curves converge to certain values before they start decreasing again, indicating that, on average, for each automated random search agent there exists a maximum rate at which new useful experiments can be obtained.
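The fraction described above can be computed as a running ratio. A minimal sketch, assuming the cumulative reading of "all experiments performed at time step $t$" (names are ours):

```python
def interesting_fraction(flags):
    """flags[t] is True iff experiment t produced a new interesting state.

    Returns the cumulative fraction of new interesting experiments
    after each time step.
    """
    seen = 0
    fractions = []
    for t, flag in enumerate(flags, start=1):
        seen += flag  # bool counts as 0/1
        fractions.append(seen / t)
    return fractions
```

For example, the sequence hit, miss, hit yields fractions 1, 1/2, 2/3 — the quantity plotted (per agent) in Fig. 5(b)-(c).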
A tantalizing assignment, related to the task of finding many interesting experiments, is that of finding many different SRVs. Even though the task assignment was not to find many different SRVs, the two tasks appear to be related, since PS actually found, on average, more different SRVs than automated random search (see Fig. 5(d)-(e)).
We have demonstrated that the PS agent is able to explore a variety of photonic quantum experiments. This suggests that there are correlations (or structural connections) between different interesting experiments, which the agent exploits. If this were not the case, we would expect to see no advantage in using PS in comparison to random search. In order to verify the existence of such correlations between different experiments, we simulate the exploration space as seen by the PS agent. Compared to the exploration space as seen from the perspective of an extensive random search process, shown in Fig. 3(a), the exploration space generated by the PS agent, shown in Fig. 6, exhibits a more refined clustering of interesting experiments. This indicates that interesting experiments, as defined here, have a tendency to cluster.
3 Basic optical elements
Here we describe the standard set of optical elements that was used to build photonic quantum experiments. There are four different types of basic elements in our toolbox, which are shown in Table 1: beam splitters, Dove prisms, holograms and mirrors. Non-polarizing symmetric beam splitters are denoted as BS(a,b), where the indices a and b are the input arms of the beam splitter. There are four different arms, a, b, c, d, that can be used as inputs for optical elements. Hence, there are six different beam splitters in the basic set of optical elements. Holograms can be placed in any one of the four arms and are parameterized by an integer n, which stands for a shift of the OAM of the photon by n. Dove prisms (DP) and mirrors (Refl) also act on single modes and can be placed in one of the four arms. Together, these elements form the basic optical toolbox used throughout the paper. The action of all elements on the OAM of photons is summarized in Table 1.
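The toolbox described above can be enumerated programmatically. In the sketch below the arm labels follow the text, but the set of hologram OAM shifts is an assumption, and the only element action reproduced is the hologram's OAM shift $l \to l + n$:

```python
from itertools import combinations

ARMS = ["a", "b", "c", "d"]
SHIFTS = [-2, -1, 1, 2]  # assumed hologram OAM shifts (illustrative)

# one symmetric beam splitter per unordered pair of arms -> 6 elements
beam_splitters = [f"BS({x},{y})" for x, y in combinations(ARMS, 2)]
dove_prisms = [f"DP({x})" for x in ARMS]
mirrors = [f"Refl({x})" for x in ARMS]
holograms = [f"Holo({x},{n})" for x in ARMS for n in SHIFTS]

toolbox = beam_splitters + dove_prisms + mirrors + holograms

def apply_hologram(l, n):
    """A hologram shifts the photon's OAM value l by n."""
    return l + n
```

With four arms there are exactly $\binom{4}{2} = 6$ beam splitters, and the total toolbox size is 14 plus four holograms per assumed shift value.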
In addition to the described elements, a post-selection procedure is used at the end of each experiment. The desired output is conditioned on having exactly one photon in each arm. The photon in one of the arms is measured in the OAM basis, which leads to different possible three-photon states in the remaining arms. Each experiment is virtually repeated until all possible three-photon states are observed, and the analyzer then reports whether some interesting state was observed or not.
Table 1: Basic optical elements (non-polarizing symmetric beam splitter, Dove prism, hologram, mirror) and the unitary transformations they apply to the OAM of photons.