Resonant Machine Learning Based on Complex Growth Transform Dynamical Systems
In this paper we propose an energy-efficient learning framework that exploits structural and functional similarities between a machine learning network and a general electrical network satisfying Tellegen's theorem. The proposed formulation ensures that the network's active power is dissipated only during the process of learning, whereas its reactive power is maintained at zero at all times. As a result, in steady state, the learned parameters are stored and self-sustained by electrical resonance determined by the network's nodal inductances and capacitances. Based on this approach, this paper introduces three novel concepts: (a) A learning framework where the network's active-power dissipation is used as a regularizer for a learning objective function that is subject to a zero total reactive-power constraint; (b) A dynamical system based on complex-domain, continuous-time growth transforms that optimizes the learning objective function and drives the network towards electrical resonance under steady-state operation; and (c) An annealing procedure that controls the trade-off between active-power dissipation and the speed of convergence. As a representative example, we show how the proposed framework can be used to design resonant support vector machines (SVMs), in which the support vectors correspond to an LC network with self-sustained oscillations. We also show that this resonant network dissipates less active power than its non-resonant counterpart.
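To make the growth-transform machinery concrete, the following is a minimal sketch of the classical real-valued, discrete-time Baum-Eagon/Gopalakrishnan growth-transform update on a probability simplex, which the paper generalizes to a complex-domain, continuous-time dynamical system. The objective, variable names, and the shift constant `lam` here are illustrative assumptions, not the paper's formulation, and the power constraints of the resonant framework are not reproduced.

```python
import numpy as np

# Growth-transform update: for a differentiable objective f(p) over the
# probability simplex, iterate
#     p_i <- p_i * (df/dp_i + lam) / sum_j p_j * (df/dp_j + lam),
# where lam is a constant large enough to keep every numerator positive
# (the Gopalakrishnan et al. extension to indefinite objectives).
# NOTE: this is the classical real-valued update, shown for intuition only;
# the paper's complex-domain, continuous-time variant is not reproduced here.

def growth_transform_step(p, grad, lam):
    """One multiplicative update on the simplex; preserves sum(p) == 1."""
    w = p * (grad + lam)
    return w / w.sum()

# Illustrative objective (an assumption, not the paper's): maximize
# f(p) = p^T A p over the simplex for a symmetric matrix A.
rng = np.random.default_rng(0)
A = rng.random((5, 5))
A = 0.5 * (A + A.T)

p = np.full(5, 1.0 / 5)          # start at the simplex barycenter
lam = np.abs(A).sum() + 1.0      # shift large enough to keep updates positive
for _ in range(200):
    p = growth_transform_step(p, 2.0 * A @ p, lam)  # grad of p^T A p is 2 A p

print(p, p @ A @ p)              # converges to a fixed point of the transform
```

Each step is a normalized multiplicative update, so the iterate stays on the simplex by construction; the objective is monotonically non-decreasing along the trajectory, which is the property the paper's complex-domain dynamics inherit.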