
Quantum machine learning models are kernel methods

Variational quantum policies for reinforcement learning

Near-term quantum algorithms for linear systems of equations

Ansatz-Independent Variational Quantum Classifier

Encoding-dependent generalization bounds for parametrized quantum circuits

Towards Quantum Machine Learning with Tensor Networks

Quantum Machine Learning: Fad or Future?
Structural risk minimization for quantum linear classifiers
Quantum machine learning (QML) stands out as one of the typically highlighted candidates for quantum computing's near-term "killer application". In this context, QML models based on parameterized quantum circuits comprise a family of machine learning models that are well suited for implementation on near-term devices and that can potentially harness computational power beyond what is efficiently achievable on a classical computer. However, how to best use these models (e.g., how to control their expressivity to best balance training accuracy against generalization performance) is far from understood. In this paper we investigate capacity measures of two closely related QML models called explicit and implicit quantum linear classifiers (also called the quantum variational method and the quantum kernel estimator), with the objective of identifying new ways to implement structural risk minimization, i.e., to balance training accuracy against generalization performance. In particular, we identify that the rank and Frobenius norm of the observables used in the QML model closely control the model's capacity. Additionally, we theoretically investigate the effect that these model parameters have on the training accuracy of the QML model. Specifically, we show that there exist datasets that require a high-rank observable for correct classification, and that there exist datasets that can only be classified with a given margin using an observable of at least a certain Frobenius norm. Our results provide new options for performing structural risk minimization for QML models.
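As a concrete illustration of the capacity measures highlighted in the abstract, the rank and Frobenius norm of a measurement observable can be computed directly from its matrix representation. The sketch below uses NumPy and a toy two-qubit observable Z⊗Z; the choice of observable is an illustrative assumption, not one taken from the paper.

```python
import numpy as np

# Toy two-qubit observable O = Z (tensor) Z, chosen only for illustration.
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
O = np.kron(Z, Z)  # 4x4 Hermitian matrix, diag(1, -1, -1, 1)

# The two quantities the paper identifies as controlling model capacity:
rank = np.linalg.matrix_rank(O)        # number of nonzero eigenvalues
frob = np.linalg.norm(O, ord="fro")    # sqrt of sum of squared entries

print(rank)  # 4 (full rank: every eigenvalue of Z⊗Z is ±1)
print(frob)  # 2.0 (sqrt(4), since all four eigenvalues have magnitude 1)
```

Restricting to observables of lower rank or smaller Frobenius norm shrinks this model class, which is the kind of knob structural risk minimization turns to trade training accuracy against generalization.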