Quantum Boosting

02/12/2020
by Srinivasan Arunachalam, et al.

Suppose we have a weak learning algorithm A for a Boolean-valued problem: A produces hypotheses whose bias γ is small, i.e., only slightly better than random guessing (this could, for instance, be due to implementing A on a noisy device). Can we boost the performance of A so that its output is correct on 2/3 of the inputs? Boosting is a technique that converts a weak, inaccurate machine learning algorithm into a strong, accurate one. The AdaBoost algorithm of Freund and Schapire (for which they were awarded the Gödel Prize in 2003) is one of the most widely used boosting algorithms, with many applications in theory and practice. Given a γ-weak learner for a Boolean concept class C that runs in time R(C), the time complexity of AdaBoost scales as VC(C) · poly(R(C), 1/γ), where VC(C) is the VC dimension of C. In this paper, we show how quantum techniques can improve the time complexity of classical AdaBoost. To this end, given a γ-weak quantum learner for a Boolean concept class C that runs in time Q(C), we introduce a quantum boosting algorithm whose complexity scales as √(VC(C)) · poly(Q(C), 1/γ), thereby achieving a quadratic quantum improvement over classical AdaBoost in terms of VC(C).
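For readers unfamiliar with the classical baseline, the following is a minimal sketch of classical AdaBoost in Python/NumPy. The function names and the exhaustive decision-stump weak learner are illustrative choices of ours, not from the paper; a γ-weak learner here is one whose weighted error stays below 1/2 − γ. The paper's quantum algorithm keeps this overall reweight-and-vote structure but speeds up the distribution-dependent steps with quantum subroutines.

```python
import numpy as np

def adaboost(X, y, weak_learner, T):
    """Classical AdaBoost sketch: combine T weak hypotheses by weighted vote.

    X: (n, d) feature matrix; y: labels in {-1, +1};
    weak_learner(X, y, w) returns a callable hypothesis h with
    weighted error < 1/2 under the distribution w.
    """
    n = len(y)
    w = np.full(n, 1.0 / n)              # start from the uniform distribution
    hypotheses, alphas = [], []
    for _ in range(T):
        h = weak_learner(X, y, w)
        pred = h(X)
        eps = w[pred != y].sum()         # weighted training error of h
        eps = min(max(eps, 1e-12), 1 - 1e-12)  # guard against log(0)
        alpha = 0.5 * np.log((1 - eps) / eps)  # hypothesis weight
        w *= np.exp(-alpha * y * pred)   # up-weight mistakes, down-weight hits
        w /= w.sum()                     # renormalize to a distribution
        hypotheses.append(h)
        alphas.append(alpha)

    def strong_hypothesis(Xq):
        votes = sum(a * h(Xq) for a, h in zip(alphas, hypotheses))
        return np.sign(votes)
    return strong_hypothesis

def stump_learner(X, y, w):
    """Illustrative weak learner: exhaustive search over decision stumps."""
    n, d = X.shape
    best, best_err = None, np.inf
    for j in range(d):
        for thr in np.unique(X[:, j]):
            for s in (+1, -1):
                pred = s * np.where(X[:, j] >= thr, 1, -1)
                err = w[pred != y].sum()
                if err < best_err:
                    best_err, best = err, (j, thr, s)
    j, thr, s = best
    return lambda Xq: s * np.where(Xq[:, j] >= thr, 1, -1)
```

For example, on the 1-D "interval" labeling below, no single stump is exact, but three boosting rounds already reach zero training error:

```python
X = np.array([[0.], [1.], [2.], [3.], [4.], [5.]])
y = np.array([-1, -1, 1, 1, -1, -1])
H = adaboost(X, y, stump_learner, T=3)
print(np.array_equal(H(X), y))  # True
```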


Related research:

- 09/17/2020, Improved Quantum Boosting: Boosting is a general method to convert a weak learner (which generates ...
- 09/07/2022, Multitask Learning via Shared Features: Algorithms and Hardness: We investigate the computational efficiency of multitask learning of Boo...
- 10/25/2021, Quantum Boosting using Domain-Partitioning Hypotheses: Boosting is an ensemble learning method that converts a weak learner int...
- 01/31/2020, Boosting Simple Learners: We consider boosting algorithms under the restriction that the weak lear...
- 03/23/2022, New Distinguishers for Negation-Limited Weak Pseudorandom Functions: We show how to distinguish circuits with log k negations (a.k.a k-monoto...
- 11/25/2018, Average-Case Information Complexity of Learning: How many bits of information are revealed by a learning algorithm for a ...
- 10/01/2022, Efficient Quantum Agnostic Improper Learning of Decision Trees: The agnostic setting is the hardest generalization of the PAC model sinc...
