Fast Convergence Rate of Multiple Kernel Learning with Elastic-net Regularization

03/02/2011
by   Taiji Suzuki, et al.

We investigate the learning rate of multiple kernel learning (MKL) with elastic-net regularization, which combines an ℓ_1-regularizer for inducing sparsity with an ℓ_2-regularizer for controlling smoothness. We focus on a sparse setting where the total number of kernels is large but the number of non-zero components of the ground truth is relatively small, and prove that elastic-net MKL achieves the minimax learning rate on the ℓ_2-mixed-norm ball. Our bound is sharper than previously known convergence rates, and it shows that the smoother the truth, the faster the convergence.
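The elastic-net penalty described above can be illustrated with a minimal sketch. This is not the paper's code: the function name, the toy data, and the regularization constants are assumptions chosen for illustration. Each candidate kernel contributes a block of coefficients; the ℓ_1 part sums the (unsquared) block norms to push whole blocks to zero, while the ℓ_2 part sums the squared block norms to smooth the estimate.

```python
import numpy as np

def elastic_net_penalty(blocks, lam1, lam2):
    """Illustrative elastic-net penalty over per-kernel coefficient blocks
    (hypothetical sketch, not the paper's implementation):
    lam1 * sum_m ||b_m||  (l1-type term, induces sparsity across kernels)
    + lam2 * sum_m ||b_m||^2  (l2-type term, controls smoothness)."""
    l1 = sum(np.linalg.norm(b) for b in blocks)        # sparsity-inducing term
    l2 = sum(np.linalg.norm(b) ** 2 for b in blocks)   # smoothness-controlling term
    return lam1 * l1 + lam2 * l2

# Toy sparse setting: many candidate kernels, few non-zero blocks.
blocks = [np.zeros(3) for _ in range(10)]
blocks[0] = np.array([3.0, 0.0, 4.0])   # the single active block, norm 5
penalty = elastic_net_penalty(blocks, lam1=0.1, lam2=0.01)
# 0.1 * 5 + 0.01 * 25 = 0.75
```

In the sparse regime the paper studies, only a few blocks are non-zero, so the ℓ_1 term stays small for the truth while heavily penalizing estimators that spread mass across many kernels.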
