
Beyond L2-Loss Functions for Learning Sparse Models

Incorporating sparsity priors in learning tasks can give rise to simple and interpretable models for complex high-dimensional data. Sparse models have found widespread use in structure discovery, recovering data from corruptions, and a variety of large-scale unsupervised and supervised learning problems. Assuming the availability of sufficient data, these methods infer dictionaries for sparse representations by optimizing for high-fidelity reconstruction. In most scenarios, reconstruction quality is measured using the squared Euclidean distance, and efficient algorithms have been developed for both the batch and online learning cases. However, new application domains motivate looking beyond conventional loss functions. For example, robust loss functions such as ℓ_1 and Huber are useful for learning outlier-resilient models, and the quantile loss is beneficial for discovering structures that are representative of a particular quantile. These new applications motivate our work in generalizing sparse learning to a broad class of convex loss functions. In particular, we consider the class of piecewise linear-quadratic (PLQ) cost functions, which includes Huber as well as ℓ_1, quantile, Vapnik, and hinge losses, and smoothed variants of these penalties. We propose an algorithm to learn dictionaries and obtain sparse codes when the data reconstruction fidelity is measured using any smooth PLQ cost function. We provide convergence guarantees for the proposed algorithm and demonstrate its convergence behavior using empirical experiments. Furthermore, we present three case studies that require the use of PLQ cost functions: (i) robust image modeling, (ii) tag refinement for image annotation and retrieval, and (iii) computing empirical confidence limits for subspace clustering.
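For concreteness, the following minimal sketch shows two standard members of the PLQ family named in the abstract, the Huber and quantile (pinball) losses, evaluated on the reconstruction residual of a dictionary model. This is not the paper's algorithm; the dictionary D, code matrix A, and data X below are illustrative placeholders, and the loss definitions are the usual textbook ones.

```python
import numpy as np

def huber_loss(r, delta=1.0):
    """Huber penalty: quadratic near zero, linear in the tails (a smooth PLQ loss)."""
    abs_r = np.abs(r)
    quadratic = 0.5 * r ** 2
    linear = delta * (abs_r - 0.5 * delta)
    return np.where(abs_r <= delta, quadratic, linear)

def quantile_loss(r, tau=0.5):
    """Quantile (pinball) penalty; tau=0.5 recovers 0.5*|r|, a scaled l1 loss."""
    return np.maximum(tau * r, (tau - 1.0) * r)

# Illustrative reconstruction cost: residual of data X under a
# hypothetical dictionary D and sparse code matrix A.
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 5))   # data: 10-dim samples, 5 columns
D = rng.normal(size=(10, 3))   # dictionary with 3 atoms
A = rng.normal(size=(3, 5))    # sparse codes (dense here for brevity)
residual = X - D @ A

print("Huber cost:   ", huber_loss(residual, delta=1.0).sum())
print("Quantile cost:", quantile_loss(residual, tau=0.9).sum())
```

Replacing the squared Euclidean fidelity with such penalties is what distinguishes the robust (Huber) and quantile-seeking settings discussed above: the Huber loss caps the influence of large residuals, while tau in the pinball loss penalizes over- and under-estimation asymmetrically.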
