Adopting Robustness and Optimality in Fitting and Learning

10/13/2015
by Zhiguang Wang, et al.

We generalize a modified exponentialized estimator by pushing the robust-optimal (RO) index λ toward -∞, achieving robustness to outliers through the optimization of a quasi-minimin function. Robustness is realized and controlled adaptively by the RO index, without any predefined threshold. Optimality is guaranteed by expanding the convexity region defined by the Hessian matrix, which largely avoids local optima. We provide a detailed quantitative analysis of both robustness and optimality. Experiments on fitting tasks for three noisy non-convex functions and on the digit-recognition task for the MNIST dataset support these conclusions.
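The abstract does not spell out the estimator's exact form, but a λ-indexed exponentialized (log-sum-exp) aggregate of per-sample losses is a standard construction with precisely this limiting behavior, so the sketch below uses it as an illustration. The function name and the log-sum-exp form are our assumptions, not taken from the paper; it shows how driving the index toward -∞ yields a quasi-minimin aggregate that down-weights outliers:

```python
import numpy as np

def exponentialized_loss(losses, lam):
    """Illustrative lambda-indexed aggregate of per-sample losses
    (an assumed log-sum-exp form, not the paper's exact estimator).

    lam -> 0 recovers the mean loss; lam -> -inf approaches the
    minimum loss, down-weighting outlying samples (a quasi-minimin
    effect); lam -> +inf approaches the worst-case (maximum) loss.
    """
    losses = np.asarray(losses, dtype=float)
    if lam == 0.0:
        return losses.mean()
    # Shift by the max before exponentiating for numerical stability
    # (the usual log-sum-exp trick).
    m = lam * losses
    mmax = m.max()
    return (mmax + np.log(np.mean(np.exp(m - mmax)))) / lam

# Three inlying losses and one outlier: a strongly negative lam
# pulls the aggregate toward the inliers rather than the outlier.
losses = [0.1, 0.2, 0.15, 5.0]
```

With this form, `exponentialized_loss(losses, 0.0)` is the plain mean (dominated by the 5.0 outlier), while `exponentialized_loss(losses, -50.0)` sits close to the smallest loss, which is the sense in which a strongly negative RO index confers robustness without any hand-set rejection threshold.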

