Recursive Optimization of Convex Risk Measures: Mean-Semideviation Models

04/02/2018
by Dionysios S. Kalogerias, et al.

We develop and analyze stochastic subgradient methods for optimizing a new, versatile, application-friendly, and tractable class of convex risk measures, termed here mean-semideviations. Their construction relies on the concept of a risk regularizer, a one-dimensional nonlinear map with certain properties that essentially generalizes the positive-part weighting function appearing in the mean-upper-semideviation risk measure. After formally introducing mean-semideviations, we study their basic properties and present a fundamental constructive characterization result demonstrating their generality. We then introduce and rigorously analyze the MESSAGEp algorithm, an efficient stochastic subgradient procedure for iteratively solving convex mean-semideviation risk-averse problems to optimality. The MESSAGEp algorithm may be derived as an application of the T-SCGD algorithm of (Yang et al., 2018). However, the generic theoretical framework of (Yang et al., 2018) is too narrow and structurally restrictive for the optimization of mean-semideviations, including even the classical mean-upper-semideviation risk measure. By exploiting problem structure, we propose a substantially weaker theoretical framework, under which we establish pathwise convergence of the MESSAGEp algorithm in the same strong sense as in (Yang et al., 2018). The new framework reveals a fundamental trade-off between the smoothness of the random position function and that of the particular mean-semideviation risk measure under consideration. Further, we explicitly show that the class of mean-semideviation problems supported under our framework is strictly larger than the corresponding class of problems supported in (Yang et al., 2018). Thus, applicability of compositional stochastic optimization is established for a strictly wider spectrum of mean-semideviation problems, justifying the purpose of our work.
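
For concreteness, the sketch below illustrates the kind of two-timescale stochastic compositional subgradient iteration the abstract refers to, applied to the classical mean-upper-semideviation objective rho(x) = E[F(x, xi)] + c * E[(F(x, xi) - E[F(x, xi)])_+], i.e., the order p = 1 member of the mean-semideviation family, where the positive-part map plays the role of the risk regularizer. This is not the authors' MESSAGEp implementation: the position function F, the sampling distribution, and the stepsize schedules are all illustrative assumptions.

```python
# Minimal sketch (illustrative, not the authors' MESSAGEp code): a
# two-timescale stochastic compositional subgradient method for the
# mean-upper-semideviation objective of order p = 1,
#     rho(x) = E[F(x, xi)] + c * E[(F(x, xi) - E[F(x, xi)])_+],
# with the assumed toy position function F(x, xi) = 0.5*||x - xi||^2,
# xi ~ N(mu, I). The scalar m tracks the inner expectation E[F(x, xi)]
# on a faster timescale, the hallmark of compositional schemes.
import numpy as np

rng = np.random.default_rng(0)
dim = 3
mu = np.array([1.0, -2.0, 0.5])   # distribution parameter (assumed)
c = 0.5                           # risk-aversion coefficient, c in [0, 1]

def F(x, xi):                     # random position (cost) function
    return 0.5 * np.sum((x - xi) ** 2)

def grad_F(x, xi):                # gradient of F in x
    return x - xi

x = np.zeros(dim)                 # decision variable
m = 0.0                           # running estimate of E[F(x, xi)]

for n in range(1, 20001):
    alpha = 0.5 / n ** 0.75       # slow stepsize: decision update
    beta = 1.0 / n ** 0.5         # fast stepsize: inner-mean tracking

    # three independent samples keep the subgradient estimate unbiased
    xi1 = rng.normal(mu, 1.0, dim)
    xi2 = rng.normal(mu, 1.0, dim)
    xi3 = rng.normal(mu, 1.0, dim)

    # fast timescale: exponential tracking of the inner expectation
    m = (1.0 - beta) * m + beta * F(x, xi1)

    # stochastic subgradient of E[F] + c * E[(F - m)_+] at the current x;
    # the indicator activates the semideviation term on upper deviations only
    excess = 1.0 if F(x, xi2) > m else 0.0
    g = grad_F(x, xi3) + c * excess * (grad_F(x, xi2) - grad_F(x, xi3))

    x = x - alpha * g             # slow timescale: subgradient step

# By symmetry of this toy problem, the risk-averse optimum is x* = mu.
print("iterate:", np.round(x, 3), " target:", mu)
```

The faster stepsize on m relative to alpha on x mirrors the two-timescale structure of compositional stochastic gradient methods such as T-SCGD: the inner-mean estimate must settle quickly enough for the outer subgradient step to remain asymptotically unbiased.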
