
Nonsmoothness in Machine Learning: specific structure, proximal identification, and applications

by Franck Iutzeler, et al.

Nonsmoothness is often a curse for optimization, but it is sometimes a blessing, in particular for applications in machine learning. In this paper, we present the specific structure of nonsmooth optimization problems appearing in machine learning and illustrate how to leverage this structure in practice, for compression, acceleration, or dimension reduction. We pay special attention to the presentation, keeping it concise and easily accessible, with both simple examples and general results.
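A standard illustration of the structure the abstract alludes to is proximal identification on a lasso problem: the proximal operator of the ℓ1 norm (soft-thresholding) sets coordinates exactly to zero, so proximal-gradient iterates eventually land on, and stay on, a sparse support. The sketch below is not from the paper; it is a minimal, self-contained example of this phenomenon using plain ISTA on synthetic data (problem sizes and the regularization weight `lam` are illustrative choices).

```python
import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of tau * ||.||_1: shrinks toward 0,
    # setting small coordinates exactly to zero.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A, b, lam, steps=500):
    # Proximal gradient (ISTA) for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the smooth part
    for _ in range(steps):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Synthetic sparse recovery: 5-sparse ground truth, 40 measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[:5] = 1.0
b = A @ x_true

x = ista(A, b, lam=0.5)
# The iterate is exactly sparse: nonsmoothness "identified" a small support,
# which can be exploited for compression or dimension reduction.
print("support size:", np.count_nonzero(x))
```

Once the support is identified, subsequent computation can be restricted to those few coordinates, which is one way the nonsmooth structure pays off in practice.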


