An Analysis of Regularized Approaches for Constrained Machine Learning

05/20/2020
by Michele Lombardi, et al.

Regularization-based approaches for injecting constraints into Machine Learning (ML) were introduced to improve a predictive model via expert knowledge. We tackle the issue of finding the right balance between the loss term (the accuracy of the learner) and the regularization term (the degree of constraint satisfaction). The key result of this paper is a formal demonstration that this type of approach cannot guarantee to find all optimal solutions: in particular, in the non-convex case there may exist optima of the constrained problem that do not correspond to any multiplier value.
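The phenomenon can be sketched with a toy example (a hypothetical illustration, not taken from the paper): a concave loss f(x) = -x² on [0, 2] with the constraint g(x) = x - 1 ≤ 0. The constrained optimum is x* = 1, yet because the penalized objective f(x) + λ·g(x) stays concave for every multiplier λ ≥ 0, its minimizer is always a domain endpoint, so no multiplier value ever recovers x*.

```python
# Hypothetical toy problem illustrating the paper's claim:
# minimize f(x) = -x^2 over x in [0, 2], subject to g(x) = x - 1 <= 0.
# The constrained optimum is x* = 1 (f = -1), but scanning the multiplier
# never produces it, because f + lam * g is concave for every lam >= 0.

def f(x):
    return -x * x          # non-convex (concave) loss

def g(x):
    return x - 1.0         # constraint: g(x) <= 0

# discretize the domain [0, 2]
xs = [i / 1000.0 for i in range(2001)]

recovered = set()
for lam in [k / 10.0 for k in range(0, 101)]:   # multipliers 0.0 .. 10.0
    # minimizer of the regularized objective for this multiplier
    x_min = min(xs, key=lambda x: f(x) + lam * g(x))
    recovered.add(round(x_min, 3))

# Only the endpoints of the domain are ever recovered: x = 0 (feasible but
# suboptimal, f = 0) and x = 2 (infeasible). The constrained optimum x* = 1
# corresponds to no multiplier value.
print(sorted(recovered))
```

The same gap appears in any penalized training objective whose loss is non-convex: tuning the regularization weight sweeps only the solutions exposed by the Lagrangian dual, and the duality gap hides the rest.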


