Making Learners (More) Monotone

11/25/2019
by Tom J. Viering, et al.

Learning performance can show non-monotonic behavior: more data does not necessarily lead to better models, even on average. We propose three algorithms that take a supervised learning model and make it perform more monotone. We prove consistency and monotonicity with high probability, and we evaluate the algorithms in scenarios where non-monotone behavior occurs. Our proposed algorithm MT_HT makes fewer than 1% non-monotone decisions on MNIST while staying competitive in terms of error rate compared to several baselines.
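The abstract does not spell out the mechanism, but the general recipe is a wrapper around an arbitrary supervised learner: as data grows, train a new candidate model and only adopt it when it beats the currently deployed model on held-out data. Below is a minimal sketch of that idea, assuming a validation-based switching rule with a one-sided binomial hypothesis test; the class name MonotoneWrapper, the significance level alpha, and the scikit-learn setup are illustrative assumptions, not the paper's exact MT_HT algorithm.

```python
# Hedged sketch of a monotone wrapper: the deployed model only changes when a
# candidate trained on more data is significantly better on held-out data.
# All names and thresholds here are illustrative assumptions.
import numpy as np
from scipy.stats import binomtest
from sklearn.base import clone
from sklearn.model_selection import train_test_split


class MonotoneWrapper:
    """Adopt a new model only if it is significantly better on validation data."""

    def __init__(self, base_estimator, alpha=0.05):
        self.base_estimator = base_estimator
        self.alpha = alpha        # significance level of the one-sided test
        self.current_ = None      # best model adopted so far

    def update(self, X, y):
        # Hold out part of the (growing) data set; train the candidate on the rest.
        X_tr, X_val, y_tr, y_val = train_test_split(
            X, y, test_size=0.2, random_state=0
        )
        candidate = clone(self.base_estimator).fit(X_tr, y_tr)
        if self.current_ is None:
            self.current_ = candidate
            return self
        # Compare candidate and current model on validation points where they disagree.
        pred_new = candidate.predict(X_val)
        pred_old = self.current_.predict(X_val)
        new_wins = int(np.sum((pred_new == y_val) & (pred_old != y_val)))
        old_wins = int(np.sum((pred_old == y_val) & (pred_new != y_val)))
        n = new_wins + old_wins
        # One-sided binomial test: switch only if the candidate is significantly
        # better, so a drop in performance after seeing more data becomes unlikely.
        if n > 0 and binomtest(new_wins, n, 0.5, alternative="greater").pvalue < self.alpha:
            self.current_ = candidate
        return self

    def predict(self, X):
        return self.current_.predict(X)


if __name__ == "__main__":
    # Toy usage: feed growing prefixes of a synthetic data set to the wrapper.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    wrapper = MonotoneWrapper(LogisticRegression(max_iter=1000))
    for n in (100, 200, 400, 800, 1600):
        wrapper.update(X[:n], y[:n])
        print(n, (wrapper.predict(X[1600:]) == y[1600:]).mean())
```

The conservative switching rule is the point: because the deployed model is only replaced when the candidate is significantly better on held-out data, learning-curve dips become rare, at the price of occasionally ignoring a genuinely better model.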


