Monotonic Trends in Deep Neural Networks

09/24/2019
by Akhil Gupta, et al.

The importance of domain knowledge in enhancing model performance and making reliable predictions in the real world is unparalleled. We focus on incorporating monotonic trends (an increase in an input implies an increase or decrease in the output) and propose a novel gradient-based point-wise loss function for enforcing partial monotonicity with deep neural networks. While recent developments have focused on structural changes to the model, our approach instead enhances the learning process. Our point-wise loss function acts as a plug-in to the standard loss and penalizes non-monotonic gradients. We demonstrate that the point-wise loss produces comparable (and sometimes better) results with respect to both AUC and a monotonicity metric, compared to state-of-the-art deep lattice networks that enforce monotonicity. Moreover, it is able to learn customized individual trends and produces smoother conditional curves, which is important for personalized decisions, while preserving the flexibility of deep networks.
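To make the idea concrete, the snippet below is a minimal sketch of such a gradient-based point-wise penalty, assuming a PyTorch setup. It is an illustration rather than the authors' released implementation: the model, the monotonically increasing feature indices, the base loss, and the weight lam are all assumptions introduced for the example.

    # A minimal sketch of a point-wise monotonicity penalty (illustrative, not
    # the paper's released code). Assumes a PyTorch model and a binary task.
    import torch
    import torch.nn as nn

    def monotonic_pointwise_loss(model, x, y, increasing_idx, lam=1.0):
        """Standard loss plus a point-wise penalty on non-monotonic gradients."""
        base_loss = nn.BCEWithLogitsLoss()

        x = x.clone().requires_grad_(True)
        logits = model(x).squeeze(-1)
        loss = base_loss(logits, y)

        # Gradient of the output w.r.t. every input feature, evaluated point-wise.
        grads = torch.autograd.grad(logits.sum(), x, create_graph=True)[0]

        # For features that should be monotonically increasing, any negative
        # gradient is a violation; a decreasing trend would flip the sign.
        violation = torch.relu(-grads[:, increasing_idx])
        return loss + lam * violation.mean()

    # Example usage with a small feed-forward network (hypothetical setup).
    model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
    x = torch.randn(32, 4)
    y = torch.randint(0, 2, (32,)).float()
    loss = monotonic_pointwise_loss(model, x, y, increasing_idx=[0, 2], lam=0.5)
    loss.backward()

Here lam trades off fit against monotonicity, mirroring how the paper's penalty plugs into the standard loss rather than constraining the network architecture.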


Related research

Certified Monotonic Neural Networks (11/20/2020)
Learning monotonic models with respect to a subset of the inputs is a de...

Constrained Monotonic Neural Networks (05/24/2022)
Deep neural networks are becoming increasingly popular in approximating ...

Counterexample-Guided Learning of Monotonic Neural Networks (06/16/2020)
The widespread adoption of deep learning is often attributed to its auto...

Deep Lattice Networks and Partial Monotonic Functions (09/19/2017)
We propose learning deep models that are monotonic with respect to a use...

How to Tell Deep Neural Networks What We Know (07/21/2021)
We present a short survey of ways in which existing scientific knowledge...

Monotonic Neural Network: combining Deep Learning with Domain Knowledge for Chiller Plants Energy Optimization (06/11/2021)
In this paper, we are interested in building a domain knowledge based de...

Loss shaping enhances exact gradient learning with EventProp in Spiking Neural Networks (12/02/2022)
In a recent paper Wunderlich and Pehle introduced the EventProp algorith...
