Learning Energy Networks with Generalized Fenchel-Young Losses

05/19/2022
by Mathieu Blondel et al.

Energy-based models, a.k.a. energy networks, perform inference by optimizing an energy function, typically parametrized by a neural network. This allows one to capture potentially complex relationships between inputs and outputs. To learn the parameters of the energy function, the solution to that optimization problem is typically fed into a loss function. The key challenge for training energy networks lies in computing loss gradients, as this typically requires argmin/argmax differentiation. In this paper, building upon a generalized notion of conjugate function, which replaces the usual bilinear pairing with a general energy function, we propose generalized Fenchel-Young losses, a natural loss construction for learning energy networks. Our losses enjoy many desirable properties and their gradients can be computed efficiently without argmin/argmax differentiation. We also prove the calibration of their excess risk in the case of linear-concave energies. We demonstrate our losses on multilabel classification and imitation learning tasks.
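To make the construction concrete, here is a minimal sketch of one instance of this recipe, under illustrative assumptions: a toy energy Phi and regularizer Omega, with the names phi, omega, solve_inner, and gfy_loss chosen for this example rather than taken from the paper's code. Following the abstract's description, the generalized conjugate replaces the bilinear pairing with the energy, Omega^Phi(theta) = max_mu Phi(theta, mu) - Omega(mu), and the generalized Fenchel-Young loss is L(theta; mu') = Omega^Phi(theta) + Omega(mu') - Phi(theta, mu'). By the envelope (Danskin) theorem, the gradient in theta only needs Phi's gradients at the inner maximizer and at mu', so no argmin/argmax differentiation is required; the stop_gradient call below expresses exactly that.

```python
# A hedged sketch of a generalized Fenchel-Young loss in JAX; the energy,
# regularizer, and inner solver are illustrative placeholders.
import jax
import jax.numpy as jnp

def phi(theta, mu):
    # Illustrative energy: bilinear pairing plus a concave term in mu.
    return theta @ mu - 0.5 * jnp.sum(mu ** 2)

def omega(mu):
    # Illustrative quadratic regularizer.
    return 0.5 * jnp.sum(mu ** 2)

def solve_inner(theta, mu0, steps=100, lr=0.1):
    # Inner problem: mu_hat = argmax_mu phi(theta, mu) - omega(mu),
    # solved here by plain gradient ascent (any inner solver would do).
    ascent = jax.grad(lambda mu: phi(theta, mu) - omega(mu))
    mu = mu0
    for _ in range(steps):
        mu = mu + lr * ascent(mu)
    return mu

def gfy_loss(theta, mu_prime, mu0):
    # L(theta; mu') = Omega^Phi(theta) + Omega(mu') - Phi(theta, mu'),
    # with Omega^Phi(theta) = max_mu Phi(theta, mu) - Omega(mu).
    # stop_gradient encodes the envelope theorem: the gradient of the max
    # term in theta is just dPhi/dtheta at mu_hat, so the argmax itself is
    # never differentiated.
    mu_hat = jax.lax.stop_gradient(solve_inner(theta, mu0))
    conjugate = phi(theta, mu_hat) - omega(mu_hat)
    return conjugate + omega(mu_prime) - phi(theta, mu_prime)

theta = jnp.array([0.3, -0.7])
mu_prime = jnp.array([0.2, 0.1])   # observed target
mu0 = jnp.zeros_like(theta)        # inner-solver initialization
loss, grad_theta = jax.value_and_grad(gfy_loss)(theta, mu_prime, mu0)
print(loss, grad_theta)
```

Note that since the conjugate term upper-bounds Phi(theta, mu') - Omega(mu') by definition of the max, the loss is nonnegative whenever the inner problem is solved exactly, and it vanishes when mu' itself attains the inner maximum.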
