
Learning Optimal Linear Regularizers

by Matthew Streeter et al.

We present algorithms for efficiently learning regularizers that improve generalization. Our approach is based on the insight that regularizers can be viewed as upper bounds on the generalization gap, and that reducing the slack in the bound can improve performance on test data. For a broad class of regularizers, the hyperparameters that give the best upper bound can be computed using linear programming. Under certain Bayesian assumptions, solving the LP lets us "jump" to the optimal hyperparameters given very limited data. This suggests a natural algorithm for tuning regularization hyperparameters, which we show to be effective on both real and synthetic data.
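The core idea can be sketched as a small linear program. Suppose for each of n previously trained models we record its generalization gap (test loss minus training loss) and a vector of nonnegative complexity features (e.g. the L1 and L2 norms of its weights). We then seek hyperparameters lam such that the regularizer lam . phi_i upper-bounds every observed gap with as little total slack as possible. The sketch below is an illustrative assumption about the setup, not the paper's exact formulation; the data is synthetic and the feature choices are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: for each of n trained models, record k nonnegative
# complexity features phi_i and the observed generalization gap.
rng = np.random.default_rng(0)
n, k = 20, 2
features = rng.uniform(0.5, 2.0, size=(n, k))          # phi_i >= 0
true_lam = np.array([0.3, 0.7])                        # unknown "ideal" weights
gaps = features @ true_lam - rng.uniform(0.0, 0.2, n)  # gaps lie below phi_i . true_lam

# LP: minimize total slack  sum_i (phi_i . lam - gap_i)
#     s.t.  phi_i . lam >= gap_i  for all i,  lam >= 0.
# The sum of gaps is constant, so the objective reduces to sum_i phi_i . lam.
c = features.sum(axis=0)
A_ub = -features          # -phi_i . lam <= -gap_i  encodes the lower bound
b_ub = -gaps
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * k)
lam = res.x
print("learned regularization weights:", lam)
```

Because the constraints force the regularizer to dominate every observed gap, the minimizer is the tightest upper bound of this linear form, which is the sense in which reducing slack in the bound selects the hyperparameters.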
