A Solver + Gradient Descent Training Algorithm for Deep Neural Networks

07/07/2022
by Dhananjay Ashok et al.

We present a novel hybrid algorithm for training Deep Neural Networks that combines the state-of-the-art Gradient Descent (GD) method with a Mixed Integer Linear Programming (MILP) solver, outperforming GD and its variants in terms of accuracy, as well as resource and data efficiency, for both regression and classification tasks. Our GD+Solver hybrid algorithm, called GDSolver, works as follows: given a DNN D as input, GDSolver invokes GD to partially train D until it gets stuck in a local minimum, at which point GDSolver invokes an MILP solver to exhaustively search a region of the loss landscape around the weight assignments of D's final-layer parameters, with the goal of tunnelling through and escaping the local minimum. The process is repeated until the desired accuracy is achieved. In our experiments, we find that GDSolver not only scales well to additional data and very large model sizes, but also outperforms all other competing methods in terms of rates of convergence and data efficiency. For regression tasks, GDSolver produced models that, on average, had 31.5% lower MSE in 48% less time, and for classification tasks, GDSolver achieved the highest accuracy over all competing methods while using only 50% of the training data that GD baselines required.
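The abstract describes a two-phase loop: run GD until the loss plateaus, then hand the final layer to a MILP solver that exhaustively searches a bounded region around its current weights. The sketch below illustrates that control flow in plain NumPy. Since the paper's MILP encoding is not reproduced here, the solver call is replaced by a brute-force search over a quantized neighbourhood of the final-layer weights; every name, hyperparameter, and the plateau test are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the GD + solver training loop from the abstract.
# The MILP step is swapped for an exhaustive search over quantized offsets
# on a few final-layer weights; all details below are assumptions.
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Tiny regression problem and a one-hidden-layer network.
X = rng.normal(size=(256, 4))
y = np.sin(X @ rng.normal(size=4))

W1 = rng.normal(scale=0.5, size=(4, 16))  # hidden layer
w2 = rng.normal(scale=0.5, size=16)       # final layer

def forward(W1, w2, X):
    h = np.tanh(X @ W1)
    return h, h @ w2

def mse(pred):
    return float(np.mean((pred - y) ** 2))

def gd_step(W1, w2, lr=0.05):
    h, pred = forward(W1, w2, X)
    err = 2.0 * (pred - y) / len(y)           # d(MSE)/d(pred)
    g_w2 = h.T @ err                          # final-layer gradient
    g_h = np.outer(err, w2) * (1.0 - h ** 2)  # backprop through tanh
    g_W1 = X.T @ g_h
    return W1 - lr * g_W1, w2 - lr * g_w2

def solver_escape(W1, w2, radius=0.2, steps=3):
    """Stand-in for the MILP step: exhaustively try quantized offsets on
    the three largest final-layer weights, keeping the best candidate."""
    h, pred = forward(W1, w2, X)
    best_w2, best_loss = w2, mse(pred)
    idx = np.argsort(-np.abs(w2))[:3]
    grid = np.linspace(-radius, radius, 2 * steps + 1)
    for offsets in itertools.product(grid, repeat=len(idx)):
        cand = w2.copy()
        cand[idx] += offsets
        loss = mse(h @ cand)                  # hidden layer stays frozen
        if loss < best_loss:
            best_w2, best_loss = cand, loss
    return best_w2, best_loss

loss_prev = np.inf
for epoch in range(500):
    W1, w2 = gd_step(W1, w2)
    loss = mse(forward(W1, w2, X)[1])
    if loss_prev - loss < 1e-5:               # crude plateau test
        w2, loss = solver_escape(W1, w2)      # "tunnel" out of the plateau
    loss_prev = loss
print(f"final MSE: {loss:.4f}")
```

The design point this mirrors is that only the final layer is handed to the exhaustive search: the hidden activations are frozen during the escape step, so the search region stays small enough to enumerate, or, in the paper's formulation, to encode as a MILP.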
