Training Efficiency and Robustness in Deep Learning

12/02/2021
by Fartash Faghri, et al.

Deep learning has revolutionized machine learning and artificial intelligence, achieving superhuman performance on several standard benchmarks. It is well known that deep learning models are inefficient to train: they learn by processing millions of training examples multiple times, and they require powerful computational resources to process large batches of data in parallel rather than sequentially. Deep learning models also have unexpected failure modes: they can be fooled into misbehaving and producing unexpectedly incorrect predictions. In this thesis, we study approaches to improving the training efficiency and robustness of deep learning models. In the context of learning visual-semantic embeddings, we find that prioritizing learning on more informative training data increases convergence speed and improves generalization performance on test data. We formalize a simple trick, hard negative mining, as a modification to the learning objective function with no computational overhead. Next, we seek improvements to optimization speed in general-purpose optimization methods for deep learning. We show that a redundancy-aware modification to the sampling of training data improves training speed, and we develop an efficient method for detecting the diversity of the training signal, namely gradient clustering. Finally, we study adversarial robustness in deep learning and approaches to achieving maximal adversarial robustness without training on additional data. For linear models, we prove that maximal robustness is guaranteed solely by an appropriate choice of optimizer, regularization, or architecture.
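To make the hard negative mining trick concrete, here is a minimal PyTorch sketch of a max-of-hinges triplet loss over a batch of image and caption embeddings, in the spirit of the VSE++ line of work by the same author; the function name, margin value, and batch conventions are illustrative assumptions, not the thesis's exact code. Replacing the usual sum over in-batch negatives with a max selects the hardest negative per query at no extra computational cost:

```python
import torch

def hard_negative_triplet_loss(im, cap, margin=0.2):
    """Max-of-hinges triplet loss with in-batch hard negative mining.

    im, cap: L2-normalized image and caption embeddings of shape
    (batch, dim), where im[i] and cap[i] form the matching (positive) pair.
    """
    scores = im @ cap.t()                # (batch, batch) cosine similarities
    pos = scores.diag().view(-1, 1)      # similarity of each matching pair

    # Hinge loss against every in-batch negative, in both directions.
    cost_cap = (margin + scores - pos).clamp(min=0)      # image -> caption
    cost_im = (margin + scores - pos.t()).clamp(min=0)   # caption -> image

    # Zero out the positives on the diagonal.
    eye = torch.eye(scores.size(0), dtype=torch.bool, device=scores.device)
    cost_cap = cost_cap.masked_fill(eye, 0)
    cost_im = cost_im.masked_fill(eye, 0)

    # Hard negative mining: a max over the batch instead of a sum keeps
    # only the hardest negative per query, with no extra computation.
    return cost_cap.max(dim=1).values.mean() + cost_im.max(dim=0).values.mean()
```

Gradient clustering can likewise be pictured as clustering per-example gradient vectors and reading training-signal diversity off the cluster structure. The toy k-means below is only a sketch of that idea under simple assumptions (precomputed per-example gradients G, Euclidean distance), not the efficient method developed in the thesis:

```python
import torch

def cluster_gradients(G, k=4, iters=20):
    """Toy k-means over per-example gradients G of shape (n_examples, n_params).

    A few tight clusters suggest redundant examples (similar gradients);
    many well-separated clusters suggest a diverse training signal.
    """
    centers = G[torch.randperm(G.size(0))[:k]].clone()   # random init
    for _ in range(iters):
        assign = torch.cdist(G, centers).argmin(dim=1)   # nearest center
        for j in range(k):
            members = G[assign == j]
            if members.numel() > 0:
                centers[j] = members.mean(dim=0)
    return assign, centers
```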

