Auto-tuning of Deep Neural Networks by Conflicting Layer Removal

03/07/2021
by David Peer, et al.

Designing neural network architectures is a challenging task, and knowing which specific layers of a model must be adapted to improve performance is largely guesswork. In this paper, we introduce a novel methodology to identify layers that decrease the test accuracy of trained models. Such conflicting layers are detected as early as the beginning of training. In the worst case, we prove that such a layer can lead to a network that cannot be trained at all. We provide a theoretical analysis of the origin of these performance-degrading layers, complemented by an extensive empirical evaluation. More precisely, we identify layers that worsen performance because they produce what we call conflicting training bundles. We show that around 60% of the layers of trained residual networks can be completely removed from the architecture with no significant increase in test error. We further present a novel neural-architecture-search (NAS) algorithm that identifies conflicting layers at the beginning of training. Architectures found by our auto-tuning algorithm achieve accuracy competitive with more complex state-of-the-art architectures, while drastically reducing memory consumption and inference time on different computer vision tasks. The source code is available at https://github.com/peerdavid/conflicting-bundles
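To make the idea of conflicting training bundles concrete, the sketch below groups samples whose activations at a given layer are numerically indistinguishable (within a tolerance) and flags a bundle as conflicting when it mixes samples of different classes. This is a minimal illustration based on the description in the abstract; the function name `detect_conflicting_bundles`, the tolerance-based grouping, and the `eps` parameter are assumptions for illustration, not the authors' exact procedure from the linked repository.

```python
import numpy as np

def detect_conflicting_bundles(activations, labels, eps=1e-3):
    """Group samples whose activations at one layer are (nearly) identical
    into bundles, and return the bundles that contain more than one label.

    activations: (N, D) array of layer activations for N samples.
    labels:      (N,)   array of integer class labels.
    Returns a list of index arrays, one per conflicting bundle.
    """
    n = len(labels)
    assigned = np.zeros(n, dtype=bool)
    conflicting = []
    for i in range(n):
        if assigned[i]:
            continue
        # All samples whose activation lies within eps of sample i form a bundle.
        dist = np.linalg.norm(activations - activations[i], axis=1)
        bundle = np.where(dist < eps)[0]
        assigned[bundle] = True
        # A bundle is conflicting if it mixes samples of different classes:
        # the layer maps differently labeled inputs to the same representation.
        if len(np.unique(labels[bundle])) > 1:
            conflicting.append(bundle)
    return conflicting
```

In an auto-tuning loop, one could evaluate such a measure per layer early in training and remove the layers that produce the most conflicts, which is the spirit of the NAS algorithm described above; the exact criterion used in the paper may differ.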
