CrossedWires: A Dataset of Syntactically Equivalent but Semantically Disparate Deep Learning Models

08/29/2021
by Max Zvyagin, et al.

Training the same neural network architecture with identical training hyperparameters, such as the learning rate and the choice of optimizer, can yield drastically different accuracy across deep learning frameworks. Currently, our ability to build standardized deep learning models is limited by the lack of a suite of neural network and training hyperparameter benchmarks that expose the differences between existing deep learning frameworks. In this paper, we present CrossedWires, a living dataset of models and hyperparameters that exposes semantic differences between two popular deep learning frameworks: PyTorch and TensorFlow. The CrossedWires dataset currently consists of models trained on CIFAR-10 images using three computer vision architectures (VGG16, ResNet50, and DenseNet121) across a large hyperparameter space. Using hyperparameter optimization, each of the three models was trained on 400 hyperparameter sets suggested by the HyperSpace search algorithm. The CrossedWires dataset includes syntactically equivalent PyTorch and TensorFlow models whose test accuracies differ by as much as 0.681 under identical hyperparameter choices. The 340 GB dataset and benchmarks presented here include the performance statistics, training curves, and model weights for all 1,200 hyperparameter choices, resulting in 2,400 total models. The CrossedWires dataset provides an opportunity to study semantic differences between syntactically equivalent models across popular deep learning frameworks. Further, the insights obtained from this study can enable the development of algorithms and tools that improve the reliability and reproducibility of deep learning frameworks. The dataset is freely available at https://github.com/maxzvyagin/crossedwires through a Python API and a direct download link.
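To make the experimental setup concrete, below is a minimal sketch, not the paper's actual benchmark code, of what training "syntactically equivalent" VGG16 models on CIFAR-10 with one shared set of hyperparameters might look like in the two frameworks. The hyperparameter values, optimizer choice, preprocessing, and epoch count here are illustrative assumptions; the real benchmarks sweep 400 hyperparameter sets per architecture via HyperSpace.

# A minimal sketch (not the paper's benchmark code) of training "syntactically
# equivalent" VGG16 models on CIFAR-10 in TensorFlow and PyTorch with one
# shared hyperparameter dictionary. All values below are assumptions.
import tensorflow as tf
import torch
import torchvision

HPARAMS = {"learning_rate": 0.01, "batch_size": 128, "epochs": 10}  # assumed

def train_tensorflow(hp):
    # CIFAR-10 via Keras; pixel values scaled to [0, 1].
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0
    model = tf.keras.applications.VGG16(
        weights=None, input_shape=(32, 32, 3), classes=10)
    model.compile(
        optimizer=tf.keras.optimizers.SGD(learning_rate=hp["learning_rate"]),
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    model.fit(x_train, y_train, batch_size=hp["batch_size"], epochs=hp["epochs"])
    _, accuracy = model.evaluate(x_test, y_test)
    return accuracy

def train_pytorch(hp):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    to_tensor = torchvision.transforms.ToTensor()  # also scales to [0, 1]
    train_set = torchvision.datasets.CIFAR10(
        "data", train=True, download=True, transform=to_tensor)
    test_set = torchvision.datasets.CIFAR10(
        "data", train=False, download=True, transform=to_tensor)
    train_loader = torch.utils.data.DataLoader(
        train_set, batch_size=hp["batch_size"], shuffle=True)
    test_loader = torch.utils.data.DataLoader(
        test_set, batch_size=hp["batch_size"])
    model = torchvision.models.vgg16(num_classes=10).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=hp["learning_rate"])
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(hp["epochs"]):
        model.train()
        for x, y in train_loader:
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()
    # Evaluate test accuracy.
    model.eval()
    correct = 0
    with torch.no_grad():
        for x, y in test_loader:
            predictions = model(x.to(device)).argmax(dim=1).cpu()
            correct += (predictions == y).sum().item()
    return correct / len(test_set)

tf_accuracy = train_tensorflow(HPARAMS)
pt_accuracy = train_pytorch(HPARAMS)
print(f"TensorFlow: {tf_accuracy:.3f}  PyTorch: {pt_accuracy:.3f}  "
      f"gap: {abs(tf_accuracy - pt_accuracy):.3f}")

Even a sketch like this surfaces the problem the dataset catalogs: defaults such as weight initialization, padding behavior, and data ordering differ subtly between the two frameworks, so identical-looking configurations need not produce comparable accuracy.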


