A synthetic dataset for deep learning

06/01/2019
by Xinjie Lan, et al.

In this paper, we propose a novel method for generating a synthetic dataset that obeys a Gaussian distribution. In contrast to commonly used benchmark datasets, whose underlying distributions are unknown, the synthetic dataset has an explicit distribution, namely a Gaussian, while sharing the same characteristics as the benchmark dataset MNIST. As a result, Deep Neural Networks (DNNs) can be readily applied to the synthetic dataset. This dataset provides a novel experimental tool for verifying proposed theories of deep learning.
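The idea of a labeled dataset drawn from an explicit Gaussian distribution can be sketched as follows. This is a minimal illustration, not the authors' actual construction: the per-class means, the shared isotropic covariance, and the 28x28 image shape (chosen here to mirror MNIST) are all assumptions for the sake of the example.

```python
import numpy as np

def make_gaussian_dataset(n_per_class=100, n_classes=10, dim=28 * 28, seed=0):
    """Draw each class from its own Gaussian, yielding MNIST-shaped samples.

    Hypothetical construction: one random mean vector per class and a
    shared isotropic covariance, so the data distribution is fully known.
    """
    rng = np.random.default_rng(seed)
    # One mean vector per class; these play the role of class "templates".
    means = rng.normal(0.0, 1.0, size=(n_classes, dim))
    # Sample n_per_class points around each class mean (std 0.5, assumed).
    X = np.concatenate(
        [rng.normal(means[c], 0.5, size=(n_per_class, dim)) for c in range(n_classes)]
    )
    y = np.repeat(np.arange(n_classes), n_per_class)
    # Reshape flat vectors into 28x28 arrays, the same shape as MNIST images.
    images = X.reshape(-1, 28, 28)
    return images, y
```

Because the generating distribution is known in closed form, quantities such as class-conditional densities or Bayes-optimal error can be computed exactly and compared against a trained DNN's behavior.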
