fashion-mnist
A MNIST-like fashion product database. Benchmark :point_down:
We present Fashion-MNIST, a new dataset comprising 28x28 grayscale images of 70,000 fashion products from 10 categories, with 7,000 images per category. The training set has 60,000 images and the test set has 10,000 images. Fashion-MNIST is intended to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms, as it shares the same image size, data format and structure of training and testing splits. The dataset is freely available at https://github.com/zalandoresearch/fashion-mnist
The MNIST dataset of 10-class handwritten digits was first introduced by LeCun et al. (1998). At that time one could not have foreseen the stellar rise of deep learning techniques and their performance. Although deep learning can today solve far more complicated tasks, the simple MNIST dataset has become the most widely used testbed in the field, surpassing CIFAR-10 (Krizhevsky and Hinton, 2009) and ImageNet (Deng et al., 2009) in popularity according to Google Trends (https://trends.google.com/trends/explore?date=all&q=mnist,CIFAR,ImageNet). Despite its simplicity, its usage shows no sign of decline, notwithstanding repeated calls in the deep learning community to move beyond it.

MNIST is popular largely because of its small size, which allows deep learning researchers to quickly check and prototype their algorithms. This is complemented by the fact that virtually all machine learning libraries (e.g. scikit-learn) and deep learning frameworks (e.g. TensorFlow, PyTorch) provide helper functions and convenient examples that use MNIST out of the box.
Our aim with this work is to create a good benchmark dataset that retains all the accessibility of MNIST: its small size, straightforward encoding and permissive license. We took the approach of sticking to the 10 classes, 70,000 grayscale images and 28x28 image size of the original MNIST. In fact, the only change needed to use this dataset is to change the URL from which the MNIST data is fetched. Moreover, Fashion-MNIST poses a more challenging classification task than the simple MNIST digits, on which accuracies above 99.7% have been reported (Wan et al., 2013; Ciregan et al., 2012).
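To illustrate the drop-in property, here is a minimal sketch of reading the dataset directly from the idx files (the same binary format as MNIST). The local file names are assumed to be the ones listed in Table 1, downloaded from the repository.

```python
# Minimal sketch: load Fashion-MNIST from the gzipped idx files,
# assuming the four files from the repository are in the working directory.
import gzip
import numpy as np

def load_idx_images(path):
    with gzip.open(path, "rb") as f:
        # idx3 header: magic, #images, #rows, #cols as big-endian int32
        magic, n, rows, cols = np.frombuffer(f.read(16), dtype=">i4")
        return np.frombuffer(f.read(), dtype=np.uint8).reshape(n, rows, cols)

def load_idx_labels(path):
    with gzip.open(path, "rb") as f:
        # idx1 header: magic, #labels as big-endian int32
        magic, n = np.frombuffer(f.read(8), dtype=">i4")
        return np.frombuffer(f.read(), dtype=np.uint8)

x_train = load_idx_images("train-images-idx3-ubyte.gz")
y_train = load_idx_labels("train-labels-idx1-ubyte.gz")
print(x_train.shape, y_train.shape)  # (60000, 28, 28) (60000,)
```

Any loader written against the original MNIST idx files works unchanged on these files.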
We also looked at the EMNIST dataset provided by Cohen et al. (2017), an extension of MNIST that increases the number of classes by introducing uppercase and lowercase letters. However, to use it seamlessly one needs not only to extend the deep learning framework's MNIST helpers, but also to change the underlying deep neural network to classify these extra classes.
Fashion-MNIST is based on the assortment on Zalando's website (Zalando is Europe's largest online fashion platform, http://www.zalando.com). Every fashion product on Zalando has a set of pictures shot by professional photographers, showing different aspects of the product, e.g. front and back looks, details, and looks with a model and in an outfit. The original pictures have a light-gray background (hexadecimal color: #fdfdfd) and are stored in JPEG format. To efficiently serve different frontend components, each original picture is resampled at multiple resolutions, e.g. large, medium, small, thumbnail and tiny.
We use the front look thumbnail images of unique products to build Fashion-MNIST. The products come from different gender groups: men, women, kids and neutral. White-colored products are excluded from the dataset, as they have low contrast against the background. The thumbnails (51x73 pixels) are then fed into the following conversion pipeline, which is visualized in Figure 1 and sketched in code after the list.
1. Converting the input to a PNG image.
2. Trimming any edges that are close to the color of the corner pixels, where "closeness" is defined as a distance within 5% of the maximum possible intensity in RGB space.
3. Resizing the longest edge of the image to 28 by subsampling the pixels, i.e. some rows and columns are skipped over.
4. Sharpening pixels using a Gaussian operator with a radius and standard deviation of 1.0, with increasing effect near outlines.
5. Extending the shortest edge to 28 and placing the image at the center of the canvas.
6. Negating the intensities of the image.
7. Converting the image to 8-bit grayscale pixels.
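As an illustration only, the following is a rough approximation of this pipeline using Pillow. The original conversion may have used different tooling; in particular, the unsharp-mask call stands in for the Gaussian sharpening step described above.

```python
# Rough approximation of the conversion pipeline using Pillow.
# Step numbers refer to the list above; the 5% trim tolerance,
# Gaussian radius 1.0 and 28x28 canvas follow the described steps.
from PIL import Image, ImageChops, ImageFilter, ImageOps

def convert(path, size=28):
    img = Image.open(path).convert("RGB")                     # step 1
    bg = Image.new("RGB", img.size, img.getpixel((0, 0)))     # corner color
    diff = ImageChops.difference(img, bg)
    bbox = diff.point(lambda p: 255 if p > 0.05 * 255 else 0).getbbox()
    if bbox:
        img = img.crop(bbox)                                  # step 2: trim
    scale = size / max(img.size)
    img = img.resize((max(1, round(img.width * scale)),       # step 3: subsample
                      max(1, round(img.height * scale))), Image.NEAREST)
    img = img.filter(ImageFilter.UnsharpMask(radius=1))       # step 4: sharpen
    canvas = Image.new("RGB", (size, size), (255, 255, 255))  # step 5: center
    canvas.paste(img, ((size - img.width) // 2, (size - img.height) // 2))
    return ImageOps.invert(canvas).convert("L")               # steps 6 and 7
```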
| Name | Description | # Examples | Size |
|---|---|---|---|
| train-images-idx3-ubyte.gz | Training set images | 60,000 | 25 MBytes |
| train-labels-idx1-ubyte.gz | Training set labels | 60,000 | 140 Bytes |
| t10k-images-idx3-ubyte.gz | Test set images | 10,000 | 4.2 MBytes |
| t10k-labels-idx1-ubyte.gz | Test set labels | 10,000 | 92 Bytes |
For the class labels, we use the silhouette code of the product. The silhouette code is manually labeled by in-house fashion experts and reviewed by a separate team at Zalando. Each product carries exactly one silhouette code. Table 2 summarizes all class labels in Fashion-MNIST.
Finally, the dataset is divided into a training and a test set. The training set receives 6,000 randomly selected examples from each class; the remaining 1,000 per class form the test set. Images and labels are stored in the same file format as the MNIST dataset, which is designed for storing vectors and multidimensional matrices. The resulting files are listed in Table 1. We sort examples by their labels while storing, which yields smaller label files after compression compared to MNIST and makes it easier to retrieve examples with a certain class label; shuffling the data is therefore left to the algorithm developer.

| Label | Description |
|---|---|
| 0 | T-Shirt/Top |
| 1 | Trouser |
| 2 | Pullover |
| 3 | Dress |
| 4 | Coat |
| 5 | Sandals |
| 6 | Shirt |
| 7 | Sneaker |
| 8 | Bag |
| 9 | Ankle boots |

Table 2: Class labels in Fashion-MNIST (example images omitted).
We provide classification results in Table 3 to form a benchmark on this dataset. Each algorithm is run five times, each time shuffling the training data, and the average accuracy on the test set is reported; a minimal sketch of this protocol is shown below. The benchmark on the MNIST dataset is included for a side-by-side comparison. A more comprehensive table with explanations of the algorithms can be found at https://github.com/zalandoresearch/fashion-mnist.
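The sketch below assumes the idx loader functions from the earlier example; the decision tree hyperparameters are illustrative placeholders, not the exact settings from Table 3.

```python
# Sketch of the benchmark protocol: retrain on shuffled training data
# five times and report the mean test accuracy. Uses load_idx_images and
# load_idx_labels from the loading sketch; hyperparameters are hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

x_train = load_idx_images("train-images-idx3-ubyte.gz").reshape(-1, 28 * 28)
y_train = load_idx_labels("train-labels-idx1-ubyte.gz")
x_test = load_idx_images("t10k-images-idx3-ubyte.gz").reshape(-1, 28 * 28)
y_test = load_idx_labels("t10k-labels-idx1-ubyte.gz")

rng = np.random.default_rng(0)
scores = []
for _ in range(5):
    perm = rng.permutation(len(x_train))          # shuffle the training data
    clf = DecisionTreeClassifier(criterion="entropy", max_depth=10)
    clf.fit(x_train[perm], y_train[perm])
    scores.append(clf.score(x_test, y_test))      # accuracy on the test set
print(f"mean test accuracy over 5 runs: {np.mean(scores):.4f}")
```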
[Table 3: average test accuracy on Fashion-MNIST and MNIST for scikit-learn classifiers (DecisionTreeClassifier, ExtraTreeClassifier, GaussianNB, GradientBoostingClassifier, KNeighborsClassifier, LinearSVC, LogisticRegression, MLPClassifier, PassiveAggressiveClassifier, Perceptron, RandomForestClassifier, SGDClassifier, SVC) under various hyperparameter settings; the complete table with accuracy values is available in the repository linked above.]
This paper introduced Fashion-MNIST, a fashion product image dataset intended to serve as a drop-in replacement for MNIST while providing a more challenging alternative for benchmarking machine learning algorithms. The images in Fashion-MNIST are converted to a format that matches the MNIST dataset, making it immediately compatible with any machine learning package capable of working with the original MNIST.