Image augmentation library in Python for machine learning.
The generation of artificial data based on existing observations, known as data augmentation, is a technique used in machine learning to improve model accuracy and generalisation, and to control overfitting. Augmentor is a software package, available in both Python and Julia versions, that provides a high-level API for the expansion of image data using a stochastic, pipeline-based approach, effectively allowing images to be sampled from a distribution of augmented images at runtime. Augmentor provides methods for most standard augmentation practices as well as several advanced features, such as label-preserving, randomised elastic distortions, and many helper functions for typical augmentation tasks used in machine learning.
Data augmentation is the artificial generation of data through the introduction of new samples created by the perturbation of the original dataset, while preserving the label of newly generated samples. It is a convenient and frequently employed method for generating more training data at low effort, or when the accumulation of new samples is no longer feasible, such as in a discontinued clinical trial. Data augmentation is most commonly utilised in the branch of machine learning that concerns image analysis (Hauberg et al., 2016).
The Augmentor project uses a stochastic, pipeline-based approach to image augmentation. The pipeline approach allows the user to chain augmentation operations, such as shears, rotations, and crops, and to pass images through this pipeline in order to create new data. All operations in the pipeline are applied stochastically, both in terms of the probability of an operation being applied to each image as it passes through the pipeline, and in terms of each operation's parameters, which are randomised within user-specified ranges. This effectively allows the user to sample from a distribution of possible images generated by the pipeline at runtime.
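The mechanism can be illustrated with a short stand-alone sketch: each operation fires with a given probability and draws its parameter from a user-defined range, so every pass through the pipeline yields a different result. The class and method names here are invented for the example and are not Augmentor's actual API.

```python
import random

# Minimal sketch of a stochastic augmentation pipeline: each operation has a
# firing probability and a parameter range from which a value is drawn.
class StochasticPipeline:
    def __init__(self, seed=None):
        self.operations = []
        self.rng = random.Random(seed)

    def add(self, func, probability, param_range):
        self.operations.append((func, probability, param_range))

    def sample(self, image):
        for func, probability, (lo, hi) in self.operations:
            if self.rng.random() < probability:
                image = func(image, self.rng.uniform(lo, hi))
        return image

# Toy "image": a list of applied-operation tags rather than pixel data.
pipeline = StochasticPipeline(seed=42)
pipeline.add(lambda img, angle: img + [("rotate", round(angle, 1))],
             probability=0.7, param_range=(-10, 10))
print(pipeline.sample([]))  # → [('rotate', -9.5)]
```

With the seed removed, repeated calls to `sample` on the same input produce different outputs, which is precisely what makes the pipeline behave like a distribution over augmented images.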
Therefore, the aim of the package is to provide a comprehensive and highly customisable image augmentation library, which is platform independent but also independent from any particular machine learning framework. Crucial to the successful application of augmentation is the generation of realistically feasible training data, meaning tight control of the pipeline is a necessity when creating new data. Augmentor’s operations are therefore highly parametric, allowing fine control over how images are created.
The Augmentor package is available for Python and Julia. Sources are available on GitHub, while comprehensive documentation is hosted on Read The Docs (see Table 1). Both versions of the Augmentor package are available under the terms of the MIT Licence.
To install Augmentor:

Python: `pip install Augmentor`
We took into account typical augmentation techniques from the literature, as well as techniques reported on competition sites such as Kaggle, when developing the API. Standard operations include arbitrary rotations, mirroring through the horizontal and vertical axes, cropping, scaling, perspective shifting, shearing, and zooming (Dosovitskiy et al., 2013; Simard et al., 2003; Krizhevsky et al., 2012; Howard, 2013). Less frequently used operations were also implemented (Dosovitskiy et al., 2015), as well as a number of pre-processing techniques in common use and a large number of convenience functions covering typical augmentation tasks.
Because image augmentation is often performed cumulatively, a pipeline-based API was developed (see Figure 1). To use Augmentor, the user begins with an empty pipeline and adds operations to it in the order in which they should be applied to images passed through the pipeline. For each operation, the user specifies the probability that it is applied to an image as it passes through, as well as the range within which the operation may act, for example by constraining a rotation operation to a user-specified range of angles. Once a pipeline has been defined, an image or set of images is repeatedly passed through it until the desired number of new images has been generated. Because of the stochastic nature of the pipeline, each pass produces different image data, so a potentially very large number of images can be generated from even a small initial dataset.
A complete list of features can be found in the project’s documentation. Some commonly used features are random rotations, transforms through the horizontal and vertical axes, cropping (randomly positioned or centred), random zoom levels, random scaling and resizing. Other transforms, such as shearing by random angles in random directions and through random axes, as well as perspective transformations are also implemented. Operations have been implemented with machine learning in mind. For example, arbitrary rotations will not result in images with black or transparent regions around the newly rotated image, as the images are optimally cropped and then resized to their original input size. The same is true of the shear and perspective tilt operations.
Augmentor can also produce random elastic distortions (Simard et al., 2003) in a highly configurable way. The user may specify a grid size, which controls the granularity of the distortions, and the magnitude of the displacements within the grid (the length of the arrows shown in Figure 2).
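The idea behind grid-based distortion can be sketched in plain Python: jitter a coarse grid of displacement vectors and resample the image through them. The function below works on a 2D list with nearest-neighbour lookup and is a simplified, illustrative stand-in for Augmentor's implementation, which warps a PIL image mesh.

```python
import random

def elastic_distort(image, grid_size=4, magnitude=1, seed=0):
    """Distort a 2D image (list of lists of pixel values) by jittering a
    coarse grid of displacement vectors. Illustrative sketch only."""
    rng = random.Random(seed)
    h, w = len(image), len(image[0])
    # One random displacement vector per coarse grid node,
    # each component drawn from [-magnitude, magnitude].
    nodes = [[(rng.uniform(-magnitude, magnitude),
               rng.uniform(-magnitude, magnitude))
              for _ in range(grid_size + 1)] for _ in range(grid_size + 1)]
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Find the grid cell this pixel falls in and apply its displacement.
            gy = min(y * grid_size // h, grid_size)
            gx = min(x * grid_size // w, grid_size)
            dy, dx = nodes[gy][gx]
            # Nearest-neighbour resampling, clamped to the image borders.
            sy = min(max(int(round(y + dy)), 0), h - 1)
            sx = min(max(int(round(x + dx)), 0), w - 1)
            out[y][x] = image[sy][sx]
    return out

img = [[r * 4 + c for c in range(4)] for r in range(4)]
print(elastic_distort(img, grid_size=2, magnitude=2, seed=1))
```

A larger grid gives finer-grained, more local distortions, while a larger magnitude gives stronger displacement, mirroring the two knobs the text describes.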
To demonstrate the API and to highlight the effectiveness of augmentation on a well-known dataset, a short experiment was performed. Using the MNIST dataset, a CNN was trained on 1,000 random images (100 samples per class) extracted from the 60,000-image training set and tested on the standard 10,000-image test set. This set of 1,000 images was then augmented to produce 10,000 additional images, a separate CNN was trained on the combined 11,000-image dataset, and the results of the two models were compared. As shown in Table 2, this resulted in an improvement of more than three percentage points on the same test set.
This augmentation experiment was performed using randomised elastic distortions and random rotations. To show how the Augmentor API works in practice, we demonstrate how the augmented dataset was generated. To begin, a pipeline object is created, pointing to a folder containing the images:
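In Augmentor's Python API this is a single `Pipeline` instantiation. The path below is a placeholder; the package must be installed and the directory must exist:

```python
import Augmentor

# Placeholder path: point this at a folder containing the source images.
p = Augmentor.Pipeline("/path/to/images")
```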
Now that a pipeline object, p, has been created, operations are added to the pipeline as follows:
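For the experiment described above this means adding a random elastic distortion and a rotation. The grid, magnitude, and angle values below are illustrative choices, not necessarily the exact settings used in the experiment:

```python
# Elastic distortion applied to every image; rotation applied to half of them.
p.random_distortion(probability=1.0, grid_width=4, grid_height=4, magnitude=8)
p.rotate(probability=0.5, max_left_rotation=10, max_right_rotation=10)
```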
Every operation has at least a probability parameter. This was set to 1.0 for the randomised distortions and 0.5 for the rotation operation. Finally, the sample function is called to generate the data:
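Sampling is then a single call on the pipeline object built in the previous steps:

```python
p.sample(1000)  # writes 1,000 augmented images to an output sub-directory
```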
This generates 1000 new augmented images. The procedure was repeated 10 times, once per digit, for 10,000 augmented images in total.
| Experiment | Dataset                              | Test Set Accuracy |
|------------|--------------------------------------|-------------------|
| Baseline   | 1,000 image training set             | 93.94%            |
| Augmented  | 11,000 image augmented training set  | 97.28%            |
Image augmentation is an important constituent of many machine learning tasks, particularly deep learning. Augmentor makes artificial data generation easier by providing a stochastic, pipeline-based API that allows for fine-grained control over the creation of augmented data and provides many functions for augmentation techniques found in the literature. Future work will entail expanding functionality, such as the ability to mirror augmentation on a reference dataset, or mimicking more advanced pre-processing and augmentation methods such as the specialised contrast manipulation techniques or vignetting shown in Wu et al. (2015).
Hauberg, S., Freifeld, O., Larsen, A. B. L., Fisher, J. W., & Hansen, L. K. (2016). Dreaming more data: Class-dependent distortions improve learned data augmentation. In Proceedings of the 19th International Conference on Artificial Intelligence and Statistics (AISTATS), Journal of Machine Learning Research.