
An introduction to distributed training of deep neural networks for segmentation tasks with large seismic datasets

by Claire Birnie, et al.

Deep learning applications are progressing rapidly in seismic processing and interpretation tasks. However, the majority of approaches subsample data volumes and restrict model sizes to minimise computational requirements. Subsampling the data risks losing vital spatio-temporal information which could aid training, whilst restricting model sizes can impact model performance, or in some extreme cases, render more complicated tasks such as segmentation impossible. This paper illustrates how to tackle the two main issues in training large neural networks: memory limitations and impracticably large training times. Typically, training data are preloaded into memory prior to training, a particular challenge for seismic applications where the data are typically four times larger than those used for standard image processing tasks (float32 vs. uint8). Using a microseismic use case, we illustrate how over 750GB of data can be used to train a model through a data generator approach, which holds in memory only the data required for the current training batch. Furthermore, efficient training over large models is illustrated through the training of a 7-layer UNet with input data dimensions of 4096×4096. Through a batch-splitting distributed training approach, training times are reduced by a factor of four. The combination of data generators and distributed training removes any necessity of data subsampling or restriction of neural network sizes, offering the opportunity to utilise larger networks, higher-resolution input data, or to move from 2D to 3D problem spaces.
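The data generator idea described in the abstract can be sketched as follows. This is a minimal illustration in NumPy, not the authors' implementation: the class name, file layout, and per-sample shape are assumptions. The key point is that each call reads only one batch from disk, so the full dataset (e.g. >750GB of float32 volumes) is never preloaded into memory.

```python
import numpy as np

class SeismicBatchGenerator:
    """Hypothetical batch generator: only the samples belonging to the
    requested batch are read into memory, via memory-mapped file access."""

    def __init__(self, file_paths, batch_size, shape, dtype=np.float32):
        self.file_paths = list(file_paths)  # one raw-binary file per sample
        self.batch_size = batch_size
        self.shape = shape                  # per-sample shape, e.g. (4096, 4096)
        self.dtype = dtype

    def __len__(self):
        # number of batches per epoch
        return int(np.ceil(len(self.file_paths) / self.batch_size))

    def __getitem__(self, idx):
        # select only the file paths for this batch
        paths = self.file_paths[idx * self.batch_size:(idx + 1) * self.batch_size]
        batch = np.empty((len(paths), *self.shape), dtype=self.dtype)
        for i, p in enumerate(paths):
            # memory-map the file: only the requested bytes are paged in
            batch[i] = np.memmap(p, dtype=self.dtype, mode="r", shape=self.shape)
        return batch
```

An object with `__len__` and `__getitem__` like this can be consumed batch by batch by most training loops (e.g. as a Keras `Sequence`-style generator), keeping peak memory proportional to the batch size rather than the dataset size.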

