The Design and Implementation of a Scalable DL Benchmarking Platform

11/19/2019
by Cheng Li, et al.

The current Deep Learning (DL) landscape is fast-paced and rife with non-uniform models and hardware/software (HW/SW) stacks, yet it lacks a DL benchmarking platform to facilitate the evaluation and comparison of DL innovations, be they models, frameworks, libraries, or hardware. Because of this gap, the current practice of evaluating the benefits of proposed DL innovations is both arduous and error-prone, which stifles the adoption of those innovations. In this work, we first identify 10 design features that are desirable in a DL benchmarking platform. These features include performing evaluations in a consistent, reproducible, and scalable manner; being framework and hardware agnostic; supporting real-world benchmarking workloads; and providing in-depth model execution inspection across HW/SW stack levels. We then propose MLModelScope, a DL benchmarking platform design that realizes these 10 objectives. MLModelScope introduces a specification to define DL model evaluations and techniques to provision the evaluation workflow using the user-specified HW/SW stack. MLModelScope defines abstractions for frameworks and supports a broad range of DL models and evaluation scenarios. We implement MLModelScope as an open-source project with support for all major frameworks and hardware architectures. Through MLModelScope's evaluation and automated analysis workflows, we perform case-study analyses of 37 models across 4 systems and show how model, hardware, and framework selection affects model accuracy and performance under different benchmarking scenarios. We further demonstrate how MLModelScope's tracing capability gives a holistic view of model execution and helps pinpoint bottlenecks.
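To make the two central ideas in the abstract concrete, the sketch below illustrates what a declarative model-evaluation specification and coarse cross-stack tracing could look like. This is a minimal, hypothetical Python sketch: the manifest fields, the span() helper, and run_evaluation() are illustrative assumptions, not MLModelScope's actual schema or API.

```python
# Hypothetical sketch of a declarative evaluation spec plus span-based tracing.
# Field names and helpers are assumptions for illustration only.
import time
from contextlib import contextmanager

# Hypothetical evaluation specification: model, framework, hardware, and the
# benchmarking scenario are declared up front so a run can be reproduced.
evaluation_spec = {
    "model": {"name": "ResNet50", "task": "image_classification"},
    "framework": {"name": "TensorFlow", "version": "1.15"},
    "hardware": {"gpu": "NVIDIA V100", "cpu": "Xeon E5-2698"},
    "scenario": {"batch_size": 32, "num_batches": 8},
}

traces = []  # collected (stack_level, step_name, duration_seconds) tuples

@contextmanager
def span(level, name):
    """Time one step at a given stack level (e.g. model- or framework-level)."""
    start = time.perf_counter()
    try:
        yield
    finally:
        traces.append((level, name, time.perf_counter() - start))

def run_evaluation(spec):
    """Toy driver: walk the declared scenario while emitting nested spans."""
    with span("model", spec["model"]["name"]):
        for i in range(spec["scenario"]["num_batches"]):
            with span("framework", f"predict_batch_{i}"):
                time.sleep(0.001)  # stand-in for an actual framework inference call

run_evaluation(evaluation_spec)
for level, name, seconds in traces:
    print(f"{level:10s} {name:20s} {seconds * 1000:.2f} ms")
```

The nesting of spans is what gives the "holistic view" the abstract describes: model-level latency can be decomposed into the framework-level (and, in the real system, library- and hardware-level) steps beneath it, which is how bottlenecks are localized.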

Related research:

MLModelScope: A Distributed Platform for Model Evaluation and Benchmarking at Scale (02/19/2020)
Machine Learning (ML) and Deep Learning (DL) innovations are being intro...

MLModelScope: Evaluate and Measure ML Models within AI Pipelines (11/24/2018)
The current landscape of Machine Learning (ML) and Deep Learning (DL) is...

Comparing the costs of abstraction for DL frameworks (12/13/2020)
High level abstractions for implementing, training, and testing Deep Lea...

MLHarness: A Scalable Benchmarking System for MLCommons (11/09/2021)
With the society's growing adoption of machine learning (ML) and deep le...

DLSpec: A Deep Learning Task Exchange Specification (02/26/2020)
Deep Learning (DL) innovations are being introduced at a rapid pace. How...

Clubmark: a Parallel Isolation Framework for Benchmarking and Profiling Clustering Algorithms on NUMA Architectures (02/01/2019)
There is a great diversity of clustering and community detection algorit...

Proposition of an implementation framework enabling benchmarking of Holonic Manufacturing Systems (01/17/2019)
Performing an overview of the benchmarking initiatives oriented towards ...
