MLHarness: A Scalable Benchmarking System for MLCommons

11/09/2021
by Yen-Hsiang Chang, et al.

With society's growing adoption of machine learning (ML) and deep learning (DL) for intelligent solutions, it has become increasingly important to standardize a common set of measures for ML/DL models, evaluated on large-scale open datasets under common development practices and resources, so that model quality and performance can be benchmarked and compared on common ground. MLCommons has recently emerged as a driving force from both industry and academia to orchestrate such an effort. Despite its wide adoption as a standardized benchmark, MLCommons Inference includes only a limited number of ML/DL models (seven in total). This significantly limits the generality of its benchmarking results, because the research community continues to produce many novel ML/DL models that solve a wide range of problems with different input and output modalities. To address this limitation, we propose MLHarness, a scalable benchmarking harness system for MLCommons Inference with three distinctive features: (1) it codifies the standard benchmark process defined by MLCommons Inference, including the models, datasets, DL frameworks, and software and hardware systems; (2) it provides an easy, declarative approach for model developers to contribute their models and datasets to MLCommons Inference; and (3) it supports a wide range of models with varying input/output modalities, so that these models can be benchmarked scalably across different datasets, frameworks, and hardware systems. The harness system is developed on top of the MLModelScope system and will be open-sourced to the community. Our experimental results demonstrate the superior flexibility and scalability of this harness system for MLCommons Inference benchmarking.
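To give a concrete feel for the declarative contribution workflow the abstract describes, the snippet below is a minimal, hypothetical sketch of a model manifest, loosely modeled on the manifest-driven style of MLModelScope. The field names, paths, checksum, and values are illustrative assumptions, not the actual MLHarness or MLModelScope schema.

```python
import json

# Hypothetical model manifest for a harness in the style of MLHarness/MLModelScope.
# All field names, paths, and values are illustrative assumptions, not the real schema.
manifest = {
    "name": "ResNet50_v1.5",
    "version": "1.0",
    "framework": {"name": "ONNX Runtime", "version": "1.8"},
    "task": "image_classification",  # declares the input/output modality
    "model": {
        # Placeholder artifact location; a real manifest would point at the
        # model file to be downloaded and verified before benchmarking.
        "graph_path": "https://example.com/models/resnet50_v1.5.onnx",
        "sha256": "<checksum>",
    },
    "inputs": [
        {
            "type": "image",
            "preprocessing": {"resize": [224, 224], "mean": [0.485, 0.456, 0.406]},
        }
    ],
    "outputs": [
        {
            "type": "classification",
            "postprocessing": {"top_k": 5},
        }
    ],
    "dataset": {"name": "ImageNet2012-val", "count": 50000},
}

# Printing the manifest stands in for handing it to the harness, which would use it
# to drive preprocessing, inference, and accuracy/latency measurement.
print(json.dumps(manifest, indent=2))
```

The point of the sketch is that, under this kind of design, a model developer only declares the model artifact, its input/output modality, and its pre/post-processing; the harness supplies the MLCommons Inference load generation, measurement, and reporting.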

research
02/19/2020

MLModelScope: A Distributed Platform for Model Evaluation and Benchmarking at Scale

Machine Learning (ML) and Deep Learning (DL) innovations are being intro...
research
11/19/2019

The Design and Implementation of a Scalable DL Benchmarking Platform

The current Deep Learning (DL) landscape is fast-paced and is rife with ...
research
05/23/2023

Chakra: Advancing Performance Benchmarking and Co-design using Standardized Execution Traces

Benchmarking and co-design are essential for driving optimizations and i...
research
09/21/2019

Scale MLPerf-0.6 models on Google TPU-v3 Pods

The recent submission of Google TPU-v3 Pods to the industry wide MLPerf ...
research
11/18/2019

DLBricks: Composable Benchmark Generation to Reduce Deep Learning Benchmarking Effort on CPUs

The past few years have seen a surge of applying Deep Learning (DL) mode...
research
11/18/2019

DLBricks: Composable Benchmark Generation to Reduce Deep Learning Benchmarking Effort on CPUs (Extended)

The past few years have seen a surge of applying Deep Learning (DL) mode...
