MLModelScope: Evaluate and Measure ML Models within AI Pipelines

11/24/2018
by Abdul Dakkak, et al.

The current landscape of Machine Learning (ML) and Deep Learning (DL) is rife with non-uniform frameworks, models, and system stacks, but lacks standard tools to facilitate the evaluation and measurement of models. Because such tools are absent, the current practice for evaluating and comparing the benefits of proposed AI innovations (whether hardware or software) on end-to-end AI pipelines is both arduous and error-prone, stifling the adoption of those innovations. We propose MLModelScope, a hardware/software-agnostic platform to facilitate the evaluation, measurement, and introspection of ML models within AI pipelines. MLModelScope aids application developers in discovering and experimenting with models, data scientists in replicating and evaluating models for publication, and system architects in understanding the performance of AI workloads. We describe the design and implementation of MLModelScope and show how it gives users a holistic view into the execution of models within AI pipelines. Using AlexNet as a case study, we demonstrate how MLModelScope aids in identifying deviations in accuracy, helps pinpoint the sources of system bottlenecks, and automates the evaluation and performance aggregation of models across frameworks and systems.
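The abstract does not spell out MLModelScope's actual interface, so as a purely illustrative sketch, the Python below shows the kind of per-stage measurement the abstract describes: timing preprocessing, prediction, and postprocessing separately so latency can be attributed to a specific pipeline stage, and running the same harness against interchangeable per-framework predictors. Every name here (timed, evaluate, the predict_framework_* stand-ins) is hypothetical and is not MLModelScope's real API.

```python
# Hypothetical sketch (not MLModelScope's real API): measure each pipeline
# stage separately, then aggregate mean latencies per framework backend.
import random
import time
from typing import Callable, Dict, List


def timed(fn: Callable, arg):
    """Run fn(arg) and return (result, elapsed seconds)."""
    start = time.perf_counter()
    out = fn(arg)
    return out, time.perf_counter() - start


def evaluate(preprocess: Callable, predict: Callable, postprocess: Callable,
             inputs: List) -> Dict[str, float]:
    """Return the mean per-stage latency over all inputs."""
    totals = {"preprocess": 0.0, "predict": 0.0, "postprocess": 0.0}
    for raw in inputs:
        x, t = timed(preprocess, raw)
        totals["preprocess"] += t
        y, t = timed(predict, x)
        totals["predict"] += t
        _, t = timed(postprocess, y)
        totals["postprocess"] += t
    return {stage: total / len(inputs) for stage, total in totals.items()}


# Stand-ins for framework-specific AlexNet bindings (placeholders only).
def predict_framework_a(x):
    time.sleep(0.002)  # pretend inference cost
    return [random.random() for _ in range(1000)]


def predict_framework_b(x):
    time.sleep(0.003)
    return [random.random() for _ in range(1000)]


if __name__ == "__main__":
    images = list(range(8))  # stand-in dataset
    for name, predict in [("framework-A", predict_framework_a),
                          ("framework-B", predict_framework_b)]:
        stats = evaluate(lambda r: r, predict, max, images)
        print(name, {s: f"{v * 1e3:.2f} ms" for s, v in stats.items()})
```

Timing each stage independently, rather than the pipeline end to end, is what allows a harness like this to attribute latency to preprocessing versus inference; that is the style of bottleneck attribution the paper's AlexNet case study demonstrates.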

research · 11/19/2019
The Design and Implementation of a Scalable DL Benchmarking Platform
The current Deep Learning (DL) landscape is fast-paced and is rife with ...

research · 05/01/2020
PipelineProfiler: A Visual Analytics Tool for the Exploration of AutoML Pipelines
In recent years, a wide variety of automated machine learning (AutoML) m...

research · 02/19/2020
MLModelScope: A Distributed Platform for Model Evaluation and Benchmarking at Scale
Machine Learning (ML) and Deep Learning (DL) innovations are being intro...

research · 06/07/2020
Kafka-ML: connecting the data stream with ML/AI frameworks
Machine Learning (ML) and Artificial Intelligence (AI) have a dependency...

research · 04/17/2023
eTOP: Early Termination of Pipelines for Faster Training of AutoML Systems
Recent advancements in software and hardware technologies have enabled t...

research · 05/23/2023
Chakra: Advancing Performance Benchmarking and Co-design using Standardized Execution Traces
Benchmarking and co-design are essential for driving optimizations and i...

research · 09/12/2023
The Grand Illusion: The Myth of Software Portability and Implications for ML Progress
Pushing the boundaries of machine learning often requires exploring diff...
