SysML'19 demo: customizable and reusable Collective Knowledge pipelines to automate and reproduce machine learning experiments
Reproducing, comparing, and reusing results from machine learning and systems papers is a tedious, ad hoc, and time-consuming process. I will demonstrate how to automate it using open-source, portable, customizable, CLI-based Collective Knowledge (CK) workflows and pipelines developed by the community. I will help participants run several real-world, non-virtualized CK workflows from the SysML'19 conference, from companies (General Motors, Arm), and from the MLPerf benchmark to automate benchmarking and co-design of efficient software/hardware stacks for machine learning workloads. I hope that our approach will help authors reduce the effort of sharing reusable and extensible research artifacts while enabling artifact evaluators to validate experimental results from published papers in a standard, portable, and automated way.
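To give a flavor of the CLI-driven automation described above, here is a minimal Python sketch that drives a CK pipeline through CK's kernel API (ck.access). The specific repository and program names used here (ck-mlperf, image-classification-tflite) are illustrative assumptions, not taken from the paper; substitute the workflows shared for the demo.

```python
# Minimal sketch of invoking Collective Knowledge (CK) actions from Python.
# Requires the ck package (pip install ck). Repository and program names
# below are assumptions for illustration only.
import ck.kernel as ck


def run_pipeline():
    # Pull a shared CK repository containing reusable workflows
    # (equivalent to "ck pull repo:ck-mlperf" on the command line).
    r = ck.access({'action': 'pull',
                   'module_uoa': 'repo',
                   'data_uoa': 'ck-mlperf'})
    if r['return'] > 0:
        ck.err(r)  # print the CK error and exit

    # Run a benchmarking program registered in the pulled repository
    # (equivalent to "ck run program:image-classification-tflite").
    r = ck.access({'action': 'run',
                   'module_uoa': 'program',
                   'data_uoa': 'image-classification-tflite'})
    if r['return'] > 0:
        ck.err(r)


if __name__ == '__main__':
    run_pipeline()
```

The same actions can be invoked directly from the CK command line; the Python API simply exposes them as dictionaries, which is what makes the pipelines scriptable and composable.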