OpenML Benchmarking Suites and the OpenML100
We advocate the use of curated, comprehensive benchmark suites of machine learning datasets, backed by standardized OpenML-based interfaces and complementary software toolkits written in Python, Java and R. Major distinguishing features of OpenML benchmark suites are (a) ease of use through standardized data formats, APIs, and existing client libraries; (b) machine-readable meta-information regarding the contents of the suite; and (c) online sharing of results, enabling large-scale comparisons. As a first such suite, we propose the OpenML100, a machine learning benchmark suite of 100 classification datasets carefully curated from the thousands of datasets available on OpenML.org.
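The machine-readable meta-information mentioned in point (b) is what lets clients process a suite programmatically. As a minimal sketch, the JSON payload below is a hypothetical, simplified stand-in for a suite description (the real OpenML REST API returns a richer schema per dataset/task), and `dataset_ids` is an illustrative helper, not part of any OpenML client:

```python
import json

# Hypothetical, simplified suite description; the actual OpenML API
# returns richer per-dataset metadata than shown here.
suite_json = """
{
  "name": "OpenML100",
  "description": "100 curated classification datasets",
  "data": [{"data_id": 3, "name": "kr-vs-kp"},
           {"data_id": 6, "name": "letter"},
           {"data_id": 11, "name": "balance-scale"}]
}
"""

def dataset_ids(payload: str) -> list[int]:
    """Parse a suite description and return its dataset IDs."""
    suite = json.loads(payload)
    return [entry["data_id"] for entry in suite["data"]]

print(dataset_ids(suite_json))  # [3, 6, 11]
```

In practice, the `openml` Python client can fetch real suites (e.g. via `openml.study.get_suite`), which requires the package and network access; the sketch above only demonstrates the idea of consuming standardized suite metadata.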