Can AutoML outperform humans? An evaluation on popular OpenML datasets using AutoML Benchmark

09/03/2020
by   Marc Hanussek, et al.

In the last few years, Automated Machine Learning (AutoML) has gained much attention. This raises the question of whether AutoML can outperform results achieved by human data scientists. This paper compares four AutoML frameworks on 12 popular datasets from OpenML; six of them are supervised classification tasks and the other six are supervised regression tasks. Additionally, we consider a real-life dataset from one of our recent projects. The results show that the automated frameworks perform better than or on par with the machine learning community in 7 out of 12 OpenML tasks.


