Analyzing Hypersensitive AI: Instability in Corporate-Scale Machine Learning

07/17/2018
by Michaela Regneri, et al.

Predictive geometric models deliver excellent results for many Machine Learning use cases. Despite their undoubted performance, neural predictive algorithms can show unexpected degrees of instability and variance, particularly when applied to large datasets. We present an approach to measure changes in geometric models with respect to both output consistency and topological stability. Using the example of a recommender system based on word2vec, we analyze the influence of single data points, approximation methods, and parameter settings. Our findings can help to stabilize models where needed and to detect differences in the informational value of data points on a large scale.
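To make the notion of output consistency concrete, the sketch below (not the authors' code) trains two word2vec models that differ only in their random seed and reports the mean Jaccard overlap of each query item's nearest-neighbour list, one simple way to quantify how much an embedding-based recommender shifts between otherwise identical runs. It assumes gensim (>= 4.0) as the word2vec implementation and uses an invented toy corpus in place of real interaction data; the helper name `neighbour_overlap` is hypothetical.

```python
# Minimal sketch (assumption, not the paper's implementation): estimate
# output consistency of word2vec by comparing nearest-neighbour lists
# across two training runs that differ only in the random seed.
from gensim.models import Word2Vec

def neighbour_overlap(model_a, model_b, words, topn=10):
    """Mean Jaccard overlap of the top-n neighbour sets per query word."""
    overlaps = []
    for w in words:
        if w not in model_a.wv or w not in model_b.wv:
            continue
        nn_a = {t for t, _ in model_a.wv.most_similar(w, topn=topn)}
        nn_b = {t for t, _ in model_b.wv.most_similar(w, topn=topn)}
        overlaps.append(len(nn_a & nn_b) / len(nn_a | nn_b))
    return sum(overlaps) / len(overlaps) if overlaps else 0.0

# Toy corpus standing in for real click/purchase sequences of a recommender.
corpus = [["user", "clicked", "shoes", "then", "socks"],
          ["user", "clicked", "socks", "then", "shoes"],
          ["user", "bought", "shoes", "and", "laces"]] * 100

run_a = Word2Vec(corpus, vector_size=32, min_count=1, seed=1, workers=1)
run_b = Word2Vec(corpus, vector_size=32, min_count=1, seed=2, workers=1)

print("mean neighbour overlap:", neighbour_overlap(run_a, run_b, ["shoes", "socks"]))
```

An overlap close to 1 indicates that the neighbourhoods (and hence the recommendations derived from them) are stable across runs; values well below 1 signal the kind of hypersensitivity the paper investigates.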


