
Efficient Task-Specific Data Valuation for Nearest Neighbor Algorithms
Given a data set D containing millions of data points and a data consumer who is willing to pay $X to train a machine learning (ML) model over D, how should we distribute this $X to each data point to reflect its "value"? In this paper, we define the "relative value of data" via the Shapley value, as it uniquely possesses properties with appealing real-world interpretations, such as fairness, rationality, and decentralizability. For general, bounded utility functions, the Shapley value is known to be challenging to compute: obtaining Shapley values for all N data points requires O(2^N) model evaluations for exact computation and O(N log N) for (ϵ, δ)-approximation. In this paper, we focus on one popular family of ML models relying on K-nearest neighbors (KNN). The most surprising result is that for unweighted KNN classifiers and regressors, the Shapley values of all N data points can be computed, exactly, in O(N log N) time, an exponential improvement in computational complexity! Moreover, for (ϵ, δ)-approximation, we are able to develop an algorithm based on Locality-Sensitive Hashing (LSH) with only sublinear complexity O(N^{h(ϵ,K)} log N) when ϵ is not too small and K is not too large. We empirically evaluate our algorithms on up to 10 million data points, and even our exact algorithm is up to three orders of magnitude faster than the baseline approximation algorithm. The LSH-based approximation algorithm can accelerate the value calculation process even further. We then extend our algorithms to other scenarios, such as (1) weighted KNN classifiers, (2) settings where different data points are clustered by different data curators, and (3) settings with data analysts who provide computation and also require proper valuation.
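The abstract's headline claim, that exact Shapley values for an unweighted KNN classifier can be computed in O(N log N) time per test point, rests on sorting the training points by distance to the test point and then filling in all N values with a single backward recursion. The sketch below illustrates one plausible form of such a recursion; the function name, the Euclidean-distance choice, and the single-test-point interface are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def knn_shapley(X_train, y_train, x_test, y_test, K):
    """Exact Shapley values of all training points for an unweighted
    K-NN classifier's utility on a single test point.

    Sketch of the sorted backward recursion: O(N log N) from the sort,
    O(N) for the recursion itself.
    """
    N = len(X_train)
    # Sort training points by distance to the test point (ascending).
    dists = np.linalg.norm(X_train - x_test, axis=1)
    order = np.argsort(dists)
    # Label-agreement indicators in distance order.
    match = (y_train[order] == y_test).astype(float)

    s = np.zeros(N)
    # Farthest point: it only matters in coalitions where it is among
    # the K nearest, which averages out to match/N.
    s[N - 1] = match[N - 1] / N
    # Walk inward: each value differs from its farther neighbor's by a
    # term proportional to the difference in label agreement.
    for i in range(N - 2, -1, -1):
        s[i] = s[i + 1] + (match[i] - match[i + 1]) / K * min(K, i + 1) / (i + 1)

    # Undo the sort so values align with the original training order.
    out = np.zeros(N)
    out[order] = s
    return out
```

A quick sanity check is the efficiency property of the Shapley value: the values should sum to the utility of the full data set, here the fraction of the K nearest neighbors whose label matches the test label.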