Fully Understanding the Hashing Trick

05/22/2018
by Casper Benjamin Freksen, et al.

Feature hashing, also known as the hashing trick, introduced by Weinberger et al. (2009), is one of the key techniques used in scaling up machine learning algorithms. Loosely speaking, feature hashing uses a random sparse projection matrix A : R^n → R^m (where m ≪ n) in order to reduce the dimension of the data from n to m while approximately preserving the Euclidean norm. Every column of A contains exactly one non-zero entry, equal to either −1 or 1. Weinberger et al. showed tail bounds on ‖Ax‖_2^2. Specifically, they showed that for every ε, δ, if ‖x‖_∞ / ‖x‖_2 is sufficiently small and m is sufficiently large, then Pr[ |‖Ax‖_2^2 − ‖x‖_2^2| < ε‖x‖_2^2 ] > 1 − δ. These bounds were later extended by Dasgupta (2010) and most recently refined by Dahlgaard et al. (2017); however, the true nature of the performance of this key technique, and specifically the correct tradeoff between the pivotal parameters ‖x‖_∞ / ‖x‖_2, m, ε, δ, remained an open question. We settle this question by giving tight asymptotic bounds on the exact tradeoff between the central parameters, thus providing a complete understanding of the performance of feature hashing. We complement the asymptotic bound with empirical data, which shows that the constants "hiding" in the asymptotic notation are, in fact, very close to 1, thus further illustrating the tightness of the presented bounds in practice.
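The projection described in the abstract can be sketched in a few lines of NumPy. This is an illustrative toy, not the paper's code: it materializes A as a dense matrix for clarity, whereas practical feature hashing computes Ax directly via a hash function and a sign hash without ever building A. The dimensions n, m, the seed, and the Gaussian test vector are all assumptions chosen for the demo; a Gaussian vector has a small ‖x‖_∞ / ‖x‖_2 ratio, which is the regime in which the tail bounds guarantee good norm preservation.

```python
import numpy as np

def feature_hash_matrix(n, m, rng):
    """Build the sparse projection A in R^{m x n}: each column holds
    exactly one non-zero entry, equal to +1 or -1, in a random row."""
    A = np.zeros((m, n))
    rows = rng.integers(0, m, size=n)        # random row (hash bucket) per column
    signs = rng.choice([-1.0, 1.0], size=n)  # random sign per column
    A[rows, np.arange(n)] = signs
    return A

rng = np.random.default_rng(0)
n, m = 10_000, 512  # demo sizes (assumed), m << n
A = feature_hash_matrix(n, m, rng)

# Test vector with small ||x||_inf / ||x||_2 (mass spread across coordinates).
x = rng.standard_normal(n)

# Relative error of the squared-norm estimate: | ||Ax||^2 - ||x||^2 | / ||x||^2.
rel_err = abs(np.linalg.norm(A @ x) ** 2 - np.linalg.norm(x) ** 2) / np.linalg.norm(x) ** 2
print(f"relative norm error: {rel_err:.4f}")
```

Because each column of A has a single ±1 entry, computing Ax costs only O(n) time and the estimate ‖Ax‖_2^2 is unbiased for ‖x‖_2^2; the abstract's tradeoff question is about how tightly it concentrates for a given m and ‖x‖_∞ / ‖x‖_2.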


