An Algorithm for Learning Smaller Representations of Models With Scarce Data
We present a greedy algorithm for solving binary classification problems in situations where the dataset is either too small or not fully representative of the problem being solved, and obtaining more data is not possible. This algorithm is of particular interest when training small models that have trouble generalizing. It relies on a trained model with loose accuracy constraints, an iterative hyperparameter pruning procedure, and a function used to generate new data. We provide an analysis of correctness and runtime complexity under ideal conditions, along with an extension to deep neural networks. In the former case we obtain an asymptotic bound of O(|Θ|^2(log|Θ| + |θ|^2 + T_f(|D|)) + S̅|Θ||E|), where |Θ| is the cardinality of the set Θ of hyperparameters θ to be searched; |E| and |D| are the sizes of the evaluation and training datasets, respectively; S̅ and f̅ are the inference times of the trained model and a candidate model, respectively; and T_f(|D|) is a polynomial in |D| and f̅. Under these conditions, the algorithm returns a solution that is r times better, with 1 ≤ r ≤ 2(1 - 2^(-|Θ|)), than simply enumerating Θ and training with each θ ∈ Θ. As part of our analysis of the generating function, we also prove that, under certain assumptions, if an open cover of D has the same homology as the manifold on which the support of the underlying probability distribution lies, then D is learnable, and vice versa.
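The abstract only names the algorithm's ingredients (a loosely constrained reference model, an iterative hyperparameter pruning procedure, and a data-generating function), so the following is a minimal, hypothetical sketch of how those pieces could fit together. The halving rule, the `slack` tolerance, and all function names are illustrative assumptions, not the paper's actual procedure.

```python
import math
from typing import Any, Callable

def greedy_prune_search(
    theta_space: list,                     # candidate hyperparameters Θ
    train: Callable[[Any, Any], Any],      # trains a candidate model from (θ, data)
    evaluate: Callable[[Any], float],      # accuracy of a model on the evaluation set E
    generate: Callable[[Any, list], Any],  # generating function: returns augmented data
    data: Any,                             # the (scarce) training dataset D
    reference_accuracy: float,             # accuracy of the loosely constrained trained model
    slack: float = 0.05,                   # assumed tolerance below the reference accuracy
):
    """Hypothetical sketch: iteratively prune Θ, keeping only candidates whose
    models stay within `slack` of the reference model's accuracy, and augment
    the training data with the generating function between rounds."""
    candidates = list(theta_space)
    best_theta, best_score = None, -math.inf
    while len(candidates) > 1:
        survivors = []
        for theta in candidates:
            model = train(theta, data)
            score = evaluate(model)
            if score >= reference_accuracy - slack:  # loose accuracy constraint
                survivors.append((score, theta))
                if score > best_score:
                    best_theta, best_score = theta, score
        if not survivors:
            break  # no candidate meets the constraint; return the best seen so far
        # Greedy pruning step (assumed): keep the better-scoring half of the survivors.
        survivors.sort(key=lambda pair: pair[0], reverse=True)
        candidates = [theta for _, theta in survivors[: max(1, len(survivors) // 2)]]
        data = generate(data, candidates)  # synthesize new training data for the next round
    return best_theta, best_score
```

Under these assumptions, each round retrains at most half the surviving candidates of the previous one, which is consistent with the |Θ|-dependent factors in the stated asymptotic bound; the exact schedule used by the paper may differ.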