Algebraic Machine Learning

03/14/2018
by Fernando Martin-Maroto, et al.

Machine learning algorithms use error function minimization to fit a large set of parameters in a preexisting model. However, error minimization eventually leads to memorization of the training dataset, losing the ability to generalize to other datasets. To achieve generalization, something else is needed, for example a regularization method or stopping training when the error on a validation dataset reaches its minimum. Here we propose a different approach to learning and generalization that is parameter-free, fully discrete, and does not use function minimization. We use the training data to find an algebraic representation with minimal size and maximal freedom, explicitly expressed as a product of irreducible components. This algebraic representation is shown to generalize directly, giving high accuracy on test data, with accuracy increasing as the representation shrinks. We prove that the number of generalizing representations can be very large, and the algebra only needs to find one. We also derive and test a relationship between compression and error rate. We give results for a simple problem solved step by step, hand-written character recognition, and the Queens Completion problem as an example of unsupervised learning. As an alternative to statistical learning, algebraic learning may offer advantages in combining bottom-up and top-down information, formal concept derivation from data, and large-scale parallelization.
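For reference, the abstract contrasts algebraic learning with the conventional remedy of halting gradient-based training once validation error bottoms out. Below is a minimal NumPy sketch of that early-stopping baseline; the toy data, linear model, and hyperparameters (`lr`, `patience`) are illustrative choices, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (hypothetical): noisy linear relationship, split into
# training and validation sets.
X = rng.normal(size=(200, 5))
true_w = rng.normal(size=5)
y = X @ true_w + 0.1 * rng.normal(size=200)
X_tr, y_tr, X_val, y_val = X[:150], y[:150], X[150:], y[150:]

def mse(w, X, y):
    """Mean squared error of the linear model w on (X, y)."""
    return np.mean((X @ w - y) ** 2)

w = np.zeros(5)
best_w, best_val = w.copy(), np.inf
patience, stall, lr = 10, 0, 0.05

for epoch in range(1000):
    # Gradient step on the training error (the function being minimized).
    grad = 2 * X_tr.T @ (X_tr @ w - y_tr) / len(y_tr)
    w -= lr * grad

    # Early stopping: keep the weights with the lowest validation error,
    # and stop once it has not improved for `patience` consecutive epochs.
    val = mse(w, X_val, y_val)
    if val < best_val:
        best_w, best_val, stall = w.copy(), val, 0
    else:
        stall += 1
        if stall >= patience:
            break

print(f"stopped at epoch {epoch}, best validation MSE {best_val:.4f}")
```

Unlike this baseline, the approach proposed in the paper involves no parameters to tune and no stopping criterion: generalization is obtained from the minimal algebraic representation itself.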
