Probabilistic Invariant Learning with Randomized Linear Classifiers

08/08/2023
by Leonardo Cotta et al.

Designing models that are both expressive and preserve the known invariances of a task is an increasingly hard problem. Existing solutions trade off invariance for computational or memory resources. In this work, we show how to leverage randomness to design models that are both expressive and invariant while using fewer resources. Inspired by randomized algorithms, our key insight is that accepting probabilistic notions of universal approximation and invariance can reduce resource requirements. More specifically, we propose a class of binary classification models called Randomized Linear Classifiers (RLCs). We give parameter and sample-size conditions under which RLCs can, with high probability, approximate any (smooth) function while preserving invariance to compact group transformations. Leveraging this result, we design three RLCs that are provably probabilistically invariant for classification tasks over sets, graphs, and spherical data. We show how these models achieve probabilistic invariance and universality using fewer resources than (deterministic) neural networks and their invariant counterparts. Finally, we empirically demonstrate the benefits of this new class of models on invariant tasks where deterministic invariant neural networks are known to struggle.
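To make the idea concrete, here is a minimal, hypothetical sketch of the general shape of such a model for set-structured inputs: parameters are drawn at random at prediction time and a linear threshold is applied to a permutation-invariant pooling of the input, so each prediction is invariant to element reordering and universality is sought only in a probabilistic sense. This is an illustrative construction, not the authors' RLC design, and it omits how the sampling distribution over parameters would actually be learned; the function names and hyperparameters below are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def rlc_predict(X, n_feats=64, scale=1.0):
    """X: (n_elements, d) array representing one set; returns a {0, 1} label.

    Weights are freshly sampled on every call (randomized classifier);
    sum pooling makes each individual prediction permutation-invariant.
    """
    d = X.shape[1]
    # Randomly drawn linear map, bias, and readout (resampled per call).
    W = rng.normal(0.0, scale, size=(d, n_feats))
    b = rng.normal(0.0, scale, size=n_feats)
    w_out = rng.normal(0.0, scale, size=n_feats)
    pooled = X.sum(axis=0)               # permutation-invariant pooling
    z = np.cos(pooled @ W + b)           # random-feature-style nonlinearity
    return int(z @ w_out > 0.0)

def invariant_prediction(X, n_samples=101):
    """Majority vote over independent random draws to stabilize the randomness."""
    votes = sum(rlc_predict(X) for _ in range(n_samples))
    return int(votes > n_samples // 2)
```

A call such as `invariant_prediction(rng.normal(size=(10, 3)))` returns the same label distribution for any permutation of the 10 set elements, which is the probabilistic-invariance property the abstract refers to.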
