Universal Mini-Batch Consistency for Set Encoding Functions

08/26/2022
by Jeffrey Willette, et al.

Previous works have established solid foundations for neural set functions, as well as effective architectures that preserve the properties necessary for operating on sets, such as invariance to permutations of the set elements. Subsequently, Mini-Batch Consistency (MBC), the ability to sequentially process any permutation of any partition of a set while maintaining consistency guarantees on the output, has been established, but only for a limited class of network architectures. We further study the MBC property in neural set encoding functions, establishing a method for converting arbitrary non-MBC models to satisfy MBC. In doing so, we provide a framework for a universally-MBC (UMBC) class of set functions. Additionally, we explore an interesting dropout strategy made possible by our framework and investigate its effects on probabilistic calibration under test-time distributional shifts. We validate UMBC with proofs backed by unit tests, and also provide qualitative and quantitative experiments on toy data, clean and corrupted point cloud classification, and amortized clustering on ImageNet. The results demonstrate the utility of UMBC, and we further discover that our dropout strategy improves uncertainty calibration.
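
To make the MBC property concrete: a set encoder is MBC if encoding a set in one pass, and encoding it chunk-by-chunk over any partition (in any order) and then aggregating, yield the same output. The minimal NumPy sketch below illustrates this with a mean-pooled encoder, which is trivially MBC; the names (phi, encode_full, encode_streaming) are illustrative stand-ins, not the paper's UMBC architecture.

import numpy as np

def phi(x):
    # Per-element feature map; stand-in for a neural network.
    return np.tanh(x)

def encode_full(X):
    # Encode the whole set at once: mean over element features.
    return phi(X).mean(axis=0)

def encode_streaming(chunks):
    # Encode from an arbitrary partition, keeping only a running
    # sum and count, so memory stays constant in the set size.
    total, count = 0.0, 0
    for chunk in chunks:
        total = total + phi(chunk).sum(axis=0)
        count += len(chunk)
    return total / count

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))                  # a set of 100 elements, dim 8
parts = np.array_split(rng.permutation(X), 7)  # random partition into 7 chunks
assert np.allclose(encode_full(X), encode_streaming(parts))  # MBC holds

By contrast, encoders whose pooling couples all elements at once (e.g., softmax attention over the full set) do not satisfy this identity in general, since a chunk's output depends on which other elements it was batched with; converting such non-MBC models into MBC ones is the gap the UMBC framework addresses.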
