This paper proposes a novel communication-efficient split learning (SL) ...
Deep models are susceptible to learning spurious correlations, even duri...
Knowledge distillation is a popular and effective regularization techniq...
We introduce an efficient optimization-based meta-learning technique for...
We introduce a modality-agnostic neural data compression algorithm based...
Trying to capture the sample-label relationship, conditional generative ...
Succinct representation of complex signals using coordinate-based neural...
The idea of using a separately trained target model (or teacher) to impr...
Recent denoising algorithms based on the "blind-spot" strategy show impr...
The paradigm of worst-group loss minimization has shown its promise in a...
Implicit neural representations are a promising new avenue of representi...
Recent breakthroughs in self-supervised learning show that such algorith...
Pre-trained language models have achieved state-of-the-art accuracies on...
It is known that Θ(N) parameters are sufficient for neural networks to m...
Recent discoveries on neural network pruning reveal that, with a careful...
Neural networks often learn to make predictions that overly rely on spur...
The universal approximation property of width-bounded networks has been ...
In risk-sensitive learning, one aims to find a hypothesis that minimizes...
Magnitude-based pruning is one of the simplest methods for pruning neura...
This paper generalizes the Maurer--Pontil framework of finite-dimensiona...