
Data Preprocessing to Mitigate Bias with Boosted Fair Mollifiers
In a recent paper, Celis et al. (2020) introduced a new approach to fair...

All your loss are belong to Bayes
Loss functions are a cornerstone of machine learning and the starting po...

Cumulant-free closed-form formulas for some common (dis)similarities between densities of an exponential family
It is well-known that the Bhattacharyya, Hellinger, Kullback-Leibler, α...

Generalised Lipschitz Regularisation Equals Distributional Robustness
The problem of adversarial examples has highlighted the need for a theor...

Supervised Learning: No Loss No Cry
Supervised learning requires the specification of a loss function to min...

Boosted and Differentially Private Ensembles of Decision Trees
Boosted ensembles of decision tree (DT) classifiers are extremely popular...

Advances and Open Problems in Federated Learning
Federated learning (FL) is a machine learning setting where many clients...

Proper-Composite Loss Functions in Arbitrary Dimensions
The study of a machine learning problem is in many ways difficult to ...

Adversarial Networks and Autoencoders: The Primal-Dual Relationship and Generalization Bounds
Since the introduction of Generative Adversarial Networks (GANs) and Var...

New Tricks for Estimating Gradients of Expectations
We derive a family of Monte Carlo estimators for gradients of expectatio...

The Bregman chord divergence
Distances are fundamental primitives whose choice significantly impacts ...

Lipschitz Networks and Distributional Robustness
Robust risk minimisation has several advantages: it has been studied wit...

Hyperparameter Learning for Conditional Mean Embeddings with Rademacher Complexity Bounds
Conditional mean embeddings are nonparametric models that encode conditi...

DPAGE: Diverse Paraphrase Generation
In this paper, we investigate the diversity aspect of paraphrase generat...

Private Text Classification
Confidential text corpora exist in many forms, but do not allow arbitrar...

Integral Privacy for Density Estimation with Approximation Guarantees
Density estimation is an old and central problem in statistics and machi...

Monge beats Bayes: Hardness Results for Adversarial Training
The last few years have seen extensive empirical study of the robustness...

Boosted Density Estimation Remastered
There has recently been a steady increase in iterative approaches ...

Entity Resolution and Federated Learning get a Federated Resolution
Consider two data providers, each maintaining records of different featu...

f-GANs in an Information Geometric Nutshell
Nowozin et al. showed last year how to extend the GAN principle to all f...

Semiparametric Network Structure Discovery Models
We propose a network structure discovery model for continuous observatio...

A series of maximum entropy upper bounds of the differential entropy
We present a series of closedform maximum entropy upper bounds for the ...

Large Margin Nearest Neighbor Classification using Curved Mahalanobis Distances
We consider the supervised classification problem of machine learning in...

Making Deep Neural Networks Robust to Label Noise: a Loss Correction Approach
We present a theoretically grounded approach to train deep neural networ...

A scaled Bregman theorem with applications
Bregman divergences play a central role in the design and analysis of a ...

The Crossover Process: Learnability and Data Protection from Inference Attacks
It is usual to consider data protection and learnability as conflicting ...

Fast (1+ε)-approximation of the Löwner extremal matrices of high-dimensional symmetric matrices
Matrix data sets are common nowadays like in biomedical imaging where th...

Loss factorization, weakly supervised learning and label noise robustness
We prove that the empirical risk of most well-known loss functions facto...

Further heuristics for k-means: The merge-and-split heuristic and the (k,l)-means
Finding the optimal k-means clustering is NP-hard in general and many he...

Combining Feature and Prototype Pruning by Uncertainty Minimization
We focus in this paper on dataset reduction techniques for use in k-near...

Boosting k-NN for categorization of natural scenes
The k-nearest neighbors (k-NN) classification rule has proven extremely ...
Richard Nock
Adjunct Professor, the Australian National University, the University of Sydney & Senior Principal Researcher, Data61