Current approaches to building general-purpose AI systems tend to produc...
Information-theoretic approaches to active learning have traditionally f...
We introduce a method to measure uncertainty in large language models. F...
State-of-the-art language models are often accurate on many question-ans...
We investigate the efficacy of treating all the parameters in a Bayesian...
Bayesian inference has theoretical attractions as a principled framework...
Causal models of agents have been used to analyse the safety aspects of ...
Training on web-scale data can take months. But most computation and tim...
We present a general framework for training safe agents whose naive ince...
Pruning neural networks at initialization would enable us to find sparse...
We propose Active Surrogate Estimators (ASEs), a new method for label-ef...
We introduce Goldilocks Selection, a technique for faster model training...
In active learning, new labels are commonly acquired in batches. However...
High-quality estimates of uncertainty and robustness are crucial for num...
We introduce active testing: a new framework for sample-efficient model ...
Active learning is a powerful tool when labelling data is expensive, but...
We introduce a method to speed up training by 2x and inference by 3x in ...
We challenge the longstanding assumption that the mean-field approximati...
Evaluation of Bayesian deep learning (BDL) methods is challenging. We of...
We propose Radial Bayesian Neural Networks: a variational distribution f...
Catastrophic forgetting can be a significant problem for institutions th...
Some machine learning applications require continual learning - where da...
Continual learning experiments used in current deep learning papers do n...
This report surveys the landscape of potential security threats from mal...