Teaching Uncertainty Quantification in Machine Learning through Use Cases

by   Matias Valdenegro-Toro, et al.

Uncertainty in machine learning is not generally covered in Machine Learning course curricula. In this paper we propose a short curriculum for a course about uncertainty in machine learning, and complement the course with a selection of use cases aimed at triggering discussion and letting students experiment with the concepts of uncertainty in a programming setting. Our use cases cover the concept of output uncertainty, Bayesian neural networks and weight distributions, sources of uncertainty, and out-of-distribution detection. We expect that this curriculum and set of use cases will motivate the community to adopt these important concepts into courses for safety in AI.
