Teaching Uncertainty Quantification in Machine Learning through Use Cases

by Matias Valdenegro-Toro, et al.

Uncertainty in machine learning is generally not covered in Machine Learning course curricula. In this paper we propose a short curriculum for a course about uncertainty in machine learning, and complement the course with a selection of use cases aimed at triggering discussion and letting students experiment with the concepts of uncertainty in a programming setting. Our use cases cover the concept of output uncertainty, Bayesian neural networks and weight distributions, sources of uncertainty, and out-of-distribution detection. We expect that this curriculum and set of use cases will motivate the community to adopt these important concepts into courses for safety in AI.
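The concepts the abstract lists (output uncertainty, Bayesian neural networks and weight distributions) are commonly illustrated by sampling several stochastic forward passes of a model, e.g. with MC Dropout, and summarizing the spread of the predictions. The following is a minimal NumPy sketch of this idea; it is not taken from the paper's course materials, and the simulated logits merely stand in for a real model's stochastic outputs:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)

# Simulate T stochastic forward passes (e.g. MC Dropout) for one input:
# each pass produces slightly different logits for 3 classes.
T, n_classes = 50, 3
logits = rng.normal(loc=[2.0, 0.5, 0.0], scale=0.8, size=(T, n_classes))
probs = softmax(logits)

# Predictive distribution: average the per-pass probabilities.
mean_probs = probs.mean(axis=0)

# Predictive entropy as a scalar measure of output uncertainty.
entropy = -np.sum(mean_probs * np.log(mean_probs))

print("mean probs:", mean_probs)
print("predictive entropy:", entropy)
```

High entropy (close to log of the number of classes) signals an uncertain prediction, which is also the usual starting point for discussing out-of-distribution detection: inputs far from the training distribution tend to produce higher predictive entropy.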





Using Comics to Introduce and Reinforce Programming Concepts in CS1

Recent work investigated the potential of comics to support the teaching...

Thinging the Use Case Model

Use cases as textual visual modeling techniques have become a key constr...

Applied Machine Learning for Games: A Graduate School Course

The game industry is moving into an era where old-style game engines are...

Towards Connecting Use Cases and Methods in Interpretable Machine Learning

Despite increasing interest in the field of Interpretable Machine Learni...

Improving Students' Academic Performance with AI and Semantic Technologies

Artificial intelligence and semantic technologies are evolving and have ...

Bayesian Learning: A Selective Overview

This paper presents an overview of some of the concepts of Bayesian Lear...

Novel Approach for Cybersecurity Workforce Development: A Course in Secure Design

Training the future cybersecurity workforce to respond to emerging threa...