Uncertainty Quantification for Competency Assessment of Autonomous Agents

06/21/2022
by Aastha Acharya et al.

For safe and reliable deployment in the real world, autonomous agents must elicit appropriate levels of trust from human users. One method to build trust is to have agents assess and communicate their own competencies for performing given tasks. Competency depends on the uncertainties affecting the agent, making accurate uncertainty quantification vital for competency assessment. In this work, we show how ensembles of deep generative models can be used to quantify the agent's aleatoric and epistemic uncertainties when forecasting task outcomes as part of competency assessment.
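The abstract's core idea is the standard deep-ensemble decomposition of predictive uncertainty: disagreement among ensemble members' predictions reflects epistemic uncertainty, while the noise each member predicts reflects aleatoric uncertainty. Below is a minimal sketch of that decomposition, assuming generic Gaussian-output predictors in place of the paper's deep generative models; `ensemble_predict` and `decompose_uncertainty` are hypothetical stand-ins, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def ensemble_predict(x, n_members=5):
    """Stand-in for querying each trained ensemble member on input x.

    Returns per-member predicted means and variances of the task outcome.
    In practice these would come from separately trained networks
    (the paper uses deep generative models instead).
    """
    means = np.array([np.sin(x) + 0.05 * rng.standard_normal()
                      for _ in range(n_members)])
    variances = np.full(n_members, 0.1)  # each member's predicted noise level
    return means, variances

def decompose_uncertainty(means, variances):
    """Standard deep-ensemble decomposition of predictive variance.

    aleatoric: average of the members' predicted (data) variances
    epistemic: variance of the members' predicted means (model disagreement)
    """
    aleatoric = variances.mean()
    epistemic = means.var()
    return aleatoric, epistemic, aleatoric + epistemic

means, variances = ensemble_predict(x=1.0)
aleatoric, epistemic, total = decompose_uncertainty(means, variances)
print(f"aleatoric={aleatoric:.4f}  epistemic={epistemic:.4f}  total={total:.4f}")
```

In this decomposition, epistemic uncertainty shrinks as members agree (e.g., with more training data), while aleatoric uncertainty persists because it reflects irreducible noise in the task outcome itself.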


Related Research

03/23/2022 · Competency Assessment for Autonomous Agents using Deep Generative Models
For autonomous agents to act as trustworthy partners to human users, the...

02/17/2023 · Learning to Forecast Aleatoric and Epistemic Uncertainties over Long Horizon Trajectories
Giving autonomous agents the ability to forecast their own outcomes and ...

04/20/2022 · A Deeper Look into Aleatoric and Epistemic Uncertainty Disentanglement
Neural networks are ubiquitous in many tasks, but trusting their predict...

05/27/2021 · Deep Ensembles from a Bayesian Perspective
Deep ensembles can be seen as the current state-of-the-art for uncertain...

02/21/2022 · GAN-DUF: Hierarchical Deep Generative Models for Design Under Free-Form Geometric Uncertainty
Deep generative models have demonstrated effectiveness in learning compa...

11/11/2022 · Disentangled Uncertainty and Out of Distribution Detection in Medical Generative Models
Trusting the predictions of deep learning models in safety critical sett...

03/03/2023 · Dynamic Competency Self-Assessment for Autonomous Agents
As autonomous robots are deployed in increasingly complex environments, ...
