Building Safe and Reliable AI systems for Safety Critical Tasks with Vision-Language Processing

08/06/2023
by Shuang Ao, et al.

Although AI systems have been applied in many fields and achieved impressive performance, their safety and reliability remain a major concern. This is especially true for safety-critical tasks, whose shared characteristic is risk sensitivity: small mistakes can have severe consequences and even endanger lives. Several factors can serve as guidelines for the successful deployment of AI systems in such sensitive tasks: (i) failure detection and out-of-distribution (OOD) detection; (ii) overfitting identification; (iii) uncertainty quantification for predictions; (iv) robustness to data perturbations. These factors are also open challenges for current AI systems and major obstacles to building safe and reliable AI. Specifically, current AI algorithms cannot identify the common causes of failures, and additional techniques are required to assess the quality of their predictions. Both shortcomings lead to inaccurate uncertainty quantification, which lowers trust in predictions; obtaining accurate model uncertainty estimates, and improving them further, therefore remains challenging. Many techniques have been proposed to address these issues, such as regularization methods and learning strategies. Since vision and language are the most common data types and offer many open-source benchmark datasets, this thesis focuses on vision-language data processing for tasks such as classification, image captioning, and visual question answering. In this thesis, we aim to build a safeguard by further developing current techniques to ensure accurate model uncertainty quantification for safety-critical tasks.
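The abstract does not commit to a specific method, but a minimal sketch can make points (i) and (iii) concrete. The snippet below illustrates two common baselines, maximum-softmax-probability (MSP) scoring for failure/OOD detection and Monte Carlo dropout for uncertainty quantification, on a hypothetical toy classifier; the model, layer sizes, and sample count are illustrative assumptions, not details taken from the thesis.

```python
# Minimal sketch (not the thesis's method): MSP confidence scoring and
# Monte Carlo dropout as illustrative uncertainty-quantification baselines.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical toy classifier; any vision backbone could stand in here.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(32 * 32 * 3, 128),
    nn.ReLU(),
    nn.Dropout(p=0.3),
    nn.Linear(128, 10),
)

def msp_score(logits: torch.Tensor) -> torch.Tensor:
    """Confidence = max softmax probability; low values flag likely failures or OOD inputs."""
    return F.softmax(logits, dim=-1).max(dim=-1).values

@torch.no_grad()
def mc_dropout_predict(x: torch.Tensor, n_samples: int = 20):
    """Keep dropout active at test time and average stochastic forward passes.

    Returns the mean predictive distribution and its per-sample entropy,
    a simple scalar measure of predictive uncertainty.
    """
    model.train()  # leave dropout enabled for stochastic passes
    probs = torch.stack(
        [F.softmax(model(x), dim=-1) for _ in range(n_samples)]
    ).mean(dim=0)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
    return probs, entropy

x = torch.randn(4, 3, 32, 32)  # stand-in batch of images
model.eval()
confidence = msp_score(model(x))
probs, uncertainty = mc_dropout_predict(x)
print(confidence, uncertainty)
```

Low MSP confidence or high predictive entropy would flag inputs for abstention or human review, which is the kind of safeguard the abstract argues safety-critical deployments need.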

