Can We Rely on AI?

08/29/2023
by Desmond J. Higham, et al.

Over the last decade, adversarial attack algorithms have revealed instabilities in deep learning tools. These algorithms raise issues regarding safety, reliability and interpretability in artificial intelligence, especially in high-risk settings. From a practical perspective, there has been a war of escalation between those developing attack and defence strategies. At a more theoretical level, researchers have also studied bigger-picture questions concerning the existence and computability of attacks. Here we give a brief overview of the topic, focusing on aspects that are likely to be of interest to researchers in applied and computational mathematics.
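To illustrate the kind of instability the abstract refers to, below is a minimal sketch (not taken from the paper) of the fast gradient sign method, a canonical adversarial attack: it nudges an input in the direction of the sign of the loss gradient so that a small, often imperceptible perturbation changes the model's prediction. The model, data and epsilon value here are placeholders chosen purely for illustration.

    # Minimal FGSM sketch with PyTorch; model, input and label are toy placeholders.
    import torch
    import torch.nn as nn

    def fgsm_attack(model, x, y, epsilon=0.03):
        """Return an adversarial example x + epsilon * sign(grad_x loss)."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        loss.backward()
        # Step in the direction that increases the loss, clamped to a valid pixel range.
        return torch.clamp(x_adv + epsilon * x_adv.grad.sign(), 0.0, 1.0).detach()

    # Toy usage: a tiny linear "classifier" on a random 28x28 image.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(1, 1, 28, 28)   # placeholder input
    y = torch.tensor([3])          # placeholder label
    x_adv = fgsm_attack(model, x, y)
    print((x_adv - x).abs().max())  # perturbation is bounded by epsilon

The point of the sketch is only that the perturbation is bounded in size yet constructed adversarially from the gradient, which is the source of the instabilities and of the attack/defence escalation the paper surveys.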

