Neural Network Verification for the Masses (of AI graduates)

07/02/2019
by Ekaterina Komendantskaya, et al.

The rapid development of AI applications has stimulated demand for, and given rise to, a rapidly growing number and diversity of AI MSc degrees. The AI and Robotics research communities, industry, and students are becoming increasingly aware of the problems caused by unsafe or insecure AI applications. Perhaps the most famous example is the vulnerability of deep neural networks to "adversarial attacks". Owing to the widespread use of neural networks across all areas of AI, this problem is seen as particularly acute and pervasive. Despite the growing number of research papers on safety and security vulnerabilities of AI applications, there is a noticeable shortage of accessible tools, methods and teaching materials for incorporating verification into AI programs. LAIV – the Lab for AI and Verification – is a newly opened research lab at Heriot-Watt University that engages AI and Robotics MSc students in verification projects as part of their MSc dissertation work. In this paper, we report on the successes and unexpected difficulties LAIV faces, many of which arise from limitations of existing programming languages used for verification. We also discuss future directions for incorporating verification into AI degrees.
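The adversarial attacks the abstract refers to can be illustrated with a minimal sketch: a gradient-sign (FGSM-style) perturbation against a toy one-layer logistic model. All names and values below are illustrative assumptions, not taken from the paper; real attacks target deep networks, but the mechanism is the same.

```python
import numpy as np

def sigmoid(z):
    """Logistic activation."""
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, w, b, y, eps):
    """Fast-gradient-sign-style perturbation of input x against a
    logistic 'network' p = sigmoid(w.x + b) with true label y.
    Moves x by eps in the direction that increases the loss."""
    p = sigmoid(w @ x + b)      # model's predicted probability of class 1
    grad_x = (p - y) * w        # gradient of cross-entropy loss w.r.t. x
    return x + eps * np.sign(grad_x)

# Toy example: a point the model correctly classifies as class 1.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])        # w @ x + b = 1.5, so p ~ 0.82 (class 1)
y = 1.0

x_adv = fgsm_attack(x, w, b, y, eps=1.0)
# w @ x_adv + b = -1.5, so p ~ 0.18: the prediction flips to class 0.
```

Verification tools of the kind the paper discusses aim to prove the opposite property: that no perturbation within a given eps-ball can flip the classification.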


