Silent Vulnerable Dependency Alert Prediction with Vulnerability Key Aspect Explanation

02/15/2023
by   Jiamou Sun, et al.

Open-source software is widely used for its convenience. For a variety of reasons, open-source maintainers often fix vulnerabilities silently, leaving users unaware of the updates and exposed to threats. Previous work has focused exclusively on black-box binary detection of silent dependency alerts, which suffers from high false-positive rates and forces users to analyze and interpret the AI's predictions themselves. Explainable AI has emerged as a notable complement to black-box models, providing details in various forms to explain AI decisions. Since no existing technique can discover silent dependency alerts in a timely, explainable manner, we propose a framework that pairs an encoder-decoder model with a binary detector to provide explainable silent dependency alert prediction. Our model generates four types of vulnerability key aspects, namely vulnerability type, root cause, attack vector, and impact, to enhance the trustworthiness of and users' acceptance of alert predictions. Through experiments with several models and input combinations, we confirm that CodeBERT given both commit messages and code changes achieves the best results. Our user study shows that explainable alert predictions help users find silent dependency alerts more easily than black-box predictions do. To the best of our knowledge, this is the first work to apply Explainable AI to silent dependency alert prediction, opening the door to related research directions.
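The pipeline the abstract describes can be sketched as follows: a shared encoding over the commit message and code change feeds both a binary alert detector and a decoder that generates the four vulnerability key aspects. This is a minimal illustrative sketch only; the actual framework uses CodeBERT, whereas here a trivial keyword-based stub stands in for the learned components so the data flow is runnable. All function names and the cue list are assumptions, not the authors' API.

```python
# Hedged sketch of an explainable silent-dependency-alert pipeline: one joint
# input is scored by a binary detector, and the four key aspects are generated
# only when an alert fires. Stub logic stands in for the CodeBERT encoder,
# the classification head, and the aspect decoder.

ASPECTS = ["vulnerability_type", "root_cause", "attack_vector", "impact"]

def encode(commit_message: str, code_change: str) -> list[str]:
    # CodeBERT-style joint input: message and diff separated by [SEP].
    return f"[CLS] {commit_message} [SEP] {code_change} [SEP]".split()

def detect_alert(tokens: list[str]) -> float:
    # Stand-in binary detector; the real detector is a classification
    # head over the encoder's pooled representation.
    vuln_cues = {"overflow", "injection", "sanitize", "bounds", "cve"}
    hits = sum(t.strip(".,").lower() in vuln_cues for t in tokens)
    return min(1.0, hits / 3)

def explain(tokens: list[str]) -> dict[str, str]:
    # Stand-in for the decoder that generates the four key aspects.
    return {aspect: "<generated aspect text>" for aspect in ASPECTS}

def predict(commit_message: str, code_change: str) -> dict:
    tokens = encode(commit_message, code_change)
    score = detect_alert(tokens)
    result = {"alert_probability": score, "is_alert": score >= 0.5}
    if result["is_alert"]:
        # The explanation accompanies the alert, rather than replacing it.
        result["aspects"] = explain(tokens)
    return result
```

For example, a silent fix whose commit message mentions an overflow and a bounds check would trigger the alert and attach all four generated aspects, which is the explainability the user study evaluates against purely black-box output.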
