Security Risks in Deep Learning Implementations
Advances in deep learning algorithms have overshadowed the security risks in their software implementations. This paper discloses a set of vulnerabilities in popular deep learning frameworks, including Caffe, TensorFlow, and Torch. In contrast to the small code size of deep learning models, these frameworks are complex and carry heavy dependencies on numerous open source packages. This paper assesses the risks posed by these vulnerabilities by studying their impact on common deep learning applications such as voice recognition and image classification. By exploiting flaws in these framework implementations, attackers can launch denial-of-service attacks that crash or hang a deep learning application, or control-flow hijacking attacks that lead to system compromise or recognition evasion. The goal of this paper is to draw attention to software implementations and call for community effort to improve the security of deep learning frameworks.
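The denial-of-service class of bug described above typically arises in the input-parsing layers that frameworks pull in as dependencies (e.g. image decoders): size fields in an attacker-supplied file are trusted before allocation. The sketch below is a hypothetical, simplified illustration of that pattern, not code from any of the frameworks studied; the parser, field offsets, and the `max_pixels` bound are assumptions for demonstration only.

```python
import struct

def parse_bmp_header(data: bytes) -> tuple[int, int]:
    # Hypothetical minimal BMP parser: reads the signed 32-bit width and
    # height fields at offset 18 of the file (BITMAPINFOHEADER layout),
    # trusting them exactly as a vulnerable decoder would.
    if len(data) < 26 or data[:2] != b"BM":
        raise ValueError("not a BMP file")
    width, height = struct.unpack_from("<ii", data, 18)
    return width, height

def alloc_pixels(width: int, height: int, max_pixels: int = 10_000_000) -> bytearray:
    # Without this plausibility check, a crafted header claiming
    # 2_000_000_000 x 2_000_000_000 pixels would drive a multi-exabyte
    # allocation -- the denial-of-service vector the paper describes.
    if width <= 0 or height <= 0 or width * height > max_pixels:
        raise ValueError("implausible image dimensions")
    return bytearray(width * height * 3)  # 3 bytes per RGB pixel

# Craft a malicious header that claims absurd dimensions.
evil = b"BM" + b"\x00" * 16 + struct.pack("<ii", 2_000_000_000, 2_000_000_000)
w, h = parse_bmp_header(evil)
try:
    alloc_pixels(w, h)
except ValueError as err:
    print("rejected:", err)
```

A framework that skips the bound check and allocates directly from the header fields would hang or crash on such an input, which is why validating attacker-controlled metadata before allocation is the standard mitigation.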