Vulnerabilities of Connectionist AI Applications: Evaluation and Defence

03/18/2020
by Christian Berghoff, et al.

This article deals with the IT security of connectionist artificial intelligence (AI) applications, focusing on threats to integrity, one of the three IT security goals. Such threats are particularly relevant, for instance, in prominent AI computer vision applications. In order to present a holistic view of the IT security goal of integrity, many additional aspects such as interpretability, robustness and documentation are taken into account. A comprehensive list of threats and possible mitigations is presented by reviewing the state-of-the-art literature. AI-specific vulnerabilities such as adversarial attacks and poisoning attacks, as well as their AI-specific root causes, are discussed in detail. Additionally, and in contrast to previous reviews, the whole AI supply chain is analysed with respect to vulnerabilities, including the planning, data acquisition, training, evaluation and operation phases. The discussion of mitigations is likewise not restricted to the level of the AI system itself but rather advocates viewing AI systems in the context of their supply chains and their embedding in larger IT infrastructures and hardware devices. Based on this, and on the observation that adaptive attackers may circumvent any single AI-specific defence published to date, the article concludes that single protective measures are not sufficient; rather, multiple measures on different levels have to be combined to achieve a minimum level of IT security for AI applications.
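
The adversarial attacks named in the abstract craft small, targeted input perturbations that change a model's prediction while remaining barely perceptible. Below is a minimal sketch of one well-known attack of this kind, the Fast Gradient Sign Method (FGSM); the toy model, the random data and the epsilon value are illustrative assumptions and are not taken from the article.

```python
# Minimal FGSM sketch: perturb inputs in the gradient-sign direction that
# maximises the loss, bounded by epsilon. Model and data are toy stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F


def fgsm_perturb(model, x, y, epsilon=0.03):
    """Return an adversarially perturbed copy of the input batch x."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, limited to +/- epsilon.
    perturbation = epsilon * x_adv.grad.sign()
    return torch.clamp(x_adv + perturbation, 0.0, 1.0).detach()


if __name__ == "__main__":
    # Hypothetical stand-in classifier and fake image batch in [0, 1].
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(4, 1, 28, 28)
    y = torch.randint(0, 10, (4,))
    x_adv = fgsm_perturb(model, x, y)
    print((x_adv - x).abs().max())  # perturbation stays within epsilon
```

Against a deployed model, such perturbations are what AI-specific defences must withstand; the article's conclusion is that no single such defence published to date resists an adaptive attacker on its own.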


Related research

CTI4AI: Threat Intelligence Generation and Sharing after Red Teaming AI Models (08/16/2022)
As the practicality of Artificial Intelligence (AI) and Machine Learning...

AI Product Security: A Primer for Developers (04/18/2023)
Not too long ago, AI security used to mean the research and practice of ...

Adversarial Machine Learning and Cybersecurity: Risks, Challenges, and Legal Implications (05/23/2023)
In July 2022, the Center for Security and Emerging Technology (CSET) at ...

Assessing Vulnerabilities of Adversarial Learning Algorithm through Poisoning Attacks (04/30/2023)
Adversarial training (AT) is a robust learning algorithm that can defend...

Chatbots to ChatGPT in a Cybersecurity Space: Evolution, Vulnerabilities, Attacks, Challenges, and Future Recommendations (05/29/2023)
Chatbots shifted from rule-based to artificial intelligence techniques a...

Emerging AI Security Threats for Autonomous Cars – Case Studies (09/10/2021)
Artificial Intelligence has made a significant contribution to autonomou...

Multi-service Threats: Attacking and Protecting Network Printers and VoIP Phones alike (02/22/2022)
Printing over a network and calling over VoIP technology are routine at ...
