Towards a Robust and Trustworthy Machine Learning System Development

by Pulei Xiong et al.

Machine Learning (ML) technologies have been widely adopted in many mission-critical fields, such as cyber security, autonomous vehicle control, and healthcare, to support intelligent decision-making. While ML has demonstrated impressive performance over conventional methods in these applications, concerns have arisen about system resilience against ML-specific security attacks and privacy breaches, as well as the trust that users place in these systems. In this article, we first present our recent systematic and comprehensive survey of state-of-the-art ML robustness and trustworthiness technologies from a security engineering perspective, covering all aspects of secure ML system development: threat modeling, common offensive and defensive technologies, privacy-preserving machine learning, user trust in the context of machine learning, and empirical evaluation of ML model robustness. Second, we push our studies beyond a survey by describing a metamodel we created that represents this body of knowledge in a standard, visual form for ML practitioners, and we illustrate how to leverage the metamodel to guide a systematic threat analysis and security design process in the context of generic ML system development, extending and scaling up the classic process. Third, we propose future research directions, motivated by our findings, to advance the development of robust and trustworthy ML systems. Our work differs from existing surveys in this area in that, to the best of our knowledge, it is the first engineering effort of its kind to (i) explore the fundamental principles and best practices that support robust and trustworthy ML system development, and (ii) study the interplay of robustness and user trust in the context of ML systems.
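To make the class of ML-specific attacks the survey covers concrete, the following is a minimal sketch of an evasion attack in the style of the fast gradient sign method (FGSM) against a toy logistic-regression classifier. The model weights and input values are hypothetical, chosen only for illustration; the survey itself discusses such offensive techniques and the corresponding defenses in far greater depth.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, b, x, y):
    # Binary cross-entropy loss for a logistic-regression model.
    p = sigmoid(w @ x + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

# A toy "trained" model and an input it classifies correctly (hypothetical values).
w = np.array([1.5, -2.0, 0.5])
b = 0.1
x = np.array([0.8, -0.5, 0.3])
y = 1.0

# FGSM-style evasion: nudge the input in the direction of the sign of the
# loss gradient with respect to the input, within an epsilon budget.
p = sigmoid(w @ x + b)
grad_x = (p - y) * w          # d(loss)/dx for logistic regression
eps = 0.25
x_adv = x + eps * np.sign(grad_x)

print("clean loss:      ", loss(w, b, x, y))
print("adversarial loss:", loss(w, b, x_adv, y))
```

A small, bounded perturbation of the input raises the model's loss on the same label, the basic mechanism behind evasion attacks on deployed ML systems.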




