Towards a Robust and Trustworthy Machine Learning System Development

01/08/2021
by Pulei Xiong, et al.

Machine Learning (ML) technologies have been widely adopted in many mission-critical fields, such as cyber security, autonomous vehicle control, and healthcare, to support intelligent decision-making. While ML has demonstrated impressive performance over conventional methods in these applications, concerns have arisen about system resilience against ML-specific security attacks and privacy breaches, as well as the trust that users place in these systems. In this article, we first present our recent systematic and comprehensive survey of state-of-the-art ML robustness and trustworthiness technologies from a security engineering perspective, covering all aspects of secure ML system development: threat modeling, common offensive and defensive technologies, privacy-preserving machine learning, user trust in the context of machine learning, and empirical evaluation of ML model robustness. Second, we push our study beyond a survey by describing a metamodel we created that represents this body of knowledge in a standardized, visual form for ML practitioners, and we illustrate how to leverage the metamodel to guide a systematic threat analysis and security design process for generic ML system development, extending and scaling up the classic process. Third, we propose future research directions motivated by our findings to advance the development of robust and trustworthy ML systems. Our work differs from existing surveys in this area in that, to the best of our knowledge, it is the first engineering effort of its kind to (i) explore the fundamental principles and best practices that support robust and trustworthy ML system development, and (ii) study the interplay of robustness and user trust in the context of ML systems.
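To make the notion of empirical robustness evaluation concrete, the sketch below shows one common way such an evaluation can be carried out; it is an illustrative example only, not the authors' methodology. It measures clean versus adversarial accuracy of a classifier under a one-step FGSM attack, and it assumes PyTorch as the framework; the names model, test_loader, epsilon, and device are placeholders supplied by the evaluator.

    # Minimal FGSM robustness-evaluation sketch (illustrative only; not the
    # paper's methodology). Assumes a PyTorch classifier `model`, a DataLoader
    # `test_loader` yielding inputs in [0, 1], and a perturbation budget `epsilon`.
    import torch
    import torch.nn.functional as F

    def fgsm_accuracy(model, test_loader, epsilon, device="cpu"):
        """Return (clean accuracy, adversarial accuracy) under a one-step FGSM attack."""
        model.eval()
        clean_correct, adv_correct, total = 0, 0, 0
        for x, y in test_loader:
            x, y = x.to(device), y.to(device)
            x.requires_grad_(True)

            # Clean predictions.
            logits = model(x)
            clean_correct += (logits.argmax(dim=1) == y).sum().item()

            # One-step gradient-sign perturbation bounded by epsilon.
            loss = F.cross_entropy(logits, y)
            grad = torch.autograd.grad(loss, x)[0]
            x_adv = (x + epsilon * grad.sign()).clamp(0.0, 1.0).detach()

            # Predictions on the perturbed inputs.
            adv_logits = model(x_adv)
            adv_correct += (adv_logits.argmax(dim=1) == y).sum().item()
            total += y.size(0)

        return clean_correct / total, adv_correct / total

A call such as fgsm_accuracy(model, test_loader, epsilon=0.03) gives a quick first indication of how much accuracy degrades under a weak white-box attack; a rigorous evaluation of the kind the survey discusses would typically also use stronger, iterative attacks (e.g., PGD) and multiple perturbation budgets.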
