Explainable Artificial Intelligence for Smart City Application: A Secure and Trusted Platform

10/31/2021
by M. Humayn Kabir, et al.

Artificial Intelligence (AI) is one of the disruptive technologies shaping the future. It has growing applications in data-driven decision-making across major smart city solutions, including transportation, education, healthcare, public governance, and power systems. At the same time, it is gaining popularity for protecting critical cyber infrastructure from cyber threats, attacks, damage, and unauthorized access. However, a significant issue with traditional AI technologies (e.g., deep learning) is that their rapid growth in complexity and sophistication has turned them into uninterpretable black boxes. In many cases, it is very challenging to understand a decision or detect bias, and therefore to control and trust a system's unexpected or seemingly unpredictable outputs. It is widely acknowledged that this loss of interpretability in decision-making is a critical issue for many data-driven automated applications. But how does it affect a system's security and trustworthiness? To address this question, this chapter conducts a comprehensive study of machine learning applications in cybersecurity and highlights the need for explainability. It first discusses the black-box problems of AI technologies for cybersecurity applications in smart city solutions. It then considers the emerging technological paradigm of Explainable Artificial Intelligence (XAI) and discusses the transition from black-box to white-box models, along with the associated requirements concerning the interpretability, transparency, understandability, and explainability of AI-based technologies across different autonomous systems in smart cities. Finally, it presents commercial XAI platforms that offer explainability over traditional AI technologies before outlining future challenges and opportunities.
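The chapter surveys explainability at a conceptual level rather than prescribing a specific technique. As an illustration of the general idea, the following minimal sketch applies a post-hoc, model-agnostic explanation (permutation feature importance from scikit-learn) to a hypothetical black-box intrusion-detection classifier; the feature names and synthetic data are assumptions for illustration only and are not taken from the chapter.

```python
# Minimal sketch: post-hoc explanation of a black-box classifier using
# permutation feature importance (scikit-learn). The "intrusion detection"
# features below are hypothetical stand-ins for real smart-city telemetry.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

feature_names = [
    "packet_rate", "failed_logins", "bytes_out", "bytes_in",
    "session_duration", "dst_port_entropy",
]

# Synthetic stand-in for labelled network traffic (benign vs. malicious).
X, y = make_classification(
    n_samples=2000, n_features=len(feature_names),
    n_informative=4, random_state=0,
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0,
)

# The "black box": an ensemble whose individual decisions are hard to trace.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Post-hoc, model-agnostic explanation: shuffle each feature and measure
# how much held-out accuracy drops. Larger drops mean a more influential feature.
result = permutation_importance(
    model, X_test, y_test, n_repeats=20, random_state=0,
)
for idx in result.importances_mean.argsort()[::-1]:
    print(f"{feature_names[idx]:>18}: "
          f"{result.importances_mean[idx]:.4f} "
          f"+/- {result.importances_std[idx]:.4f}")
```

Ranking features by the accuracy drop gives an operator a first, coarse view of which signals drive the model's alerts, which is one practical way the black-box to white-box transition discussed above can begin.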

