The Quest for Interpretable and Responsible Artificial Intelligence

10/10/2019
by Vaishak Belle, et al.

Artificial Intelligence (AI) provides many opportunities to improve private and public life. Discovering patterns and structures in large troves of data in an automated manner is a core component of data science, and it currently drives applications in computational biology, finance, law and robotics. However, this highly positive impact comes with significant challenges: how do we understand the decisions suggested by these systems so that we can trust them? How can these systems be held accountable for those decisions? In this short survey, we cover some of the motivations and trends in the area that attempt to address such questions.
