Interpretable deep-learning models to help achieve the Sustainable Development Goals

08/24/2021
by Ricardo Vinuesa, et al.

We discuss our insights into interpretable artificial-intelligence (AI) models and how they are essential in the context of developing ethical AI systems, as well as data-driven solutions compliant with the Sustainable Development Goals (SDGs). We highlight the potential of extracting truly interpretable models from deep-learning methods, for instance via symbolic models obtained through inductive biases, to ensure the sustainable development of AI.
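
A concrete sense of what "extracting interpretable models" can look like may help. The sketch below is only an illustration of the general idea, not the method discussed in the paper: it distils a small neural network into a sparse polynomial surrogate whose few surviving terms read as a symbolic formula. The synthetic target function, hyperparameters, and variable names (x0, x1) are assumptions made for this example, and scikit-learn stands in for the dedicated symbolic-regression tools used in the literature.

    # Illustrative sketch only (not the paper's method): distil an opaque
    # neural network into a sparse, human-readable polynomial surrogate.
    # Requires numpy and scikit-learn >= 1.0.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)

    # Synthetic data with an assumed underlying law: y = 3*x0^2 - 2*x1 + noise.
    X = rng.uniform(-1.0, 1.0, size=(500, 2))
    y = 3.0 * X[:, 0] ** 2 - 2.0 * X[:, 1] + 0.05 * rng.normal(size=500)

    # 1) Fit an opaque deep-learning-style model.
    net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
    net.fit(X, y)

    # 2) Distil the network into a sparse polynomial surrogate: the Lasso
    #    penalty drives most coefficients to zero, leaving a short expression.
    poly = PolynomialFeatures(degree=3, include_bias=False)
    Z = poly.fit_transform(X)
    surrogate = Lasso(alpha=1e-3, max_iter=50000)
    surrogate.fit(Z, net.predict(X))

    # 3) Report the surviving terms as a human-readable formula.
    for name, coef in zip(poly.get_feature_names_out(["x0", "x1"]), surrogate.coef_):
        if abs(coef) > 1e-2:
            print(f"{coef:+.3f} * {name}")

Run as-is, the printout should be dominated by terms close to +3.000 * x0^2 and -2.000 * x1, i.e. the surrogate recovers a readable approximation of what the network learned.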
