Hide-and-Seek: A Template for Explainable AI

by Thanos Tagaris et al.

Lack of transparency has been the Achilles' heel of Neural Networks, hindering their wider adoption in industry. Despite significant interest, this shortcoming has not been adequately addressed. This study proposes a novel framework called Hide-and-Seek (HnS) for training interpretable Neural Networks and establishes a theoretical foundation for exploring and comparing similar ideas. Extensive experimentation indicates that a high degree of interpretability can be instilled in Neural Networks without sacrificing their predictive power.



