Improving Prediction Performance and Model Interpretability through Attention Mechanisms from Basic and Applied Research Perspectives

03/24/2023
by   Shunsuke Kitada, et al.

With the dramatic advances in deep learning, machine learning research increasingly focuses on the interpretability of model predictions as well as on prediction performance, in both basic and applied research. While deep learning models achieve much higher prediction performance than traditional machine learning models, their prediction process remains difficult to interpret or explain. This is known as the black-box problem of machine learning models, and it is recognized as particularly important across a wide range of fields: manufacturing, commerce, robotics, and other industries where the technology has become commonplace, as well as medicine, where mistakes are not tolerated. This bulletin is based on a summary of the author's dissertation. The research summarized there focuses on the attention mechanism, which has received considerable attention in recent years, and discusses its potential both for basic research, in terms of improving prediction performance and interpretability, and for applied research, in terms of evaluating it for real-world applications on large datasets beyond the laboratory environment. The dissertation concludes with the implications of these findings for subsequent research and future prospects in the field.
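The interpretability argument above rests on a property of the attention mechanism: it produces an explicit, normalized weight for each input element, and those weights can be read off directly as an indication of what the model attended to. As a minimal illustration (not the dissertation's actual models), a scaled dot-product attention step in NumPy might look like this, with the weight matrix returned alongside the output for inspection:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Return the attention output and the attention weight matrix.

    Each row of the weight matrix is a probability distribution over
    the inputs; inspecting it is the interpretability hook discussed above.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    # numerically stable softmax over the key axis
    scores = scores - scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

# Toy example: 2 queries attending over 3 key/value vectors
rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out, w = scaled_dot_product_attention(Q, K, V)
# Each row of `w` sums to 1 and shows how strongly each input was attended to.
```

Whether such weights constitute a faithful explanation of the prediction is itself debated in the literature, which is part of what makes the basic-research side of this work relevant.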

