Improving Skin Condition Classification with a Visual Symptom Checker trained using Reinforcement Learning
We present a visual symptom checker that combines a pre-trained Convolutional Neural Network (CNN) with a Reinforcement Learning (RL) agent as a Question Answering (QA) model. This method not only increases the classification confidence and accuracy of the visual symptom checker, but also decreases the average number of relevant questions asked to narrow down the differential diagnosis. By including the CNN output, in the form of classification probabilities, as part of the state of the simulated patient's environment, a DQN-based RL agent learns to ask about the symptom that maximizes its expected return. We demonstrate that our RL approach increases the accuracy by more than 20%; it not only outperforms the decision tree approach (by up to 10%), but also narrows down the diagnosis faster in terms of the average number of asked questions.
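To make the described architecture concrete, the sketch below shows one way a DQN-style question-asking agent could be set up, with the CNN's class probabilities concatenated to the symptom-answer history to form the state, and one Q-value per candidate question. This is a minimal illustration under assumed settings (the counts `NUM_CONDITIONS` and `NUM_SYMPTOMS`, the answer encoding, and the network sizes are placeholders), not the authors' implementation.

```python
"""Minimal sketch of a DQN-style question-asking agent whose state combines
CNN classification probabilities with the symptom answers gathered so far.
All sizes and encodings below are illustrative assumptions."""
import torch
import torch.nn as nn

NUM_CONDITIONS = 9   # assumed number of skin conditions predicted by the CNN
NUM_SYMPTOMS = 20    # assumed size of the symptom/question pool


class QuestionDQN(nn.Module):
    """Q-network mapping the state to a Q-value for asking each symptom question."""

    def __init__(self, num_conditions: int, num_symptoms: int, hidden: int = 128):
        super().__init__()
        # State = CNN class probabilities + one answer slot per symptom
        # (0 = not asked yet, +1 = symptom present, -1 = symptom absent).
        state_dim = num_conditions + num_symptoms
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_symptoms),  # one Q-value per candidate question
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


def select_question(q_net: QuestionDQN,
                    cnn_probs: torch.Tensor,
                    answers: torch.Tensor,
                    asked_mask: torch.Tensor) -> int:
    """Greedily pick the next symptom to ask about, never repeating a question."""
    state = torch.cat([cnn_probs, answers]).unsqueeze(0)
    with torch.no_grad():
        q_values = q_net(state).squeeze(0)
    q_values[asked_mask] = float("-inf")  # mask out questions already asked
    return int(torch.argmax(q_values).item())


if __name__ == "__main__":
    q_net = QuestionDQN(NUM_CONDITIONS, NUM_SYMPTOMS)
    cnn_probs = torch.softmax(torch.randn(NUM_CONDITIONS), dim=0)  # stand-in CNN output
    answers = torch.zeros(NUM_SYMPTOMS)                            # no questions asked yet
    asked = torch.zeros(NUM_SYMPTOMS, dtype=torch.bool)
    print("next question index:", select_question(q_net, cnn_probs, answers, asked))
```

In a full training loop, the simulated patient's answer would update the answer vector and the reward (e.g., a correct final diagnosis with few questions) would drive standard DQN updates; those pieces are omitted here.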