The Silent Problem – Machine Learning Model Failure – How to Diagnose and Fix Ailing Machine Learning Models

04/21/2022
by Michele Bennett, et al.

The COVID-19 pandemic has dramatically changed how healthcare is delivered to patients, how patients interact with healthcare providers, and how healthcare information is disseminated to both healthcare providers and patients. Analytical models that were trained and tested pre-pandemic may no longer perform up to expectations, yielding unreliable and irrelevant machine learning (ML) models, given that ML depends on the basic principle that what happened in the past is likely to repeat in the future. ML faces two important degradation principles: concept drift, when the underlying properties and characteristics of the variables change, and data drift, when the data distributions, probabilities, covariates, and other variable relationships change. Both are prime culprits of model failure. Therefore, detecting and diagnosing drift in existing models has become imperative. Perhaps even more important is a shift in mindset toward a conscious recognition that drift is inevitable: model building must incorporate intentional resilience, the ability to offset and recover quickly from failure, and proactive robustness, avoiding failure by developing models that are less vulnerable to drift and disruption.
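
As a rough illustration of the data drift the abstract describes, the sketch below compares the distribution of each numeric feature in a pre-pandemic training set against newly collected data using a two-sample Kolmogorov-Smirnov test. This is only one possible drift check, not the paper's method; the column names, synthetic data, and 0.05 threshold are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from scipy.stats import ks_2samp


def detect_data_drift(train_df: pd.DataFrame,
                      new_df: pd.DataFrame,
                      alpha: float = 0.05) -> pd.DataFrame:
    """Flag numeric features whose distribution has shifted between two datasets."""
    rows = []
    for col in train_df.select_dtypes(include=np.number).columns:
        # Two-sample KS test: small p-value suggests the feature's
        # distribution in the new data differs from the training data.
        stat, p_value = ks_2samp(train_df[col].dropna(), new_df[col].dropna())
        rows.append({"feature": col,
                     "ks_statistic": stat,
                     "p_value": p_value,
                     "drift_flag": p_value < alpha})
    return pd.DataFrame(rows)


# Synthetic example: the 'visits' feature shifts, 'age' does not.
rng = np.random.default_rng(0)
train = pd.DataFrame({"age": rng.normal(50, 10, 1000),
                      "visits": rng.poisson(3, 1000)})
new = pd.DataFrame({"age": rng.normal(50, 10, 1000),
                    "visits": rng.poisson(6, 1000)})
print(detect_data_drift(train, new))
```

A check like this targets data drift only; concept drift (changes in the relationship between inputs and the target) would additionally require monitoring model error or the conditional distribution of the outcome over time.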
