"How to make them stay?" – Diverse Counterfactual Explanations of Employee Attrition

03/08/2023
by André Artelt, et al.

Employee attrition is an important and complex problem that can directly affect an organisation's competitiveness and performance. Explaining why employees leave an organisation is a key human resource management challenge, given the high costs and time required to attract and retain talented employees. Businesses therefore aim to increase employee retention rates to minimise their costs and maximise their performance. Machine learning (ML) has been applied to various aspects of human resource management, including attrition prediction, to provide businesses with insights into proactive measures for preventing talented employees from quitting. Among these ML methods, the best performance has been reported by ensembles and deep neural networks, which are by nature black-box techniques and thus cannot be easily interpreted. Several explainability frameworks have been proposed to make these models' reasoning understandable. Counterfactual explanation methods have attracted considerable attention in recent years, since they can be used both to explain a model's decision and to recommend actions that would obtain the desired outcome. However, current counterfactual explanation methods focus on optimising the changes to be made to individual cases to achieve the desired outcome. In the attrition problem, it is important to be able to foresee the effect of an organisation's action on a group of employees, where the goal is to prevent them from leaving the company. Therefore, in this paper we propose the use of counterfactual explanations focusing on multiple attrition cases from historical data, in order to identify the optimum interventions that an organisation needs to make to its practices/policies to prevent, or minimise the probability of, attrition for these cases.
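To make the group-counterfactual idea concrete, the sketch below (not the paper's method, and using synthetic data rather than any real attrition dataset) trains a linear attrition classifier and then computes a single shared feature-space intervention that flips every employee in the "predicted to leave" group to "stay". For a linear model with score w·x + b, the smallest shared translation that flips the whole group moves along w; all feature names and data here are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for an attrition dataset (illustrative, not the paper's data);
# think of the columns as e.g. salary ratio, overtime hours, years since promotion.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
# Toy rule: attrition (y=1) is driven by high feature 1 and low feature 0.
y = (X[:, 1] - X[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(int)

clf = LogisticRegression().fit(X, y)

# The group we want to retain: employees the model predicts will leave.
leavers = X[clf.predict(X) == 1]

w, b = clf.coef_[0], clf.intercept_[0]
scores = leavers @ w + b  # score > 0 means "predicted to leave"

# One shared intervention delta, applied to every group member, chosen as the
# smallest translation along w that pushes all scores just below the boundary.
delta = -(scores.max() + 1e-6) * w / (w @ w)

# After the shared intervention, every group member is predicted to stay.
print((clf.predict(leavers + delta) == 0).all())
```

A real diverse-counterfactual method would additionally restrict `delta` to actionable features (e.g. salary, not age) and trade off group coverage against the size of the change; this closed-form translation is only the simplest linear-model instance of a single intervention applied to many cases at once.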


