Learning medical triage from clinicians using Deep Q-Learning

03/28/2020
by   Albert Buchard, et al.

Medical triage is of paramount importance to healthcare systems, allowing for the correct orientation of patients and allocation of the necessary resources to treat them adequately. While reliable decision-tree methods exist to triage patients based on their presentation, those trees implicitly require human inference and are not immediately applicable in a fully automated setting. On the other hand, learning triage policies directly from experts may correct for some of the limitations of hard-coded decision-trees. In this work, we present a Deep Reinforcement Learning approach (a variant of Deep Q-Learning) to triage patients using curated clinical vignettes. The dataset, consisting of 1374 clinical vignettes, was created by medical doctors to represent real-life cases. Each vignette is associated with an average of 3.8 expert triage decisions given by medical doctors relying solely on medical history. We show that this approach is on a par with human performance, yielding safe triage decisions in 94% of cases. The trained agent learns when to stop asking questions, acquires optimized decision policies requiring less evidence than supervised approaches, and adapts to the novelty of a situation by asking for more information. Overall, we demonstrate that a Deep Reinforcement Learning approach can learn effective medical triage policies directly from expert decisions, without requiring expert knowledge engineering. This approach is scalable and can be deployed in healthcare settings or geographical regions with distinct triage specifications, or where trained experts are scarce, to improve decision making in the early stage of care.


1 Introduction

1.1 Medical Triage

For many patients, medical triage is the first organising contact with the healthcare system. Be it through a telephone interaction, or performed face-to-face by a trained healthcare professional, the triage process aims to uncover enough medical evidence to make an informed decision about the appropriate point of care given a patient’s presentation. The clinician’s task is to plan the most efficient sequence of questions in order to make a fast and accurate triage decision. Although internationally recognized systems exist [1, 2, 3, 4, 5, 6], with clearly defined decision-trees based on expert consensus, in practice, the nature of the triage task is not a passive recitation of a learned list of questions. Triage is an active process through which the clinician must make inferences about the causes of the patient’s presentation and update his plan following each new piece of information.

To deploy triage systems in healthcare settings, a population of clinicians needs to go through training to ensure the reliability and quality of their practice. No triage system seems to be superior overall, but their performance varies significantly across studies [7]. In order to improve patient safety and quality of care, many decision-support tools were designed on top of those decision-trees to standardize and automate the triage process (eCTAS[8], NHS111[9]) with mixed results.

At the turn of the last decade, the field of automated clinical decision-making was invigorated by the revolution in deep-learning approaches, with several applications in perceptual settings in which the decision relies on image recognition and not on clinical signs [10, 11]. However, research on automated triage systems that do not rely on expert-crafted decision trees, and that are able to learn from data, is still relatively sparse. This may be due to the meagre availability of detailed and clean Electronic Health Records, and to the fact that triage systems rely on highly symbolic and structured inputs, such as symptoms and physical signs.

Ideally, learning a triage system from a detailed distribution of patients’ clinical history would allow us to correct for the inherent biases of expert-crafted systems and tailor the system for a specific target population. Given enough data, we could train such systems to improve healthcare outcomes directly and enhance clinical decision making at the early stage of care to minimize the risk for patients.

A more straightforward approach is to learn medical triage policies directly from expert decisions. Using judgments from medical experts made over a dataset of patient presentations, we may hope to learn effective policies which reflect guidelines and the combined experience of many clinicians. Our work presents a Reinforcement Learning approach for learning medical triage directly from expert decisions.

1.2 Reinforcement Learning

Reinforcement Learning (RL) is a natural approach for problems requiring the optimization of sequences of actions in order to reach complex objectives. Although interaction with real patients is usually not ethically possible, researchers have used observational and generated datasets to apply RL approaches to the healthcare setting. Those methods have been successfully applied to solve complex tasks, such as treatment regimes optimization, precision medicine, automated diagnosis, and personal health assistance (for a recent review, see [12]).

RL describes an approach to learning where an agent learns through interactions with an environment, gathering rewards and penalties for the actions it performs. Under the paradigm of RL, the general interaction between the agent and its environment is well defined.

The environment describes the world in which the agent is evolving in time. At each time step $t$:

  • It keeps track of the agent’s state $s_t \in \mathcal{S}$, with $\mathcal{S}$ referring to the state-space, i.e. the set of all valid states the agent can be in.

  • It processes the agent’s actions $a_t \in \mathcal{A}$, with $\mathcal{A}$, the set of all possible actions, called the action-space of the agent.

  • It encodes the system dynamics, fully defined by the transition function $P(s_{t+1} \mid s_t, a_t)$, which gives the probability of transitioning to state $s_{t+1}$ given that the agent performed action $a_t$ in state $s_t$.

  • It defines a notion of optimal behavior through a reward function $R : \mathcal{S} \times \mathcal{A} \to \mathbb{R}$, here defined as a map from a state-action pair $(s_t, a_t)$ to a real value (e.g. $+1$ for a positive reward, and $-1$ for a penalty), which is returned to the agent at each time step.

An agent is an entity that performs actions in the environment, given its current state $s_t$ and a policy $\pi(a \mid s)$, a function that gives the probability of an action when in a particular state. The set of actions that an agent takes and the set of states that it consequently visits constitute a trajectory $\tau$, denoted as $\tau = (s_0, a_0, s_1, a_1, \ldots)$. The goal of training an RL agent is to learn the optimal policy $\pi^*$, a deterministic function that maps an agent’s state to a specific action, so that the reward received by the agent is maximized in expectation across all interactions.

1.3 Q-Learning

Model-free RL algorithms deal with settings where an agent does not have access to the dynamics of the environment and cannot interrogate or learn the transition function. Two main classes of model-free algorithms exist: a) policy-based methods, which aim to learn the policy directly, and b) value-based methods which aim to learn one or several value functions to guide the agent policy toward high reward trajectories. In this work, we will use a variant of Q-Learning [13], which is a model-free and value-based method, called Deep Q-Learning [14].

In Q-Learning, the agent does not learn a policy function directly but instead learns a proxy value-function $Q(s, a)$. This function approximates an optimal function $Q^*(s, a)$, defined as the maximum expected return achievable by any policy $\pi$, over all possible trajectories $\tau$, given that in state $s$ the agent performs action $a$ and the rest of the trajectory is generated by following $\pi$, denoted as $\tau \sim \pi$ (with a slight abuse of notation):

$Q^*(s, a) = \max_{\pi} \; \mathbb{E}_{\tau \sim \pi} \left[ R(\tau) \mid s_0 = s, a_0 = a \right]$ (1)

$R(\tau)$ is the function returning the reward gathered over the trajectory $\tau$, defined as:

$R(\tau) = \sum_{t=0}^{T} \gamma^t \, r_t$ (2)

The weight $\gamma \in [0, 1]$ appearing in the previous equation is called the discount factor. It encodes the notion that sequences of actions are usually finite, and that one should give more weight to the current reward. $Q^*$ has an optimal substructure and can be written recursively using the Bellman equation [15], which treats each decision step as a separate sub-problem:

$Q^*(s, a) = \mathbb{E}_{s'} \left[ R(s, a) + \gamma \max_{a'} Q^*(s', a') \right]$ (3)

This function encodes the value of performing a particular action $a$ when in state $s$ as the sum of the immediate reward returned by the environment and the weighted expected rewards obtained over the future trajectory generated by a greedy policy.

During Q-Learning, experience tuples of the agent’s interaction with its environment are usually stored in a memory $\mathcal{M}$; each record $e_t = (s_t, a_t, r_t, s_{t+1})$ is composed of an initial state $s_t$, the chosen action $a_t$, the received reward $r_t$ and the next state $s_{t+1}$. During learning, the agent samples records from past experiences and learns the optimal Q-value function by minimizing the temporal difference error (TD-Error), defined as the difference between a target Q-value $\hat{y}_t$ computed from a record and the current Q-value for a particular state-action pair $(s_t, a_t)$:

$\delta_t = \hat{y}_t - Q(s_t, a_t)$ (4)

The target Q-value $\hat{y}_t$ is computed from an experience tuple by combining the actual observed reward and the maximum future expected reward:

$\hat{y}_t = r_t + \gamma \max_{a'} Q(s_{t+1}, a')$ (5)

In practice, the Q-values are updated iteratively from point samples until convergence or until a maximum number of steps is completed. At each iteration, the new Q-value for the state-action pair $(s_t, a_t)$ is then defined as

$Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \, \delta_t$ (6)

with $\alpha$ the learning rate of the agent.
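To make the update concrete, here is a minimal Python sketch of the tabular rule in Eqs. (4)-(6); the memory format, action names, and hyper-parameter values are illustrative choices of ours, not details taken from the paper.

```python
import random
from collections import defaultdict

ACTIONS = ["ask", "red", "yellow", "green", "blue"]
Q = defaultdict(float)  # unseen state-action pairs default to a value of 0

def q_learning_step(Q, memory, alpha=0.1, gamma=0.9):
    """One tabular Q-Learning update from a randomly sampled experience tuple.

    memory: list of (state, action, reward, next_state, done) tuples, where
    `done` marks terminal transitions (here, triage decisions).
    """
    s, a, r, s_next, done = random.choice(memory)
    # Target Q-value (Eq. 5): observed reward plus discounted best future value.
    best_future = 0.0 if done else max(Q[(s_next, a_next)] for a_next in ACTIONS)
    target = r + gamma * best_future
    # TD-error (Eq. 4) and incremental update (Eq. 6).
    td_error = target - Q[(s, a)]
    Q[(s, a)] += alpha * td_error
    return td_error
```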

Notice that this algorithm requires the value of $Q(s, a)$ for each state-action pair to be stored somehow. Hence, the classic Q-Learning algorithm and other tabular RL methods often fall short in settings with large state-action spaces, which strongly constrains their potential use in healthcare. For example, in our case, the state space has $3^{597}$ possible configurations, corresponding to the 597 elements of the set $\mathcal{E}$ of observable medical evidence (symptoms or risk factors). The set $\mathcal{E}$ corresponds to a subset of the clinical evidence used by the PGM model in production at Babylon Health, each element of which is in one of three states: unobserved, observed present, or observed absent.

1.4 Deep Q-Learning

Deep Reinforcement Learning refers to a series of new reinforcement learning algorithms that employ (Deep) Neural Networks (NNs) to approximate essential functions used by the agent. NNs amortize the cost of managing sizeable state-action spaces, both in terms of memory and computation time, and are able to learn complex non-linear functions of the state. They are used in particular to learn a policy function directly or to learn a value function. Deep RL is better suited to handle the complex state-space associated with healthcare-related tasks. Those tasks often require reasoning over large state spaces of structured inputs composed of healthcare events, medical symptoms, physical signs, lab tests, or imagery results.

Deep Q-Learning (DQN) is a now-famous approach which uses a NN to learn the Q-value of the state-action pairs, $Q(s, a; \theta)$, with $\theta$ the parameters of the network. The core of the approach remains similar to classic Q-Learning, but now uses stochastic gradient descent, rather than an explicit tabular update, to update $\theta$, following the gradient that minimizes the squared TD-error over each batch $B$:

$\nabla_{\theta} \, \frac{1}{|B|} \sum_{(s_t, a_t, r_t, s_{t+1}) \in B} \left( \hat{y}_t - Q(s_t, a_t; \theta) \right)^2$ (7)
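As an illustration, the following sketch performs one such gradient step; the use of PyTorch and the batch layout are assumptions made for the sake of the example, since the paper does not specify the framework.

```python
import torch
import torch.nn.functional as F

def dqn_gradient_step(q_net, optimizer, batch, gamma=0.99):
    """One stochastic gradient step on the squared TD-error of Eq. (7).

    batch: tensors (states, actions, rewards, next_states) sampled from memory.
    """
    states, actions, rewards, next_states = batch
    # Current Q-values for the actions that were actually taken.
    q_current = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    # Target Q-values: reward plus discounted maximum Q-value of the next state.
    with torch.no_grad():
        q_next = q_net(next_states).max(dim=1).values
        target = rewards + gamma * q_next
    loss = F.mse_loss(q_current, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```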

In this study, we present a deep reinforcement learning approach to medical triage, where an artificial agent learns an optimized policy based on expert-crafted clinical vignettes. The trained agent matched human performance, making an appropriate triage decision in 85% of previously unseen cases, and compared to a purely supervised method, has the advantage of learning a compressed policy by learning when to stop asking questions. We do not train the agent to ask specific questions, and our approach can be used in conjunction with any question-asking system, be it human, rule-based, or model-based [16]. Future work on active triage will concentrate on training an agent to solve both the question-asking and triage tasks simultaneously.

2 Methods

2.1 Clinical vignettes

The training and testing of the model relied upon a dataset $\mathcal{V}$ composed of 1374 clinical vignettes, each describing a patient presentation $v$ containing elements from the set $\mathcal{E}$ of all potential clinical evidence. Each element $e \in \mathcal{E}$ represents a symptom or a risk factor, known to be either absent or present. Each vignette is associated with an average of 3.36 (SD = 1.44) expert triage decisions $\mathcal{D}_v$, with $\mathcal{T} = \{\text{Red}, \text{Yellow}, \text{Green}, \text{Blue}\}$ here denoting the set of valid triage decisions, and $m_{\mathcal{D}_v}(d)$ the multiplicity of the decision $d$ in the multiset $\mathcal{D}_v$ (see Table 1). This four-colour system indicates how urgently a patient should be seen. It is similar to the one used by the Manchester Triage Group (MTG) telephone triage [17], which simplifies to four categories the 5-colour triage system widely used in the triage literature. Red is associated with life-threatening situations, which require immediate attention; Yellow indicates that the patient should be seen within the next couple of hours, Green that the patient should be seen but not urgently, and Blue that the patient should be given self-care advice and be directed towards a pharmacy if necessary. The validity of each vignette was evaluated independently by two clinicians. The triage decisions associated with each vignette were made by separate clinicians, blinded to the true underlying disease of the presentation. Critically, the clinicians’ triage policy, which we aim to learn, was left to their expertise and was not constrained by a known triage system like the MTG.

Mean (SD) N (Vignettes)
Evidences 1374
     # Symptoms 6.15 (2.6)
          Present 3.70 (1.85)
          Absent 2.45 (2.14)
     # Risk factors 1.13 (1.14)
          Present .79 (.89)
          Absent .34 (.72)
Triage decisions 1374
     Number per vignette 3.36 (1.44)
     Distribution
          Red .09 (.24)
          Yellow .34 (.36)
          Green .48 (.37)
          Blue .09 (.22)
Inter-expert metrics 1073
     Appropriateness .84 (.17)
     Safety .93 (.12)
Table 1: Clinical vignettes statistics. The clinical vignettes (N = 1374) are a sparse representation of patient presentations, with an average of 6.15 symptoms and 1.13 risk factors per vignette. They are associated with expert triage decisions from an average of 3.36 independent clinicians (SD = 1.44) having access to all the evidence on the vignette to make their decision. The inter-expert appropriateness and safety are a proxy for evaluating the average expert’s performance (see Section 2.9), and are only evaluated on vignettes with at least three different triage decisions (N = 1073).

2.2 The state-action space

In the task we are considering, at each time step $t$ the agent performs one of the five available actions $a_t \in \mathcal{A} = \{\text{ask}, \text{red}, \text{yellow}, \text{green}, \text{blue}\}$. That is, it either asks for more information, or it makes one of the four triage decisions. For each vignette, the set of medical evidence $\mathcal{E}_v$ is mapped to a full state vector representation $s \in \mathcal{S}$, with $\mathcal{S} \subseteq \{-1, 0, 1\}^{597}$ the state-space. Specifically, each element of $s$ takes value $-1$ if the corresponding symptom or risk factor is known to be absent (e.g. absence of fever), $+1$ for known positive evidence (e.g. headache present), or $0$ for unobserved evidence.
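A small sketch of this encoding, assuming the ternary values described above and an arbitrary evidence-to-index mapping (the evidence names and indices below are made up for illustration):

```python
import numpy as np

N_EVIDENCE = 597  # size of the evidence set

def encode_state(evidence, index):
    """Map observed evidence to the ternary state vector of Section 2.2.

    evidence : dict mapping evidence name -> bool (True = present, False = absent)
    index    : dict mapping evidence name -> position in the state vector
    """
    state = np.zeros(N_EVIDENCE, dtype=np.float32)  # 0 = unobserved
    for name, present in evidence.items():
        state[index[name]] = 1.0 if present else -1.0
    return state

# Illustrative usage with made-up evidence names and indices.
index = {"fever": 0, "headache": 1, "chest_pain": 2}
s = encode_state({"headache": True, "fever": False}, index)
```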

2.3 The Vignette environment

At each new episode, the environment is configured with a new clinical vignette. The environment processes the evidence and triage decisions on the vignette and returns an initial state $s_0$ with only one piece of evidence revealed to the agent, i.e. $s_0$ is a vector of all zeroes of size 597, except for one element which is either $-1$ or $+1$. At each time step $t$, the environment receives an action $a_t$ from the agent. If the agent picks one of the four triage actions, the episode ends, and the agent receives a final reward (see Section 2.5 for more detail on the reward shaping). If the agent asks for more evidence, the environment uniformly samples one of the missing pieces of evidence and adds it to the state $s_{t+1}$. During training, we force the agent to make a triage decision if no more evidence is available on the vignette.
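The episode logic can be sketched as follows; the class and method names are ours, and the counterfactual reward of Section 2.5 is deliberately left out of this simplified environment:

```python
import random

class VignetteEnv:
    """Minimal sketch of the vignette environment of Section 2.3."""

    TRIAGE_ACTIONS = ("red", "yellow", "green", "blue")

    def __init__(self, vignette, index, n_evidence=597):
        self.vignette = vignette            # dict: evidence name -> bool (present/absent)
        self.index = index                  # dict: evidence name -> state position
        self.state = [0.0] * n_evidence     # 0 = unobserved
        self.hidden = list(vignette.keys())
        random.shuffle(self.hidden)         # uniform sampling without replacement
        self._reveal(self.hidden.pop())     # one piece of evidence revealed at start

    def _reveal(self, name):
        self.state[self.index[name]] = 1.0 if self.vignette[name] else -1.0

    def step(self, action):
        """Return (state, done). Rewards are handled separately (Section 2.5)."""
        if action in self.TRIAGE_ACTIONS:
            return self.state, True         # triage actions terminate the episode
        if self.hidden:                     # 'ask': reveal one missing piece of evidence
            self._reveal(self.hidden.pop())
        return self.state, False
```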

2.4 The Agent

The agent architecture follows a DQN approach [14]. The network is composed of four fully connected layers. The input layer takes the state vector $s$, the hidden layers are fully connected layers with 1024 scaled exponential linear units [18], and the output layer, which produces the Q-values $Q(s, a; \theta)$, uses a sigmoid activation function. Keeping the Q-values between 0 and 1 allows easier reward shaping: it limits the valid range for the rewards and lets us treat them as probabilities of being the optimal action, rather than arbitrary scalar values. Observations gathered by the agent are stored in a variant of the Prioritised Experience Replay memory (PER) [19] and replayed in batches of 100 independent steps during optimisation. After a burn-in period of 1000 steps during which no learning occurs, the agent is trained on a randomly sampled batch after each action. To promote exploration during training, instead of using a classic $\epsilon$-greedy approach, a small amount of Gaussian noise is added to the Q-value of the ask action before the greedy policy picks the action with the highest Q-value:

$a_t = \arg\max_{a \in \mathcal{A}} \left( Q(s_t, a; \theta) + \epsilon_t \, [a = \text{ask}] \right), \quad \epsilon_t \sim \mathcal{N}(0, \sigma^2)$ (8)

where the operator $[\cdot]$ is the Iverson bracket, which converts any logical proposition into a number that is 1 if the proposition is satisfied, and 0 otherwise. The noise standard deviation $\sigma$ is decayed from its initial value to a small final value over the course of training (see Algorithm 1).

The noise is only added to action ask and not to the triage actions because the goal of exploration is to evaluate when to stop rather than to gather information about specific triage rewards. Here the triage actions are terminal, and all receive a counterfactual reward, which is independent of the action performed at each time step.
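A possible rendering of this architecture and of the noisy greedy selection of Eq. (8) is sketched below; the framework (PyTorch), the action ordering, and the exact arrangement of the four fully connected layers are assumptions of ours:

```python
import torch
import torch.nn as nn

N_EVIDENCE, N_ACTIONS, ASK = 597, 5, 0  # action 0 is 'ask' (ordering is an assumption)

class TriageDQN(nn.Module):
    """Four fully connected layers with SELU hidden units and sigmoid Q-values."""

    def __init__(self, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_EVIDENCE, hidden), nn.SELU(),
            nn.Linear(hidden, hidden), nn.SELU(),
            nn.Linear(hidden, hidden), nn.SELU(),
            nn.Linear(hidden, N_ACTIONS), nn.Sigmoid(),  # Q-values kept in [0, 1]
        )

    def forward(self, state):
        return self.net(state)

def select_action(q_net, state, sigma):
    """Greedy selection after adding Gaussian noise to the 'ask' Q-value (Eq. 8)."""
    with torch.no_grad():
        q = q_net(state.unsqueeze(0)).squeeze(0)
    q[ASK] += torch.randn(()) * sigma  # exploration noise on the ask action only
    return int(torch.argmax(q))
```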

2.5 Counterfactual reward

One key difference with other RL settings is that the rewards are not delayed and, akin to a supervised approach, each action receives a reward, whether the agent chose that action or not. At each time step, the reward received by the agent is then not a scalar, but a vector $\mathbf{r}_t$ which represents the reward for each of the four triage actions. The ask action does not receive a reward from the environment (see Section 2.6). The reward informs all the agent’s actions rather than only the single action it selected, as if it had done all actions at the same time in separate counterfactual worlds.

Reward shaping was of crucial importance for this task, and many reward schemes were tested to fairly promote the success metrics of Appropriateness and Safety (see Section 2.8). Trying to balance their relative importance in the reward proved to be less efficient than trying to match the distribution of experts’ triage decisions. Hence, for every vignette $v$, each triage decision is mapped to a reward equal to the normalised probability of that decision in the bag of expert decisions $\mathcal{D}_v$. Namely, denoting the element of $\mathbf{r}$ corresponding to the reward for triage action $a$ as $r_a$, we define:

$r_a = \frac{m_{\mathcal{D}_v}(a)}{|\mathcal{D}_v|}$ (9)

Moreover, since all triage actions are terminal, only the reward $r_a$ participates in the target Q-value for triage actions:

$\hat{y}_a = r_a, \quad \forall a \in \mathcal{T}$ (10)
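A minimal sketch of the counterfactual reward vector of Eqs. (9)-(10), computed directly from the bag of expert decisions of a vignette:

```python
from collections import Counter

TRIAGE = ("red", "yellow", "green", "blue")

def counterfactual_reward(expert_decisions):
    """Reward vector of Eq. (9): normalised frequency of each triage decision.

    expert_decisions: multiset (list) of expert triage labels for one vignette,
                      e.g. ["green", "green", "yellow"].
    """
    counts = Counter(expert_decisions)
    total = sum(counts.values())
    return {a: counts[a] / total for a in TRIAGE}

# Example: three experts voted Green twice and Yellow once.
r = counterfactual_reward(["green", "green", "yellow"])
# r == {"red": 0.0, "yellow": 1/3, "green": 2/3, "blue": 0.0}
```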

Consequently, to account for the counterfactual reward, we use a vector form of the temporal difference update where all actions participate in the error at each time step.

The reward for the action ask is treated differently. As described in the next section, it is defined dynamically based on the quality of the current triage decision, to encode the notion that the agent should be efficient yet careful to gather enough information.

2.6 Dynamic Q-Learning

One key difference with the classic Q-Learning approach is the dynamic nature of $\hat{y}_{\text{ask}}$, the target Q-value for the action ask, which depends on the current Q-values of the triage actions. This dynamic dependency is especially useful given that the stopping and the triage parts of the Dynamic Q-Learning (DyQN) agent are learning at the same time, and the value of asking for more information might change as the agent gets better at triage. The ideal stopping criterion would stop the agent as soon as its highest Q-value corresponds to a correct triage decision, and do so reliably over all the vignettes. Assuming that the Q-values for the triage decisions are a good estimate of the probability of a particular triage, the DyQN approach is a heuristic which allows the agent to learn when best to stop asking questions given its current belief over the triage decisions. We develop two such heuristics in the form of probabilistic queries.

2.6.1 The OR query

The OR query is used by the DyQN: or query agent, as well as by the baseline agent partially-observed: or query. In practice, during each optimisation cycle and for each sampled memory in the batch, the Q-values for the starting state $s_t$ and following state $s_{t+1}$ are computed. Given the parameters $\theta$ of the neural network, for state $s_t$, we refer to the maximum Q-value over the triage actions as:

$q_\theta(s_t) = \max_{a \in \mathcal{T}} Q(s_t, a; \theta)$ (11)

and define $q_t = q_\theta(s_t)$ and $q_{t+1} = q_\theta(s_{t+1})$. We then define the target Q-value for asking as:

$\hat{y}_{\text{ask}} = (1 - q_t) + q_t \, q_{t+1}$ (12)

We see that this definition can be loosely mapped to the classic target Q-value of Eq. (5), if one considers $r_t = 1 - q_t$ and $\gamma = q_t$.

To understand the origin of Eq. (12), we must treat Q-values as probabilities and define the events $A_t$ and $A_{t+1}$ as “the agent’s choice is an appropriate triage” in the current state and next state respectively. Writing $\bar{A}_t$ for the negation of $A_t$, we define the probability of asking as:

$P(\text{ask}) = P(\bar{A}_t \lor A_{t+1})$ (13)

that is the probability of the event “Either the triage decision is not appropriate in the current state, or it is appropriate in the next state”. The query can also be written as:

$1 - P(\text{ask}) = P(A_t \land \bar{A}_{t+1})$ (14)

which shows that the OR query encodes a stopping criterion heuristic corresponding to the event: “the triage decision is appropriate on the current state, and not appropriate on the next state”.

To recover equation (12), we consider the Q-values for the triage actions as probabilities, with $P(A_t) \approx q_t$ and $P(A_{t+1}) \approx q_{t+1}$; then:

$P(\bar{A}_t \lor A_{t+1}) = 1 - P(A_t) + P(A_t \land A_{t+1}) = 1 - P(A_t) + P(A_{t+1} \mid A_t) \, P(A_t)$ (15)

Assuming the Markov property and the ensuing conditional independencies, such that $P(A_{t+1} \mid A_t) \approx P(A_{t+1}) \approx q_{t+1}$, we can write:

$P(\bar{A}_t \lor A_{t+1}) \approx 1 - q_t + q_t \, q_{t+1} = \hat{y}_{\text{ask}}$ (16)
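Assuming the target indeed takes the form of Eq. (12), it could be computed for a sampled transition as sketched below; the network interface and the action ordering follow the earlier sketch and are our own conventions:

```python
import torch

TRIAGE_IDX = [1, 2, 3, 4]  # indices of the four triage actions (assumed ordering)

def or_query_target(q_net, state, next_state):
    """Target Q-value for 'ask' under the OR query (Eq. 12).

    Treats the best triage Q-value as the probability that the current
    (respectively next) triage decision is appropriate.
    """
    with torch.no_grad():
        q_t = q_net(state.unsqueeze(0)).squeeze(0)[TRIAGE_IDX].max()
        q_next = q_net(next_state.unsqueeze(0)).squeeze(0)[TRIAGE_IDX].max()
    # P(not appropriate now) + P(appropriate now) * P(appropriate next)
    return (1.0 - q_t) + q_t * q_next
```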

2.6.2 The AND query

Among the other heuristics we tested, the AND query gave the best results regarding the stopping criterion, and is used by the DyQN: and query and the partially-observed: and query baseline. For this query, we define the Q-value target for the ask action as:

$\hat{y}_{\text{ask}} = (1 - q_t) \left( q_{t+1} + Q(s_{t+1}, \text{ask}; \theta) \right)$ (17)

Contrary to the OR query, which can be viewed as a particular parametrisation of the reward and of the classic Q-Learning target, the AND query has a form which is not immediately comparable.

In this case, the Q-target is obtained by considering the sequence of events until the end of the interaction. That is, we consider the events $A_t, A_{t+1}, \ldots, A_{t_{\max}}$, for states $s_t$ up to $s_{t_{\max}}$, with $t_{\max}$ the maximum number of questions. We then consider the probability $P(E_t)$ of the event $E_t$: “The current triage decision is incorrect and the next is correct, or both the current and next triage decisions are incorrect but the following triage decision is correct, or … and so on”. We can rewrite $P(E_t)$ as:

$P(E_t) = P(\bar{A}_t) \left[ P(A_{t+1}) + P(E_{t+1}) \right] \approx (1 - q_t) \left( q_{t+1} + Q(s_{t+1}, \text{ask}; \theta) \right)$ (18)

In practice, for both the AND and OR queries, we obtained better results by using the known appropriate triages associated with the vignette of each sampled memory and defining $q_\theta(s)$ as the maximum Q-value associated with an appropriate triage, rather than the maximum over all triage actions.

2.7 Memory

The agent’s memory is inspired by PER but does not rely on importance weighting. Instead, we associate to each memory tuple a priority $p_t = \left| \frac{1}{|\mathcal{A}|} \sum_{a} \left( \hat{y}_a - Q(s_t, a; \theta) \right) \right|$, which relies on the vector form of the counterfactual reward and is equal to the absolute value of the mean TD-Error over every action. We then store the experience tuple along with its priority $p_t$, which determines in which of the four priority buckets the memory should be stored. The four priority buckets have different sampling probabilities, from the lowest for the first bucket to the highest for the last.

Before each optimisation step, we sample experience tuples from the priority buckets, and every time a tuple is sampled, its priority decays by a factor which slowly displaces it into priority buckets with lower sampling probability. This contrasts with the priority update of PER, which sets the priority equal to the new TD-Error computed during the optimisation cycle. Our approach yielded better empirical results than using the classic PER priority update and importance weighting.
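A sketch of such a bucketed memory is given below; the bucket thresholds, sampling weights, and decay factor are placeholder values, as the paper does not report them here:

```python
import random
from collections import namedtuple

Experience = namedtuple("Experience", "state action reward next_state priority")

class BucketMemory:
    """Sketch of the four-bucket priority memory of Section 2.7."""

    def __init__(self, thresholds=(0.05, 0.1, 0.2), weights=(1, 2, 4, 8), decay=0.9):
        self.buckets = [[], [], [], []]
        self.thresholds = thresholds  # priority cut-offs between buckets (placeholders)
        self.weights = weights        # relative sampling weight of each bucket (placeholders)
        self.decay = decay            # priority decay applied after each sampling (placeholder)

    def _bucket(self, priority):
        for i, t in enumerate(self.thresholds):
            if priority < t:
                return i
        return len(self.thresholds)

    def store(self, exp):
        self.buckets[self._bucket(exp.priority)].append(exp)

    def sample(self, batch_size):
        batch = []
        for _ in range(batch_size):
            # Pick a non-empty bucket with probability proportional to its weight.
            candidates = [i for i, b in enumerate(self.buckets) if b]
            i = random.choices(candidates, weights=[self.weights[c] for c in candidates])[0]
            j = random.randrange(len(self.buckets[i]))
            exp = self.buckets[i].pop(j)
            # Decay the priority, which may move the record to a lower-probability bucket.
            exp = exp._replace(priority=exp.priority * self.decay)
            self.store(exp)
            batch.append(exp)
        return batch
```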

2.8 Metrics

We evaluated the quality of the model on a test set composed of previously unseen vignettes, using three target metrics: appropriateness, safety, and the average number of questions asked. During training, those metrics are evaluated over a sliding window of 20 vignettes, and during testing, they are evaluated over the whole test set.

Appropriateness

Given a bag of triage decisions $\mathcal{D}_v$, we define a triage as appropriate if it lies at or between the most urgent and the least urgent triage decisions for each vignette. For instance, if a vignette has two ground truth triage decisions {Red, Green} from two different doctors, the appropriate triage decisions are {Red, Yellow, Green}. Appropriateness is the ratio of the agent’s triage decisions which were appropriate over a set of vignettes.

Safety

We consider a triage decision as safe if it lies at or above the least urgent triage decision in $\mathcal{D}_v$. Correspondingly, we define safety as the ratio of the agent’s triage decisions which were safe over a set of vignettes.

Average number of questions

The RL agent is trained to decide when best to stop and make a triage decision. The average number of questions is taken over a set of vignettes and varies between 0 and 23, an arbitrary limit at which point the agent is forced to make a triage decision.
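The appropriateness and safety metrics can be computed as sketched below, assuming the four-colour urgency ordering of Section 2.1; the helper names are ours:

```python
URGENCY = {"blue": 0, "green": 1, "yellow": 2, "red": 3}

def is_appropriate(decision, expert_decisions):
    """Appropriate: at or between the least and most urgent expert decisions."""
    levels = [URGENCY[d] for d in expert_decisions]
    return min(levels) <= URGENCY[decision] <= max(levels)

def is_safe(decision, expert_decisions):
    """Safe: at or above the least urgent expert decision."""
    return URGENCY[decision] >= min(URGENCY[d] for d in expert_decisions)

def evaluate(agent_decisions, expert_bags):
    """Appropriateness and safety ratios over a set of vignettes."""
    n = len(agent_decisions)
    appropriateness = sum(is_appropriate(a, e) for a, e in zip(agent_decisions, expert_bags)) / n
    safety = sum(is_safe(a, e) for a, e in zip(agent_decisions, expert_bags)) / n
    return appropriateness, safety

# Example: experts gave {Red, Green}; Yellow is appropriate and safe, Blue is neither.
print(evaluate(["yellow", "blue"], [["red", "green"], ["red", "green"]]))  # (0.5, 0.5)
```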

Figure 1: RL learning curves for all agents on the three key metrics show that, compared to the other learning agents, DyQN produces optimised policies which better match doctors’ decisions while asking fewer questions, on both the training and test set. Each step is a question asked, and after a burn-in period of 1000 steps, each new step is followed by a batched optimisation cycle. The lighter curves are the exponential moving average of the raw metric, and the darker curves are obtained using locally weighted smoothing. The higher variance of the training curves is due to a smaller sample size, with a moving average of 20 training vignettes, as compared to the test curves, where each record is the average over the full test set.

2.9 Baselines

We compare our RL approach with a series of baselines, using the same train and test split of the dataset $\mathcal{V}$. The supervised models are voting ensembles of classifiers calibrated using isotonic regression (see Table 3 and Appendix B).

The fully-observed model

The fully-observed model is trained using the vignettes with their complete set of evidence $\mathcal{E}_v$. It represents the least optimised version of the triage policy, which can only deal with full presentations.

The partially-observed model

In addition to the two DyQN agents (OR query and AND query) defined above, we consider two other agents, referred to as partially-observed agents. The term learning agents refers to the two DyQN agents and the two partially-observed agents, because those four agents learn to stop during the RL training. The triage actions of the partially-observed agents, however, are pre-trained in a fully supervised way on a greatly expanded dataset of clinical vignettes constructed from the original set of vignettes $\mathcal{V}$. Given a vignette, we generate a new vignette for each element of the powerset of its evidence set, with a cap at ten pieces of evidence. If the vignette has more than ten pieces of evidence, the remaining evidence generates additional vignettes for each element of the powerset, by growing the rest of the evidence linearly and combining it with each element. For instance, if a vignette has twelve pieces of evidence, we sample the two pieces of evidence beyond the cap and, for each element of the powerset of the first ten, one vignette is created with that element as its evidence set, another with that element plus the first sampled piece of evidence, and a third one with that element plus both sampled pieces of evidence. Using the described process, we generated new vignettes from each vignette belonging to the original dataset $\mathcal{V}$. Critically, for each created vignette, the correct triage decisions are the same as those of the generating vignette.
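A sketch of this expansion procedure, as we read it from the description above; the handling of the pieces of evidence beyond the cap is our interpretation:

```python
from itertools import chain, combinations

CAP = 10  # maximum evidence-set size expanded exhaustively

def powerset(items):
    return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

def expand_vignette(evidence, triage_decisions):
    """Generate sub-vignettes from one vignette, as described in Section 2.9.

    evidence: list of (name, present) pairs; triage_decisions is copied unchanged
    onto every generated sub-vignette.
    """
    base, extra = evidence[:CAP], evidence[CAP:]
    expanded = []
    for subset in powerset(base):
        # Grow the remaining evidence linearly and combine it with each subset.
        for k in range(len(extra) + 1):
            expanded.append((list(subset) + extra[:k], triage_decisions))
    return expanded
```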

After having trained a classifier on this extended dataset, the RL agent uses the class probabilities returned by the classifier as Q-values for the triage actions. In other words, only the ask action is trained during the RL phase. Hence the partially-observed agent does not improve on its ability to triage given a fixed set of evidence but uses the RL process to train a stopping criterion. We present two sub-types of partially-observed agents, the partially-observed: or query which uses the same Q-value target defined in Eq. (12), and the partially-observed: and query which uses the AND query defined in Eq. (17).

Non-learning agents

We also compare our approach to a random policy, which picks random actions, and an always-green policy, which always picks the triage action Green, the decision with the highest prior probability in the dataset (.48).

The human baseline

The human performance on the triage task with full evidence is estimated using a proxy metric called the sample mean inter-expert agreement, computed for both appropriateness and safety. For each vignette and each associated bag of expert decisions $\mathcal{D}_v$, this metric is the sample mean of the ratio of the experts’ decisions which were appropriate, or safe, given the decisions from the other experts. Here each expert decision is treated as an element of multiplicity one in the multiset, that is, a single expert decision compared against the remaining decisions. We then define human appropriateness and safety as:

$A_{\text{human}} = \frac{1}{|\mathcal{V}|} \sum_{v \in \mathcal{V}} \frac{1}{|\mathcal{D}_v|} \sum_{d \in \mathcal{D}_v} \mathbb{1}\left[ d \text{ is appropriate given } \mathcal{D}_v \setminus \{d\} \right], \quad S_{\text{human}} = \frac{1}{|\mathcal{V}|} \sum_{v \in \mathcal{V}} \frac{1}{|\mathcal{D}_v|} \sum_{d \in \mathcal{D}_v} \mathbb{1}\left[ d \text{ is safe given } \mathcal{D}_v \setminus \{d\} \right]$ (19)

3 Results

3.1 The fully supervised approach performs on a par with humans on the test set

The fully-observed baseline reaches an appropriateness of .86 and a safety of .94 on the test set (see Table 2). This baseline is trained in a fully supervised way on the full evidence sets of each vignette. Contrary to the RL agents, it does not learn to stop and is not trained to handle small evidence sets, like those encountered during the RL interactions. If we consider the human appropriateness (.84) and safety (.93) as a good estimate of experts’ performance on the task, the supervised baseline performs slightly better than humans on full evidence sets. However, using only the supervised approach does not give us a direct insight into when best to stop asking questions, and if implemented on its own it would require the definition of an arbitrary stopping criterion.

Appropriateness Safety Avg. Questions N
DyQN: or query .85 (.023) .93 (.015) 13.34 (.875) 10
DyQN: and query .76 (.014) .86 (.012) 1.40 (.331) 10
partially-observed: or query .79 (.000) .88 (.000) 23 (.000) 10
partially-observed: and query .79 (.006) .86 (.005) 10.35 (.467) 10
random .39 (.027) .74 (.026) .25 (.025) 10
always-green .71 (.000) .75 (.000) 0 (.000) 10
human .84 .93 full
fully-observed .86 .94 full
Table 2: Average performance on the test set at convergence (over the last 10 evaluations), after training the agents on a common training and test set.

3.2 DyQN performs on a par with both experts and the fully supervised approach

To compare DyQN to the baselines, we trained all agents on the same training and test set (Figure 1). The results presented in Table 2 summarize the performance of the learning agents with the best hyper-parameters: DyQN: or query, DyQN: and query, partially-observed: or query, and partially-observed: and query. The DyQN agent using the OR query performs better than the other agents in terms of appropriateness (.85 vs. .76, .79, and .79) and safety (.93 vs. .86, .88, and .86). And while it relies on less clinical evidence, asking on average 13.34 questions, it is on a par with human performance (.84 appropriateness) as well as the fully-observed baseline (.86 appropriateness), both of which use all the evidence on the vignette to come to a decision. Interestingly, the AND query produces comparatively worse results for the DyQN agent than it does for the partially-observed agent (see Section 3.4).

3.3 The agents adapt to unseen cases by asking more questions

The average number of questions asked during training by all the learning agents is significantly lower than the number of questions asked during testing (Figure 2). This is a direct effect of the dynamic nature of the target Q-value, which adapts to the agent’s confidence in its triage decision. Given that the presentations on the test set are new to the agent, the ask action will have a high Q-value, and be favoured over triage, until the agent gathers enough information to make a high-confidence triage decision.

Figure 2: The difference in the number of questions asked between training and testing across the four learning agents shows that the dynamic Q-value targets allow the agents to adapt to unseen cases by asking more questions.

3.4 The partially-observed baseline performance

The partially-observed agents yield similar results in terms of appropriateness (.79 for both), with a slight increase in safety for partially-observed: or query, which obtains a safety of .88 on average on the test set, compared with .86 for partially-observed: and query (Table 2). The learning curves presented in Figure 1 show that the appropriateness and safety of the two partially-observed baselines do not improve significantly across training. This is due to their fixed triage model, pre-trained in a supervised manner on the expanded dataset, while only the ask action is trained during RL.

What differentiates the two agents is the number of questions they ask. partially-observed: or query asks on average more questions than partially-observed: and query on the training set, and never stops early on the test set (23 questions), while partially-observed: and query stops on average after 10.35 questions.

The nature of the OR query might not be suited to a model calibrated to work well across all sizes of evidence sets, such as the pre-trained model the two partially-observed agents use for triage. Indeed, for well-calibrated models, the quality of the triage decision tends to increase monotonically with the size of the evidence set (see Appendix B), which leads the OR query to favour the ask action: because the underlying heuristic of its stopping criterion is proportional to $P(A_t \land \bar{A}_{t+1})$, it tends to be triggered only when the quality of the triage decision decreases as the evidence set grows (i.e. in non-calibrated models).

The AND query, on the other hand, puts more emphasis on the quality of the current triage decision: when $q_t$ is close to 1, the target Q-value for ask is close to 0. The agent then becomes increasingly myopic as the current Q-values for the triage actions increase, and it will tend to discard the future Q-values. This property may help partially-observed: and query adapt to a well-calibrated model, assigning a higher weight to the present decision, and explain why it is able to stop early on the test set. On the other hand, the same mechanism may impact the performance of the DyQN: and query agent by making it too confident in the current decision and stopping early while the triage model is still early in its training. This quickly decreases the ability of the agent to explore, and in turn increases the bias of the triage model towards smaller evidence sets.

3.5 DyQN shows a slightly worse performance on K-Fold cross-validation

The performance of the DyQN: or query agent was then evaluated using three different K-Fold cross-validation runs, with ten folds each. The results were slightly worse on average than the performance obtained on the test split used for the agents’ comparison. While the performance varied greatly across runs (see Figure 4 in the appendix), on the last ten evaluations of the test set for each fold, the average appropriateness and safety were lower than on the comparison split, with an average number of questions asked of 13.14. The variability in performance across runs is a classic observation in Reinforcement Learning, but here it is also due to the diverse difficulty of the randomly sampled test vignettes. The lower average appropriateness, which did not reach the average human performance (.84 evaluated over the whole dataset), may indicate that on average the random test splits were more difficult, i.e. further from their respective training set distributions, than the one sampled for the agents’ comparison. We did not submit the other agents to the same K-Fold cross-validation, but we would expect similar variability in the results as well as lower performance overall.

4 Discussion

By learning when best to stop asking questions given a patient presentation, the DyQN is able to produce an optimised policy which reaches the same performance as supervised methods while requiring less evidence. It improves upon clinician policies by combining information from several experts for each of the clinical presentations. Moreover, while the result on the test set is on a par with human performance, the high performance of the fully supervised approach on the training set indicates that the task has a low Bayes error rate, and given enough data we would expect DyQN to exceed human performance.

One of the reasons to use Dynamic Q-Learning over classic Q-Learning is to ensure that the Q-values correspond to a valid probability distribution. Using the classic DQN would produce unbounded Q-values for the asking action, because asking is not terminal, whereas the Q-values for the triage actions would be bounded. In classic Q-Learning, only a careful process of reward shaping for the ask action could account for this effect.

While the problem of optimal stopping has been studied in settings where actions are associated with a cost [20, 21], the other immediate advantage of DyQN is that it is able to treat the stopping heuristic as an inference task over the quality of the agent’s triage decisions. Interpreting the triage actions’ Q-values as probabilities allows us to rewrite the Q-value update as the solution to an inference query, which means the agent gets increasingly better at this inference through interaction and adapts dynamically as triage decisions improve during training. This approach is well tailored to information-gathering tasks, where an agent must make inferences about a latent variable (here the triage) given the information it has gathered so far.

Our particular task sits at the intersection between a supervised task, which allows obtaining rich counterfactual rewards at each step, and an RL task, which allows learning to triage and stop simultaneously. Contrary to a purely supervised approach, the stopping criterion impacts the triage decision, and vice-versa, and both systems learn jointly to optimise the policy. Indeed, it is the nature of the RL process to bias the collection of data towards trajectories associated with high values. In our case, the value of the ask action depends on the triage decisions, but because stopping impacts the data collected, it affects, in turn, the quality of the triage decisions. While exploration is critical for RL agents, they do not explore the state-space exhaustively, which is an advantage in very large state-spaces like ours. This might explain in part why the DyQN is able to outperform the partially-observed baseline performance on the training and test sets, because the joint optimisation of stopping and triage produces a data distribution which favours the DyQN triage performance. On the other hand, this data gathering process may also lead to significant biases due to important regions of the state-space left entirely unexplored.

This approach gives promising results for the future of data-driven triage automation and could allow learning region-specific policies or secondary triage policies for which no guidelines are available. However, like any medical algorithm, it requires thorough clinical validation, before and after deployment, in order to ensure its safety and efficacy. Furthermore, in practice, given the breadth and complexity of clinical decision making, these new expert-systems based on machine-learning are often associated with a rule-based layer which guarantees that corner cases are covered.

5 Conclusion

In this work, we introduce a method to learn medical triage from expert decisions based on Dynamic Q-Learning, a variant of Deep Q-Learning, which allows a Reinforcement Learning agent to learn when to stop asking questions by learning to infer the quality of its triage decision. Our approach can be used in conjunction with any question-asking system, and while requiring less evidence to come to a decision, the best DyQN agent is on a par with experts’ performance, as well as with a fully supervised approach. This RL approach can produce triage policies tailored to healthcare settings with specific triage needs. Moreover, it could help improve clinical decision making in regions where trained experts are scarce. A direction for future investigation should be to train agents not only to stop but also to learn a policy over questions under an active-inference framework.

Conflict of Interest Statement

The Chief Investigator and most co-investigators are paid employees of Babylon Health.

Author Contributions

A.B. conceived of the presented idea and developed the theory. A.B. developed the software necessary for training reinforcement learning agents and ran the experiments presented in the paper. A.B. supervised the work of B.B., Y.Z. and M.L. who performed the initial experiments. B.B. developed and tested many of the theories leading to the final algorithm. Y.P. and A.Ba. participated in the formal definition and gathering of the dataset. G.P., R.B., K.G., D.T., J.R., A.Ba., Y.P., Y.Z. and D.B. verified the analytical methods. D.B. and S.J. helped supervise the project. All authors discussed the results and contributed to the final manuscript.

Acknowledgments

The authors would like to thank Mario Bordbar, Nathalie Bradley-Schmieg, Lucy Kay, and Karolina Maximova, for their unwavering support.

References

  • [1] David R. Eitel, Debbie A. Travers, Alexander M. Rosenau, Nicki Gilboy, and Richard C. Wuerz. The Emergency Severity Index Triage Algorithm Version 2 Is Reliable and Valid. Academic Emergency Medicine, 10(10):1070–1080, oct 2003.
  • [2] J.G Cronin. The introduction of the Manchester triage scale to an emergency department in the Republic of Ireland. Accident and Emergency Nursing, 11(2):121–125, apr 2003.
  • [3] R Beveridge, J Ducharme, L Janes, S Beaulieu, and S Walter. Reliability of the Canadian Emergency Department Triage and Acuity Scale: Interrater agreement. Annals of Emergency Medicine, 34(2):155–159, aug 1999.
  • [4] S. B. Gottschalk, D. Wood, S. Devries, Lee A. Wallis, and S. Bruijns. The cape triage score: A new triage system South Africa. Proposal from the cape triage group. Emergency Medicine Journal, 23(2):149–153, feb 2006.
  • [5] Y. van Ierland, M. van Veen, L. Huibers, P. Giesen, and H. A. Moll. Validity of telephone and physical triage in emergency care: The Netherlands Triage System. Family Practice, 28(3):334–341, jun 2011.
  • [6] Mohsen Ebrahimi. The reliability of the Australasian Triage Scale: a meta-analysis. World Journal of Emergency Medicine, 6(2):94, 2015.
  • [7] Joany M. Zachariasse, Vera Van Der Hagen, Nienke Seiger, Kevin Mackway-Jones, Mirjam Van Veen, and Henriette A. Moll. Performance of triage systems in emergency care: A systematic review and meta-analysis, may 2019.
  • [8] Shelley L. McLeod, Joy McCarron, Tamer Ahmed, Keerat Grewal, Nicole Mittmann, Steve Scott, Howard Ovens, Jason Garay, Michael Bullard, Brian H. Rowe, Jonathan Dreyer, and Bjug Borgundvaag. Interrater Reliability, Accuracy, and Triage Time Pre- and Post-implementation of a Real-Time Electronic Triage Decision-Support Tool. Annals of Emergency Medicine, sep 2019.
  • [9] J. Turner, A. O’Cathain, E. Knowles, and J. Nicholl. Impact of the urgent care telephone service NHS 111 pilot sites: A controlled before and after study. BMJ Open, 3(11), 2013.
  • [10] Mauro Annarumma, Samuel J. Withey, Robert J. Bakewell, Emanuele Pesce, Vicky Goh, and Giovanni Montana. Automated triaging of adult chest radiographs with deep artificial neural networks. Radiology, 291(1):196–202, apr 2019.
  • [11] Scott Mayer McKinney, Marcin Sieniek, Varun Godbole, Jonathan Godwin, Natasha Antropova, Hutan Ashrafian, Trevor Back, Mary Chesus, Greg C. Corrado, Ara Darzi, Mozziyar Etemadi, Florencia Garcia-Vicente, Fiona J. Gilbert, Mark Halling-Brown, Demis Hassabis, Sunny Jansen, Alan Karthikesalingam, Christopher J. Kelly, Dominic King, Joseph R. Ledsam, David Melnick, Hormuz Mostofi, Lily Peng, Joshua Jay Reicher, Bernardino Romera-Paredes, Richard Sidebottom, Mustafa Suleyman, Daniel Tse, Kenneth C. Young, Jeffrey De Fauw, and Shravya Shetty. International evaluation of an AI system for breast cancer screening. Nature, 577(7788):89–94, jan 2020.
  • [12] Chao Yu, Jiming Liu, and Shamim Nemati. Reinforcement learning in healthcare: a survey. arXiv preprint arXiv:1908.08796, 2019.
  • [13] Christopher J. C. H. Watkins and Peter Dayan. Q-learning. Machine Learning, 8(3-4):279–292, may 1992.
  • [14] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.
  • [15] Richard Bellman. Dynamic programming. Princeton University Press, 1957.
  • [16] Albert Buchard, Kostis Gourgoulias, Alexandre Navarro, Max Swissele, Yura Perov, Adam Baker, and Saurabh Johri. Tuning semantic consistency of active medical diagnosis: a walk on the semantic simplex. In Frontier of AI-Assisted Care (FAC) Scientific Symposium, 2019.
  • [17] Manchester Triage Group. Emergency Triage : Telephone Triage and Advice. Wiley, 2015.
  • [18] Günter Klambauer, Thomas Unterthiner, Andreas Mayr, and Sepp Hochreiter. Self-Normalizing Neural Networks. Advances in Neural Information Processing Systems, 2017-Decem:972–981, jun 2017.
  • [19] Tom Schaul, John Quan, Ioannis Antonoglou, and David Silver. Prioritized experience replay. In 4th International Conference on Learning Representations, ICLR 2016 - Conference Track Proceedings, nov 2016.
  • [20] Huizhen Yu and Dimitri P Bertsekas. A least squares q-learning algorithm for optimal stopping problems. Lab. for Information and Decision Systems Report, 2731, 2006.
  • [21] D. P. De Farias and B. Van Roy. On the existence of fixed points for approximate value iteration and temporal-difference learning. Journal of Optimization Theory and Applications, 105(3):589–608, 2000.
  • [22] Fabian Pedregosa, Gael Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and Édouard Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 2011.

Appendix A Baseline model for supervised triage

We train the supervised model using a soft-voting ensemble classifier from scikit-learn [22], calibrated using a non-parametric approach based on isotonic regression (see sklearn.isotonic). The structure of the ensemble is described in Table 3.

Type Parameters
SGDClassifier max_iter=1000
LogisticRegression max_iter=1000
MLPClassifier hidden_layer_sizes=(512, 512),
alpha=1, max_iter=1000,
n_iter_no_change=5, tol=0.001
DecisionTreeClassifier max_depth=5
RandomForestClassifier max_depth=5, n_estimators=10,
max_features=1
SVC gamma=“auto”
Table 3: Structure of the supervised triage model. Each estimator of the ensemble was calibrated using the isotonic method of CalibratedClassifierCV.

Appendix B Non-Calibrated versus Calibrated voting classifier

Calibrating the voting classifier for both the partially-observed and fully-observed baselines results in better performance on smaller evidence sets. The performance is also more consistent overall, and tends to improve monotonically as the evidence set grows. Figure 3 shows the performance of both approaches over a random sample of 10% of the test set.

Figure 3: Performance comparison of the supervised approaches, trained with (a) and without (b) isotonic calibration. Panel (c) presents the distribution of the evidence in the test set used for the comparison, a random sample of 10% of the test set.

Appendix C DyQN K-Fold cross-validation

The learning curves of the DyQN: or query agent across three different 10-Fold cross-validation runs, started with three different random seeds, are presented in Figure 4.

Figure 4: DyQN: or query learning curves during 10-Fold cross-validation.
1: Require: the DyQN Q-function $Q_\theta$ and target function $\hat{y}$, the environment’s reset and step functions, the memory’s store and sample functions, noise variance $\sigma^2$.
2: Require: dataset of clinical vignettes $\mathcal{V}$
3: Initialization
4: for $i = 1$ to the maximum number of games do {until the maximum number of games is reached.}
5:    sample a vignette $v$ from $\mathcal{V}$
6:    reset the environment with $v$
7:    observe the initial state $s_0$
8:    for $t = 0$ to $t_{\max}$ do {until maximum question is reached.}
9:       if no evidence is left on the vignette then {the environment forced a triage action.}
10:         restrict the candidate actions to the triage actions $\mathcal{T}$
11:      else
12:         consider all actions in $\mathcal{A}$
13:      end if
14:      $a_t \leftarrow \arg\max_a \left( Q_\theta(s_t, a) + \epsilon_t \, [a = \text{ask}] \right)$, with $\epsilon_t \sim \mathcal{N}(0, \sigma^2)$ {noise is added to the Q-value for ask before greedy selection.}
15:      perform $a_t$; receive the counterfactual reward vector $\mathbf{r}_t$ and the next state $s_{t+1}$
16:      decay the noise standard deviation $\sigma$
17:      $p_t \leftarrow \left| \frac{1}{|\mathcal{A}|} \sum_a \left( \hat{y}_a - Q_\theta(s_t, a) \right) \right|$ {compute memory priority.}
18:      store $(s_t, a_t, \mathbf{r}_t, s_{t+1})$ in the memory with priority $p_t$
19:      if the burn-in period of 1000 steps is over then {after the burn-in period perform one optimization cycle at each step.}
20:         $B \leftarrow$ sample a batch of size $N$ from the memory {Sample a batch of size N from memory.}
21:         compute the target Q-values $\hat{y}$ for every record in $B$ (Eqs. 10 and 12, or 17)
22:         update $\theta$ with one gradient step on the squared TD-error (Eq. 7)
23:      end if
24:      if $a_t$ is a triage action then {sample new vignette when a triage decision is made.}
25:         break
26:      end if
27:   end for
28: end for
Algorithm 1 Training cycle of the Dynamic Q-Network (DyQN)