1 Introduction
On a variety of high-stakes tasks, machine learning algorithms are on the threshold of doing what human experts do with such high fidelity that we are contemplating using their predictions as a substitute for human output. For example, convolutional neural networks are close to diagnosing pneumonia from chest X-rays better than radiologists can [14, 15]; examples like these underpin much of the widespread discussion of algorithmic automation in these tasks. In assessing the potential for algorithms, however, the community has implicitly equated the specific task of prediction with the general task of automation. We argue here that this implicit correspondence misses key aspects of the automation problem; a broader conceptualization of automation can lead directly to concrete benefits in some of the key application areas where this process is unfolding.
We start from the premise that automation is more than just the replacement of human effort on a task; it is also the meta-decision of which instances of the task to automate. And it is here that algorithms distinguish themselves from earlier technology used for automation, because they can actively take part in this decision of what to automate. But as currently constructed, they are not set up to help with this second part of the problem. The automation problem, then, should involve an algorithm that on any given instance both (i) produces a prediction output; and (ii) additionally produces a triage judgment of its effectiveness relative to the human effort it would replace on that instance.
Viewed in this light, machine learning algorithms as currently constructed only solve the first problem; they do not pose or solve the second problem. In effect, currently when we contemplate automation using these algorithms, we are implicitly assuming that we will automate all instances or none. In this paper, we argue that when algorithms are built to solve both problems – prediction and triage – overall performance is significantly higher. In fact, even on tasks where the algorithm significantly outperforms humans on average per instance, the optimal solution is to automate only a fraction of the instances and to optimally divide up the available human effort on the remaining ones. And correspondingly, even on tasks where an algorithm does not beat human experts, the optimal solution may still be to automate a subset of instances.
Now is the right time to ask these questions because the AI community is on the verge of translating some of its most successful algorithms into clinical practice. Notably, an influential line of work showed how a well-constructed convolutional net trained on gold-standard consensus labels for diagnosing diabetic retinopathy (DR) outperforms ophthalmologists in aggregate, and these results have led to considerable optimism about the role of algorithms in this setting [15]. But the community’s discussion around these prospects has focused on the algorithms’ per-instance prediction performance without considering the problem of recognizing which instances to automate.
Using largely the same data, we build this additional, crucial component and find that, even in this context where an algorithm outperforms human experts in the aggregate, the optimal level of triage is not full automation. Instead, significantly more accuracy can be had by triaging a fraction of the instances to the algorithm and leaving the remaining fraction to the human experts. Specifically, full automation reduces the error rate from roughly 5.5% with human doctors to 4% with an algorithm solving every instance; automation with optimal triage, though, reduces the error further to roughly 3.5% – effectively adding a significant further fraction to the gains that were realized by algorithmically automating the task in the first place.
This gain occurs for two reasons. First, the algorithm’s high average performance hides significant heterogeneity in performance: on roughly 40% of the instances the algorithm makes zero errors, while on a small set of instances it makes far more errors than average, and these instances can be assigned to humans. Second, when the algorithm automates a fraction of the cases, it frees up human effort; reallocating that effort to the remaining cases can achieve further gains. In principle these gains could come from a single doctor spending additional time on an instance, or from multiple doctors looking at it; in our case, the available data allows us to explicitly quantify the gains arising from the second of these effects, because we have multiple doctor judgments on each instance.
These results empirically demonstrate the importance of the triage component of the automation problem. We also show that our gains are unlikely to have fully tapped the potential of algorithmic triage: this neglected component deserves the kind of sustained effort from the machine learning community that the prediction component has received to date. In fact, given the disparity in efforts on these two problems, it is possible that the highest return to improving automation performance now lies in solving triage rather than in further improving prediction.
2 General Framework
In a typical application where we consider using algorithmic effort in place of human effort, the goal is to label a set of instances of the problem with a positive or a negative label. For example, in a medical diagnosis setting, we may have a set of medical images, and the goal is to label each with a binary yes/no diagnosis. Let x be an instance of the labeling problem, and let Y(x) be its ground truth value. For our purposes (as in the example of a binary diagnosis), we will think of this ground truth value as taking a value of either 0 or 1, with 0 corresponding to a negative label and 1 corresponding to a positive label.
How do we approach this problem algorithmically? Given a set of instances, we could train an algorithm to produce a numerical estimate A(x), with the goal of minimizing a loss function L(A(x), Y(x)), where L increases with the distance between its two inputs. For notational convenience, we will write ℓ_A(x) for L(A(x), Y(x)), the algorithmic error on instance x. The values A(x) are then converted into (binary) predictions, and we can evaluate the resulting error relative to ground truth. As a concrete example, one option is to threshold the values A(x) to produce a 0 or 1 value, and evaluate agreement with Y(x).

When a social planner considers the prospect of introducing algorithms into an existing task, we often imagine the question to be the following. The planner currently has human effort being devoted to instances of the task; for an instance x, we can imagine that there is a human output H(x), resulting in a loss ℓ_H(x) = L(H(x), Y(x)). The question of whether to automate the task could then be viewed as a comparison between the total losses Σ_x ℓ_A(x) and Σ_x ℓ_H(x) — the loss from algorithmic effort relative to the loss from human effort.
Allocating Human Effort
In order to think about the activity of automation in a richer sense, it is useful to start from the realization that even in the absence of algorithms, the social planner is implicitly working with a larger space of choices than the simple picture above suggests. In particular, they have some available budget of total human effort, and they do not need to allocate it uniformly across instances: for an instance x, the planner can allocate k units of human effort, for different possible values of k. There are multiple possible interpretations for the meaning of k; for example, in the case of diagnosis we could think of k as corresponding to the number of distinct doctors who look at the instance, or alternately to the total amount of time spent collectively by doctors on the instance. Thus, our functions H and ℓ_H should more properly be written as two-variable functions that take an instance x and a level of effort k: we say that H(x, k) is the label provided as a result of k units of human effort on instance x, and ℓ_H(x, k) is the resulting loss that we would like to minimize.
Note that as functions of effort k, it may be that ℓ_H(x, k) and ℓ_H(x′, k) are quite different for different instances x and x′. For example, instance x may be much harder than instance x′, and hence ℓ_H(x, k) will be much larger than ℓ_H(x′, k); similarly, instance x might not exhibit as much marginal benefit from additional effort as instance x′, and hence the improvement in ℓ_H(x, k) over increasing values of k might be much flatter than the improvement in ℓ_H(x′, k). The social planner might well not have precise quantitative estimates for values like ℓ_H(x, k), but implicitly they are seeking to allocate human effort across the set of instances so as to minimize the total loss incurred. And indeed, a number of basic protocols — such as asking for second opinions — can be viewed as increasing the amount of effort spent on instances where there might be benefits for error reduction.
2.1 Automation involving Algorithms and Humans
When algorithms are introduced, the social planner has several new considerations to take into account. First, the full automation problem should be viewed more broadly than just a binary comparison of human and algorithmic performance; it can involve decisions about the allocation of both human and algorithmic effort. The introduction of the algorithm need not be all-or-nothing: we can choose to apply it to some instances in a way that replaces human effort, thereby potentially freeing up this effort to be used on other instances. The average overall comparison might even hide instances where the algorithm much more significantly underperforms (or outperforms) the human. Second, decisions about the allocation of human effort depend on the function ℓ_H(x, k), which can be challenging to reason about. Algorithms can potentially provide assistance in estimating these quantities, helping even in the allocation of human effort.
The general problem can therefore be viewed as follows. We would like to select a subset S of instances on which no human effort will be used (only the algorithm), and we will then optimally allocate human effort on the remaining instances T. Suppose that we have a budget B on the total number of units of human effort that we can allocate, and we decide to allocate k_x units of effort to each instance x ∈ T. On such an instance x, we incur a loss of ℓ_H(x, k_x), using our notation above; and on the instances x ∈ S we incur a loss of ℓ_A(x) from the algorithmic prediction.
We thus have the following optimization problem, writing T for the set of instances outside S that receive human effort:

    minimize   Σ_{x∈S} ℓ_A(x) + Σ_{x∈T} ℓ_H(x, k_x)    (1)
    subject to Σ_{x∈T} k_x ≤ B    (2)
               k_x ≥ 0 for all x ∈ T.    (3)
Our earlier discussions about algorithms and humans in isolation are special cases of this optimization problem: full automation, when the algorithm substitutes completely for human effort, is the case in which S contains all instances; and the social planner’s problem in the absence of an algorithm — which still involves decisions about the effort variables k_x — is the case in which S is empty. Intermediate solutions can be viewed as performing a kind of triage, a term we use here in a general sense for a process in which some instances go purely to an algorithm and others receive human attention.
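To make the structure of problem (1) concrete, the following sketch (not part of the paper; all loss values and the effort model are hypothetical) solves a four-instance toy version by brute force, searching over the automated set S and splitting the effort budget evenly over the rest:

```python
# Toy illustration of optimization problem (1); not the paper's code.
# err_A and base_H are hypothetical per-instance losses; human_loss is an
# assumed model in which loss shrinks as effort (e.g. number of opinions) grows.
from itertools import combinations

err_A = [0.02, 0.30, 0.10, 0.05]    # assumed algorithmic losses ell_A(x)
base_H = [0.10, 0.05, 0.20, 0.25]   # assumed human losses at one unit of effort

def human_loss(i, k):
    # toy effort curve ell_H(x, k): averaging k independent opinions
    return base_H[i] / max(k, 1)

def best_plan(budget):
    """Brute-force search over the automated set S, splitting the
    effort budget evenly over the remaining instances."""
    n = len(err_A)
    best = (float("inf"), None, None)
    for r in range(n + 1):
        for S in combinations(range(n), r):
            rest = [i for i in range(n) if i not in S]
            k = budget // len(rest) if rest else 0
            loss = sum(err_A[i] for i in S) + sum(human_loss(i, k) for i in rest)
            if loss < best[0]:
                best = (loss, set(S), k)
    return best

loss, S, k = best_plan(budget=4)
print(S, k)   # the automated set and the per-instance human effort
```

Even in this toy example, the minimizing plan automates three instances and concentrates the entire effort budget on the single remaining one, beating both full automation and purely human effort.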
By deliberately adopting a very general formulation, we can also get a clearer picture of the kinds of information we would need in order to perform automation more effectively. Specifically:

(i) In addition to making algorithmic predictions A(x), the automation problem benefits from more accurate estimates of the algorithm’s instance-specific error rate ℓ_A(x).

(ii) The allocation of human effort benefits from better models of the human error rate, including error as a function of effort spent, ℓ_H(x, k). As noted above, we can use an algorithm to help in estimating this human error rate.

(iii) Given estimates for the functions ℓ_A and ℓ_H, we can obtain further performance improvements purely through better allocations of human effort in the optimization problem (1).
We note that the notion of human error involves an additional set of complex design choices concerning how humans decide to make use of algorithmic assistance on the instances (in the set T) where they spend effort. In particular, if we imagine that algorithmic predictions are available on the instances in T, then the humans involved in the decision on an instance x ∈ T may have the ability to incorporate the algorithmic prediction A(x) into their overall output H(x, k), and this will have an effect on the error rate ℓ_H(x, k). In general, of course, it will be difficult to model a priori how this synthesis will work, although it is a very interesting question; we show that our results for the automation problem do not require assumptions about this aspect of the process, but we explore this question later in the paper.
2.2 Heuristics for Automation
If we think of the social planner as the entity tasked with solving the automation problem in (1), they are now faced with a set of considerations: not simply the binary question of whether to use human or algorithmic effort, but instead how to divide the instances between those (in S) that will be fully automated and those (in T) that will involve human effort, and how to estimate the error rates ℓ_A(x) and ℓ_H(x, k) so as to solve the allocation problem effectively.
We will show that significant performance gains can be achieved over both algorithmic and human effort even if we use only very simple heuristics for the different components of the allocation problem. Moreover, through a stronger benchmark based on ground truth, we will also show that much stronger gains are in principle achievable with improved approaches to the components.
We can describe the simplest level of heuristics in terms of subproblems (i), (ii), and (iii) from earlier in this section. The simplest heuristic for (i) is to use the functional form of the variance,

    A(x)(1 − A(x)),

as a measure of the algorithm’s uncertainty in its prediction on x. A comparably natural predictor does not exist for (ii). We therefore design new algorithmic predictors to estimate the values of both (i) and (ii), and use these to guide the allocation of algorithmic and human effort. We show that using separate predictors in this way also strengthens the performance gains relative to the simpler heuristic based on A(x)(1 − A(x)), although even this basic heuristic yields improvements over full automation.

Given these predictors for (i) and (ii), what does this suggest about simple strategies for approximating (iii), the allocation of human effort? First, we could restrict attention to solutions in which each instance receiving human effort gets the same amount. Thus, if there are n total instances, we could choose a real number q ∈ [0, 1], perform full automation on a set S of qn instances, and divide the budget of B units of human effort evenly across the remaining (1 − q)n instances. This means that each instance in the set receiving human effort gets B/((1 − q)n) units of effort. For simplicity, let us write m = (1 − q)n, so that the human effort per instance in this remaining set T is B/m. With this allocation of effort, the resulting loss is

    Σ_{x∈S} ℓ_A(x) + Σ_{x∈T} ℓ_H(x, B/m).
This restriction on the set of possible solutions suggests the following heuristic. Consider any partition of the instances into S and T, and suppose we use the algorithm on all the instances. Then we can write the resulting loss in the following convenient way: Σ_{x∈S} ℓ_A(x) + Σ_{x∈T} ℓ_A(x). Subtracting this from the loss that results when we assign k units of human effort to each instance in T, we see that the difference is Σ_{x∈T} (ℓ_H(x, k) − ℓ_A(x)).
Thus, for a given value of q (specifying the fraction of instances that we wish to assign to the algorithm), it is sufficient to rank all instances by τ(x) = ℓ_H(x, k) − ℓ_A(x), and then choose the qn instances with the largest values of τ(x) to put in the set S that we give to the algorithm. We can thus think of τ(x) as the triage score of instance x, since it tells us the effect of algorithmic triage relative to human effort on the expected error.
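A minimal sketch of this ranking heuristic (the per-instance error estimates below are hypothetical, not outputs of the predictors described later):

```python
# Sketch of triage by score tau(x) = err_H(x) - err_A(x): automate the
# top q-fraction. err_H and err_A below are hypothetical estimates.

def triage(err_H, err_A, q):
    """Return (S, T): the automated set and the human-effort set."""
    n = len(err_H)
    order = sorted(range(n), key=lambda i: err_H[i] - err_A[i], reverse=True)
    cutoff = round(q * n)
    return set(order[:cutoff]), set(order[cutoff:])

err_H = [0.10, 0.05, 0.20, 0.25]
err_A = [0.02, 0.30, 0.10, 0.05]
S, T = triage(err_H, err_A, q=0.5)
print(S)   # instances the algorithm handles
```

The instances where human error most exceeds algorithmic error are automated first; the instance where the algorithm is worse than the doctors is kept for human review.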
2.3 Overview of Results
We put these ideas together in the context of a widely studied medical application, the diagnosis of diabetic retinopathy, detailed in the next section. We rank instances by their triage score, using simple forms for the algorithmic loss ℓ_A and human loss ℓ_H, and we then search over possible values of q, evaluating the performance at each. We find that there is a range of values of q, and a way of choosing instances to give to the algorithm, such that the resulting performance exceeds either of the binary choices of fully assigning the instances to the algorithm or to human effort.
As a scoping exercise, to see how strong the possible gains from our automation approach might be, we consider what would happen if we ranked instances by a triage score derived from a ground-truth estimate of the individual human error on each instance. Such a benchmark indicates the power of the optimization framework if we are able to get better approximations to the key quantities of interest — the functions ℓ_A and ℓ_H. We find large performance gains from this benchmark, and we also explore some stronger methods to work on closing the gap between our simple heuristics and this ideal.
Different Costs for Error.
In many settings, a social planner may associate higher costs with errors committed by automated methods relative to errors committed by humans — for example, there may be concern about the difficulty in identifying and correcting errors under automation, or the end users of the results may have a preference for human output. It is natural, therefore, to consider a version of the optimization problem in which the objective (1) has an additional parameter c > 1 specifying the relative cost of error between algorithms and humans. This new objective function is

    Σ_{x∈T} ℓ_H(x, k_x) + c · Σ_{x∈S} ℓ_A(x).    (4)
One might suppose that as c grows large, the social planner would tend to favor purely human effort, given the relative cost of errors from automation. And indeed, the basic comparison that is typically made between Σ_x ℓ_A(x) (for full automation) and Σ_x ℓ_H(x, B/n) (for purely human effort) would suggest that this should be the case, since eventually c will exceed the ratio between these two quantities. But our more detailed framework makes clear that these aggregate measures of performance can obscure large levels of variability in difficulty across instances. And what we find in our application is that it is possible for the algorithm to identify a large set of instances on which it makes zero errors. Thus, even with strong preferences for human effort over algorithmic effort, it may still be possible to find sizeable subsets of the problem that should nevertheless be automated — a fact that is hidden by comparisons based purely on aggregate measures of performance.
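As a small numerical illustration of objective (4) (all losses are hypothetical), a zero-error automated subset contributes nothing to the weighted loss, no matter how large c becomes:

```python
# Illustration of objective (4) with a cost multiplier c on algorithmic
# errors. The per-instance losses are hypothetical; the automated subset
# is assumed to be one on which the algorithm makes zero errors.

def weighted_loss(alg_losses, human_losses, c):
    return c * sum(alg_losses) + sum(human_losses)

zero_error_S = [0.0, 0.0]       # assumed zero-error automated subset
human_T = [0.25, 0.5]           # assumed human losses on the rest
print(weighted_loss(zero_error_S, human_T, c=10))
print(weighted_loss(zero_error_S, human_T, c=100))  # unchanged as c grows
```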
3 Medical Preliminaries, Data and Experimental Setup
We first outline the details of the medical prediction problem, and describe the data and experimental setup used to design the automated decision-making algorithm. As our primary goal is to study the interaction of this algorithm with human experts, we treat many of the underlying algorithmic components (e.g. a deep neural network model trained for predictions) as fixed, and focus on the different modes of interaction.
The main setting for our study is the use of fundus photographs, large images of the back of the eye, to automatically detect Diabetic Retinopathy (DR). Diabetic Retinopathy is an eye disease caused by high blood sugar levels damaging blood vessels in the retina. It can develop in anyone with diabetes (type 1 or type 2), and despite being treatable if caught early enough, it remains a leading cause of blindness [1].
A patient’s fundus photograph is graded on a five-class scale to indicate the presence and severity of DR. Grade 1 corresponds to no DR, 2 to mild (non-proliferative) DR, 3 to moderate (non-proliferative) DR, 4 to severe (non-proliferative) DR, and 5 to proliferative DR. An important clinical threshold is at grade 3, with grades 3 and above called referable DR, requiring immediate specialist attention [2]. Figure 1 shows some example fundus photos.
3.1 Data
The data used for designing the algorithm consists of these fundus photographs, with each photograph having multiple DR grades. These grades are assigned by individual doctors independently looking at the fundus photograph and deciding what DR classification the image should get. There are important distinctions between the data used for training the algorithm, and the data used for evaluation. The training dataset is much larger in size (as a key component is a large deep neural network) and hence each image is more sparsely labelled – typically with one to three DR grades. The evaluation dataset is much smaller and more extensively annotated. It is described in detail below in Section 3.3.
In the mechanics of training our classifier, it will be useful to view DR diagnosis as a 5-class classification problem, using the 5-point grading scheme. However, when we consider the problem of triage and automation at a higher level, we will treat the task as a binary classification problem into images that are referable or non-referable.
3.2 A Decision Making Algorithm for Diabetic Retinopathy
Similar to prior work [7], we first use the training dataset to train a convolutional neural network to classify each image. Specifically, the CNN outputs a distribution over the different DR grades for each fundus photograph, with the empirical distribution of the individual doctor grades for that image as the target.
The question of whether the patient has referable DR (with a grade of at least 3), and hence needs specialist attention, is one of the most important clinical decisions. The outputs of the trained convolutional neural network form the basis of an algorithm to make this decision. First, we compute the predicted probability of referable DR by summing the model’s output mass on DR grades 3, 4, and 5. For each image x, this gives a predicted referable DR probability of p(x). Next, we rank the images according to the p(x) values, and pick a threshold θ. Images with p(x) ≥ θ are labelled as referable DR by the algorithm, and the others as non-referable.

The choice of the threshold θ is made so that the total number of cases marked as referable by the algorithm matches the total number of cases marked as referable when aggregating the human doctor grades. This ensures that the effort, resources, and expense needed to act upon the algorithmic decisions match the current (feasible) effort resulting from the human decision-making process. This is discussed in further detail in Section 3.4.
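The threshold-matching rule can be sketched as follows (the probabilities are hypothetical, and ties between scores are ignored for simplicity):

```python
# Sketch of matching the threshold theta to the doctors' referable count:
# choose theta so the algorithm flags exactly as many images as the
# aggregated doctor grades did. Values below are hypothetical.

def match_threshold(p, n_referable):
    """theta equal to the n-th largest score, so exactly n images clear it."""
    return sorted(p, reverse=True)[n_referable - 1]

p = [0.9, 0.2, 0.7, 0.4, 0.1]          # assumed model outputs p(x)
theta = match_threshold(p, n_referable=2)
labels = [int(pi >= theta) for pi in p]
print(theta, labels)
```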
The result of this process is an algorithm taking as input a patient’s fundus photograph, and outputting a binary decision on whether the patient has non-referable/referable Diabetic Retinopathy. We illustrate the components of the DR algorithm in Figure 2. The full details of our algorithm development setup can be found in Appendix Section A.
3.3 Evaluation
We evaluate our decision-making algorithm on a special, gold-standard adjudicated dataset [10]. This dataset is much smaller than our training data, but is meticulously labelled. For every fundus photograph in the dataset, there are many individual doctor grades, and also a single adjudicated grade, given after multiple doctors discuss the appropriate diagnosis for the image. This adjudicated grade acts as a proxy for the ground truth condition, and we use it to evaluate both the individual human doctors and the decision-making algorithm. In Appendix Section E we carry out an additional evaluation of the methods on a different dataset, which exhibits the same results.
3.4 Aggregation and Thresholding
During evaluation and the triage process, we often have multiple (binary) grades per image. These grades might correspond to multiple different human doctors individually diagnosing the image, or the algorithm’s binary decision along with human doctor grades. In all of these cases, for evaluation, we must typically aggregate these multiple grades into a concrete decision – a single summary binary grade. To do so, we compute the mean grade and threshold it by a value t. If the mean is greater than t, this corresponds to a decision of 1 (referable); otherwise the decision is 0 (non-referable).
The choice of the threshold t also affects the choice of θ, which is used for the algorithm’s decision. To compute θ, we first aggregate the multiple doctor grades per image into a single grade by computing their mean and thresholding with t. This gives us the total number of patients marked as referable by the human doctors, and we pick θ so that the algorithm matches this number.
In the main text, we give results for t = 0.5, which corresponds to the majority vote of the multiple grades for an image. In the Appendix, we include results for other choices of t, which support the same conclusions.
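The aggregation rule above amounts to a one-line computation; a sketch with hypothetical grades:

```python
# Sketch of grade aggregation: mean of the binary grades, thresholded at t.
# With t = 0.5 this reproduces majority vote.

def aggregate(grades, t=0.5):
    return 1 if sum(grades) / len(grades) > t else 0

print(aggregate([1, 1, 0]))   # two of three say referable
print(aggregate([1, 0, 0]))   # only one of three
```

Note that a mean exactly equal to t falls on the non-referable side under the strict inequality.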
4 The Triage Problem and Human Effort Reallocation
The performance of human experts and of algorithmic decisions is typically summarized and compared via a single number, such as average error, F1 score, or AUC. Seeing the algorithm outperform human experts according to these metrics might suggest the hypothesis that the algorithm uniformly outperforms human experts on any given instance.
What we find instead, however, is significant diversity across instances in the performance of humans and algorithms: for natural definitions of human and algorithmic error probability (formalized below), there are instances in which human effort has lower error probability than the algorithm, and instances in which the algorithm has lower error probability than human effort. Moreover, this diversity is partially predictable: we can identify with nontrivial accuracy those instances on which one entity or the other will do better. This diversity and its predictability is an important component of the automation framework, since it makes it possible to divide instances between algorithms and humans so that each party is working on those instances that are best suited to it.
We first study this performance diversity, and then move on to the problem of allocating effort between humans and algorithms across instances.
4.1 Per Instance Error Diversity of Humans and Algorithms
In order to look at the differences in performance between humans and algorithms on an instance-by-instance level, we want to define, for each instance x in the adjudicated dataset, an error probability err_H(x) for the doctors (human experts) and an error probability err_A(x) for the algorithm.
The quantity err_H(x) is straightforward to compute on the adjudicated dataset: for an instance x, suppose that d doctors evaluate it, assigning it binary non-referable/referable grades g_1, g_2, ..., g_d. Let y*(x) be the binary adjudicated grade for x. Then we can define

    err_H(x) = |{i : g_i ≠ y*(x)}| / d.

That is, err_H(x) is the average disagreement of the doctors with the adjudicated grade.
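This definition can be computed directly; a minimal sketch with hypothetical grades:

```python
# Direct computation of err_H(x): the fraction of individual doctor
# grades disagreeing with the adjudicated grade. Grades are hypothetical.

def human_error(grades, adjudicated):
    return sum(g != adjudicated for g in grades) / len(grades)

print(human_error([1, 1, 0, 1], adjudicated=1))   # one of four disagrees
```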
Computing err_A(x) is a little more complicated. Recall that for an instance x, the convolutional neural network model in the algorithm outputs a value p(x) between 0 and 1 that is then thresholded to give a binary decision. A naive estimate of the error probability could therefore be p(x) if the instance is not referable, and 1 − p(x) if the instance is referable. Unfortunately, deep neural networks are well-known to be poorly calibrated [8], and this naive approximation is both poorly calibrated and at a different scaling from the human doctors. This is not a concern for the algorithm’s binary decision, since only the rank-ordering of the p(x) values matters there, but it poses a challenge for producing a probability that can serve as err_A(x).
4.1.1 Determining Algorithm Error Probabilities
To overcome this issue, we develop a simple method to calibrate the convolutional neural network’s output. Recall that the neural network outputs a value p(x) for each image x – i.e. it induces a ranking over the images, which is used to determine the algorithmic decision. We evaluate this induced ranking directly by asking:

Suppose we produced a (random) number N of referable instances by sampling a random doctor for each instance; what is the probability that x is among the top N instances in the induced ranking?

We define q(x) as the probability that the prediction algorithm declares x to be referable. We can then define the error probability err_A(x) as 1 − q(x) if the adjudicated grade of x is referable, and q(x) if x is non-referable. In Appendix Section B, we provide specific details of the implementation.
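An illustrative Monte Carlo sketch of this rank-based calibration follows. It is our own simplified reading of the idea, not the implementation in Appendix B; the sampling scheme, scores, and grades are all assumptions:

```python
# Monte Carlo sketch of rank-based calibration: sample one doctor per
# image to get a random referable count N, and estimate q(x) as the
# probability that x falls in the top N of the model's ranking.
import random

random.seed(0)

def referable_prob(scores, doctor_grades, trials=2000):
    n = len(scores)
    order = sorted(range(n), key=lambda i: scores[i], reverse=True)
    rank = {img: r for r, img in enumerate(order)}   # rank 0 = most referable
    hits = [0] * n
    for _ in range(trials):
        N = sum(random.choice(g) for g in doctor_grades)  # sampled count
        for i in range(n):
            hits[i] += rank[i] < N
    return [h / trials for h in hits]

def algo_error(qx, adjudicated):
    # err_A(x): 1 - q(x) if the adjudicated grade is referable, else q(x)
    return 1 - qx if adjudicated == 1 else qx

scores = [0.9, 0.6, 0.2]                  # assumed model outputs p(x)
doctor_grades = [[1, 1], [1, 0], [0, 0]]  # assumed binary grades per image
q = referable_prob(scores, doctor_grades)
print(q)
```

In this toy setup the top-ranked image is always within the sampled referable count, the bottom-ranked image never is, and the middle image lands inside it roughly half the time.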
4.1.2 Results on Performance Diversity
We can now use the estimates of err_H(x) and err_A(x) to study the variation in human expert and algorithmic error across different instances. Specifically, we plot a histogram of the values of err_H(x) − err_A(x) across all the adjudicated image instances.

The result is shown in Figure 3. We see that while there are more images where err_A(x) < err_H(x), there is a nontrivial fraction of images with err_H(x) < err_A(x). In the subsequent sections, we analyze different ways of predicting these differences as a way to perform triage, and demonstrate the resulting gains.
4.2 Performing Triage and Reallocating Human Effort
In formulating the basic problem of automation, we considered two baselines for performance. The first is full automation, in which the overall loss is Σ_x ℓ_A(x). The second is equal coverage of all instances by human effort: if we have a budget of B units of effort for n instances, then we allocate B/n units of human effort to each, resulting in a loss of Σ_x ℓ_H(x, B/n). Our goal here is to show that by allocating human and algorithmic effort more effectively according to optimization problem (1) from Section 2, we can improve on both of these baselines.
Recall the basic heuristic from Section 2: for an arbitrary q ∈ [0, 1], we compute a triage score τ(x) for each instance x; we assign the qn instances with the highest scores to the set S to be handled by the algorithm, and we allocate equal amounts of human effort to the remaining set T of instances. Note that q = 1 corresponds to the full automation baseline, while q = 0 corresponds to equal coverage of all instances by human effort. We will see, however, that stronger performance can be achieved for intermediate values of q.
We begin with two ways of computing the triage score. The first follows the basic strategy from Section 2, where we train two algorithmic predictors to estimate (i) the algorithm’s error probability err_A(x), and (ii) the human error probability err_H(x). Specifically, we train two auxiliary neural networks, one to predict err_A(x) and one to predict err_H(x). To predict err_H(x), we build off of the work of [13] on direct prediction of doctor disagreement: we label each example with a binary label indicating whether there is agreement amongst the doctor grades, and train a small neural network to predict these agreement labels from the image embedding. A similar setup is employed for predicting err_A(x), where the binary label now corresponds to whether the output of the diagnostic 5-class convolutional neural network agrees with the doctor grades – i.e. whether the model makes an error on that image. The full details of this process are described in Appendix B.
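The label construction for the two auxiliary predictors can be sketched as follows; the 0/1 coding convention is our own assumption, and the grades are hypothetical:

```python
# Sketch of the targets for the two auxiliary predictors. The 0/1 coding
# below is our own convention; grades are hypothetical.

def disagreement_label(grades):
    # human-side target: do the individual doctor grades disagree?
    return int(len(set(grades)) > 1)

def model_error_label(model_decision, grades, t=0.5):
    # model-side target: does the model's decision disagree with the
    # aggregated (mean-thresholded) doctor grade?
    human = 1 if sum(grades) / len(grades) > t else 0
    return int(model_decision != human)

print(disagreement_label([1, 1, 0]))    # doctors disagree
print(model_error_label(0, [1, 1, 0]))  # model says 0, majority says 1
```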
The second method of computing a triage score establishes an “ideal” benchmark on the potential power of the optimization problem (1) using aspects of ground truth, sorting the instances by the true value of err_H(x) − err_A(x), since this divides the instances between humans and algorithms based on the relative strength of each party on the respective instances.
In both cases, we determine the performance of the human effort using the average of a corresponding number of randomly sampled doctor grades from the data. This allows us to demonstrate improvements without any assumptions on how the doctors might use information from the algorithmic predictions on these instances. It is also reasonable, however, to imagine a scenario in which the algorithmic predictions are still freely available even on the instances that we assign to the human doctors, and to consider simple models for how the doctor grades might be combined with these algorithmic predictions. We consider this case in the Appendix, which supports the same conclusions.
4.2.1 Triage Results
The results for these two triage scores, as we vary q, are shown in Figure 4. The figure depicts both the average error (bottom row) and the F1 score (top row), which accounts for imbalances between the number of referable and non-referable instances. The left column corresponds to using the difference between the predicted values of err_H(x) and err_A(x) as a triage score, while the right column corresponds to using the true value of err_H(x) − err_A(x) to perform triage. In both triage schemes, we observe that the best performance comes at intermediate values of q, beating both the full automation protocol (dotted black line) and equal coverage of all instances by human effort (coloured dotted lines).
While combining algorithmic and human effort in both of these ways leads to performance gains, we see that the ground-truth triage ordering performs significantly better than triaging by the predicted error probabilities. This suggests that learning better triage predictors might have an even greater impact on overall deployed performance than continued incremental improvements to diagnostic accuracy.
4.2.2 The Simplest Heuristic: Algorithmic Uncertainty
In the previous section, we saw the results of training two separate algorithmic predictors to estimate the values of p_alg and p_doc, and using the difference between these predicted values as a triage score. An even simpler triage score uses only the algorithm's uncertainty: the calibrated error probability derived directly from the diagnostic model's own output. In Figure 5, we show that even this triage score, available 'for free' from the algorithmic predictor, improves upon pure automation and pure human effort, although larger gains are available through using the two algorithmic error predictors. These results reiterate the rich possibilities for gains from algorithmic triage.
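One simple, hypothetical instantiation of such an uncertainty score (a stand-in for the calibrated probability described in Appendix B) measures how close the model's referable probability mass sits to the 0.5 decision boundary; the referable class indices are an assumption for illustration:

```python
def uncertainty_score(class_probs, referable=(2, 3, 4)):
    """Triage score computed from the diagnostic model's own output.

    class_probs: softmax over the 5 DR grades (indices 0..4).
    The score is highest when the referable mass sits near the 0.5
    decision boundary, i.e. when the model is least sure.
    """
    m = sum(class_probs[i] for i in referable)
    return 1.0 - abs(2.0 * m - 1.0)
```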
4.3 Differential Costs and ZeroError Subsets
Finally, we recall a further consideration from the framework in Section 2: suppose the social planner views errors made by algorithms as more costly than errors made by humans, resulting in an objective function of the form in (4), with a weight alpha > 1 on algorithmic errors. As alpha becomes large, what does this imply about the use of algorithmic predictions?
We find in our application that it is possible to identify large subsets of the data on which the algorithm achieves zero error. Such a fact can easily be hidden by considering only aggregate measures of algorithmic performance, and it implies that even when the relative cost of algorithmic errors is large, there may still be an important role for algorithms in automation.
To quantify this effect, we order the instances by a triage score as in our earlier analyses. We then look at the average error of the algorithmic predictions on the first fraction q of images: for q varying between 0 and 1, we plot

    err(q) = E(floor(qN)) / floor(qN),

where N is the total number of instances and E(k) is the number of errors made by the model on the first k instances.
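In code, this curve and the largest zero-error prefix can be computed as follows; this is a sketch, where error_flags is a hypothetical list of per-instance error indicators already sorted in triage order:

```python
def error_curve(error_flags):
    """Running average error over a triage ordering.

    error_flags: booleans, error_flags[i] is True if the algorithm errs
    on instance i, ordered by ascending triage score (most
    algorithm-friendly first). Returns E(k)/k for k = 1..N.
    """
    errs, total = [], 0
    for k, e in enumerate(error_flags, start=1):
        total += int(e)
        errs.append(total / k)
    return errs

def largest_zero_error_fraction(error_flags):
    """Largest q such that the first floor(qN) instances contain no error."""
    n = len(error_flags)
    for k, e in enumerate(error_flags):
        if e:
            return k / n
    return 1.0
```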
4.3.1 Results
The left pane of Figure 6 shows the results of plotting this quantity. We triage the cases both by the predicted difference between the human and algorithm error probabilities from the two error-prediction networks, and by the simple algorithm uncertainty term. We evaluate the average error of the algorithmic predictions on the first fraction q of images, over three repetitions of training the diagnostic neural network component of the algorithm. We see that even using the simple uncertainty term as a triage score, we can identify a zero-error subset comprising a substantial fraction of the entire dataset. As in Section 4.2.2, further improvements come from predicting the error probabilities directly: the right pane of Figure 6 shows this result, where we can identify an even larger zero-error subset, again averaged over three repetitions.
5 Related Work
With the successes of machine learning, and particularly deep learning methodologies, in modalities such as imaging, there have been numerous works comparing algorithmic performance to human performance in medical tasks, albeit in frameworks that implicitly interpret automation as success in prediction. In this prediction setting, the general comparison is between the case in which only the algorithm is used and the case in which only human effort is used; such comparisons have been done for chest X-rays [14], for Alzheimer's detection from PET scans [6], and for the setting we consider here based on diabetic retinopathy diagnosis from fundus photographs (and OCT scans) [4, 7]. The recent survey paper by Topol [15] references several additional studies of this kind. A few papers have begun to look at fixed modes of interaction with humans, including processes in which algorithmic outputs are reviewed by physicians [3, 5, 11], as well as fixed combinations of physician and algorithmic judgments [12].
6 Discussion
This work has presented a framework for analyzing automation by algorithms. Rather than treating the introduction of algorithms in an all-or-nothing fashion, we show that stronger performance can be obtained if algorithms are used both (i) for prediction on instances of the problem, and (ii) for providing triage judgments about which instances should be handled algorithmically and which should be handled by human effort. This broader formulation of the automation question highlights the importance of accurately estimating the propensity of both humans and algorithms to make errors on a per-instance basis, and the use of these estimates in an optimization framework for allocating effort efficiently. Analysis of an application in diabetic retinopathy diagnosis shows that this framework can lead to performance gains even for well-studied problems in AI applications in medicine.
Through the analysis of benchmarks for stronger performance, we also highlight how stronger predictions of per-instance error have the potential to yield still better performance. Our findings thus demonstrate how further study of algorithmic triage and its role in allocating human and computational effort has the potential to yield substantial benefits for the task of automation.
Acknowledgements
We thank Vincent Vanhoucke, Quoc Le, Yun Liu and Samy Bengio for helpful feedback.
References
 [1] Haseeb Ahsan. Diabetic retinopathy – biomolecules and multiple pathophysiology. Diabetes and Metabolic Syndrome: Clinical Research and Reviews, pages 51–54, 2015.
 [2] American Academy of Ophthalmology. International Clinical Diabetic Retinopathy Disease Severity Scale Detailed Table.
 [3] Carrie J Cai, Emily Reif, Narayan Hegde, Jason Hipp, Been Kim, Daniel Smilkov, Martin Wattenberg, Fernanda Viegas, Greg S Corrado, Martin C Stumpe, et al. Human-centered tools for coping with imperfect algorithms during medical decision-making. arXiv preprint arXiv:1902.02960, 2019.
 [4] Jeffrey De Fauw, Joseph R Ledsam, Bernardino Romera-Paredes, Stanislav Nikolov, Nenad Tomasev, Sam Blackwell, Harry Askham, Xavier Glorot, Brendan O'Donoghue, Daniel Visentin, et al. Clinically applicable deep learning for diagnosis and referral in retinal disease. Nature Medicine, 24(9):1342, 2018.
 [5] Cem M Deniz, Siyuan Xiang, R Spencer Hallyburton, Arakua Welbeck, James S Babb, Stephen Honig, Kyunghyun Cho, and Gregory Chang. Segmentation of the proximal femur from MR images using deep convolutional neural networks. Scientific Reports, 8(1):16485, 2018.
 [6] Yiming Ding, Jae Ho Sohn, Michael G Kawczynski, Hari Trivedi, Roy Harnish, Nathaniel W Jenkins, Dmytro Lituiev, Timothy P Copeland, Mariam S Aboian, Carina Mari Aparici, et al. A deep learning model to predict a diagnosis of Alzheimer disease by using 18F-FDG PET of the brain. Radiology, 290(2):456–464, 2018.
 [7] Varun Gulshan, Lily Peng, Marc Coram, Martin C Stumpe, Derek Wu, Arunachalam Narayanaswamy, Subhashini Venugopalan, Kasumi Widner, Tom Madams, Jorge Cuadros, Ramasamy Kim, Rajiv Raman, Philip Q Nelson, Jessica Mega, and Dale Webster. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA, 316(22):2402–2410, 2016.
 [8] Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. On calibration of modern neural networks. arXiv preprint arXiv:1706.04599, 2017.
 [9] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
 [10] Jonathan Krause, Varun Gulshan, Ehsan Rahimy, Peter Karth, Kasumi Widner, Gregory S. Corrado, Lily Peng, and Dale R. Webster. Grader variability and the importance of reference standards for evaluating machine learning models for diabetic retinopathy. arXiv preprint arXiv:1710.01711, 2017.
 [11] Yun Liu, Krishna Gadepalli, Mohammad Norouzi, George E Dahl, Timo Kohlberger, Aleksey Boyko, Subhashini Venugopalan, Aleksei Timofeev, Philip Q Nelson, Greg S Corrado, et al. Detecting cancer metastases on gigapixel pathology images. arXiv preprint arXiv:1703.02442, 2017.
 [12] Aniruddh Raghu, Matthieu Komorowski, and Sumeetpal Singh. Model-based reinforcement learning for sepsis treatment. arXiv preprint arXiv:1811.09602, 2018.
 [13] Maithra Raghu, Katy Blumer, Rory Sayres, Ziad Obermeyer, Sendhil Mullainathan, and Jon Kleinberg. Direct uncertainty prediction with applications to healthcare. arXiv preprint arXiv:1807.01771, 2018.
 [14] Pranav Rajpurkar, Jeremy Irvin, Kaylie Zhu, Brandon Yang, Hershel Mehta, Tony Duan, Daisy Ding, Aarti Bagul, Curtis Langlotz, Katie Shpanskaya, Matthew P. Lungren, and Andrew Y. Ng. Chexnet: Radiologist-level pneumonia detection on chest X-rays with deep learning. arXiv preprint arXiv:1711.05225, 2017.
 [15] Eric Topol. High-performance medicine: the convergence of human and artificial intelligence. Nature Medicine, 25:44–56, 2019.
Appendix A Training Data and Model Details
Our training dataset consists of fundus photographs with labels corresponding to individual doctor grades. There are 5 possible DR grades and hence 5 possible class labels. A subset of this data has fundus photographs with more than one doctor grade, corresponding to multiple doctors individually and independently deciding on the grade for the image. The label for these images is not a one-hot class label but the empirical distribution of grades. For example, if an image has the three grades {1, 1, 3}, its label would be (2/3, 0, 1/3, 0, 0).
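A sketch of this soft-label construction, assuming grades are integers 1 through 5:

```python
def label_from_grades(grades, num_classes=5):
    """Empirical distribution over DR grades 1..num_classes as a soft label."""
    label = [0.0] * num_classes
    for g in grades:
        label[g - 1] += 1.0 / len(grades)
    return label
```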
On this data, we train a convolutional neural network: an Inception-v3 model with weights pretrained on ImageNet and a new five-class classification head. We train with the Adam optimizer [9] and an initial learning rate of 0.005. To better calibrate the model, we retrain the very top of the network (from the PreLogits layer) on just the data with two or more doctor grades.
Training Error Probability Prediction Models
For Figures 4, 5 and 6, we use separate error-probability prediction networks to predict the values of the algorithm and human error probabilities, p_alg and p_doc. The setup for predicting p_alg is as follows: after training the main convolutional neural network on the train dataset, we train a small fully connected deep neural network to take the pre-logit embedding of a train image and predict whether or not the main convolutional neural network was correct on that image. The label for the image is binary, agree/disagree: whether the mass put on referable by the convolutional neural network, thresholded at 0.5, matches the referable decision obtained from the human doctor grades, again thresholded at 0.5.
The setup for training the p_doc predictor builds off of [13]. First, we only select cases for which we have at least two doctor grades. For these, we take the image embedding from the pre-logit layer of the large diagnostic convolutional neural network as input, with a binary label as the target. This label is defined as follows: we split the available doctor grades into two evenly sized sets S1 and S2. We aggregate all the grades in S1 into a single referable/non-referable grade by averaging and thresholding at 0.5, and do the same for the grades in S2. If these two aggregated grades agree, we label the image with 1 (agreement, low doctor error probability); if not, we label it with 0 (disagreement, high doctor error probability).
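This labelling procedure can be sketched as follows, assuming an even number of grades and a referable cutoff of grade 3 (the cutoff is our assumption for illustration):

```python
import random

def doctor_agreement_label(grades, rng=random, referable_grade=3):
    """1 if two random halves of the grades agree on referability, else 0.

    Assumes an even number of grades per case. Each half is aggregated
    by averaging its referable indicators and thresholding at 0.5.
    """
    g = list(grades)
    rng.shuffle(g)
    half = len(g) // 2
    s1, s2 = g[:half], g[half:]
    ref = lambda s: (sum(x >= referable_grade for x in s) / len(s)) >= 0.5
    return 1 if ref(s1) == ref(s2) else 0
```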
Appendix B Computing the Calibrated Error Probability
In Section 4.1.1, we overviewed the method used to define a well-calibrated error probability for the output of the convolutional neural network. In Algorithm 1, we give a step-by-step overview of the implementation of this method, along with the parameter values used in our experiments.
Appendix C Triage and Allocation Algorithm
When using triage to reallocate human effort, we first order the instances by their triage scores, and then fully automate the first fraction q of them. On the remaining (1 - q)N images, we allocate the budget B of human doctor grades we have available. To allocate this set of grades, we use the equal coverage protocol: each of the remaining cases gets floor(B / ((1 - q)N)) grades. If this is a non-integer amount, with spare grades left over, the cases identified as the hardest (according to the triage scores) get an additional grade. We then compute the final binary decision by taking the mean grade for each case and thresholding at 0.5 (the majority vote).
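A sketch of this triage-and-allocation protocol. Here we take the "hardest" remaining cases to be those with the lowest triage scores (i.e. where the algorithm is relatively weakest), which is one plausible reading of the protocol:

```python
import math

def allocate(scores, q, budget):
    """Triage plus equal-coverage allocation of doctor grades.

    scores: triage scores (larger => better suited to automation).
    Automates the top-q fraction; spreads `budget` grades over the rest,
    handing spare grades to the lowest-scoring (hardest) remaining cases.
    Returns (automated_indices, grades_per_remaining_index).
    """
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    n_auto = math.floor(q * len(scores))
    automated, remaining = order[:n_auto], order[n_auto:]
    base, spare = divmod(budget, len(remaining))
    grades = {}
    # `remaining` is in descending score order; the hardest cases sit at
    # the end, so hand the spare grades out from the back.
    for rank, idx in enumerate(reversed(remaining)):
        grades[idx] = base + (1 if rank < spare else 0)
    return automated, grades
```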
C.1 Results on Other Thresholds
As described in Section 3.4, at evaluation, for an instance with multiple grades we aggregate all the scores by taking the mean and thresholding. In the main text, we pick this threshold to be 0.5, corresponding to the majority vote of all the grades. In Figure 7, we show the results corresponding to Figure 4 in the main text, but for two alternative aggregation thresholds (top and bottom rows). We see that the qualitative conclusions remain the same – combining human and algorithmic effort beats the full automation and equal coverage protocols, both for triage by the error prediction models and for triage by the ground truth. We also see the same significant gap between ground truth and triage by error predictions.
Note that this choice of threshold affects the choice of the model's operating threshold, which is chosen so that the number of cases marked as referable by the model matches the number of cases marked as referable by the aggregated and thresholded grades of the human doctors, and could therefore potentially affect the results of Figure 6. However, as shown in Figure 8, the choice of aggregation threshold does not affect the identification of zero-error subsets.
Appendix D Triage and Human Effort Reallocation with Model Grades
The triage process for effort reallocation – Figures 4 and 11 – assumes that the algorithm decision is not available for the cases that are not automated. This may be the situation if computing an algorithm decision is expensive (less likely) or (more likely) if the algorithm decision is purposefully not shown in cases where the model is unsure, so as not to bias the human doctors. However, another equally plausible scenario is that the algorithm decision is also available 'for free' for the cases that are not fully automated. In Figure 9, we show the effort-reallocation results from triaging when the model grades are available for all cases (compare to Figure 4 in the main text). We observe that all of the main conclusions hold: the optimal performance comes from a combination of automation and human effort, which beats both full automation and the different equal coverage baselines.
Appendix E Results on Additional Holdout Dataset
The results in the main text are on the adjudicated evaluation dataset, which, aside from multiple independent grades by individual doctors, also has a consensus score, the adjudicated grade, which is used as a proxy for ground truth. To further validate our results, we use an additional holdout set which does not have an adjudicated grade, but does have many individual doctor grades. For each instance, we use half of its grades to compute a proxy ground-truth grade, by aggregating and then thresholding the doctor grades. The other half of the grades are used in effort reallocation and in evaluating the equal coverage baseline. The individual doctor grades in this dataset are slightly noisier (higher disagreement rates) than in the adjudicated evaluation dataset. Nevertheless, this additional evaluation also supports all of the main findings.
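A sketch of this grade-splitting procedure, again assuming a referable cutoff of grade 3 and the majority-vote (0.5) aggregation threshold:

```python
import random

def split_grades(grades, rng=random, referable_grade=3):
    """Use half the doctor grades for a proxy ground truth, keep the rest.

    Returns (proxy_referable, held_out_grades); the held-out grades are
    what remains available for effort reallocation and baselines.
    """
    g = list(grades)
    rng.shuffle(g)
    half = len(g) // 2
    truth_half, effort_half = g[:half], g[half:]
    frac = sum(x >= referable_grade for x in truth_half) / len(truth_half)
    return frac >= 0.5, effort_half
```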
The results on this dataset are qualitatively identical to those with the adjudicated data. Again, we see that there is a diverse spread of the difference between human and algorithm error probabilities across instances, with a substantial fraction of the instances having the human experts perform better (Figure 10).
This diversity continues to be predictable: triaging (by the error prediction models and by the ground truth) to combine human expert effort and the algorithm's decisions (Figure 11) again demonstrates that this combination works better than both full automation and equal coverage – the same conclusions seen in Figure 4 in the main text. As in the main text, we see a gap between triaging by the error prediction models and triaging by the ground-truth score.
Finally, we also test whether triaging can help find zero-error subsets, as in Section 4.3 and Figure 6. We find that this is indeed the case, though the fractions are slightly smaller on this holdout dataset, likely because the labels are noisier than on the adjudicated evaluation dataset.
We also see that the fraction of examples triaged with zero error is slightly lower with the separate error prediction model (Figure 12, right) than with triage by model uncertainty (Figure 12, left). The reason for this becomes apparent on further inspection: the results of Figure 12 are averaged over three independent repetitions of training a main diagnostic model and a corresponding separate error model, and one of the three repetitions of the separate error model triages two examples that turn out to be errors partway through the data, which pulls down the averaged zero-error fraction. If we account for these two errors, triaging by the error prediction model in fact performs comparably to triaging by algorithm uncertainty.