Man vs machine: an experimental study of geosteering decision skills

05/18/2020
by   Sergey Alyaev, et al.
NORCE

With the steady growth of the amount of real-time data while drilling, operational decision-making is becoming both better informed and more complex. Therefore, as no human brain has the capacity to interpret and integrate all decision-relevant information from the data, the adoption of advanced algorithms is required not only for data interpretation but also for decision optimization itself. However, the advantages of automated decision-making are hard to quantify. The main contribution of this paper is an experiment in which we compare the decision skills of geosteering experts with those of an automated decision support system in a fully controlled synthetic environment. The implementation of the system, hereafter called DSS-1, is presented in our earlier work [Alyaev et al., "A decision support system for multi-target geosteering," Journal of Petroleum Science and Engineering 183 (2019)]. For the current study we have developed an easy-to-use web-based platform which can visualize and update uncertainties in a 2D geological model. The platform has both user and application interfaces (GUI and API), allowing us to put human participants and DSS-1 into a similar environment and conditions. The results of comparing 29 geoscientists with DSS-1 over three experimental rounds showed that the automated algorithm outperformed 28 participants. What is more, no expert beat DSS-1 more than once over the three rounds, giving it the best comparative rating among the participants. By design, DSS-1 performs consistently: an identical problem setup is guaranteed to yield identical decisions. The study showed that only two experts managed to demonstrate partial consistency within a tolerance, and they ended up with much lower scores.



1 Introduction

Traditionally, research in geosteering has focused on the interpretation of log measurements. During the last decade there has been steady growth in automated methods for measurement inversion and interpretation, yielding steadily growing amounts of data that need to be handled by the decision makers. This opens the possibility to target oil-bearing zones which were previously not economically viable. At the same time, it also makes decision-making more complex by adding more relevant information to consider and evaluate.

The literature review in Kullawan et al. (2016) showed that there was hardly any prior publication that considered a consistent framework for geosteering decision-making with several objectives. The authors proposed an alternative decision-focused approach. In the last few years, there have been several more attempts to address geosteering as a sequential decision problem. Kullawan et al. (2014) introduced a multi-criteria framework optimized for sequential decisions in geosteering. Veettil et al. (2020) developed a Bayesian estimator of stratigraphy that can be further extended with forward well planning. Chen et al. (2015) and Luo et al. (2015) considered an ensemble-based method for optimization of reactive steering under uncertainty. Kullawan et al. (2018) demonstrated the application of dynamic programming for finding optimal long-term decision strategies for a certain set of geosteering problems. Kristoffersen et al. (2020) proposed an AI-based approach to steering based on the initial field planning. In Alyaev et al. (2019), a simplified dynamic programming algorithm was used in the context of a more general geosteering problem with several targets.

For this study we have developed an easy-to-use web-based platform which can visualize and update a 2D geosteering model (Alyaev et al., 2019). We use the problem set-up of multi-target geosteering (Alyaev et al., 2019) to evaluate the decisions of geosteering experts. The objective expands on the classical steering objective of following the roof of the reservoir by allowing the expert to choose one of two sand layers based on their thickness. Uncertainties in the layer positions and thicknesses are updated automatically using a synthetic electromagnetic (EM) measurement, as described by Chen et al. (2015) and Alyaev et al. (2019).

The main contribution of this paper is the experiment, for which we invited formation evaluation and geosteering experts to compete for the highest well value (an approximation of the Net Present Value, NPV). The purpose of the experiment is to compare the decisions of the experts with those of the fully automated algorithm introduced in Alyaev et al. (2019).

The paper is organized as follows: First, we explain the rules and the setup of the experiment. After that, Section 3 summarizes and compares the results obtained by the experiment participants with the results of the automated system and discusses the pros and cons of automated decision-making. Finally, the findings of the paper and further perspectives on decision making are summarized in Section 4.

2 The Structure of the Experiment

To evaluate the decision-making strategies of the experts, we developed a simplified online Decision Support User Interface (DSUI). The online mobile application (Alyaev et al., 2019) gives the contestants the same information that a geosteering decision algorithm would receive, presented in a user-friendly Graphical User Interface (GUI); see Figure 1.

Figure 1: The DSUI running on a mobile phone during the experiment.

2.1 Objective

The objective of the decision-making task is to make landing and steering decisions in a multi-layer geological setting. The pre-drill model contains 5 alternating layers: shale-sand-shale-sand-shale. The goal is to maximize an approximate NPV of the well. This is done by landing in and staying near the roof of a sand layer while weighing layer thickness against drilling costs. More specifically, the objective score is calculated by the following rules (a code sketch follows the list). The participant gets:

  • points for every meter drilled inside a sand layer (along the X-axis), proportional to the thickness of that layer

  • bonus points for every meter drilled in the sweet spot near the roof (0.5 m to 1.5 m from the top boundary of a sand layer)

  • negative points for every meter drilled, representing the drilling cost
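The inline scoring formulas were lost in this rendering of the paper, so the sketch below uses hypothetical constants (SAND_FACTOR, SWEET_SPOT_BONUS, DRILL_COST) in place of the original values; it illustrates the structure of the objective, not the exact scoring used in the experiment.

# A minimal sketch of the per-meter objective score; the constants are
# hypothetical stand-ins for the values stripped from this rendering.
SAND_FACTOR = 1.0       # points per meter in sand, scaled by layer thickness
SWEET_SPOT_BONUS = 2.0  # extra points per meter in the sweet spot
DRILL_COST = 0.5        # cost per meter drilled

def score_segment(in_sand: bool, layer_thickness: float,
                  depth_below_roof: float, length: float) -> float:
    """Score one drilled segment of lateral length `length` meters."""
    score = -DRILL_COST * length                 # every meter has a cost
    if in_sand:
        score += SAND_FACTOR * layer_thickness * length
        if 0.5 <= depth_below_roof <= 1.5:       # sweet spot near the roof
            score += SWEET_SPOT_BONUS * length
    return score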

An example of synthetic truth and a possible steering trajectory is shown in Figure 2.

Figure 2: An example of the synthetic truth and a possible steering trajectory. The highlighted part of the trajectory gives a positive score.

2.2 Uncertainty

Like all decisions, geosteering decisions are made under uncertainty. The main uncertainty during geosteering is the lack of complete knowledge of the geology through which the well will be drilled. We represent this uncertainty using an ensemble of 120 realizations of the layered geology. The users can view an overprint of the ensemble, which displays the location uncertainty of the (white) sand layers (Figure 3).

Figure 3: Uncertainty is represented by an ensemble of realizations and visualized as an overprint of all ensemble members.
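A minimal sketch of how such an ensemble can be represented, assuming each realization stores the depths of the four layer boundaries along the lateral direction; the grid, noise levels, and parameterization are illustrative, not those of the actual platform.

import numpy as np

N_REALIZATIONS = 120
x_grid = np.linspace(0.0, 1000.0, 101)   # lateral positions, meters

rng = np.random.default_rng(0)
# boundaries[i, j, k]: realization i, boundary j (4 boundaries bound the
# 5 shale-sand-shale-sand-shale layers), lateral position k.
base_depths = np.array([100.0, 105.0, 112.0, 118.0])
boundaries = (base_depths[None, :, None]
              + rng.normal(0.0, 2.0, size=(N_REALIZATIONS, 4, 1))
              + np.cumsum(rng.normal(0.0, 0.2,
                                     size=(N_REALIZATIONS, 4, len(x_grid))),
                          axis=2))
boundaries = np.sort(boundaries, axis=1)  # keep boundaries ordered in depth

# The "overprint" display is simply all realizations drawn with low opacity:
# agreement shows as sharp bands, disagreement as blur.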

2.3 Decision steps

Each round of the competition consists of at most 14 geosteering decisions, each being either a change in the direction of the next drill-stand or a stopping decision (Figure 4).

Figure 4: A user interface showing controls for steering and steering limits. The ellipses represent decision points and their size indicates the look around of the EM tool. The orange part of trajectory is already drilled; the red part is the next decision to commit to; the blue part is the plan ahead. The yellow ellipse shows the selected point at which the steerer can adjust the dip. The selection can be moved by the buttons.

Until the well is finalized, the steerer can plan the entire well ahead by changing the dip of the well at the decision points (ellipses in Figure 4). Alternatively, the contestant can decide to stop drilling at any of the points. The latter might be optimal if, for example, the well has entered the underburden.

At each decision point the participant must commit to a decision: either adjust the dip for the next drill-stand or stop drilling. A stopping decision implies that the well is finalized and no further drilling steps can be taken.
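Schematically, a round is a short sequential decision loop. The sketch below assumes a hypothetical choose_action policy and advance/update_uncertainty callbacks; these names are illustrative and not part of the platform's actual API.

from dataclasses import dataclass, field

MAX_DECISIONS = 14

@dataclass
class WellState:
    trajectory: list = field(default_factory=list)  # committed points
    dip: float = 0.0                                # current dip angle
    finalized: bool = False

def drill_well(choose_action, advance, update_uncertainty):
    state = WellState()
    for _ in range(MAX_DECISIONS):
        action = choose_action(state)   # ("steer", new_dip) or ("stop",)
        if action[0] == "stop":
            state.finalized = True      # stopping is irreversible
            break
        state.dip = action[1]           # commit dip for the next drill-stand
        advance(state)                  # drill one stand along the new dip
        update_uncertainty(state)       # assimilate new EM data (Section 2.5)
    return state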

2.4 Tools to make informed decisions

To aid their decision-making, the contestants are presented with visual decision support tools in the DSUI. The DSUI dynamically updates the uncertainty as the well is drilled and helps to estimate the well value.

Once a well trajectory is planned, it can be evaluated using the scoring function with respect to the current understanding of uncertainty (the ensemble). The results of this evaluation are summarized in a bar diagram, as shown in Figure 5. The diagram shows a cumulative density based on the 120 ensemble members (light blue). The results are grouped into percentiles of value (P10 to P90), shown in dark blue. The interface also shows the percentile values from the previous evaluation as gray bars in the background (Figure 5).

Figure 5: A score distribution diagram shown to a user based on current set of realizations representing the uncertainty.

The percentile / cumulative density diagram is interactive. The user can select a percentile to see the subset of realizations that give the selected value range, e.g. between P60 and P70 (Figure 6).

Figure 6: An example of selecting subset of realizations using the interactive score distribution diagram.
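A sketch of the computation behind Figures 5 and 6, assuming a stand-in evaluate_well function that scores one planned trajectory against one realization:

import numpy as np

def score_distribution(evaluate_well, trajectory, realizations):
    """Evaluate a planned trajectory against every ensemble member and
    summarize the resulting value distribution as P10..P90."""
    values = np.array([evaluate_well(trajectory, r) for r in realizations])
    percentiles = np.percentile(values, np.arange(10, 100, 10))  # P10..P90
    return values, percentiles

def select_band(values, realizations, lo_pct, hi_pct):
    """Subset of realizations whose value falls in a percentile band,
    e.g. between P60 and P70 as in Figure 6."""
    lo, hi = np.percentile(values, [lo_pct, hi_pct])
    return [r for v, r in zip(values, realizations) if lo <= v <= hi]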

2.5 Automatic update of uncertainty

The original ensemble is based on a prior distribution which is also used to generate the synthetic truth in the experiment. The ensemble of realizations is updated following each decision using the Ensemble Kalman Filter (EnKF) algorithm described in Chen et al. (2015). (The EnKF is a Monte-Carlo, i.e. discrete, approximation of the Kalman Filter; it approximates a Bayesian update with Gaussian priors and likelihoods.) For the update we use measurements produced by a synthetic EM tool which is located at the drill-bit and has a look-around capability of +/- 4.8 meters. The system performs one update between decision points, using measurements at three equally distributed locations.
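A minimal stochastic-EnKF analysis step in the standard textbook form is sketched below; Chen et al. (2015) describe the specific variant used by the platform, so this is an illustration of the update, not the production code.

import numpy as np

def enkf_update(X, d, H, R, rng=None):
    """One stochastic-EnKF analysis step.
    X: (n_state, n_ens) prior ensemble of geology parameters;
    d: (n_obs,) observed synthetic EM data;
    H: callable mapping one state vector to predicted measurements;
    R: (n_obs, n_obs) measurement-error covariance."""
    rng = rng if rng is not None else np.random.default_rng()
    n_ens = X.shape[1]
    Y = np.column_stack([H(X[:, j]) for j in range(n_ens)])  # predicted data
    A = X - X.mean(axis=1, keepdims=True)                    # state anomalies
    B = Y - Y.mean(axis=1, keepdims=True)                    # data anomalies
    C_xy = A @ B.T / (n_ens - 1)       # state-data covariance
    C_yy = B @ B.T / (n_ens - 1) + R   # innovation covariance
    K = C_xy @ np.linalg.inv(C_yy)     # Kalman gain
    # Perturb the observations so the updated ensemble keeps correct spread.
    D = d[:, None] + rng.multivariate_normal(np.zeros(len(d)), R, n_ens).T
    return X + K @ (D - Y)             # updated (posterior) ensemble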

2.6 Rounds of the experiment

The experiment was carried out in a plenary session at a workshop. After the presentation of the rules summarized above, the contestants had a practice period of 15 minutes to familiarize themselves with the DSUI and the rules. During this period a DSUI expert demonstrated the features of the user interface and a possible geosteering strategy on a big screen.

Following the demonstration, there were three scoring rounds of approximately 6 minutes each. All rounds had an identical ensemble of starting realizations but a different synthetic truth unknown to the participants. To evaluate the consistency of decisions, the truths were chosen as follows:

  • Round 1) The bottom layer was optimal

  • Round 2) The top layer was optimal

  • Round 3) Identical to Round 2, to allow comparison of the consistency of contestants’ decisions under the same conditions.

The synthetic truths, as well as the optimal solutions computed by deterministic optimization on each synthetic truth, are shown in Figure 7.

3 Results and discussion

The results presented here are based on a competition (experiment) held as a plenary session of the biannual Formation Evaluation and Geosteering Workshop 2019, organized by NFES and NORCE in Stavanger, Norway (NORCE, 2019). Out of the 75 workshop participants, 55 participated in all three rounds. The wells 'drilled' by all the participants are compared with the optimal trajectory in Figure 7. A fraction of the 55 participants did not reach any of the sand layers or were affected by software issues. For fairness, we disregard them from the results and consider the remaining 30 participants, whom we call qualified participants. Among the qualified participants was the fully automated decision support system, referred to as DSS-1 for the rest of the paper. DSS-1 is based on the variation of the algorithm with a discount factor presented in Alyaev et al. (2019).

In the rest of this section, we first present the methodology for decision analysis. Equipped with this methodology, we discuss the results of the experiment in terms of decision outcomes (total score), quality of decisions, and consistency of decisions across the two identical rounds. We specifically highlight the comparison between human participants (HPs) and DSS-1.

3.1 Decision analysis methodology

Before presenting and discussing the results from this empirical study, we need to provide the basis for how the results should be evaluated. Decision analysis clearly lays out the four elements of rational decision-making (Bratvold and Begg, 2010; Abbas and Howard, 2015).

The first element is information, or “What do I know about the problem under consideration?” An important component of this knowledge is the determination of values or “What do we want to achieve with this decision?” In our study this was specified through the objective, or scoring, function discussed earlier.

The second element is alternatives, or “What courses of action are open to us?” For the geosteering problem discussed here, the alternatives are: continue in the current direction, steer (build-up or drop) or stop.

The third element is an assessment of uncertainty, or "What don't I know?" In this case we are uncertain about the geology ahead of the drill-bit.

Finally, there is logic, or “How do we put knowledge, alternatives, and values together to arrive at a decision?”

Given these elements, we can now characterize a good decision as one that is logically consistent with the alternatives, information, and values brought to the decision. In the decision-analysis process, the outcome of a single decision indicates neither the quality of that decision nor of the decision strategy. That is, given the uncertainty, a good decision may lead to a bad outcome and vice versa.

Decision-making under uncertainty, sometimes referred to as robust optimization (Chen et al., 2015), will normally not lead to the same decision that would have been made if the geological truth were known at the time of the decision. Such an optimization results in a "robust" decision given the current uncertainty and incomplete information. The robustness is understood in terms of the decision's ability to cope with the uncertainty.

A decision-analysis framework allows us to identify good decisions before knowing their outcomes by recording the principal inputs of the decision-making process. To that end, the decision strategy of DSS-1 is designed on the principles of robust optimization, which are known to lead to better decisions under uncertainty. The decision strategies of human participants, in contrast, are impossible to deduce from decision data alone. Therefore, the ambition of the experiment is to assess the decision strategies based on the outcomes of the series of decisions over several similar rounds.

3.2 Analysis of the experimental results

Round 1

Round 2

Round 3

Figure 7: The well trajectories drilled by participants in each of the three rounds (including non-qualified participants). The highlighted trajectories show: the top participant result; the median participant result; the DSS-1 result; and the solution obtained by optimization assuming perfect information (possible maximum). For the qualified participants, the participant ID is shown in parentheses.

In this study the human decision makers were presented with the same objective, the same alternatives, and the same geological information as the fully automated system. Clearly, the study participants could do better than an automated system if they possessed geological knowledge, or logic, over and beyond what is built into the automated system. That may well be the case in a real-world geosteering context, but in this controlled experiment it should not be possible. Thus, we put the decision makers and the system in equal conditions in terms of information availability.

As the outcome of every decision is a result of both skill and chance, it is quite possible for a geosteerer of below-average skill to achieve good results over the 14 decisions made for one well. However, in the long run, the decision outcomes should be representative of the decision quality of each participant. To reduce the influence of chance, the experiment included a training round and three qualifying rounds.

3.2.1 Simple ranking

There is no unique method to assess the results of a competition over several distinct rounds, as it requires scaling the results by a chosen metric. As a primary and simple metric, we used the percentage of the maximal possible result for each round, averaged over the rounds to give the final ranking (a code sketch follows). The 100% result is obtained by discrete optimization on the synthetic truth for each round and thus represents a close approximation of the theoretical maximum. The scoreboard of the experiment, as well as the details, are presented in the appendix (Section 6). The human participants are identified as HP-n, where n is their rank (1 to 30) according to this metric. The fully automated decision system performed better than 93% of the participants, placing 2nd among the 30 qualified participants. This high ranking should not be surprising given the earlier discussion in this subsection.
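In code, the simple ranking reduces to averaging each participant's per-round percentage of the round optimum; a minimal sketch:

def simple_ranking(scores, optima):
    """scores: {participant: [round-1..3 scores]}; optima: per-round optima
    obtained by discrete optimization on the synthetic truth."""
    mean_pct = {p: sum(s / o for s, o in zip(ss, optima)) / len(optima)
                for p, ss in scores.items()}
    # Final ranking: highest mean percentage first.
    return sorted(mean_pct.items(), key=lambda kv: kv[1], reverse=True)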

3.2.2 Comparative ranking

Another possibility to compare the results of different rounds is to consider the ranking within the population. The rank in the population is the position of the participant in the round among the other participants, relative to the total size of the population. In the case of our experiment, we take advantage of the two identical rounds and arrive at rank*, common for rounds 2 and 3 (sketched in code below). This type of ranking for the top 11 participants is shown in Figure 8.
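A sketch of the rank* computation (detailed in the appendix): each round-2 or round-3 result is placed among all 60 pooled results, and the placement is scaled back to the 1-30 participant range. The scaling below reproduces the endpoints in Listing 2 (best result rank* 1.0, worst 30.5); the exact tie-handling is our assumption.

def rank_star(result, pooled_results):
    """Comparative rank of one result among all 60 results from the two
    identical rounds, scaled to the 1-30 range (hence fractional values)."""
    position = 1 + sum(r > result for r in pooled_results)  # placement, 1..60
    return (position + 1) / 2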

While DSS-1 was second in the simple ranking discussed previously (see Listing 1), it takes the top position in this alternative ranking. The simple ranking is highly influenced by the results of a single round: HP-01 got a near-perfect score (92%) in round 1, making him the top simple-ranked participant. At the same time, neither HP-01 nor any other participant beat DSS-1 in more than one round, bringing DSS-1 to the top of the comparative ranking.

One can argue that the comparative ranking is more objective, as it reduces the influence of chance (luck) in each single round. Another interesting observation is that, among the top 10, the relative placement of the human participants did not change compared to the simple ranking. This might be attributed to a similar influence of chance on the decisions of all HPs.

Another advantage of the comparative ranking is that it allows us to see learning, if any, over the course of the experiment's rounds. Learning is a sign of improving skill, which is required for good decisions.

Figure 8 uses the single ranking for rounds 2 and 3, which allows us to directly compare the improvement of HPs against themselves. From the plots one can see that, of the top 11 participants, only two improved from round 2 to round 3. Both of the improved participants had a very low result in the first round. This indicates that in such a short experiment, learning beyond the skill level required to be in the top 5 is hard to achieve.

Figure 8: The comparative ranking of the top participants over the three rounds of the experiment. Rounds 2 and 3 followed an identical setup, which enabled us to derive a rank that includes participants' results from both rounds (60 results). The rank* is derived by scaling this rank to 30 participants, resulting in fractional values. The mean rank is the rank (1 to 30) based on the mean of the three rounds. For convenience this figure uses the same color-coding as Figure 9.

3.3 Consistency of decision strategies

Figure 9: Consistency of decisions when performing the same task for all the participants: DSS and Human Participants (HPs).


Figure 10: Examples of trajectories drilled by participants over the three rounds. Rounds 2 and 3 have the same test set-up, where the best solution was to drill into the top layer. Round 1 has a different test, where the best solution was to drill into the bottom layer. Consistent (predictable) decision-making results in identical or similar trajectories for rounds 2 and 3. Each plot shows the best participant in its consistency group (see Figure 9): a. absolutely consistent; b. consistent; c. relatively consistent; d. other.

In general, human beings are far less consistent in their decision-making than an automated DSS built on the principles of decision analysis (see e.g. Kahneman (2011)). One of the advantages of an automated decision-making system is repeatability and hence predictability. That is, the system is guaranteed to make the same decisions given the same input parameters.

The experiment allowed us to test the extent to which the HPs were consistent in their decision-making by comparing their decisions in the identical rounds 2 and 3. Figure 10 shows the trajectories drilled by several participants with different levels of consistency (a classification sketch in code follows the list):

  • DSS-1, which produces identical trajectories for identical set-ups.

  • Consistent users, for whom the distance between trajectories for the same set-up was less than 0.5 meters on average.

  • Relatively consistent users, for whom the consistency was worse than for consistent users, but the distance between the trajectories in the same test was at least σ lower than the distance between different tests. Here σ is the standard deviation of the average distance between the pairs of trajectories based on all combinations of rounds 1-3.

  • Other users for whom the consistency for the same set-ups was not observed.
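A sketch of the consistency classification defined by the list above, assuming trajectories are sampled at common lateral positions so that pointwise vertical distances are well defined:

import numpy as np

def avg_distance(traj_a, traj_b):
    """Mean vertical distance between two trajectories (same X sampling)."""
    return float(np.mean(np.abs(np.asarray(traj_a) - np.asarray(traj_b))))

def classify(traj_round1, traj_round2, traj_round3):
    same = avg_distance(traj_round2, traj_round3)   # identical set-ups
    cross = [avg_distance(traj_round1, traj_round2),
             avg_distance(traj_round1, traj_round3)]
    # sigma: std of the average distances over all pairs of rounds 1-3.
    sigma = float(np.std(cross + [same]))
    if same == 0.0:
        return "absolutely consistent"   # e.g. DSS-1
    if same < 0.5:
        return "consistent"
    if same <= min(cross) - sigma:
        return "relatively consistent"
    return "other"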

All the results, arranged by level of consistency, are shown in Figure 9. For comparison, the figure shows a gray area corresponding to selecting the optimal layer purely by chance. (By guessing, one has a 50% chance to aim for the optimal layer in each round; given three rounds, a participant has a 1/8 chance to aim for the optimal layer all three times. Thus, if all 30 participants used no relevant knowledge and tried to land and drill in a randomly chosen layer, about four of them (30 × 1/8 = 3.75) should have selected the correct layer in all three rounds.) From Figure 9, the number of consistent users is lower than expected from random guessing alone. The number including the relatively consistent users is still within the possible error, given the relatively small number of total participants.

Another important observation from Figure 9 is that, even when compared to only the consistent users, DSS-1 achieved the best results by far. From this we can conclude that, for the HPs, the strategies that scored highest involved chance (early betting on which layer to land in) rather than skill (a consistent strategy that takes into account the data and evidence early in the process of steering).

4 Conclusions

In this paper we have presented a web-based platform which provides users with the opportunity to perform assisted decision-making under uncertainty. The uncertainty is represented by an ensemble of realizations which serves two purposes. Firstly, it enables a novel visualization capability which shows how the geometric uncertainty relates to the expected value of the planned well. Secondly, the system takes advantage of the EnKF to assimilate the data along the selected well path. To our knowledge this experiment platform is novel in the geosteering context, as it provides the possibility to compare the results of decision-making by human experts with automated algorithms.

For this study, the platform was used to perform an experiment which pitted 29 geoscientists against the DSS algorithm from Alyaev et al. (2019). The results show that DSS-1 outperformed all but one qualified participant in terms of relative well value (doing better than 93% of the participants). What is more, no participant beat DSS-1 more than once over the three rounds, giving it the best comparative rating among the participants.

Moreover, the decision recommendations of DSS-1 are consistent; that is, an identical problem setup is guaranteed to yield identical decisions. None of the human participants managed to achieve perfect consistency, and only two experts were consistent within the specified tolerance. However, these experts achieved much lower scores relative to the automated system. Also noteworthy is that the number of experts who followed a consistent strategy is relatively low; given the limited number of trials, their consistency can also be a result of educated guessing.

The presented study highlights, by the example of DSS-1, the advantages of decision support systems which can aid geoscientists with complex operational decisions. We see further use of the DSUI framework as a benchmark which will aid the training of people and the development of algorithms.

5 Acknowledgments

We thank Robert Ewald for help with deployment of the experiment platform.

Funding: This work was supported by the research project ’Geosteering for IOR’ (NFR-Petromaks2 project no. 268122) which is funded by the Research Council of Norway, Aker BP, Equinor, Vår Energi and Baker Hughes Norway.

References

  • A. E. Abbas and R. A. Howard (2015) Foundations of Decision Analysis. Pearson Higher Ed. Cited by: §3.1.
  • S. Alyaev, M. Bendiksen, A. Holsaeter, S. Ivanova, and R. Ewald (2019) Geosteering game the NORCE way. Cited by: §1, §2.
  • S. Alyaev, E. Suter, R. B. Bratvold, A. Hong, X. Luo, and K. Fossum (2019) A decision support system for multi-target geosteering. Journal of Petroleum Science and Engineering 183, pp. 106381. Cited by: §1, §3, §4.
  • R. Bratvold and S. Begg (2010) Making Good Decisions. Society of Petroleum Engineers. Cited by: §3.1.
  • Y. Chen, R. J. Lorentzen, and E. H. Vefring (2015) Optimization of well trajectory under uncertainty for proactive geosteering. SPE Journal 20 (02), pp. 368–383. Cited by: §1, §2.5, §3.1.
  • D. Kahneman (2011) Thinking, Fast and Slow. Farrar, Straus and Giroux, New York. Cited by: §3.3.
  • B. S. Kristoffersen, M. C. Bellout, and C. F. Berg (2020) Automatic well planner in well placement optimization. In preparation. Cited by: §1.
  • K. Kullawan, R. Bratvold, and J. E. Bickel (2016) Value creation with multi-criteria decision making in geosteering operations. International Journal of Petroleum Technology 3 (1), pp. 15–31. Cited by: §1.
  • K. Kullawan, R. B. Bratvold, and J. E. Bickel (2018) Sequential geosteering decisions for optimization of real-time well placement. Journal of Petroleum Science and Engineering 165, pp. 90–104. Cited by: §1.
  • K. Kullawan, R. Bratvold, and J. E. Bickel (2014) A decision analytic approach to geosteering operations. SPE Drilling & Completion 29 (01), pp. 36–46. Cited by: §1.
  • X. Luo, P. Eliasson, S. Alyaev, A. Romdhane, E. Suter, E. Querendez, and E. Vefring (2015) An ensemble-based framework for proactive geosteering. In SPWLA 56th Annual Logging Symposium, July 18-22, 2015, pp. 1–14. Cited by: §1.
  • NORCE (2019) Formation Evaluation and Geosteering Workshop 2019. Cited by: §3.
  • D. R. A. Veettil, K. Clark, et al. (2020) Bayesian geosteering using sequential Monte Carlo methods. Petrophysics 61 (01), pp. 99–111. Cited by: §1.

6 Appendix

The appendix provides the detailed scoreboard for all qualified participants; see Listing 1. The scoreboard also contains participant HP-2a*, who scored above DSS-1 but experienced technical issues, which excluded him from the qualified participants. As HP-2a* was not considered for ranking, no rank information is available for this participant.

For completeness, we include Listing 2, which provides the breakdown of the individual rounds that forms the basis for the scoring in Listing 1. For the comparative ranking, we take advantage of the fact that rounds 2 and 3 were identical and combine them into a single rating, marked as rank*. Rank* is computed as the placement of the participant's result among all 60 results of the 30 participants in the two rounds, and is then scaled to the range between 1 and 30. Thus the two rank* columns contain fractional numbers.

              Score    Score    Mean
Place     ID  value  percent    rank
    1. HP-01   6112    78.6%   r:  2
  n/a. HP-2a*  5720    73.6%   r:n/a
    2. DSS-1   5426    72.5%   r:  1
    3. HP-03   5294    69.4%   r:  3
    4. HP-04   4983    66.8%   r:  4
    5. HP-05   4925    65.8%   r:  5
    6. HP-06   4613    64.3%   r:  6
    7. HP-07   4816    64.1%   r:  7
    8. HP-08   4686    63.1%   r:  8
    9. HP-09   4608    60.9%   r:  9
   10. HP-10   4549    59.5%   r: 10
   11. HP-11   4364    57.6%   r: 13
   12. HP-12   4055    54.3%   r: 20
   13. HP-13   4087    54.0%   r: 21
   14. HP-14   3829    53.9%   r: 11
   15. HP-15   4022    53.6%   r: 22
   16. HP-16   3749    51.7%   r: 17
   17. HP-17   3588    49.9%   r: 19
   18. HP-18   3984    49.8%   r: 16
   19. HP-19   3789    49.7%   r: 12
   20. HP-20   3509    49.6%   r: 15
   21. HP-21   3460    49.1%   r: 18
   22. HP-22   3828    49.0%   r: 14
   23. HP-23   3384    47.0%   r: 24
   24. HP-24   3514    46.0%   r: 25
   25. HP-25   3129    45.2%   r: 23
   26. HP-26   3148    44.1%   r: 27
   27. HP-27   3091    43.4%   r: 29
   28. HP-28   2525    37.2%   r: 28
   29. HP-29   2919    35.8%   r: 26
   30. HP-30   2691    34.9%   r: 30
Listing 1: The scoreboard of the competition results
       Round 1        Round 2           Round 3
   ID    score  rank    score    rank*    score    rank*
HP-01  1:  92%  r: 1  2:  75%  r*: 4.0  3:  68%  r*: 9.0
DSS-1  1:  62%  r: 4  2:  78%  r*: 2.0  3:  78%  r*: 2.0
HP-03  1:  70%  r: 3  2:  76%  r*: 3.5  3:  62%  r*:14.5
HP-04  1:  55%  r:12  2:  74%  r*: 5.0  3:  71%  r*: 7.0
HP-05  1:  56%  r:10  2:  71%  r*: 6.5  3:  70%  r*: 7.5
HP-06  1:  33%  r:20  2:  73%  r*: 6.0  3:  87%  r*: 1.0
HP-07  1:  57%  r: 9  2:  75%  r*: 4.5  3:  61%  r*:16.5
HP-08  1:  50%  r:14  2:  73%  r*: 5.5  3:  66%  r*:11.0
HP-09  1:  58%  r: 7  2:  66%  r*:10.0  3:  59%  r*:18.0
HP-10  1:  61%  r: 5  2:  52%  r*:26.0  3:  65%  r*:12.5
HP-11  1:  55%  r:11  2:  61%  r*:15.5  3:  56%  r*:20.5
HP-12  1:  45%  r:18  2:  52%  r*:25.5  3:  66%  r*:11.5
HP-13  1:  51%  r:13  2:  54%  r*:24.0  3:  58%  r*:18.5
HP-14  1:  24%  r:24  2:  60%  r*:17.0  3:  77%  r*: 3.0
HP-15  1:  47%  r:17  2:  54%  r*:24.5  3:  60%  r*:17.5
HP-16  1:  31%  r:21  2:  55%  r*:23.0  3:  69%  r*: 8.5
HP-17  1:  27%  r:22  2:  57%  r*:19.5  3:  65%  r*:12.0
HP-18  1:  71%  r: 2  2:  21%  r*:30.0  3:  57%  r*:19.0
HP-19  1:  50%  r:15  2:  79%  r*: 1.5  3:  21%  r*:30.5
HP-20  1:  20%  r:26  2:  62%  r*:14.0  3:  66%  r*:10.5
HP-21  1:  18%  r:28  2:  62%  r*:15.0  3:  67%  r*: 9.5
HP-22  1:  59%  r: 6  2:  64%  r*:13.0  3:  24%  r*:29.0
HP-23  1:  25%  r:23  2:  55%  r*:23.5  3:  61%  r*:16.0
HP-24  1:  47%  r:16  2:  35%  r*:27.5  3:  56%  r*:21.5
HP-25  1:  11%  r:29  2:  56%  r*:22.0  3:  69%  r*: 8.0
HP-26  1:  21%  r:25  2:  56%  r*:21.0  3:  55%  r*:22.5
HP-27  1:  20%  r:27  2:  53%  r*:25.0  3:  57%  r*:20.0
HP-28  1:   3%  r:30  2:  45%  r*:26.5  3:  63%  r*:13.5
HP-29  1:  57%  r: 8  2:  27%  r*:28.5  3:  23%  r*:29.5
HP-30  1:  38%  r:19  2:  31%  r*:28.0  3:  35%  r*:27.
Listing 2: The detailed results of the competition rounds