Learning and Recognizing Archeological Features from LiDAR Data

by Conrad M. Albrecht et al.
Colorado State University

We present a remote sensing pipeline that processes LiDAR (Light Detection And Ranging) data through deep learning for the application of archeological feature detection on big geospatial data platforms such as IBM PAIRS Geoscope. Today, archeologists are overwhelmed by the task of visually surveying huge amounts of (raw) LiDAR data in order to identify areas of interest for inspection on the ground. We showcase a software pipeline that yields significant savings in terms of expert productivity while missing only a small fraction of the artifacts. Our work employs artificial neural networks in conjunction with an efficient spatial segmentation procedure based on domain knowledge. Data processing is constrained by a limited amount of training labels and noisy LiDAR signals due to vegetation cover and decay of ancient structures. We aim at identifying geospatial areas with archeological artifacts in a supervised fashion, allowing the domain expert to flexibly tune parameters based on their needs.





I Introduction & Motivation

New remote sensing technologies such as LiDAR, cf. [3] and references therein, are revolutionizing many industries and fields, among them archaeology, by providing rapid, high-resolution scans of topography which may reveal, e.g., the existence of ancient cities and landscapes. The problem is that, given the cost- and labor-intensive nature of traditional methods, archaeologists cannot effectively analyze these datasets. Via fieldwork and manual mapping, they exploit their domain knowledge to recognize human artifacts and classify them as houses, temples, walls, streets, and other elements of past human settlements.

LiDAR has the capacity to scan and map hundreds of square kilometers in a significantly shorter time compared to traditional archaeological fieldwork. These LiDAR datasets are delivered as a cloud of points with 3D information. The task of manual and computer-aided extraction and mapping of ground features for analytical purposes is a significant challenge and has attracted attention in the archeological and remote sensing literature since about the mid-2000s: [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16].

In this paper we describe how techniques from computer vision and machine learning provide pipelines, implemented on top of geospatial big data platforms such as IBM PAIRS Geoscope (accessible via https://ibmpairs.mybluemix.net; an open-source Python API wrapper is available at https://pypi.org/project/ibmpairs, with code development on GitHub at https://github.com/IBM/ibmpairs, and an Anaconda package has recently been published at https://anaconda.org/conda-forge/ibmpairs), to accelerate the archeologist's work. Using LiDAR data from the ancient Purepecha city of Angamuco in Mexico [17], we exploit domain knowledge to learn and recognize promising ancient artifacts in this city, such as houses. This allows us to more effectively identify areas of interest to the archeologists and to automatically classify and localize ancient, human-made artifacts.

We filter the data according to the archeologists' domain knowledge on size, shape, and similar features. We then pass these shapes, in the form of an image bounded by an (almost) minimum bounding box, to a machine learning classifier that recognizes whether they are human artifacts. The classifier follows an ensemble methodology based on a set of trained VGG artificial neural networks [18], using manually annotated images provided by the archeologists. The experimental results show that this approach is accurate and flexible with respect to the archeologist's needs.

Fig. 1: Overview of the archeological classification scheme with respect to ancient buildings.

II Domain Knowledge

We consider data from the ancient city of Angamuco, Mexico, where the archeologists have already performed extensive fieldwork, manually recognizing and mapping local features into specific types of human-made artifacts: houses, walls, temples, pyramids, etc. We use these labels to train and test deep learning classifiers. We also exploit the archeologists' domain knowledge of this area and of the ancient Purepecha civilization, which dominated Western Mexico during the last few centuries prior to the European Conquest, to guide us in our process of automating feature extraction and recognition from LiDAR data.

The following is a brief description of the kinds of structures present in the area, summarized in Fig. 1.

  • Linear features consist of single elements (e.g. walls or roads) that are longer than they are wide; their height consists of two rock courses, which translates on average to 0.4–0.5 m. Double linear features usually consist of more than one linear feature aggregated into a structure. For example, an “L”-shaped building consists of two linear features which create an internal space/room on the inside. Triple linear features consist mostly of building walls (three walls) with an entrance (e.g., a house with entrance or a “U” building). Finally, a structure composed of four linear features (with no entrance) constitutes a room.

  • Circular features can consist of mounds or circular rooms (that is, a linear feature with a circular geometry). Sometimes pyramids contain a circular or semi-circular sub-element (mostly to the front of the feature). These features are most often placed facing a plaza (sunken or open).

  • Rectangular features consist of platforms (with well-defined corners), sunken plazas, and open plazas (formed by other features), defined by the empty space formalized by such features.

  • Square features consist of rooms (with and without entrance) and enclosures, which have the same shape as a room but are larger in area, enclosing a space.

Fig. 2: Classified LiDAR point cloud and derived digital elevation model (DEM).

III Data Segmentation: From LiDAR Measurements to Images

The raw data used to start our analysis is a set of geo-referenced elevation points derived from the physical LiDAR measurements,

(x, y, z),

where x and y represent geo-location information, cf. longitude and latitude [19], and z the measured elevation. For each coordinate pair (x, y) there might exist many elevation measurements z due to multiple returns of the LiDAR laser pulse from, e.g., vegetation. In contrast, for solid surfaces such as streets there is a unique z. Since other surfaces such as water strongly absorb laser light, there might not even exist a single z. Hence, the points (x, y, z) form an irregular grid of data points.

Fig. 3: Left: the local DEM we derive, including manual annotations (yellow) of human-made structures that occupy an area equal to or larger than 20 square meters. Right: the contours (black) we employ to define the minimum bounding boxes (MBBs, blue) for image cropping. These images are then fed to the deep learning classifier. The corresponding labeling procedure is depicted in Fig. 7.

In a first step we classify bare-ground data points, representing the earth's surface or the top of a foundation, in order to interpolate them to a regular grid, a Digital Elevation Model (DEM). We apply the simple transformation

z(x, y) = min_i z_i(x, y),

i.e. for each coordinate pair we keep the lowest of the recorded returns as bare ground. A nearest-neighbor interpolation is used to convert the z(x, y) to a regular grid with a spatial resolution of about 0.3 meters (this roughly corresponds to the average density of the LiDAR data scan); see Fig. 2.

Our approach does not filter outliers, since there exist return series where the last return does not represent bare ground. As we will see, our processing pipeline applies thresholding that naturally cuts off those peaks. To improve the image quality one could apply classical image filters such as a linear Gaussian or a non-linear median filter [20]. In contrast, our simplified strategy benefits from increased speed compared to more elaborate DEM generation techniques.
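As an illustrative sketch of this step, the following hypothetical helper (assuming NumPy/SciPy; function and parameter names are ours, not the paper's) grids the returns at roughly 0.3 m resolution, keeps the lowest return per cell as a rough bare-ground proxy, and fills empty cells (e.g. water) by nearest-neighbor interpolation:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def rasterize_dem(points, resolution=0.3):
    """Grid LiDAR returns (x, y, z) into a DEM.

    Per cell we keep the lowest return as a bare-ground proxy (an
    assumption; the text classifies bare-ground points first), then
    fill empty cells from their nearest populated neighbor, i.e. the
    nearest-neighbor interpolation mentioned in the text."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    cols = ((x - x.min()) / resolution).astype(int)
    rows = ((y - y.min()) / resolution).astype(int)
    dem = np.full((rows.max() + 1, cols.max() + 1), np.inf)
    for r, c, elev in zip(rows, cols, z):
        dem[r, c] = min(dem[r, c], elev)  # lowest return per cell
    empty = np.isinf(dem)
    if empty.any():
        # indices of the nearest populated cell for every empty cell
        _, (ri, ci) = distance_transform_edt(empty, return_indices=True)
        dem = dem[ri, ci]
    return dem
```

The loop over returns is written for clarity; a production version would vectorize the per-cell minimum.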

Since we are interested in cultural features such as houses only, and the irregular terrain dominates the DEM, we filter our global DEM, suppressing wavelengths above a certain length scale lambda, by means of a two-dimensional Fourier analysis [21]. The scale lambda defines our notion of local. For our experiments we use lambda in the range of a couple of meters. The parameter is manually tuned to yield satisfactory results by visual inspection. The left part of Fig. 3 provides an example.

Based on this local DEM we extract a set of contours C_delta for a fixed elevation threshold delta, which we again manually tune within a physically reasonable range above the local/reference ground. Of course, this procedure can be automatized by optimization, as detailed in Fig. 4.
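The automatic tuning by optimization can be sketched as a sweep over candidate thresholds that maximizes the number of connected above-threshold regions (a stand-in for the contour count of Fig. 4); a hypothetical helper, assuming NumPy/SciPy:

```python
import numpy as np
from scipy import ndimage

def auto_threshold(local_dem, candidates):
    """Pick the elevation threshold that maximizes the number of
    connected above-threshold regions in the local DEM, mirroring the
    automatic delta tuning sketched in Fig. 4 (regions stand in for
    contours/MBBs here)."""
    counts = [ndimage.label(local_dem >= d)[1] for d in candidates]
    return candidates[int(np.argmax(counts))], counts
```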

Accounting for the contours' hierarchy, we reduce the set of contours to the outermost ones, writing A(c) for the area enclosed by a contour c. From these we derive the set of Minimum Bounding Boxes (MBBs) [22]

B = { MBB(c) : c in the reduced contour set },

where MBB(c) defines the minimum-area rectangle enclosing c. Fig. 3 (right part) shows both: the reduced contours (black) and B (blue, semi-transparent boxes).

Note that B defines the relevant areas of interest to crop images from the local DEM. This approach allows us to be more efficient compared to a naive sliding window procedure, where one needs to systematically scan through all possible rectangular window sizes shifted across the whole two-dimensional area!

Moreover, the MBBs allow us to apply pre-filtering to discard noise and irrelevant contours. In particular, since we are interested in recognizing ruins corresponding to house-like structures, we further restrict B such that each MBB has a minimum area, an aspect ratio below a fixed bound, and a circumference within bounds of 10 to 200 meters, since this is the typical size of a house in that area according to the archeologists. In Fig. 6 we label the processing from the local DEM to B, including pre-filtering, as one of the circled process steps.
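A minimal sketch of the segmentation-to-MBB step under these domain filters; connected above-threshold regions stand in for contours, and the area and aspect-ratio bounds are illustrative placeholders, since their exact values are not stated in this section:

```python
import numpy as np
from scipy import ndimage

def candidate_mbbs(local_dem, delta, resolution=0.3,
                   min_area=20.0, max_aspect=5.0,
                   min_circumference=10.0, max_circumference=200.0):
    """Threshold the local DEM at `delta`, treat connected
    above-threshold regions as contour candidates, and keep the
    bounding boxes passing the domain filters described in the text.
    `min_area` (m^2) and `max_aspect` are assumed values."""
    labels, _ = ndimage.label(local_dem >= delta)
    boxes = []
    for sl in ndimage.find_objects(labels):
        h = (sl[0].stop - sl[0].start) * resolution  # box height in meters
        w = (sl[1].stop - sl[1].start) * resolution  # box width in meters
        circumference = 2 * (h + w)
        aspect = max(h, w) / min(h, w)
        if (h * w >= min_area and aspect <= max_aspect
                and min_circumference <= circumference <= max_circumference):
            boxes.append(sl)
    return boxes
```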

IV Deep Learning: From Images to House Classification

Starting from the MBBs generated as outlined in the previous section, we crop the local DEM to obtain the image set

I = { I_b : b in B }.

Let us denote the house classification function as

h : I -> {0, 1},

defined with respect to a set of (geo-referenced) polygons. We consider the set A of manual annotations of house ruins from the archeological field survey, i.e. for our case study we pick archeological ruins representing house-like structures which occupy an area greater than or equal to 20 square meters.

In order to classify the elements of I we particularly use an overlap criterion: h(I_b) = 1 if the MBB b corresponding to the image I_b sufficiently overlaps any manual house annotation a in A. For our evaluation we set the corresponding overlap constants to fixed values.

Fig. 4: Number of contours for a given threshold parameter delta. Values on the abscissa encode the normalized, one-byte grayscale values of the local DEM image on the left-hand side of Fig. 3. Automatically setting delta amounts to maximizing the number of MBBs and their associated contours, respectively. While the minimum threshold typically results in a single MBB covering the total survey area, it gets partitioned into multiple smaller MBBs for increasing delta, to the point where MBBs shrink to zero in size until none is left at the maximum threshold. Depending on the complexity of the local DEM, the number of MBBs will typically fluctuate for delta near the optimum: while some MBBs shrink to zero area, others might partition into smaller ones.
Fig. 5: Sample images for areas with ruins classified as “houses” by the archeologists (top row). Negative samples are shown in the bottom row.


The feature vector we supply to the deep learning algorithm is constructed from an affine transformation of the cropped images to a common space of square images, i.e. images with aspect ratio 1 and a fixed number n x n of pixels. Our experiments fix the value of n to obtain the feature vectors.

Fig. 6: Flow chart for generating and splitting test and training data of our experimental setup for identifying house-like structures from LiDAR data. The circled process labels P1…5 are detailed in the main text.
Fig. 7: Cartoon illustrating the classification of the images cropped from the local DEM by the MBBs (blue, solid boundary), using the manual annotations of domain experts (yellow, dashed boundary).

Moreover, we apply a normalization to each image I according to

I_hat = I - <I>,

such that the normalized images have vanishing mean and the absolute height (relative to the global terrain) gets scaled out. Here, <.> denotes averaging over image pixels, i.e.

<I> = (1 / n^2) sum_{x,y} I(x, y).


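The resampling and normalization steps can be sketched as follows; the nearest-neighbor resize and the choice n = 64 are assumptions for illustration, not the paper's stated implementation:

```python
import numpy as np

def to_feature_image(crop, n=64):
    """Resample a cropped DEM patch to an n x n image (aspect ratio 1)
    via nearest-neighbor index selection, then subtract the pixel mean
    so the normalized image has vanishing mean and the absolute height
    is scaled out, as in the text."""
    rows = np.arange(n) * crop.shape[0] // n  # source row per target row
    cols = np.arange(n) * crop.shape[1] // n  # source col per target col
    resized = crop[np.ix_(rows, cols)]
    return resized - resized.mean()
```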
We present a collection of sample images of house-like structures, as well as of areas without houses, in Fig. 5. The full process of generating the normalized images from the MBBs is labeled as a corresponding process step in Fig. 6.

In order to a) increase the number of images for training the deep learning algorithm and b) take the feature's context into account, we multiply the number of elements of the image set by expanding each MBB such that it includes up to, but less than, 2 meters of its surroundings, in fixed steps. We perform the increase by an outward parallel shift of the boundary faces. Specifically, for each MBB b we generate 6 widened MBBs b_1, b_2, ..., b_6 and obtain the corresponding normalized images. Another multiplication factor of 4 is achieved by rotating each image by multiples of 90 degrees. Both augmentation steps are labeled as processes in Fig. 6.


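The rotation part of the augmentation can be sketched as follows (the six boundary widenings happen earlier, at crop time, and are not repeated here; the helper name is ours):

```python
import numpy as np

def augment(image, n_rot=4):
    """Multiply a normalized image by its four 90-degree rotations,
    as described in the text (k = 0 keeps the original)."""
    return [np.rot90(image, k) for k in range(n_rot)]
```

Applied to the 6 widened crops per MBB, this yields the factor of 24 images per positive sample used below.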
Finally, for our performance benchmarking we take the normalized images and apply the classification function h, cf. Eq. 10, to split them into positive (house) and negative samples, cf. the process depicted in Fig. 7. Then, we randomly split the labeled images into a training set and a test set. In our specific setting we have 44 positive training samples (1056 after augmentation), 1056 negative training samples, 16 positive test samples, and 689 negative test samples.
Note that the positive samples do not cover all annotated houses, i.e. the MBBs and/or LiDAR data do not perfectly capture all signatures of houses surveyed by the archeologists in the area of interest. There exist artifacts labeled as houses that are not represented by an appropriate MBB derived from the local DEM, which can be traced back to the corresponding local wall structures having eroded below the picked threshold delta. Another root cause is heavy vegetation cover that does not allow sufficiently many LiDAR pulses to reach bare ground.

Starting from a local DEM, the general processing pipeline generates test and training data to be fed as input to models using a 64-unit fully connected (fc) layer on top of the deeply learnt VGG feature representation [18]. (Since in our case the positive samples are far outnumbered by the negative ones, we apply the augmentation only to the positive elements; in fact this leads to an equal number of positive and negative training samples.) The workflow is depicted in Fig. 6.


(a) The F1-score, that is, the harmonic mean of precision and recall [23], for house detection, plotted vs. the ML rating index threshold theta.
(b) The Detection Error Tradeoff (DET) graph is obtained by tuning the ML rating index threshold theta (black arrows with label).
Fig. 8: Numerical evaluation of house recognition performance. The three different curves (red, blue, green) correspond to different schemes of combining the elementary classifiers; for details cf. the main text.

From the training data we randomly select 90% of the samples for a training run. Ten such random selections (cf. Fig. 6) result in ten different training runs and corresponding house models. Each machine-learnt model DL #1, DL #2, ..., DL #10 defines a classifier, each returning a confidence score: a lower score indicates that the input image is less likely to be a house, a higher score that it is more likely to be a house.

An integration function generates a classification score for a given test image. In particular, we employ the median, minimum, and maximum of the ten scores to define three fused classifier models: robust, pessimistic, and optimistic, respectively.
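A minimal sketch of this score fusion (the mode names mirror the text; the function itself is an illustrative assumption):

```python
import numpy as np

def fuse_scores(scores, mode="robust"):
    """Combine the per-model confidence scores s_1..s_q into one
    classification score: median ('robust'), minimum ('pessimistic'),
    or maximum ('optimistic'), as described in the text."""
    ops = {"robust": np.median, "pessimistic": np.min, "optimistic": np.max}
    return float(ops[mode](scores))
```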

We use a variable threshold theta for classification: an image is classified as a house if its fused score is at least theta, and as not a house otherwise. The numerical analysis of our pipeline's performance is shown for each of the fused classifier models in Fig. 8, in terms of the F1-score as well as the detection error tradeoff (DET).
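The F1-vs-threshold evaluation of Fig. 8(a) can be sketched as follows; this is a simplified stand-in for the paper's evaluation, with illustrative names:

```python
import numpy as np

def f1_curve(scores, labels, thresholds):
    """F1-score (harmonic mean of precision and recall) of a fused
    classifier as the decision threshold theta is swept."""
    scores = np.asarray(scores)
    labels = np.asarray(labels, bool)
    out = []
    for t in thresholds:
        pred = scores >= t                      # classify as house
        tp = np.sum(pred & labels)              # true positives
        prec = tp / max(pred.sum(), 1)
        rec = tp / max(labels.sum(), 1)
        out.append(0.0 if tp == 0 else 2 * prec * rec / (prec + rec))
    return out
```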

V Discussion of Results & Perspective

We are encouraged by our findings: as highlighted in the abstract, cognitive analytics can result in significant savings in terms of expert productivity (the manual annotations used here were collected as part of several surveys by multiple archeologists over the course of years, while our computation was performed on the order of hours) while missing only a fraction of the artifacts, if permissible by the application.

The primary motivation of our remote sensing pipeline for archeology springs from the need to scale the archeologist's expertise. It is simply infeasible to allocate a larger number of experts, or to take longer periods of time, to have the archeologist meticulously mark every structure while looking for tell-tale signs distinguishing deteriorating evidence. Consequently, existing best practices often suggest marking a few randomly selected artifacts in the large field being surveyed and relying on statistical techniques for estimating the number of artifacts in the entire field.

After a careful analysis of the available data and the repertoire of artifacts present within it, cf. Fig. 1, we chose the house as a representative artifact to assess the efficacy of the cognitive approach to scale archaeological expertise: these artifacts are high in volume, small in spatial extent, and potentially very similar to other structures such as the round house.

Following our first experiments on a moderately sized LiDAR dataset, our study aims at expanding the data processing to larger archeological sites in order to draw broader conclusions, in particular with the aid of the big geospatial data platform IBM PAIRS.

Secondly, our methodology randomly distributed test and training samples, both originating from the same geographic area. In this respect, the reported performance of our system is an optimistic estimate of how it will generalize to the artifact feature population in the extended geographic area of Western Mexico.

Finally, our system processing constrained all the (cropped) images to be resized to the same dimensions and is thus agnostic to scale information, while any real system would be able to leverage scale for recognizing the patterns. However, we aim at employing MBB characteristics, such as, e.g., area, circumference, aspect ratio, and the number of contours at fixed threshold, to be either directly incorporated into the feature vector or fed to a separate machine learning model such as a random forest. Table I shows actual results of the latter approach, which can be merged with the classification of the deep learning models DL #1...10.

To add to the qualitative analysis, we observe the following: as we can see from Fig. 8, the fused classifiers perform at an equal error rate (EER) in the range of 0.22 to 0.25. Depending upon application needs and the availability of experts, the operating point of the system can be adjusted to scale the operation of finding archaeological artifacts. For example, if the solution requires very accurate estimates of the detected artifacts, the operating point needs to be shifted to the left (i.e., to a small missed-detection rate), leading to a relatively large false alarm rate and thus requiring more expert time to sieve the real detections from the false alarms. On the other hand, if the solution can accept approximate answers, the system can be operated on the right-hand side of the DET curve, which results in relatively fewer false alarms, i.e., less human oversight needed, at the risk of missing genuine artifacts.

classifier TP rate FP rate precision recall F–measure MCC ROC area
lazy-kstar 0.650 0.191 0.972 0.650 0.767 0.142 0.825
meta-Logitboost 0.706 0.190 0.973 0.706 0.808 0.167 0.807
MetaAdaboostM1 0.779 0.310 0.970 0.779 0.856 0.166 0.800
functions-Logistic 0.706 0.251 0.971 0.706 0.808 0.147 0.794
lazy-LWL 0.465 0.073 0.975 0.465 0.612 0.117 0.793
functions-SimpleLogistic 0.709 0.312 0.969 0.709 0.810 0.129 0.790
trees-randomforest 0.573 0.193 0.971 0.573 0.707 0.114 0.784
functions-VotedPerceptron 0.333 0.015 0.978 0.333 0.473 0.102 0.783
bayes-NaiveBayes 0.760 0.250 0.971 0.760 0.844 0.175 0.772
Meta-RandomCommittee 0.572 0.193 0.971 0.572 0.706 0.114 0.771
TABLE I: We trained over 30 classifiers from Weka 3.8.0 on the MBB features listed in the main text to assess their performance. The top 10 classifiers in terms of ROC area are reported.

To close the discussion, let us assume that the system indeed operates at the EER operating point, conservatively EER = 22%. Based on a random sampling of the area, we estimate that genuine features occupy a small portion of the total area. If we assume that the house artifact is pessimistically representative in terms of analytics performance and prevalence, the experts will be many times more productive while missing about 22% of the genuine artifacts.


  • [1] L. J. Klein, F. J. Marianno, C. M. Albrecht, M. Freitag, S. Lu, N. Hinds, X. Shao, S. Bermudez Rodriguez, and H. F. Hamann, “PAIRS: A scalable geo-spatial data analytics platform,” in Big Data (Big Data), 2015 IEEE International Conference on.   IEEE, 2015, pp. 1290–1298. [Online]. Available: http://researcher.watson.ibm.com/researcher/files/us-kleinl/IEEE_BigData_final_klein.pdf
  • [2] S. Lu, X. Shao, M. Freitag, L. J. Klein, J. Renwick, F. J. Marianno, C. Albrecht, and H. F. Hamann, “IBM PAIRS curated big data service for accelerated geospatial data analytics and discovery,” in Big Data (Big Data), 2016 IEEE International Conference on.   IEEE, 2016, pp. 2672–2675. [Online]. Available: https://static.aminer.org/pdf/fa/bigdata2016/S09208.pdf
  • [3] National Oceanic and Atmospheric Administration (NOAA) and Coastal Services Center, “Lidar 101: An Introduction to Lidar Technology, Data, and Applications,” 2012. [Online]. Available: https://coast.noaa.gov/data/digitalcoast/pdf/lidar-101.pdf
  • [4] R. Bewley, S. Chrutchley, and C. Shell, “New light on an ancient landscape: Lidar survey in the stonehenge world heritage site,” Antiquity, vol. 79, pp. 636–647, 2005.
  • [5] J. Harmon, M. Leone, S. Prince, and M. Snyder, “Lidar for archaeological landscape analysis: A case study of two eighteenth-century maryland plantation sites,” American Antiquity, vol. 71 (4), pp. 649–670, 2006.
  • [6] M. Doneus, C. Briese, M. Fera, and M. Janner, “Archaeological prospection of forested areas using full-waveform airborne laser scanning,” Journal of Archaeological Science, vol. 35, no. 4, pp. 882–893, 2008.
  • [7] M. A. Riley, Automated Detection of Prehistoric Conical Burial Mounds From LiDAR Bare-Earth Digital Elevation Models.   unpublished master thesis, Northwest Missouri State University, 2009. [Online]. Available: http://citeseerx.ist.psu.edu/viewdoc/download?doi=
  • [8] R. Lasaponara, R. Coluzzi, and N. Masini, “Flights into the past: full-waveform airborne laser scanning data for archaeological investigation,” Journal of Archaeological Science, vol. 38, pp. 2061–2070, 2010.
  • [9] Ø. D. Trier and L. H. Pilø, “Automatic detection of pit structures in airborne laser scanning data,” Archaeological Prospection, vol. 19, pp. 103–121, 2012.
  • [10] K. Kvamme, “An examination of automated archaeological feature recognition in remotely sensed imagery,” in Computational Approaches to Archaeological Spaces, A. Bevan and M. Lake, Eds.   Routledge, 2016, pp. 53–68. [Online]. Available: https://www.taylorfrancis.com/books/e/9781315431932/chapters/10.4324/9781315431932-9
  • [11] L. Luo, X. Wang, H. Guo, C. Liu, J. Liu, L. Li, X. Du, and G. Qian, “Automated extraction of the archaeological tops of qanat shafts from vhr imagery in google earth,” Remote Sensing, vol. 6, pp. 11 956–11 976, 2014.
  • [12] C. T. Fisher, J. C. Fernández-Diaz, A. S. Cohen, O. N. Cruz, A. M. Gonzáles, S. J. Leisz, F. Pezzutti, R. Shrestha, and W. Carter, “Identifying ancient settlement patterns through lidar in the mosquitia region of honduras,” PLoS ONE, vol. 11 (8), 2016.
  • [13] Z. Hazell, V. Crosby, M. Oakey, and P. Marshall, “Archaeological investigation and charcoal analysis of charcoal burning platforms, Barbon, Cumbria, UK,” Quaternary International, vol. 458, pp. 178–199, Nov. 2017. [Online]. Available: https://linkinghub.elsevier.com/retrieve/pii/S1040618216307418
  • [14] M. A. Canuto, F. Estrada-Belli, T. G. Garrison, S. D. Houston, M. J. Acuña, M. Kováč, D. Marken, P. Nondédéo, L. Auld-Thomas, C. Castanet, D. Chatelain, C. R. Chiriboga, T. Drápela, T. Lieskovský, A. Tokovinine, A. Velasquez, J. C. Fernández-Díaz, and R. Shrestha, “Ancient lowland Maya complexity as revealed by airborne laser scanning of northern Guatemala,” Science, vol. 361, no. 6409, p. eaau0137, Sep. 2018. [Online]. Available: http://www.sciencemag.org/lookup/doi/10.1126/science.aau0137
  • [15] A. Guyot, L. Hubert-Moy, and T. Lorho, “Detecting Neolithic Burial Mounds from LiDAR-Derived Elevation Data Using a Multi-Scale Approach and Machine Learning Techniques,” Remote Sensing, vol. 10, no. 2, p. 225, Feb. 2018. [Online]. Available: http://www.mdpi.com/2072-4292/10/2/225
  • [16] P. Tarolli, W. Cao, G. Sofia, D. Evans, and E. C. Ellis, “From features to fingerprints: A general diagnostic framework for anthropogenic geomorphology,” Progress in Physical Geography: Earth and Environment, vol. 43, no. 1, pp. 95–128, Feb. 2019. [Online]. Available: http://journals.sagepub.com/doi/10.1177/0309133318825284
  • [17] A. F. Chase, D. Z. Chase, C. T. Fisher, S. J. Leisz, and J. F. Weishampel, “Geospatial revolution and remote sensing LiDAR in Mesoamerican archaeology,” Proceedings of the National Academy of Sciences, vol. 109, no. 32, pp. 12 916–12 921, 2012. [Online]. Available: http://www.pnas.org/content/109/32/12916.full.pdf
  • [18] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
  • [19] D. H. Maling, Coordinate systems and map projections.   Elsevier, 2013. [Online]. Available: https://books.google.com/books?hl=en&lr=&id=tcL-BAAAQBAJ&oi=fnd&pg=PP1&dq=map+projection+theory&ots=YoMow09XJl&sig=Da6PYvWdtJ_0lZhprROQa841Y7g
  • [20] A. Buades, B. Coll, and J.-M. Morel, “A review of image denoising algorithms, with a new one,” Multiscale Modeling & Simulation, vol. 4, no. 2, pp. 490–530, 2005. [Online]. Available: https://hal.archives-ouvertes.fr/hal-00271141/file/061602r.pdf
  • [21] O. K. Ersoy, “A comparative review of real and complex Fourier-related transforms,” Proceedings of the IEEE, vol. 82, no. 3, pp. 429–447, 1994.
  • [22] H. Freeman and R. Shapira, “Determining the minimum-area encasing rectangle for an arbitrary closed curve,” Communications of the ACM, vol. 18, no. 7, pp. 409–413, 1975. [Online]. Available: https://dl.acm.org/ft_gateway.cfm?id=360919&ftid=48924&dwn=1&CFID=35534629&CFTOKEN=3a2ffd92bf67d812-3854F7FB-AF6C-19D4-00019A322C599C7D
  • [23] J. Davis and M. Goadrich, “The relationship between Precision-Recall and ROC curves,” in Proceedings of the 23rd international conference on Machine learning.   ACM, 2006, pp. 233–240. [Online]. Available: https://dl.acm.org/ft_gateway.cfm?id=1143874&ftid=364254&dwn=1&CFID=35818464&CFTOKEN=7d6ca4657118f20c-6C01E074-B15E-5050-F7C83CC57607F7FC