Point-to-set distance functions for weakly supervised segmentation

by Bas Peters, et al.

When pixel-level masks or partial annotations are not available to train neural networks for semantic segmentation, it is possible to use higher-level information in the form of bounding boxes or image tags. In the imaging sciences, many applications do not have an object-background structure, and bounding boxes are not available; any available annotation typically comes from ground truth or domain experts. A direct way to train without masks is to use prior knowledge on the sizes of objects/classes in the segmentation. We present a new algorithm to include such information via constraints on the network output, implemented via projection-based point-to-set distance functions. This type of distance function always has the same functional form of the derivative, avoids the need to adapt penalty functions to different constraints, and sidesteps issues related to constraining properties typically associated with non-differentiable functions. Whereas object-size information is known to enable object segmentation from bounding boxes in datasets of many general and medical images, we show that the applications extend to the imaging sciences, where data represents indirect measurements, even in the case of single examples. We illustrate the capabilities for the cases where a) one or more classes have no annotation; b) there is no annotation at all; c) bounding boxes are available. We use data for hyperspectral time-lapse imaging, object segmentation in corrupted images, and sub-surface aquifer mapping from airborne-geophysical remote-sensing data. The examples verify that the developed methodology alleviates the difficulties of annotating non-visual imagery for a range of experimental settings.




1 Introduction

Generating a large training set with fully segmented/annotated images for convolutional neural networks is costly and time consuming at best. Therefore, researchers have developed methods for faster annotation, together with training algorithms that learn from the sparsely annotated images. Such methods include point annotations [10.1007/978-3-319-46478-7_34, doi:10.1190/INT-2018-0225.1], as well as annotated slices from a 3D data volume [Unet3D]. More extreme ways to reduce the annotation time per image are bounding boxes [Dai_2015_ICCV, rajchl2016deepcut, Khoreva_2017_CVPR, kervadec2020bounding] or knowledge of whether an object is present in the image (image-level supervision), e.g., [10.1109/ICCV.2015.203, 10.1007/978-3-319-46484-8_25]. For many categories of images, annotation time/cost is not a fundamental problem, because most people can annotate street scenes or pictures of everyday objects and animals.

More serious challenges arise when only domain experts can annotate, for instance, hyperspectral and medical images. A yet more difficult situation occurs when even domain experts cannot annotate, and labels come from sparsely available ground truth. Examples include data that is not imaged yet, such as multi-modality geophysical data, or corrupted images.

Many approaches for learning from image-level tags and bounding boxes use more involved workflows: multiple networks applied in sequence, networks with multiple branches, or alternating between network training and updating estimates of pixel-level masks/region proposals. The typical goal is to reduce the annotation cost for datasets with large numbers of images. The scope of this work is different. We address the limitations of working with limited annotation when labels come from ground truth or domain experts. Such data includes hyperspectral data, geophysical data, medical images, highly corrupted images, and indirect measurements that typically require solving an inverse problem to create an image. Many of the aforementioned datasets contain just one or a few examples.

In this work, we introduce a new method to obtain pixel-level segmentation from image-level information. Our algorithm can train in various modes on datasets as small as a single example: without any annotation, with partial annotation where the annotation is missing for one or more of the classes, and with bounding boxes. We inject image-level supervision directly into the learning problem via constraints on the network output via a new implementation based on point-to-set distance functions. Contrary to some statements in the literature about training networks with constraints on the output, we show that our approach does not lead to a very challenging or computationally expensive optimization problem. The distance functions, combined with a closer look at the optimization problem and the corresponding Lagrangian, reveal that the coupling between network parameters, network output, and constraints is not as complicated as it seems from a high-level problem formulation.

1.1 Related work

Constraining the output of a neural network dates back to at least [10.5555/2969644.2969708], which introduces (in)equality constraints encoded via differentiable functions as penalties or Lagrangian multipliers; the scope and applications differ from image segmentation. Various applications aimed at natural language, some not in the context of neural networks, use a probabilistic learning framework to introduce posterior constraints [JMLR:v11:ganchev10a]; see also, e.g., [10.5555/1795114.1795120, JMLR:v11:mann10a] for related work. [JMLR:v11:ganchev10a] applies constraints that hold in expectation over multiple examples. [NIPS2019_9385] use an alternating optimization method for differentiable non-linear constraints.

Various works on vision applications are closer to ours and include penalties/constraints that promote the presence or the minimum/maximum size of objects [10.1109/ICCV.2015.203, Pathak_2015_ICCV, marquez2017imposing, KERVADEC201988]. [Pathak_2015_ICCV] implement linear inequality constraints on the network output, with slack, via an alternating optimization strategy that introduces auxiliary variables to remove the direct coupling between the network output and the constraints. [marquez2017imposing] present a Lagrangian method for training with non-linear but differentiable constraints, primarily for human-pose estimation.

[KERVADEC201988] and [kervadec2020bounding] add a penalty and a log-barrier, respectively, on the violation of linear inequalities. The aforementioned works all require differentiability of the penalty/constraint functions, and, when provided, the gradient is specific to the particular penalty or constraint. These are limitations that we avoid by using projection-based functions that measure the distance to constraint sets.

1.2 Contributions

The primary difference compared to related work from the previous section is the optimization: writing down the Lagrangian for training a network with constraints on the output shows that introducing auxiliary variables is unnecessary to obtain a simple optimization scheme without computationally expensive inner loops. A second difference is that we use the projection operator onto the constraint set directly, inside a distance function and its derivative. This enables the use of more than just linear inequalities or differentiable regularization functions. The projection-based approach means that different constraints change the projection operator, but the implementation, objective, and optimization all keep the same structure. Therefore, our approach easily extends to multiple types of constraints, including non-trivial intersections and sums of sets. We focus on applications that are single-example datasets in a visual context where there is no clear distinction between object and background. The contributions of this work are summarized as follows:

  • Introduce a new distance-function based formulation and optimization scheme to include high-level image information as constraints while training neural networks for semantic segmentation.

  • The point-to-set distance function and its derivative use only the projection operator, so there is no need to define custom regularization functions and their derivatives. This approach also avoids challenges when the functions that describe the constraints are not differentiable but it is still known how to project onto the corresponding set.

  • We extend the range of applications of constrained network training to imaging sciences problems without clear object-background setting, problems where annotating requires domain-experts or ground-truth observations, and single image/data problems.

  • The method applies to settings with a) no annotation; b) bounding boxes; c) partial annotation where one or more classes have no annotation.

The remainder of this paper is organized as follows: we first describe our approach conceptually. Next, we introduce a novel optimization implementation for training neural networks with constraints on the output that does not rely on nested or alternating optimization schemes. We then illustrate our contributions on three different problems from the imaging sciences. Finally, we discuss a few extensions of this work not covered in this paper.

2 High-level information as constraints on the network output

When fully annotated masks are not available, we can resort to high-level image information. In this work, we do not use image class-tags, but focus on quantitative prior knowledge, particularly the area/volume a certain class is expected to occupy in the segmentation. [Pathak_2015_ICCV, KERVADEC201988] showed that such information is sufficient to obtain pixel-level segmentations for object detection tasks in both general and medical image datasets, and that the prior knowledge does not need to be very precise.

In this work, we show that size information is also useful for datasets with just a single example, as well as for applications where there is no clear foreground/background or object. For instance, in hyperspectral imaging, we may know roughly the surface area of farm fields that changed their land use. Alternatively, unreliable manual annotations may provide upper and lower bounds on the surface area for a class. Other examples that do not have a background-object structure include geological/hydrological mapping from multi-modality geophysical and remote sensing data. In the following, we assume no knowledge of the spatial distribution of the class in the segmentation. While the approach outlined in this section can benefit from bounding boxes, we do not rely on them.

We now formalize the previous informal description using, for simplicity, two classes. One way to express the prior knowledge from the previous paragraph is via scalar lower and upper bounds on the area of a class in the segmentation (1). In 3D, the same statement holds in terms of volume. The goal is to obtain a segmentation that honours (1) by adding this information as constraints on the output of a neural network. Next, we present a new implementation and algorithm which, loosely speaking, can be considered a generalization of work by [Pathak_2015_ICCV, KERVADEC201988].

2.1 Constraints on the size/surface area of a class

Denote the 2D/3D/4D data in vectorized form (for 3D data input, with multiple space/time/frequency coordinates and one channel coordinate). A non-linear function denotes a neural network for semantic segmentation that transforms the input data into probability maps, one per class. The network depends on parameters such as convolutional kernels. A label vector contains the annotations, which may be partial or not available at all for one or more classes.

The main algorithmic contributions of this work are a more direct way to include surface-area/size information and a new implementation to train neural networks with constraints on the output.

Cardinality constraints: The cardinality of a vector counts its number of non-zero elements. Therefore, the area bounds (1) translate directly into cardinality constraints on the channels of the network output; the first channel of the output corresponds to the first class. We assume that the last layer of the network normalizes the output to the interval [0, 1] and sums to one across channels, for example via the softmax function. This assumption allows us to use upper bounds on the cardinality only: lower bounds are implicit via the normalization of the output, because an upper bound on the size of one class implies a lower bound on the size of the other. The projection onto the set of vectors with limited cardinality is known in closed form: set all but the largest values (in absolute sense) to zero. A typical implementation sorts the vector, determines the threshold value, sets the remaining elements to zero, and reverts to the original order.
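As a concrete sketch (ours, not the authors' code; `project_cardinality` is a hypothetical name), the closed-form projection can be implemented in NumPy. A partial sort via `argpartition` replaces the full sort-threshold-revert sequence described above with an equivalent but cheaper operation:

```python
import numpy as np

def project_cardinality(y, k):
    """Euclidean projection onto {x : card(x) <= k}: keep the k
    largest-magnitude entries and set all other entries to zero."""
    y = np.asarray(y, dtype=float)
    if k >= y.size:
        return y.copy()
    out = np.zeros_like(y)
    if k > 0:
        keep = np.argpartition(np.abs(y), -k)[-k:]  # indices of k largest |y_i|
        out[keep] = y[keep]
    return out
```

For example, projecting `[3, -1, 0.5, 2]` with `k = 2` keeps only the entries `3` and `2` and zeroes the rest.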

2.2 Point-to-set distance functions for networks with output-constraints

If the only available information is prior knowledge on some property of the output of a network, we seek to solve the feasibility problem (5): minimize, over the network parameters, the indicator function of the constraint set evaluated at the network output. If partial annotation for at least one of the classes is available, we add a loss function for the labels (e.g., cross-entropy) and optimize over the network parameters subject to constraints (6), where a selection matrix extracts from the output the pixels where labels are available. To be more general and allow multiple constraint sets simultaneously, we use the intersection of sets


which should be non-empty. The training of neural networks typically relies on variants of stochastic gradient descent to minimize the loss. The most obvious extension to problem (6) may seem to be projected (stochastic) gradient descent: gradient iterations with a stepsize, followed by a Euclidean projection so that the network output is an element of the constraint set. Because the constraints act on the network output and not on the parameters over which we optimize directly, this is not a straightforward projection problem. To proceed, one could introduce auxiliary variables and equality constraints to construct a projection problem that requires the network output to be an element of the constraint set indirectly, see, e.g., [Pathak_2015_ICCV].

In this work we present a new implementation of the constrained problems (6) and (5) that is simpler in the sense that we stay close to the original problem formulation (6): there are no auxiliary variables or computationally expensive alternating optimization schemes. The point-to-set distance function is at the core of our approach. We use a version that measures the squared distance from a point (vector y) to the set C,

dist_C(y)^2 = || y − P_C(y) ||_2^2,

using just the projection operator, P_C, of a vector onto the intersection of constraints. We could also opt for an exact penalty by removing the square [hiriart1996convex, Thm. 1.2.3], but this is not of paramount importance for our applications. The squared distance function is differentiable [hiriart2004fundamentals, Ex. 3.3], with gradient

∇_y dist_C(y)^2 = 2 ( y − P_C(y) ).

This expression holds even if the constraint set corresponds to a non-differentiable regularizer. A closed-form derivative of the distance function in terms of the projection operator itself is a powerful result that is also at the core of algorithms for (split-)feasibility problems (e.g., the CQ algorithm [Byrne_2002]). Note that neither the functional form of the constraint nor the associated scalar/vector upper/lower bounds appear in the expression for the distance function and its derivative; they are implicit in the projection operation. This property makes the distance function a general tool that applies to any constraint set, including intersections.
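A minimal sketch (hypothetical names, not the paper's implementation) shows that the distance function and its gradient require nothing beyond a projection routine; here a box constraint stands in for any set with a known projection:

```python
import numpy as np

def sq_dist_and_grad(y, project):
    """Squared point-to-set distance ||y - P_C(y)||_2^2 and its gradient
    2 * (y - P_C(y)); only the projection operator is needed."""
    r = y - project(y)
    return float(r @ r), 2.0 * r

# Any set with a known projection plugs in unchanged, e.g. the box [0, 1]:
box_project = lambda y: np.clip(y, 0.0, 1.0)
```

Swapping in a different constraint only means passing a different `project` callable; the objective and gradient code never change, which is the point of the projection-based formulation.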

We replace the constraint on the network output (the final network state) with the squared distance function. We thus minimize the distance to the constraint set, scaled by a penalty parameter, plus the loss related to the labels (if any), subject to one equality constraint per neural network layer, where the network state at the first layer is equal to the data. Problem (10) is not a standard penalty approximation to a constrained problem, because the distance function has a number of advantages over standard penalty functions, which we discuss and exploit in the following section. In the above problem statement, we used a standard ResNet [he2016deep] with a nonlinear activation function to keep the notation compact. We emphasize that this work does not depend on the ResNet; most neural networks could replace it in the above problem statement. Below, we show that minimizing a distance function applied to the output of a network does not fundamentally change the optimization process for training a neural network based on labels alone. The proposed approach combines seamlessly with most existing neural networks and their training algorithms. To show this, consider the Lagrangian corresponding to (10),


where a vector of Lagrangian multipliers appears for every layer. For optimization, we need the partial derivatives of the Lagrangian with respect to the network states, the multipliers, and the parameters. These expressions contain the derivative of the activation function and an operator that creates a diagonal matrix from its input. The gradient updates for the network parameters at every layer follow via the backpropagation algorithm. First, forward propagation through the network satisfies all equality constraints. Second, propagating backwards provides the Lagrangian multipliers. The last step computes the gradient with respect to the network parameters using the already computed quantities. Algorithm 1 summarizes these steps.

The only difference compared to training using labels alone is that the constraints insert information into the final Lagrangian multiplier, which then backpropagates through all layers.

2.3 Stopping criteria and choice of the penalty parameter

In this work, we assume either a) no annotated data; b) one or more classes come without any annotation; c) bounding boxes.

Case (a): Because there are no labels, the loss reduces to the point-to-set distance function for finding a feasible point (5). We stop training when the output of the network is an element of the constraint set, i.e., when the distance to the set vanishes within a tolerance. The implementation via an inexact penalty function (problem (10)) means we need to increase the penalty parameter until we achieve feasibility.

Case (b & c): (c) is a special case of (b). Because there are no labels for one or more classes, we cannot use standard early stopping (saving the best model parameters at the lowest validation loss). The constraints are all the information we have on the classes without annotation, so we instead monitor the lowest validation loss for the classes with annotation, while satisfying the constraints within a tolerance as above.

Algorithm 1 summarizes the workflow to train a network subject to constraints on the output (6). For the feasibility problem (5), the label-loss term is simply absent. For the label loss plus point-to-set distance (6), we increase the penalty parameter if the distance to the constraint set does not decrease over a window of a few iterations.

Input: penalty parameter, penalty growth factor, history length, learning rate;
for each training iteration do
       for each layer do
             // Forward propagation;
       end for
       // Compute the final Lagrangian multiplier;
       // Propagate backward and update the network parameters for each layer;
       for each layer, in reverse order, do
       end for
       if the distance to the constraint set did not decrease over the last few iterations then
             // Update (increase) the penalty parameter
       end if
end for
Algorithm 1 Backpropagation to train a network including constraints on the network output via distance-to-set functions.
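The loop structure above can be sketched on a toy problem. The snippet below is our illustrative reduction of Algorithm 1, not the authors' code: the "network" is an elementwise sigmoid of the parameters, there is no label loss (feasibility mode, case (a)), and the penalty parameter grows when the distance to the constraint set stalls:

```python
import numpy as np

def project_cardinality(y, k):
    """Euclidean projection onto {x : card(x) <= k}."""
    out = np.zeros_like(y)
    if k > 0:
        keep = np.argpartition(np.abs(y), -k)[-k:]
        out[keep] = y[keep]
    return out

def train_feasibility(theta, k, alpha=1.0, rho=1.5, window=10,
                      lr=1.0, n_iter=500, tol=1e-4):
    """Toy Algorithm 1 without a label loss: gradient descent on the
    'network' y = sigmoid(theta) until the class-probability map is
    (nearly) an element of the cardinality constraint set."""
    history = []
    dist = np.inf
    for _ in range(n_iter):
        y = 1.0 / (1.0 + np.exp(-theta))             # forward pass
        r = y - project_cardinality(y, k)            # distance residual
        dist = float(r @ r)
        history.append(dist)
        if dist < tol:
            break
        grad_y = alpha * 2.0 * r                     # grad of alpha * dist^2
        theta = theta - lr * grad_y * y * (1.0 - y)  # chain rule (sigmoid)
        # grow the penalty if the distance stalled over `window` iterations
        if len(history) > window and history[-1] >= history[-1 - window]:
            alpha *= rho
    return theta, dist
```

Entries inside the kept support have zero residual and are untouched, while all other class probabilities are pushed toward zero, which is exactly the behavior of the distance-function gradient described above.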

3 Examples

The emphasis of the examples is on data from the imaging sciences, where we often have a single example whose annotation comes from domain experts or ground truth. Therefore, we are explicitly not interested in common datasets with many examples that are easy to annotate, such as the MS-COCO or PASCAL VOC datasets. While the previous section is guided by the notationally compact ResNet, the examples use a fully invertible (or reversible) hyperbolic network [Chang2017Reversible, 2]. Invertibility removes the requirement to store all network states for gradient computations. As a result, a reversible network can train on large-scale data that would otherwise not fit on a standard GPU. Examples of such data include time-lapse hyperspectral data, and data with a large number of channels like the remote sensing/geoscience example in the following section. The reversible hyperbolic network follows a three-term recursion in the network states, which shows that the network structure does not alter Algorithm 1 fundamentally because the final layer still leads to a closed-form solution for the last Lagrangian multiplier. All experiments use a fixed timestep. The network for the following hyperspectral example has several layers with convolutional kernels per layer; ten layers for the single-image segmentation example; and the multi-modality sub-surface characterization example uses a deeper network. For this last example, there are many input channels, but the kernels have a ‘flat’ block structure to limit the number of free parameters and induce a block-low-rank structure [peters2019symmetric].
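For reference, the three-term (leapfrog) recursion used by hyperbolic/reversible networks of this kind can be written as below. This is our sketch of the generic form, with states y_j and a layer function f; the exact signs and parameterization in [Chang2017Reversible, 2] may differ:

```latex
% Leapfrog discretization of a second-order (hyperbolic) network ODE.
% y_j: network state at layer j; h: timestep; f: layer function with
% parameters theta_j. Generic sketch -- signs and parameterization may
% differ from the cited implementations.
\[
  \mathbf{y}_{j+1} = 2\,\mathbf{y}_{j} - \mathbf{y}_{j-1}
                     + h^{2}\, f(\mathbf{y}_{j}, \theta_{j}).
\]
% Solving for y_{j-1} gives the inverse, so intermediate states can be
% recomputed during backpropagation instead of stored:
\[
  \mathbf{y}_{j-1} = 2\,\mathbf{y}_{j} - \mathbf{y}_{j+1}
                     + h^{2}\, f(\mathbf{y}_{j}, \theta_{j}).
\]
```

The second relation is what removes the memory requirement: each state follows from the two states after it.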

3.1 No labels or bounding boxes: time-lapse hyperspectral land-use change detection

The goal of time-lapse hyperspectral land-use change detection is to create a 2D change map of a piece of the earth from two 3D hyperspectral datasets, collected at different times [1], see Figure 1. Domain-experts or ground truth can provide annotation. However, this is expensive, time-consuming, and prone to errors.

Figure 1: (a) and (b): Two hyperspectral datasets recorded at different times for time-lapse land-use change detection. (c) and (d): slices from (a) and (b) for a single frequency.

Working with just one example without any annotation, we can obtain reasonably accurate predictions using knowledge on the surface area occupied by the two classes (no-change / change). Such prior knowledge may be available from sources like historical estimates or unreliably labeled data that still provides bounds on the surface area.

The true surface area for land-use change is . We use loose bounds . This translates to the cardinality constraints (3) with and . We employ gradient descent as in Algorithm 1 to find a solution to the feasibility problem (5). To induce stochasticity when working with a single example, we apply random flips and permutations to the data at every iteration.

Figure 2: The provided annotations (not used for training), our prediction that does not use any labels, and the difference. Most of the errors are boundary effects, as well as a few false positive/negative identified fields.

Figure 2 shows the provided annotations (not used in this example), our prediction, and the difference. The most ‘obvious’ segmentation would be all farm fields as one class. The results, however, show a prediction that mostly selects the subset of the fields that changed land use. Except for a few errors, our method was able to capture the subtle differences over time. While these results are not as good as [FRHyperspectral], we did not require any annotation.

3.2 Point annotations for one of the classes only: Multi-modality subsurface characterization

Figure 3 shows various types of remote/airborne sensing data and geological maps. The goal is to obtain a map of the aquifers (aquifer present in the subsurface: yes/no) for a sizable area of Arizona, see Figure 4. The labels from [3] are a combination of ground-truth observations, remote sensing, geophysical data, and expert interpretation. The goal is to see if we can reproduce the domain experts’ work based on point annotations for only one of the two classes, supplemented with an estimate of the total area of aquifers. For some applications, it is easier to annotate either true or false for a specific question. The prediction in Figure 4 shows that we can reproduce the domain experts’ interpretation using point annotations from one of the two classes. The differences are minor and occur along the boundaries, as well as in a few small patches.

Figure 3: Data for the multi-modality remote sensing example. The rock age and rock type maps are converted to separate maps that indicate if the rock age/type is present or not.
Figure 4: Left: aquifer map, based on data and manual interpretation. Middle: prediction. Right: difference between prediction and full map. Red dots represent the annotation. Note that there is annotation for one of the two classes only.

3.3 Segmenting a corrupted image using a bounding box

Our method can also provide a pixel-level segmentation of a single corrupted image using a bounding box. The data is an image from [IEEEDavisDataset] with missing pixels. Manually generating pixel-level annotations would be challenging in this case. This task is different from most standard tasks that train on entire datasets and aim to generalize to unseen images. Figure 5 displays the target image and bounding box. From the box, we derive the cardinality constraint for the object class, and we set a corresponding constraint for the background class. The bounding box also implies that we use the area outside the box as labels for the background class. There are no labels for the object class. The prediction in Figure 5 shows that constraints on the network output help obtain an accurate pixel-level segmentation for a single image from a bounding box.

The network was trained using stochastic gradient descent for 400 iterations. At each iteration, we induce randomness by randomly sampling of the background labels and randomly flipping and permuting the data.

3.4 Comparison to other approaches

Because the scope of this work is size information, bounding boxes, or missing (partial) annotation for one of the classes combined with size information, we do not compare to methods that are specific to learning from bounding boxes. Furthermore, comparisons are meaningful only if a method can also deal with the non-visual multi-modality data used in our examples.

We compare our approach with linear inequality constraints on the sum of the network output. Bounds on the sum do not directly tell us how many pixels will be classified as a certain class, because the sum can be concentrated in a minimal number of pixels or spread out evenly among a large number of pixels. Specifically, we show results using the penalty from [KERVADEC201988], a quadratic penalty on the violation of the lower or upper bound on the sum of the output. The gradient of this penalty is available in closed form and amounts to a constant shift of all elements. This is very different from the gradient of the squared distance function, which is twice the difference between the network output and its projection: the projection onto the set of vectors with limited cardinality sets most elements to zero but leaves the other elements untouched. This brief comparison also shows that the penalty function described above, as well as its gradient, is specific to this type of constraint. In our distance-function-based approach, we only need to change the projection operator to use different constraints.

Figure 5 shows a comparison of the proposed method with the penalty described above. We show the result for the best penalty parameter, selected by manual tuning. This comparison is not comprehensive, but it shows that the task is not trivial for any approach that includes information on the area a given class should occupy in the segmentation. The primary goal of our work is to introduce a novel algorithm to implement constraints on the network output; extensive numerical comparisons are secondary and beyond the scope of this paper.

4 Computational cost

The added computational cost of the gradient of the squared distance function is one projection per network output-channel per example. Practical timing depends on the availability of a GPU implementation for the projection, in order to avoid slower CPU computations and transfers to and from the GPU. We implement the projection for the cardinality constraint by 1) sorting; 2) thresholding the smallest-magnitude elements of the vector to zero; 3) permuting back to the original ordering. The relative added computational time is low when forward propagation is expensive because of a deep network or a large input size.

Figure 5: (a) Corrupted image with missing pixels with highlighted bounding box; (b) the segmentation obtained using cardinality constraints on object size derived from the bounding box. Background and anomaly have an intersection over union accuracy of and respectively. (c) Comparison with a penalty related to a constraint on the sum of the network output, see section 3.4 for details.

5 Extensions

This section contains some extensions that are readily available using the presented material, and do not require additional computational tools or implementations.

Other constraints for size information. Besides the cardinality constraints, there are at least two other ways to represent size/surface-area information. First, consider the convex relaxation of the cardinality function: the ℓ1 norm. A disadvantage of this constraint is that any network output within the ℓ1-ball satisfies the constraints, without necessarily telling us anything about how many pixels/voxels are classified as the desired class. Yet another implementation of size constraints on the network output is via histogram constraints.
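The ℓ1-ball option mentioned above also has a closed-form projection up to a sort. The sketch below follows the well-known sorting-based construction (`project_l1_ball` is our hypothetical name, not the paper's code):

```python
import numpy as np

def project_l1_ball(y, tau):
    """Euclidean projection onto {x : ||x||_1 <= tau} via sorting:
    find the soft-threshold level theta such that the shrunk vector
    has l1 norm exactly tau (when y is outside the ball)."""
    y = np.asarray(y, dtype=float)
    if np.abs(y).sum() <= tau:
        return y.copy()                      # already feasible
    u = np.sort(np.abs(y))[::-1]             # magnitudes, descending
    css = np.cumsum(u)
    ks = np.arange(1, y.size + 1)
    rho = np.max(np.where(u * ks > css - tau)[0]) + 1
    theta = (css[rho - 1] - tau) / rho
    return np.sign(y) * np.maximum(np.abs(y) - theta, 0.0)
```

For example, projecting `[3, 1]` onto the ball of radius 2 soft-thresholds both entries by 1, giving `[2, 0]`.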

Structural constraints. If there is information about the length of the boundary of a given class in the segmentation, we can represent it via an integer bound on the cardinality of a discrete spatial derivative of the network output. Other types of structural information derive from the observation that segmentations are often ‘simplified’ versions of the input images. In that case, the data provides bounds on the complexity of the spatial structure of the segmentation. For instance, the rank of the segmentation should be at most the rank of the input image, leading to a constraint on the maximum rank of the network output in matrix form.
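The rank constraint also fits the projection-based framework: by the Eckart–Young theorem, the Frobenius-norm projection onto the set of matrices of rank at most r is a truncated SVD (a sketch with our own naming):

```python
import numpy as np

def project_rank(Y, r):
    """Frobenius-norm projection onto {X : rank(X) <= r}:
    truncate the SVD to the r largest singular values (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s[r:] = 0.0                      # drop all but the r largest singular values
    return (U * s) @ Vt
```

As with the cardinality projection, this plugs into the same distance function and gradient; only the projection operator changes.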

Intersection of multiple constraint sets. Within our framework, there are at least two ways to include multiple constraints simultaneously: a) using tools similar to [Censor_2005], we can add multiple distance penalties, one per set. This implementation is straightforward, but it introduces multiple penalty parameters that need tuning to make sure the solution is an element of the intersection. b) a single distance function that includes the projection operator onto the intersection avoids the trade-off between multiple penalties: all constraints will be satisfied at the solution as long as the intersection is non-empty. Specialized software is available to compute projections onto intersections of sets [peters2019algorithms].
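When only the individual projections are known, a simple way to reach a point in the intersection is cyclic alternating projections, sketched below with hypothetical names. Note the caveat in the docstring: this returns a feasible point, not necessarily the exact Euclidean projection; exact projections onto intersections require more careful (e.g., Dykstra-type) algorithms.

```python
import numpy as np

def project_intersection(y, projections, n_iter=200, tol=1e-8):
    """Find a point in the intersection of sets by cyclically applying
    each set's projection (POCS). For general convex sets this converges
    to *a* feasible point, not the exact Euclidean projection of y."""
    x = np.asarray(y, dtype=float).copy()
    for _ in range(n_iter):
        x_prev = x.copy()
        for proj in projections:
            x = proj(x)
        if np.linalg.norm(x - x_prev) < tol:  # iterates stopped moving
            break
    return x
```

For instance, alternating between clipping to the box [0, 1] and projecting onto the hyperplane where the entries sum to one yields a point satisfying both constraints.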

Unsupervised data exploration. If there is no annotation and no prior knowledge to define any constraints, we can still solve the feasibility problem (5) for unsupervised deep clustering. Segmenting the input data multiple times, each time using different bounds on the size of each class, may reveal interesting patterns.

Maintaining feasibility while training. Maintaining feasibility (‘hard’ constraints) is desired or required for some imaging problems [herrmann2019learned]. To enforce feasibility while solving (10), we take multiple gradient steps based on the distance to the constraint set only, and discard the gradient w.r.t. the labels if the network output is not feasible.

6 Conclusions

For data from the imaging sciences, where annotation can only come from ground truth or perhaps domain experts, the lack of annotation is the primary obstacle to obtaining pixel-level segmentations. Examples include remote sensing data, geophysical data, corrupted images, or data that is not ‘imaged’ yet. Including prior knowledge on the minimum/maximum size/surface area of an object has proved to be a weak-supervision technique that enables the segmentation of general and medical images from bounding boxes. We showed that such weak supervision also applies to data from the imaging sciences without a clear object-background structure, even when only a single example is available. To directly implement class-size information, we use constraints on the cardinality of a vector and introduce a novel training algorithm based on point-to-set distance functions. This approach requires only the projection operator, so different constraints do not require translation into custom penalty functions and their derivatives. A quick look at the Lagrangian shows that constraints implemented via a distance function fit seamlessly into training networks via backpropagation. Examples showed that constraints can replace missing annotation for a range of segmentation problems in the imaging sciences. In particular, we showed segmentations from hyperspectral data without any annotation, segmentation of corrupted images from bounding-box information, and segmentation from multi-modality remote sensing and geophysical data with sparse annotation for only one of the classes.


  • [1] M. Hasanlou and S. T. Seydi (2018) Hyperspectral change detection: an experimental comparative study. International Journal of Remote Sensing 39 (20), pp. 7029–7083. doi:10.1080/01431161.2018.1466079. Cited by: §3.1.
  • [2] K. Lensink, E. Haber, and B. Peters (2019) Fully hyperbolic convolutional neural networks. arXiv preprint arXiv:1905.10484. Cited by: §3.
  • [3] S. G. Robson and E. R. Banta (1995) Ground Water Atlas of the United States: Segment 2, Arizona, Colorado, New Mexico, Utah. Technical report, U.S. Geological Survey. Cited by: §3.2.