Equal Area Breaks: A Classification Scheme for Data to Obtain an Evenly-colored Choropleth Map

05/04/2020
by Anis Abboud, et al.
University of Maryland

An efficient algorithm for computing the choropleth map classification scheme known as equal area breaks or geographical quantiles is introduced. An equal area breaks classification aims to obtain a coloring for the map such that the area associated with each of the colors is approximately equal. This is meant to be an alternative to an approach that assigns an equal number of regions with a particular range of property values to each color, called quantiles, which could result in the mapped area being dominated by one or a few colors. Moreover, it is possible that the other colors are barely discernible. This is the case when some regions are much larger than others (e.g., compare Switzerland with Russia). A number of algorithms of varying computational complexity are presented to achieve an equal area assignment to regions. They include a pair of greedy algorithms, as well as an optimal algorithm that is based on dynamic programming. The classification obtained from the optimal equal area algorithm is compared with the quantiles and Jenks natural breaks algorithms and found to be superior from a visual standpoint by a user study. Finally, a modified approach is presented which enables users to vary the extent to which the coloring algorithm satisfies the conflicting goals of equal area for each color with that of assigning an equal number of regions to each color.


1 Related Work

Choropleth maps are a well studied and commonly used visualization technique for the display of geographic information. Examples of their use include Jern et al. [12], in which choropleth maps are used in combination with tree maps [14] to visualize hierarchical demographic statistics, and Lima et al. [15], who use choropleth maps as a component in a system for the display of a similarly structured hierarchical dataset. Prior analysis of choropleth classification techniques, such as Brewer and Pickle [3], has thoroughly investigated the effect of various well known classification methods on response accuracy for questions asked about data visualized on a map. Other work, such as Zhang and Maciejewski [29], explores the visual impact of modifying the class boundaries of a choropleth map with respect to the visual clustering of entities.

In the remainder of this section, we briefly review techniques for choropleth map data classification and provide a more detailed look at prior work on equal area classification. We divide classification techniques into two broad classes: those that are purely statistical and those that incorporate geographic information into the classification.

1.1 Statistical Classification

The most common classification techniques do not use any geographic information when constructing class boundaries. The simplest of these are equal intervals and equal length (quantiles) classification. In equal intervals, each class covers the same range of possible input values; similarly, equal length classification constructs classes such that each class contains the same number of elements [24]. Despite its simplicity, equal length classification has been found to be superior to more complicated options on some datasets [3].

A more complicated and widely used technique is Jenks natural breaks [11, 10]. This classification scheme attempts to classify data values into different classes according to the breaks or gaps that naturally exist in the data by minimizing the amount of variance between elements in the same class [24].

A more recent example of a purely statistical classification scheme is the head/tail breaks method [13]. The technique is designed to deal with distributions that have a heavy tail, such as power laws. In other words, it recognizes the fact that there are far more objects of small magnitude than objects of large magnitude.

1.2 Geographic Classification

There are methods other than equal area classification that incorporate ancillary geographic information when deciding on class boundaries. This is generally desirable since the data are visualized on a map, which makes geographic information relevant to readers.

An example of this are two measures of choropleth map accuracy proposed by Jenks and Caspall [11]: overview accuracy index (OAI) and boundary accuracy index (BAI). OAI incorporates weighting for area into the accuracy measure used by the standard Jenks natural breaks classification by making it more important that variance is minimized for classes with a large total area. BAI, while proposed by Jenks and Caspall, was fully developed into a concrete measure of accuracy by Armstrong et al. [1]. BAI optimizes for large differences between neighboring polygons that are not in the same class.

Following a similar motivation to that of BAI, Chang and Schiewe [6] present a method that assigns classes such that geographically local extrema are easily identified. Such extrema are polygons that have a higher or lower value than all neighboring polygons. To make these polygons identifiable, Chang and Schiewe attempt to find a classification where these polygons are not in the same class as their neighbors.

Instead of aiming to make neighboring regions more distinguishable, McNabb et al. [18] merge small neighboring polygons into larger polygons that are homogeneously colored. While this approach makes it impossible to distinguish between the constituent polygons, it reduces the overall number of polygons on a finished map which can increase the perceptibility of areal patterns. To make individual polygons visible when necessary, the authors dynamically recompute this merged classification as users manipulate an interactive map. By doing this, the user can zoom in on areas of interest to view the individual polygons.

Du et al. [8] described a variant on the standard choropleth map for the display of spatiotemporal data using a method they call the banded choropleth map. In their method, each areal unit in the choropleth map is further subdivided into a number of vertically oriented bands where each band represents a discrete time interval. They present two methods for determining the placement of these bands: one in which each band in a given areal unit has the same width and another in which each band has the same area. These are analogous to the equal length and equal area classification methods in a traditional choropleth map.

1.3 Equal Area Classification

Equal area classification incorporates geographic information, but we address it separately since it is directly related to this paper. This classification scheme has clearly been considered by researchers in the past, but there are limited details available on past implementations. For instance, Murray and Shyy [19] make use of an implementation of equal area classification provided by Esri ArcView 3.1, but public details are not available on this implementation and the classification scheme was dropped from later releases of the software.

Robinson et al. [20] (we deliberately reference the 1984 edition of this book; discussion of equal area classification is absent in later editions) address equal area classification in their book and suggest an alternate name for the classification technique: "geographical quantiles". This name emphasizes the similarity between equal area and equal length classification. To compute an equal area classification, the authors suggest consulting a cumulative frequency graph to find breaks after accumulating the desired percentage of total area. Computing breaks by hand with this technique should yield results that are very similar to the first greedy algorithm we propose in Algorithm 1.

The earliest treatment of equal area classification we were able to find is in a series of papers by Lloyd and Steinke starting in 1976 [16, 17, 26]. In these papers the authors evaluate the factors used by map readers when making comparisons between different maps. They conclude that map readers first compare maps according to their relative blackness before considering other map features. To aid readers in comparing maps, they propose and evaluate using an equal area classification to keep relative blackness constant between maps, forcing the reader to compare other features. While equal area classification is discussed at length in these papers, no specific technique for its computation is mentioned.

Armstrong et al. [1] obtain an equal area classification by minimizing a measure of areal inequality termed the Gini coefficient for equal area (GEA) using a genetic algorithm. While this approach obtains an equal area classification, it is not able to make guarantees with respect to run time or optimality of the classification due to the use of genetic algorithms in finding the solution. Rather than focusing specifically on equal area classification, this paper obtains classification schemes that are optimized with respect to multiple error metrics using the aforementioned genetic algorithms. This is similar in concept to the balanced classification we propose in Algorithm 6. While the authors do not demonstrate the specific equal area and equal length combination that we use, their framework could, in principle, obtain such a classification. A software package that implements these algorithms is described in a later paper [28].

Rather than dividing the map into some number of regions with approximately the same area, it is possible to assign each class in the map a target proportion of the total area and attempt to satisfy these proportions when creating class boundaries. Brewer and Pickle [3] term this technique shared area classification and incorporate it into their evaluation of choropleth map classification techniques for use with epidemiological data. Prior to this work, Carr et al. [4] applied the same classification scheme to hexagon mosaic maps. In neither case is a concrete algorithm for computing the classification given.

2 Equal Area Algorithms

In this section we give a definition of the equal area classification problem and outline a number of algorithms of varying complexity to perform it.

2.1 Problem Definition

Given a sequence of n numbers (the area of each region, sorted by the value of the property associated with the regions), and a number k, the goal is to partition the sequence into k contiguous parts, so that the sum of the areas comprising each part is roughly equal.
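
As a small illustrative example of our own (not drawn from the evaluation dataset), consider the areas (5, 2, 4, 7, 2) with k = 2. The total area is 20, so the ideal chunk sum is 10; the partition (5, 2, 4) | (7, 2) has chunk sums 11 and 9, which is the closest we can get to two equal halves without reordering the regions.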

2.2 Greedy Algorithms

The simplest algorithm (Algorithm 1) builds the chunks one by one while traversing the sequence of values in increasing order. We keep inserting values into the current chunk until the sum of its elements reaches the target chunk sum AVG (the total area divided by k), at which point we deem the chunk full and start filling a new one.

1: Input: values – the areas of the regions sorted by the value of the associated property; k – the number of chunks (colors).
2: function GreedySplit(values, k)
3:    AVG ← sum(values) / k ▷ Target chunk sum.
4:    chunks ← empty list of chunks
5:    chunk ← empty list of numbers
6:    for v in values do
7:       Append v to chunk.
8:       if sum(chunk) ≥ AVG then
9:          Append chunk to chunks.
10:         chunk ← empty list of numbers
11:   Append chunk to chunks. ▷ The last chunk.
12:   return chunks
Algorithm 1 Greedy algorithm for partitioning a sequence of numbers into k chunks with a roughly equal sum. Complexity: O(n).
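
For concreteness, the following is a minimal Python sketch of Algorithm 1; the function name is our own, and the input is assumed to already be sorted by the associated property value.

def greedy_split(values, k):
    """Split `values` (areas sorted by the associated property value) into
    chunks whose sums are roughly equal, using the simple greedy rule."""
    avg = sum(values) / k              # target sum per chunk
    chunks, chunk, chunk_sum = [], [], 0.0
    for v in values:
        chunk.append(v)
        chunk_sum += v
        if chunk_sum >= avg:           # chunk is full: close it
            chunks.append(chunk)
            chunk, chunk_sum = [], 0.0
    chunks.append(chunk)               # whatever is left forms the last chunk
    return chunks

For instance, greedy_split([5, 2, 4, 7, 2], 2) returns [[5, 2, 4], [7, 2]].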

Since the above greedy algorithm always overfills the chunks (it keeps inserting elements until the sum of the elements in the chunk exceeds the target chunk sum), the last chunk can be significantly smaller than the others. To mitigate this issue, we can consider the total sum accumulated so far when deciding whether to start a new chunk. This modified approach is presented in Algorithm 2.

1: Input: values – the areas sorted by the value of the associated property; k – the number of chunks.
2: function GreedySplit2(values, k)
3:    AVG ← sum(values) / k ▷ Target chunk sum.
4:    chunks ← empty list of chunks
5:    chunk ← empty list of numbers
6:    i ← 1
7:    for v in values do
8:       Append v to chunk.
9:       if sum(values up to and including v)
10:             ≥ i × AVG then
11:         Append chunk to chunks.
12:         chunk ← empty list of numbers
13:         i ← i + 1
14:   Append chunk to chunks. ▷ The last chunk.
15:   return chunks
Algorithm 2 Modified greedy algorithm for partitioning a sequence of numbers into k chunks with roughly equal sum. Complexity: O(n).
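
A corresponding Python sketch of the modified rule (again with our own naming, and assuming pre-sorted input) compares the running total against multiples of the target sum rather than the sum of the current chunk alone:

def greedy_split2(values, k):
    """Split `values` into k chunks with roughly equal sums (Algorithm 2).
    A chunk is closed once the running total reaches i * avg, where i is
    the index of the chunk currently being built."""
    avg = sum(values) / k
    chunks, chunk = [], []
    total, i = 0.0, 1
    for v in values:
        chunk.append(v)
        total += v
        if i < k and total >= i * avg:   # guard on i < k keeps exactly k chunks
            chunks.append(chunk)
            chunk = []
            i += 1
    chunks.append(chunk)                 # the last chunk
    return chunks

Checking the running total against i × avg means that a chunk that overshot its target is automatically compensated by making the following chunk smaller, which is what prevents the last chunk from shrinking.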

2.3 Optimal Dynamic Programming Algorithm

Before we present an optimal algorithm, let us define the criterion that we use to measure the performance of an algorithm that returns a set of k chunks partitioning the sequence of values:

ERROR = (1/k) · Σ_{i=1..k} |S_i − AVG|,

where AVG = (Σ_j a_j) / k is the sum of the optimal chunk, and S_i is the sum of the values in the i-th chunk of the partitioning. In other words, the formula computes the average distance between the sum of a chunk in the given partitioning and the optimal chunk sum. An optimal algorithm finds a set of breaks that partitions the list with the minimal ERROR.
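
As a sanity check, the error of any candidate partitioning can be computed directly; the helper below (our own naming) mirrors the definition above.

def partition_error(chunks, k=None):
    """Average absolute difference between each chunk's sum and the
    ideal chunk sum AVG = total / k."""
    k = k if k is not None else len(chunks)
    total = sum(sum(c) for c in chunks)
    avg = total / k
    return sum(abs(sum(c) - avg) for c in chunks) / k

For the split of (5, 2, 4, 7, 2) with k = 2 shown earlier, partition_error([[5, 2, 4], [7, 2]]) returns 1.0, since the chunk sums 11 and 9 each differ from the ideal sum of 10 by one.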

Key Idea 1

The first key idea in the algorithm is that we can find an optimal position for the last break with a single linear pass over the values. We traverse the list from the end (in decreasing index order) and sum up the values until they add up to at least AVG; let a_i be the last value we include. Since the values are arbitrary, it is unlikely that they will add up to AVG exactly, so the sum of the final chunk will be slightly greater than AVG if we include a_i in it and slightly less than AVG if we do not include a_i.

Observation: There is an optimal partitioning in which the final break is adjacent to a_i – either immediately before it (call this position candidate 1, which keeps a_i in the final chunk) or immediately after it (candidate 2, which excludes a_i). Proof: Given an optimal partitioning of the list of values, if the final break is already at one of these two positions, then we are done. Otherwise, there are two options for the final break:

  1. The final break is before candidate 1 (to its left). In this case, we argue that moving it to where candidate 1 is can only reduce the error. Notice that moving the final break to the right (assuming that the list of values is sorted in increasing order from left to right) modifies only the final two chunks: the final chunk, call it A, and the chunk just before it, call it B. The sum of chunk B will increase by some amount δ (the sum of the values between the current break and candidate 1), and the sum of chunk A will decrease by the same δ. Since the sum of chunk A was greater than AVG and is still at least AVG (at candidate 1 the final chunk sum is at least AVG), but is now closer to AVG, the error stemming from it decreases by exactly δ. Since the sum of chunk B changed by δ, the error stemming from it can increase by at most δ. Therefore, the error in the new partitioning can only decrease or stay the same. Hence, the new partitioning is also optimal.

  2. The final break is after candidate 2 (to its right). In this case, we argue that moving it to where candidate 2 is can only reduce the error. Similarly to the other case, the error from the final chunk decreases by exactly the shifted amount δ. The only caveat here is that, in the given optimal partitioning, there might be multiple breaks after candidate 2. In this case, we move them one by one to where candidate 2 is, starting from the leftmost one (i.e., the one with the smallest index). Every time we shift a break to the left, the error from the chunk on its right decreases by exactly the shifted amount (its sum remains below AVG because the total of all values to the right of candidate 2 is less than AVG), and the error from the chunk on its left can increase by at most that amount. Therefore, the total error cannot grow. Hence, the new partitioning is also optimal.

The pseudocode for a procedure to find the two candidates described above is given below.

1: Input: values – a list of n areas; AVG – the target chunk sum.
2: function FindLastBreakCandidates(values)
3:    s ← 0 ▷ Sum of the values so far.
4:    for i ← n − 1 down to 0 do
5:       s ← s + values[i]
6:       if s ≥ AVG then
7:          return (i, i + 1)
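
A direct Python rendering of this scan (our own naming; the target sum is passed in explicitly) is:

def find_last_break_candidates(values, avg):
    """Return the two candidate positions for the last break: a break at
    index i keeps values[i:] as the final chunk with sum >= avg, while a
    break at i + 1 leaves that chunk with sum < avg."""
    s = 0.0
    for i in range(len(values) - 1, -1, -1):
        s += values[i]
        if s >= avg:
            return i, i + 1
    return 0, 0    # the whole list sums to less than avg
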
Key Idea 2

Let E(j, i) be the minimum error we can get for placing i breaks to partition the first j elements of the list (which has a total of n values), and let B(j, i) be the set of corresponding breaks. We are interested in E(n, k − 1) and B(n, k − 1).

Building on top of Key Idea 1, we give the following recursive definition of E(j, i). First, find the two candidate locations for the final break of the first j elements, call them c1 and c2. Next, compare E(c1, i − 1) + |PSum(c1, j) − AVG| and E(c2, i − 1) + |PSum(c2, j) − AVG|. Select the candidate break with the smaller total error.

In other words, for each candidate location of the final break, we recursively find the other breaks, and then compare the errors of the two options to choose the better candidate. Calculating this recursively would have a high computational cost, as we might repeat many computations. Therefore, we compute it using dynamic programming. In particular, we allocate a two-dimensional array to store the values of E(j, i) for all j and i, and compute them column by column so that whenever we need a value of E(·, i − 1) we can look it up in the table in constant time.

Key Idea 3

As described in Key Idea 1, finding the candidates for the final break takes linear time, as we need to traverse the list of values from the end. However, as we are using dynamic programming to compute E(j, i) for all values of j and i, we can save on some of these computations. Notice that if the candidates for the rightmost break of E(j, i) were around position p, then the candidates for the rightmost break of E(j − 1, i) have to be at p or to its left (they cannot be to its right), because the list is shrinking from the right, so the rightmost chunk needs to grow from the left to remain close to the average; thus the break needs to move left. Therefore, we can compute an entire column of the table in one loop in which we move the end index and the candidate positions backward simultaneously until they hit index 0. This way we compute the n + 1 cells of a column in O(n) total time.

Key Idea 4

In the above key ideas we used the sum function, which, when implemented naively, requires linear time. By preprocessing the sequence of values and building a cumulative (prefix) sum array, in which the entry at position t holds the sum of the first t values, we can build a function that returns the sum of any sub-sequence between two indices i and j in constant time. This procedure is presented in Algorithm 3.

1: Input: values – the original array of n numbers.
2: function Preprocess(values)
3:    prefix ← [0]
4:    s ← 0
5:    for v in values do
6:       s ← s + v
7:       Append s to prefix.
8:    return prefix
1: Input: prefix – the array returned by Preprocess(values).
2: function PSum(i, j)
3:    return prefix[j] − prefix[i]
Algorithm 3 Prefix sum array calculation. Preprocess(values) is invoked once on the original array to compute the prefix sum array in O(n) time. After that, PSum(i, j) can be invoked to compute the sum of the values between any two indices i (inclusive) and j (exclusive) in the original array in O(1) time.
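
In Python, the preprocessing and the constant-time range sum can be sketched as follows (we fold them into a small class purely for convenience):

class PrefixSums:
    """O(n) preprocessing; psum(i, j) then returns sum(values[i:j]) in O(1)."""
    def __init__(self, values):
        self.prefix = [0.0]
        for v in values:
            self.prefix.append(self.prefix[-1] + v)

    def psum(self, i, j):
        # i inclusive, j exclusive, 0-based, matching PSum(i, j) above
        return self.prefix[j] - self.prefix[i]

For example, PrefixSums([3, 1, 4, 1, 5]).psum(1, 4) returns 6 (= 1 + 4 + 1).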

It is important to note that from now on, whenever we invoke PSum(i, j) in the pseudocode, we assume that the preprocessing above has been performed, so that PSum(i, j) returns the sum of the values between indices i (inclusive) and j (exclusive) in constant time. Indices are 0-based. Using the above ideas, Algorithm 4 gives the pseudocode for the optimal algorithm that finds the breaks partitioning a sequence of values into k chunks with a roughly equal sum.

1: Input: values – the n areas sorted by the value of the associated property; k – the number of chunks.
2: function DPOptimalEqualArea(values)
3:    AVG ← sum(values) / k ▷ Average chunk sum.
4:    best_error ← 2D array with n + 1 rows (from 0 to n) for the end index m, and k columns (from 0 to k − 1) for the number of breaks b.
5:    best_breaks ← 2D array like above, for the breaks.
6:    for m in 0 .. n do ▷ Fill the first column (column 0).
7:       best_error[m][0] ← |PSum(0, m) − AVG|
8:       best_breaks[m][0] ← []
9:    for b in 1 .. k − 1 do ▷ Loop over breaks.
10:      m ← n
11:      i ← n ▷ The position of the final break.
12:      while m ≥ 0 do
13:         if i > m then
14:            i ← m
15:         ▷ Go back until reaching the candidate positions for the last break:
16:         while i > 0 and PSum(i, m) < AVG do
17:            i ← i − 1
18:         ▷ Choose between the two candidates i and i + 1 for the last break:
19:         if best_error[i][b − 1] + |PSum(i, m) − AVG| ≤ best_error[i + 1][b − 1] + |PSum(i + 1, m) − AVG| then
20:            break ← i
21:         else
22:            break ← i + 1
23:         ▷ After choosing the better break, add it to the arrays.
24:         best_error[m][b] ← best_error[break][b − 1] + |PSum(break, m) − AVG|
25:         best_breaks[m][b] ← best_breaks[break][b − 1] + [break]
26:         m ← m − 1
27:   return best_breaks[n][k − 1]
Algorithm 4 Dynamic programming optimal algorithm for finding the breaks that partition a sequence of values into k chunks with a roughly equal sum. This algorithm returns the breaks. Complexity: O(nk).
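
The following Python sketch puts the four key ideas together. It follows Algorithm 4 but stores the full break list in every cell for clarity (i.e., it uses the unoptimized space discussed below), accumulates the sum of |chunk sum − AVG| rather than the average (which is equivalent up to division by k), and all names are our own.

def dp_optimal_equal_area(areas, k):
    """Optimal equal area breaks via dynamic programming (Algorithm 4).
    `areas` must already be sorted by the associated property value.
    Returns k - 1 break indices so that areas[0:b1], areas[b1:b2], ...
    form the k chunks."""
    n = len(areas)
    avg = sum(areas) / k

    prefix = [0.0]                       # prefix sums for O(1) range sums
    for a in areas:
        prefix.append(prefix[-1] + a)

    def psum(i, j):                      # sum(areas[i:j]) in O(1)
        return prefix[j] - prefix[i]

    # best_error[m][b]: minimal total error for areas[0:m] using b breaks.
    best_error = [[0.0] * k for _ in range(n + 1)]
    best_breaks = [[[] for _ in range(k)] for _ in range(n + 1)]
    for m in range(n + 1):
        best_error[m][0] = abs(psum(0, m) - avg)

    for b in range(1, k):
        i = n                            # candidate position of the last break
        for m in range(n, -1, -1):       # m and i move left together (Key Idea 3)
            i = min(i, m)
            while i > 0 and psum(i, m) < avg:
                i -= 1                   # grow the final chunk until it reaches avg
            c1, c2 = i, min(i + 1, m)    # the two candidate break positions
            e1 = best_error[c1][b - 1] + abs(psum(c1, m) - avg)
            e2 = best_error[c2][b - 1] + abs(psum(c2, m) - avg)
            if e1 <= e2:
                best_error[m][b] = e1
                best_breaks[m][b] = best_breaks[c1][b - 1] + [c1]
            else:
                best_error[m][b] = e2
                best_breaks[m][b] = best_breaks[c2][b - 1] + [c2]

    return best_breaks[n][k - 1]

For the toy sequence used earlier, dp_optimal_equal_area([5, 2, 4, 7, 2], 2) returns [3], i.e., the chunks (5, 2, 4) and (7, 2).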

2.3.1 Computational Complexity

Algorithm 4 fills a table with n + 1 rows and k columns, and takes constant time to fill each cell. Therefore, the total space and time complexity is O(nk), which is almost linear, as k is usually small (3–10 colors on the map). Note that filling each column (with n + 1 cells) takes O(n) time because, as explained in Key Idea 3, the variables m and i that control the nested loops in lines 12 and 16 of the pseudocode start from n and go backwards together until reaching 0. In other words, these two nested while loops together run in linear time.

As written, the algorithm requires O(nk²) space because each cell of best_breaks stores a list of up to k breaks. However, this can easily be optimized so that each cell stores only the last break and a pointer to the list of the other breaks (stored in a different cell); the full list of optimal breaks is then retrieved at the end of the computation by following these pointers. This way the space complexity becomes O(nk).

2.3.2 Caveat - Using a Different Optimization Criterion

At the start of this section, we defined the optimality criterion to be the average absolute difference from the average chunk sum. An alternative optimality criterion could be formed by using the sum of the squared differences instead. In this case, Key Idea 1 no longer holds, as shifting a break has a different effect on the squared errors. Therefore, we cannot consider only two candidate locations for the final break; instead, we need to consider all possible locations. This requires linear time for computing every cell of the two-dimensional array, leading to a total complexity of O(n²k).
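
For the squared-error criterion, the recurrence for one cell must therefore scan every possible position of the last break; a sketch of the per-cell computation (parameter names are ours; best_error and psum are as in the previous sketch) is:

def best_cell_squared(best_error, psum, avg, m, b):
    """O(m) computation of one DP cell under the squared-error criterion:
    every position j of the last break must be tried."""
    return min(best_error[j][b - 1] + (psum(j, m) - avg) ** 2
               for j in range(m + 1))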

3 Evaluation

Section 2 described a number of variations of the equal area algorithm. They differ in their computational complexity and in the extent to which the areas of the resulting chunks deviate from being equal, which we characterize by the term error. In this section we use a dataset to evaluate the different equal area algorithms as well as their alternatives, namely the naive equal length (quantiles) and the Jenks natural breaks methods. For the equal area algorithms, the evaluation is relative and considers both the quantitative error and the execution time. For the alternative methods, the evaluation is from a visual perspective and via a user study, where the equal area algorithm with the lowest error is used.

The rest of this section is organized as follows. Section 3.1 evaluates the maps that result from the application of the various equal area algorithms to a dataset. Having chosen the equal area algorithm that achieves the lowest quantitative error, Section 3.2 uses a visual perspective to compare its use on the same dataset with the alternative methods using both the Winkel-Tripel and Mercator projections. Section 3.3 repeats the comparison with the aid of a user study. Section 3.4 performs the comparison on an alternative related dataset which provides the motivation for a generalized algorithm (Section 4) that combines the benefits of both the equal area and equal length methods.

Before proceeding further, we first describe the dataset used in our evaluation. It consists of population by country mapped using the Winkel-Tripel projection. Choice of projection is particularly important because we compute equal area breaks using projected area in pixels on the screen rather than real land area. Using a projection that makes no attempt to minimize area distortion could yield considerably different classifications that do not accurately represent geographic distribution.

3.1 Equal Area Algorithm Error Comparison

Table 1 contains a summary of the average errors resulting from implementing a number of the variations of the equal area algorithm given in Section 2 for our world population dataset with the Winkel-Tripel projection, where the goal was to reduce the average error so that the areas assigned to each chunk or color were as equal as possible; the equal length (quantiles) classification serves as the baseline. Notice that for our dataset, using the first greedy algorithm to assign the colors more than halves the error, and using the second greedy algorithm shrinks it even further. Moreover, the optimal dynamic programming algorithm produces a significantly better result than all the others. Since the execution time complexity of the optimal dynamic programming algorithm is good (almost linear), we recommend using it instead of the simpler algorithms, and this is the variant of the algorithm that we use in the comparisons described in the remaining sections.

Algorithm Average Error
Equal length (Quantiles) 34,928
Greedy algorithm 1 15,192
Greedy algorithm 2 5,572
Optimal dynamic programming 3,244
Table 1: Average difference between the optimal sum and the sum of each chunk.

3.2 Visual Comparison

In addition to the error and computational complexity evaluation in Section 3.1, we also created maps resulting from the use of the Jenks natural breaks, equal length, and the optimal dynamic programming equal area algorithms using the Winkel-Tripel projection; they are given in Figures 2(a), 2(c), and 2(e), respectively. Each map contains a legend that indicates the range of data values for each color. In addition, each map contains a scale at its bottom that indicates, for each color, the proportion or number of the countries that are depicted in it.

From the images we see that the lighter colors (corresponding to smaller areas, which generally also have lower populations) dominate when using the Jenks natural breaks algorithm, while the darker colors (corresponding to larger areas, which generally also have higher populations) dominate when using the equal length algorithm. On the other hand, the area-wise color distribution obtained with the equal area algorithm is clearly superior to that of the other two methods. This is primarily because the equal area algorithm reduces the number of highly populated and large countries assigned the darkest color, and increases the number of less populated and smaller countries assigned the lightest color. The natural breaks are hard to predict other than to note that they are far less likely to occur among the countries with a small population, which is why the area of the countries with the lighter color is much greater when using the Jenks natural breaks algorithm than when using the equal length algorithm. On the other hand, the difference in the area of the countries with the lighter color between the Jenks natural breaks algorithm and the equal area method is much smaller than the difference between the Jenks natural breaks algorithm and the equal length method.

The above comparison used the Winkel-Tripel projection because it attempts to minimize area distortion and hence was thought to be more relevant to the goal of our study, which was to demonstrate the utility of the equal area method. It turns out that the nature of the projection did not have a material effect on our evaluation except for the equal area algorithm. In order to see the relative independence of our comparison from the chosen projection, Figures 2(b), 2(d), and 2(f) show choropleth maps using the Mercator projection for the Jenks natural breaks, equal length, and equal area methods, respectively. We observe that the Mercator projection yields almost identical behavior in terms of the color assignment for the Jenks natural breaks and equal length methods as does the Winkel-Tripel projection, even though the two projections distort areas very differently: the Mercator projection greatly overestimates the areas of countries near the poles and underestimates those near the Equator. This distortion is the reason for the difference between the projections for the equal area algorithm, where we see that the exaggeration of the area of Greenland causes a reduction in the number of the lightest colored countries in the Mercator projection vis-a-vis the number in the Winkel-Tripel projection. This can be observed by looking at the African continent, where the number of countries with the lightest color is lower in the Mercator projection than in the Winkel-Tripel projection.

(a) Natural Breaks — Winkel-Tripel
(b) Natural Breaks — Mercator
(c) Equal length (Quantiles) — Winkel-Tripel
(d) Equal length (Quantiles) — Mercator
(e) Equal area — Winkel-Tripel
(f) Equal area — Mercator
Figure 2: Equal area compared to two common choropleth classification methods on Mercator and Winkel-Tripel projections.

3.3 User study

To evaluate the utility of the equal area classification algorithm for the coloring of choropleth maps, we conducted a survey of 30 participants recruited through Amazon Mechanical Turk. Participants were shown four maps of the continental United States containing the same data classified using four different techniques (equal area, equal length, equal interval, and natural breaks), but they were not informed that the underlying data for each map were the same. They were then asked questions about the underlying data to judge how effective each map was at conveying patterns in the data to readers of the map. Finally, participants were asked to make a subjective comparison between the four maps by indicating which map they felt was most and least visually appealing. Similar task-based approaches to evaluating choropleth maps have been used in prior studies [3, 8, 23].

Before asking participants about the contents of the maps, we gauged their background knowledge in cartography and in the geography of the United States by asking them to rank their familiarity with these areas as either “not familiar”, “slightly familiar”, or “highly familiar”. For both questions, few participants ranked themselves as not familiar. Responses to these questions are tabulated in Table 2.

United States Geography Cartography
Not Familiar 2 5
Slightly Familiar 16 22
Highly Familiar 12 3
Table 2: User study participant background familiarity with cartography and United States geography.

The data we chose to use in creating the maps was the number of confirmed COVID-19 cases per capita in each state as reported by the United States Centers for Disease Control and Prevention on April 22, 2020. Participants were not informed that this was the dataset because we did not want prior knowledge of the distribution of COVID-19 cases to affect responses. Even though higher resolution data is available (e.g., at the county or zip code level), state level data was chosen because states have considerably greater variation in area than any smaller division. The classification resulting from applying the equal area algorithm approaches that obtained from equal length as the variation in area between objects in the dataset approaches zero. An evaluation of equal area classification under such circumstances would not be interesting, as the results would be the same as any prior evaluation of equal length classification (e.g., Brewer and Pickle [3]).

Choropleth maps are a lossy abstraction of the data used in their creation, so we did not expect participants to be able to make precise statements about data values in individual states. Instead, we asked participants to make comparative generalizations about the relative data values in different regions of the map. In particular, we had participants compare the western (W), midwestern (MW), northeastern (NE), and southern (S) regions of the United States as defined by the Census Bureau. We did not expect all participants to be familiar enough with United States geography to fully understand these regions, so we included in our survey a map indicating exactly what states are included in each region.

The three questions we asked for each map are the following: (1) which region of the country has the highest average data value, (2) which region of the country has the lowest average data value, and (3) how does the average data value in the midwest compare to that in the south. All questions were posed as multiple choice questions. The possible answers for the first two were the regions enumerated above. The options for the third question were that the average value in the midwest is greater than (GT), less than (LT), or equal to (EQ) the average value in the south. The correct answers to the questions are northeast, west, and that the average value in the midwest is less than that in the south, respectively.

Equal Area Equal Interval Natural Breaks Equal Length
MW NE S W MW NE S W MW NE S W MW NE S W
Question 1 5 17 5 3 4 17 6 3 6 19 3 2 7 18 3 2
Question 2 9 5 6 9 8 8 3 11 7 8 3 12 10 7 5 8
EQ GT LT EQ GT LT EQ GT LT EQ GT LT
Question 3 14 6 10 14 8 8 19 8 3 12 10 8
Percent Correct 40 40 37.78 37.78
Most Appealing 15 4 3 8
Least Appealing 5 12 7 6
Table 3: Responses to questions posed in our user study for four choropleth map classification techniques. The correct responses are NE for Question 1, W for Question 2, and LT for Question 3.

In Table 3 we tabulate participant responses to our questions. We found that overall response accuracy was very similar across all the classification techniques. Equal area and equal interval tied for the highest average percent correct while natural breaks and equal length followed very closely behind. This shows that the choice of classification method is not overly important for accurately conveying patterns in data, at least for this specific dataset.

If this is the case, then it is reasonable to pick a classification based on more subjective criteria, such as how visually appealing it is. In the last two questions of our survey, we had participants indicate which maps they found most and least appealing. Out of 30 responses, 15 listed equal area as the most appealing map while only five listed it as their least favorite. Thus we have clear evidence of a preference for the equal area classification.

(a) Natural Breaks – Population density
(b) Natural Breaks – Area per person
(c) Equal length (Quantiles) – Population density
(d) Equal length (Quantiles) – Area per person
(e) Equal area – Population density
(f) Equal area – Area per person
Figure 3: Population Density and Area Per Person Maps

3.4 Density Maps

Our discussion of the advantages of the equal area method has been in the context of choropleth maps. It is interesting to note that there is a school of thought which holds that a common error in the use of choropleth maps is using them to visualize raw data such as population, where colors serve to differentiate between magnitudes, rather than normalized values such as densities, where the colors are more intuitively used to convey intensities. This is based on the natural tendency to associate darker colors with larger intensities and lighter colors with lower intensities. The use of a normalized measure such as density reflects the distribution of a value over an area. Thus, in the case of raw data values, it is suggested that proportional symbol maps be used instead of choropleth maps [24].

This is fine when the areas are all of approximately the same size, but it is not appropriate for areas that differ greatly in size as is the case with a map of countries which are irregularly shaped. In this case point symbols are likely to overlap and thus greatly mask the different areas. Approaches such as necklace maps have been suggested to resolve this particular issue while continuing to use proportional symbols [25]. It is also worth noting that the equal area method is a form of normalization where the normalization is applied to the embedding space (i.e., the world map) to yield equal-sized areas, while density measures normalize the measured quantity over an area.

Despite the above, we still feel that the use of choropleth maps for the visualization of raw data such as population is important, even if conventional cartographic wisdom frowns on their use and calls instead for the use of densities. Our aim in this paper is to introduce the equal area method to novices, who form the majority of users, rather than just to experienced cartographers, as these novices are more likely to be the producers of the maps that are so common on the web today. Therefore, it is important for us to make sure that the coloring algorithm used in choropleth maps produces good results even if the data is not completely appropriate for being visualized with them. This is what motivated our development of the equal area algorithm. Figures 3(a), 3(c), and 3(e) show choropleth maps for the population density using the Winkel-Tripel projection for the Jenks natural breaks, equal length, and equal area methods, respectively.

Although not shown explicitly here, we note that most of the countries have similar population densities. This means that it is hard to find natural breaks among the smaller population densities, while it is easier to find them among the larger population densities. This is why most of the countries are colored lightly (i.e., yellow) when using the Jenks natural breaks algorithm and only India, China, Indonesia, and a few European countries are colored darkly; the natural breaks are simply too few. We also observe that the larger countries tend to have smaller population densities, which is why in the equal length algorithm a large area is colored yellow, although much less than with the Jenks natural breaks algorithm, but still substantially more than the area colored yellow in the equal area algorithm, where the total area of the countries associated with each color is approximately the same.

One way to overcome the drawback associated with the use of the Jenks natural breaks and equal length algorithms for population density is to note that a more appropriate measurement alternative to population density is the inverse measurement of average area per person. Figures 3(b), 3(d), and 3(f) show choropleth maps for the average area per person using the Winkel-Tripel projection for the Jenks natural breaks, equal length, and equal area methods, respectively. In this case, we find that fewer of the countries have similar average area per person, but this number is still substantial, and thus it is still hard to find natural breaks (e.g., some countries, like the US, are still colored yellow, as in the density maps). Nevertheless, there is greater variation in the colors of the countries. Taking the inverse measurement of average area per person makes no difference in the map produced by the equal length algorithm other than turning the formerly lightly colored countries dark and the formerly darkly colored countries light. In the example map, we find that since the larger countries tend to have higher values of average area per person, a large part of the map is colored darkly. The fact that the total area of the countries associated with each color is approximately the same for the equal area algorithm means that, when using it, there is no difference in the color distribution between the population density measurement and its inverse other than the change in the color associated with each country from light to dark and from dark to light.

4 Optimized Algorithms

Recall that each map contains one legend for the boundaries of the data values for each color, and another for the proportion of countries associated with each color. This information indicates the extent to which each map deviates from the equal length map, in the sense of whether the number of countries displayed with each color deviates from being the same. Looking at this information, we notice that the number of countries colored red (the darkest color) is usually small. However, this is not the case for the equal area population density map, where over half of the countries are colored red, thereby covering almost all of Europe and Asia and leading to a perception that the map is not balanced even though the colors are equally distributed in terms of area. This prompted us to try to improve our maps further by also taking into account the number of countries that are assigned to each color. In other words, we attempt to strike a better balance between the equal area and the equal length classifications.

4.1 Optimized Greedy Algorithm

1: Input: areas – the n areas sorted by the value of the associated property; start – index of the first element of the current chunk; k – the number of remaining chunks.
2: U ← 2 ▷ Upper bound on the chunk length/sum ratio. Chunk length and sum should not exceed twice those of the average chunk.
3: L ← 1/2 ▷ Lower bound on the chunk length ratio. Chunk length should be at least half the length of the average chunk.
4: function OptimizedGreedy(areas, start, k)
5:    if k == 1 then
6:       return [] ▷ One color – no breaks.
7:    AVG ← PSum(start, n) / k ▷ Average sum of the remaining chunks.
8:    average_length ← (n − start) / k ▷ Average length of the remaining chunks.
9:    length ← 0 ▷ The length of the new chunk.
10:   chunk_sum ← 0 ▷ The sum of the new chunk.
11:   for i in start .. len(areas) − 1 do
12:      chunk_sum ← chunk_sum + areas[i]
13:      length ← length + 1
14:      if (chunk_sum ≥ AVG and length ≥ L × average_length)
                or length ≥ U × average_length
                or chunk_sum ≥ U × AVG then
15:         return [i + 1] + OptimizedGreedy(areas, i + 1, k − 1)
Algorithm 5 Greedy algorithm for partitioning a sequence of numbers into k chunks with roughly equal sums, while simultaneously trying to balance the lengths of the chunks as well. The algorithm returns the breaks. Complexity: O(n).

In our construction of an equal area algorithm we proposed two greedy algorithms (Section 2.2). In the first of these algorithms, we added elements to a chunk until its sum exceeded the average area per chunk, denoted by AVG. We now make a few modifications so that we can achieve a balance in the lengths of the chunks (the number of regions per color) in addition to balancing the sums of the areas of their elements. In particular, we keep inserting elements into a chunk until one of the following conditions is satisfied.

  1. The chunk is full in terms of area (as previously), and its length is at least half of average_length (a new condition that we introduce in order to ensure that a color is not associated with too few countries).

  2. The chunk is already twice the average_length (a new condition to ensure that we don’t have too many countries associated with the same color).

  3. The chunk’s area reaches double AVG (i.e., it’s getting too big).

It is important to note that, as the chunks are not necessarily evenly balanced, if we blindly follow the above conditions, then we might run out of elements before we get to the last chunk. We mitigate this issue by determining the breaks recursively. In particular, each time we find a break, we recursively apply the algorithm to the rest of the elements. The pseudocode for the algorithm is given by Algorithm 5. It is a recursive procedure that finds the greedy breaks as described above. It is invoked as OptimizedGreedy(areas, 0, k) and it returns the indices of the breaks.
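
A Python sketch of this recursive greedy procedure follows (naming is our own; the default arguments correspond to U and L above, and recomputing the per-chunk targets on the remaining suffix in each call is our way of realizing the mitigation just described):

def optimized_greedy(areas, start, k, upper=2.0, lower=0.5):
    """Balanced greedy split (Algorithm 5): close a chunk when its area is
    full and it is long enough, or when it becomes too long or too big.
    Returns the list of break indices for areas[start:]."""
    if k == 1:
        return []                              # one color left: no more breaks
    n = len(areas)
    avg_sum = sum(areas[start:]) / k           # target area of each remaining chunk
    avg_len = (n - start) / k                  # target length of each remaining chunk
    chunk_sum, length = 0.0, 0
    for i in range(start, n):
        chunk_sum += areas[i]
        length += 1
        full = chunk_sum >= avg_sum and length >= lower * avg_len
        too_long = length >= upper * avg_len
        too_big = chunk_sum >= upper * avg_sum
        if full or too_long or too_big:
            return [i + 1] + optimized_greedy(areas, i + 1, k - 1, upper, lower)
    return []                                  # ran out of elements

It is invoked as optimized_greedy(areas, 0, k), mirroring the pseudocode.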

4.2 Optimized Dynamic Programming Algorithm

Recall that when we discussed the equal area algorithm, we minimized ERROR = (1/k) · Σ_{i=1..k} |S_i − AVG|. We now modify this expression to also account for the lengths of the chunks. Therefore, our goal is to minimize the following formula:

ERROR_w = (1 − w) · (1/k) · Σ_{i=1..k} |S_i − AVG| / AVG + w · (1/k) · Σ_{i=1..k} |len_i − AVG_LEN| / AVG_LEN,

where S_i and len_i are the area sum and the number of regions of the i-th chunk, AVG_LEN = n/k is the average chunk length, and w is a user-defined constant specifying the weight to be given to the lengths. We use the term W-score to describe it on account of its similarity to the concept of an F-score used in information retrieval to vary the influence of precision and recall [21]. The left term in the summation corresponds to the normalized average error in area, while the right term corresponds to the normalized average error in length. Therefore, setting w = 0 yields the equal area algorithm, while setting w = 1 yields the equal length (quantiles) algorithm. Setting w strictly between 0 and 1 yields an intermediate measure between equal area and equal length, which is what we are seeking.

Again, as in Section 2.3, we use dynamic programming to optimize the new criterion. However, the trick that we used to find only two candidate locations for the last break does not work here, as moving a break to the left or the right changes not only the sums of the chunks but also their lengths. To overcome this, we consider all possible locations for each break, at the cost of a higher complexity of O(n²k). The pseudocode for the algorithm is given by Algorithm 6.

1: Input: numbers – the n values sorted by the associated property; k – the number of chunks; w – the weight given to the lengths factor.
2: function DPOptimized(numbers)
3:    AVG ← sum(numbers) / k ▷ Average chunk sum.
4:    AVG_LEN ← n / k ▷ Average chunk length.
5:    best_error ← 2D array with n + 1 rows (from 0 to n) for the end index m, and k columns (from 0 to k − 1) for the number of breaks b.
6:    best_breaks ← 2D array like above, for the breaks.
7:    for m in 0 .. n do ▷ Fill the first column (column 0).
8:       best_error[m][0] ← (1 − w) · |PSum(0, m) − AVG| / AVG + w · |m − AVG_LEN| / AVG_LEN
9:       best_breaks[m][0] ← []
10:   for b in 1 .. k − 1 do
11:      for m in 0 .. n do
12:         best ← uninitialized
13:         best_break ← uninitialized
14:         for j in 0 .. m do
15:            error ← best_error[j][b − 1]
                        + (1 − w) · |PSum(j, m) − AVG| / AVG
                        + w · |(m − j) − AVG_LEN| / AVG_LEN
16:            if best is uninitialized or error < best then
17:               best ← error
18:               best_break ← j
19:         best_error[m][b] ← best
20:         best_breaks[m][b] ← best_breaks[best_break][b − 1] + [best_break]
21:   return best_breaks[n][k − 1]
Algorithm 6 Optimized dynamic programming algorithm for partitioning a sequence of numbers into k chunks with roughly equal sums, while simultaneously balancing the lengths of the chunks. The result is something between equal length (quantiles) and equal area. w specifies the desired weight to be given to the lengths factor. The algorithm returns the breaks. Complexity: O(n²k).
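
A Python sketch of Algorithm 6 follows (names are our own; the normalization of the two error terms by AVG and AVG_LEN is an assumption consistent with the "normalized" terms in the formula above):

def dp_optimized(numbers, k, w):
    """Balanced DP classification (Algorithm 6): minimize a weighted mix of the
    normalized area error and the normalized length error. w = 0 gives equal
    area, w = 1 gives equal length (quantiles). Complexity O(n^2 k)."""
    n = len(numbers)
    avg = sum(numbers) / k                     # average chunk sum
    avg_len = n / k                            # average chunk length

    prefix = [0.0]
    for x in numbers:
        prefix.append(prefix[-1] + x)

    def cost(i, j):                            # cost of the chunk numbers[i:j]
        area_err = abs((prefix[j] - prefix[i]) - avg) / avg
        len_err = abs((j - i) - avg_len) / avg_len
        return (1 - w) * area_err + w * len_err

    best_error = [[float("inf")] * k for _ in range(n + 1)]
    best_breaks = [[[] for _ in range(k)] for _ in range(n + 1)]
    for m in range(n + 1):
        best_error[m][0] = cost(0, m)

    for b in range(1, k):
        for m in range(n + 1):
            for j in range(m + 1):             # try every position of the last break
                err = best_error[j][b - 1] + cost(j, m)
                if err < best_error[m][b]:
                    best_error[m][b] = err
                    best_breaks[m][b] = best_breaks[j][b - 1] + [j]

    return best_breaks[n][k - 1]

With w = 0 the cost reduces to the (scaled) equal area criterion of Algorithm 4, and with w = 1 the breaks simply balance the chunk lengths, i.e., the quantiles classification.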

Figure 4 shows choropleth maps using the optimized algorithm with an intermediate value of w for the population, population density, and average area per person datasets using the Winkel-Tripel projection. Here we see that varying w in this way increases (reduces) the proportion of lightly colored countries in the population map relative to the equal length (equal area) algorithm, reduces (increases) the proportion of lightly colored countries in the population density map relative to the equal length (equal area) algorithm, and increases (reduces) the proportion of lightly colored countries in the average area per person map relative to the equal length (equal area) algorithm. Readers can explore this by varying w with the slider available in our online tool (www.visumaps.com/more/final_surveys/W_BlackBorders.html). In fact, when we asked 15 people on Amazon Mechanical Turk for their preferred values of w, we received a range of different answers. The most popular answers were 0.3, 1, and 0.4 for the population, population density, and average area per person datasets, respectively.

(a) Population
(b) Population density
(c) Area per person
Figure 4: Statistics mapped using the optimized algorithm with an intermediate value of w

5 Concluding Remarks

It is worth noting that, generally speaking, larger countries tend to have higher populations and lower population densities. Since the equal length algorithm assigns colors so that an equal number of countries is assigned to each color, many large countries are assigned dark colors in the population map, and many large countries are assigned light colors in the population density map, resulting in a fairly dark population map and a light population density map. Users seemed to prefer the lighter maps, which led us to believe that people prefer maps dominated by light colors rather than dark colors. This theory needs to be tested in future studies. It is also interesting to research whether changing the color of the borders between regions from black to white has any effect on the results.

The feedback we received also revealed that some users prefer more variability (contrast) between the colors of adjacent countries in order to facilitate the easier discovery of differences. In particular, the criteria that we discussed in this paper could also be modified to minimize the number of adjacent regions with the same color.

We also believe that people like to be able to easily identify extremes on the map (e.g., densest country). To address such an issue, we could devise an algorithm that assigns the colors based on a variant of a bell curve, such that fewer regions fall in the initial and final color ranges. Another direction for future work is to devise an algorithm that exploits the middle ground between the equal area and equal length algorithms that ignore the values when partitioning, and clustering algorithms such as the Jenks natural breaks algorithm which focus on finding natural breaks in the values.

It is interesting to note that the equal area and equal length methods are similar in spirit to spatial indexing methods that are differentiated on the basis of whether they organize the underlying embedding space (region quadtrees) or the underlying data (point quadtrees) [22]. In the former case, the areas for each color are the same, while in the latter the number of objects associated with each color is the same. We used a similarly-spirited analogy to compare the application of the equal area method to magnitude data to the use of normalized data such as densities in Section 3.4. In particular, we pointed out that the equal area method applies the normalization to the embedding space (i.e., the world map) for which the data has been collected to yield equal-sized areas, while density measures normalize the measured quantity over an area. Thus the equal area method can be said to be an alternative to the argument that raw data values should be normalized using measures such as densities before visualizing them with a choropleth map.


References

  • Armstrong et al. [2003] M. P. Armstrong, N. Xiao, and D. A. Bennett. Using Genetic Algorithms to Create Multicriteria Class Intervals for Choropleth Maps. Annals of the Association of American Geographers, 93(3):595–623, Sept. 2003.
  • Brewer [1994] C. A. Brewer. Color Use Guidelines for Mapping and Visualization. Modern Cartography Series, pages 123–147, Jan. 1994.
  • Brewer and Pickle [2002] C. A. Brewer and L. Pickle. Evaluation of Methods for Classifying Epidemiological Data on Choropleth Maps in Series. Annals of the Association of American Geographers, 92(4):662–681, Dec. 2002.
  • Carr et al. [1992] D. B. Carr, A. R. Olsen, and D. White. Hexagon Mosaic Maps for Display of Univariate and Bivariate Geographical Data. Cartography and Geographic Information Systems, 19(4):228–236, Jan. 1992.
  • Central Intelligence Agency [2019] Central Intelligence Agency. The world factbook 2019 — country comparison :: GDP - per capita (PPP), 2019.
  • Chang and Schiewe [2018] J. Chang and J. Schiewe. Task-oriented Data Classification of Choropleth Maps for Preserving Local Extreme Values. In Photogrammetrie - Fernerkundung - Geoinformatik - Kartographie, pages 213–218, Mar. 2018.
  • Goldberg and Gott [2007] D. M. Goldberg and J. R. Gott III. Flexion and Skewness in Map Projections of the Earth. Cartographica, 42(4):297–318, 2007.
  • Du et al. [2018] Y. Du, L. Ren, Y. Zhou, J. Li, F. Tian, and G. Dai. Banded Choropleth Map. Personal Ubiquitous Computing, 22(3):503–510, June 2018.
  • Harrower and Brewer [2003] M. Harrower and C. A. Brewer. Colorbrewer.org: An online tool for selecting colour schemes for maps. The Cartographic Journal, 40(1):27–37, 2003.
  • Jenks [1977] G. F. Jenks. Optimal data classification for choropleth maps. Geography Department Occasional Paper No. 2 2, University of Kansas, Lawrence, KS, 1977.
  • Jenks and Caspall [1971] G. F. Jenks and F. C. Caspall. Error on Choroplethic Maps: Definition, Measurement, Reduction. Annals of the Association of American Geographers, 61(2):217–244, 1971.
  • Jern et al. [2009] M. Jern, J. Rogstadius, and T. Åström. Treemaps and Choropleth Maps Applied to Regional Hierarchical Statistical Data. In Proceedings of the 13th International Conference on Information Visualisation, IV, pages 403–410, Barcelona, Spain, July 2009.
  • Jiang [2013] B. Jiang. Head/tail breaks: A new classification scheme for data with a heavy-tailed distribution. The Professional Geographer, 65(3):482–494, 2013.
  • Johnson and Shneiderman [1991] B. Johnson and B. Shneiderman. Tree-maps: a space-filling approach to the visualization of hierarchical information structures. In Proceedings of IEEE Visualization ’91, pages 284–291, San Diego, CA, Oct. 1991.
  • Lima et al. [2019] R. Lima, M. S. B. Leal, Y. P. d. S. Brito, C. G. Resque dos Santos, and B. S. Meiguins. ChoroLibre: Supporting georeferenced demographic information visualization through hierarchical choropleth maps. In Proceedings of 23rd International Conference in Information Visualization – Part II, pages 56–61, Adelaide, Australia, July 2019.
  • Lloyd and Steinke [1976] R. Lloyd and T. Steinke. The Decision making Process for Judging the Similarity of Choropleth Maps. The American Cartographer, 3(2):177–184, 1976.
  • Lloyd and Steinke [1977] R. Lloyd and T. Steinke. Visual and Statistical Comparison of Choropleth Maps. Annals of the Association of American Geographers, 67(3):429–436, 1977.
  • McNabb et al. [2018] L. McNabb, R. S. Laramee, and R. Fry. Dynamic Choropleth Maps – Using Amalgamation to Increase Area Perceivability. In Proceedings of the 22nd International Conference Information Visualisation, IV, pages 284–293, Fisciano, Italy, July 2018.
  • Murray and Shyy [2000] A. T. Murray and T.-K. Shyy. Integrating attribute and space characteristics in choropleth display and spatial data mining. International Journal of Geographical Information Science, 14(7):649–667, Oct. 2000.
  • Robinson et al. [1984] A. H. Robinson, R. D. Sale, J. L. Morrison, and P. C. Muehrcke. Elements of cartography. Wiley, New York, 5th ed. edition, 1984.
  • Salton [1989] G. Salton. Automatic Text Processing: The Transformation, Analysis, and Retrieval of Information by Computer. Addison-Wesley, Reading, MA, 1989.
  • Samet [2006] H. Samet. Foundations of Multidimensional and Metric Data Structures. Morgan-Kaufmann, San Francisco, 2006. (Translated to Chinese ISBN 978-7-302-22784-7).
  • Schiewe [2019] J. Schiewe. Empirical Studies on the Visual Perception of Spatial Patterns in Choropleth Maps. Journal of Cartography and Geographic Information, 69(3):217–228, Sept. 2019.
  • Slocum et al. [2009] T. A. Slocum, R. B. McMaster, F. C. Kessler, and H. H. Howard. Thematic Cartography and Geovisualization. Prentice Hall series in geographic information science. Pearson Prentice Hall, Upper Saddle River, NJ, 3rd ed. edition, 2009.
  • Speckmann and Verbeek [2010] B. Speckmann and K. Verbeek. Necklace Maps. IEEE Transactions on Visualization and Computer Graphics, 16(6):881–889, Nov. 2010.
  • Steinke and Lloyd [1983] T. Steinke and R. Lloyd. Judging The Similarity of Choropleth Map Images. Cartographica: The International Journal for Geographic Information and Geovisualization, 20(4):35–42, 1983.
  • Tobler [2004] W. Tobler. Thirty-five years of computer cartograms. Annals of the Association of American Geographers, 94:58–73, 2004.
  • Xiao and Armstrong [2006] N. Xiao and M. P. Armstrong. ChoroWare: A Software Toolkit for Choropleth Map Classification. Geographical Analysis, 38(1):102–121, 2006.
  • Zhang and Maciejewski [2017] Y. Zhang and R. Maciejewski. Quantifying the visual impact of classification boundaries in choropleth maps. IEEE Transactions on Visualization and Computer Graphics, 23(1):371–380, Aug. 2017.