Area cartograms are map-based data visualizations in which the area of each map region (e.g., state or province) is proportional to a numeric data value (e.g., population or gross domestic product). We say that an area cartogram is contiguous if it preserves conventional map topology such that neighboring regions on a conventional (e.g., equal-area) map remain neighbors on the cartogram, and vice versa. Consider the contiguous cartogram in Figure 1 displaying agriculture sector output by state in the United States in 2018. Notice that, while Colorado (CO) appears larger in area than Iowa (IA) on the conventional map because Colorado has more land area, Iowa’s area appears over three times larger than Colorado’s on the cartogram. The ratio of the cartogram areas reflects the fact that Iowa’s agriculture sector output of US$ billion is more than three times higher than Colorado’s agriculture sector output of US$ billion (Kassel, 2021). While various types of non-contiguous cartograms exist, we focus on contiguous cartograms, which performed well in a previous evaluation of different cartogram types (Nusrat et al., 2016).
Cartograms first became popular in print media in the early 20th century (Tobler, 2004; Hennig, 2018). Over the last three decades, the widespread adoption of computer technology and the Internet has created new opportunities for cartograms to be presented electronically (Ware, 1998). Consequently, cartograms have become popular in online media as an accompaniment to news articles and as a way to display statistics such as election results (Almukhtar et al., 2018; Andre et al., 2020; Evershed, 2013).
Despite their popularity, there is a dearth of software tools for generating cartograms that are accessible to non-technical users unfamiliar with traditional Geographic Information Systems (GIS) software. A survey of cartogram generation tools by Markowska and Korycka-Skorupa (2015) revealed several web and desktop applications. However, as of now, many of these tools are either no longer functional or designed to be used in conjunction with a GIS software package, rendering them firmly out of reach for non-technical users. Unmentioned in the survey of Markowska and Korycka-Skorupa (2015), the fBlog Online Cartogram Tool provides a simple web interface for users wanting to generate cartograms of the United States and Europe (van den Broek, 2012). While its feature set is limited, it remains one of the only such easy-to-use software tools in working order.
To address this lack of easy-to-use and effective cartogram generation software, Tingsheng et al. (2019) developed go-cart.io, a web application aimed at non-technical users that generates cartograms of a variety of geographies from uploaded data. We present the results of an experiment to evaluate the usability of go-cart.io and fBlog, which are, to the best of our knowledge, the only two functional web-based cartogram generation tools presently available. Participants were required to generate a cartogram of a provided data set with go-cart.io and fBlog. The order in which participants encountered the two cartogram tools was randomized. Afterwards, participants referred to a figure generated from their cartogram to answer questions about the data set. Finally, participants rated the usability of both cartogram generation tools on the widely adopted System Usability Scale and left additional written feedback. On average, participants gave go-cart.io higher scores than fBlog. Still, the usability of go-cart.io is deficient in several areas. We recommend that web-based cartogram generation tools provide better user interface layout and data entry methods.
2 Related work
2.1 Usability of GIS software
The International Organization for Standardization (2018) defines system usability as the combination of three criteria: effectiveness, efficiency, and satisfaction. Komarkova et al. (2009) adopted these three criteria as their framework to evaluate GIS usability. Despite the increasing prevalence of geospatial data, Komarkova et al. (2009) note that traditional GIS software tools suffer from poor usability. They are often desktop applications, requiring users to install and configure them. Often not designed with usability in mind (Unrau & Kray, 2019), GIS user interfaces are usually complicated and require extensive training before they can be used quickly and comfortably. The reliance of some GIS tools on proprietary file formats is also a concern for users who need integration with other software systems (Komarkova et al., 2009). The emergence of web-based GIS tools presents an opportunity to address some of these concerns because these tools do not need to be installed and configured. However, web-based applications must overcome additional challenges. While it may be acceptable for desktop GIS packages to require extensive training for users to take advantage of their rich feature set, web applications must cater to users who are not tech-savvy and are often in a hurry.
In a survey of GIS usability studies, Unrau and Kray (2019) pinpointed additional areas where GIS tools possess poor usability. Over half the tools surveyed were reported to give poor error messages that did not provide any indication of how to fix the problem encountered. Moreover, Unrau and Kray (2019) found that many users failed to complete study tasks because the interface of the studied GIS tool provided no visual indication of how to proceed. These findings echo the results of a survey of desktop GIS software users conducted by Davies and Medyckyj-Scott (1994). Participants in this survey also complained of “total nonsense” error messages and being “unsure what to do when they sit in front of a GIS” (Davies & Medyckyj-Scott, 1994). This latter point may result from the failure of many GIS software tools to conform to traditional norms of software interface layout, preventing users from applying general software skills to GIS systems. Davies and Medyckyj-Scott (1994) also found that most GIS systems do not do enough to support novice and infrequent users. They suggested that GIS software should provide better documentation and online help because the presence of these features was associated with a higher usability rating by survey participants.
2.2 Cartogram generation tools
Existing cartogram generation tools also suffer from some of the usability challenges discussed above. Markowska and Korycka-Skorupa (2015) conducted an extensive survey of cartogram generation tools. Out of the five studied tools, three support generating contiguous cartograms. One tool, MAPresso, relies on obsolete Java applet technology and is no longer usable (Herzog, 2003). The other two tools, ScapeToad and the Cartogram Utility for ArcGIS, remain functional but are inaccessible to non-technical users. The latter is a plugin for ArcGIS, a commercial GIS software package requiring specialized training to use effectively (Markowska & Korycka-Skorupa, 2015). ScapeToad, while a standalone desktop application, reads input data from ESRI shapefiles, a geographic data file format that can be produced only with specialized knowledge of GIS software or programming tools (Andrieu et al., 2008). While all above-mentioned tools aim for a high degree of automation, the recently developed Windows application Cartogram Studio (Kronenfeld, 2021) focuses on the manual construction of cartograms. On one hand, this feature makes Cartogram Studio an excellent didactic tool. On the other hand, manual customization of cartograms is time-consuming (Kronenfeld, 2018), limiting the practical usability of Cartogram Studio outside an educational setting.
Markowska and Korycka-Skorupa (2015) quantify the performance of the generation tools they studied using a numeric scale that awards points for possessing certain functionality (e.g., whether the tool supports saving generated cartograms in a vector image format). However, they do not explicitly consider usability in their evaluation. This omission, combined with the development of new tools like go-cart.io, provides an opportunity for an updated, usability-focused assessment of currently available cartogram generation tools.
2.3 System Usability Scale
Brooke’s (1996) System Usability Scale (SUS) questionnaire provides a standardized method of quantifying the usability of software and other systems. The questionnaire comprises ten statements alternating between positive and negative sentiment. Table 2.3 contains all of the SUS items. The alternating sentiment of the statements ensures that participants carefully consider each questionnaire item (Davies & Medyckyj-Scott, 1994). Respondents indicate their agreement or disagreement with each statement on a 5-point Likert scale. Since its development, the SUS has become widely adopted and has been evaluated as one of the best-performing surveys for measuring system usability (Lewis, 2018). Unrau and Kray (2019) also report that the SUS has been adopted by many evaluations of GIS specifically. Therefore, we adopt the SUS as the main measure of cartogram generation tool usability in our experiment.
3 Overview of go-cart.io
The web application go-cart.io allows users to generate cartograms for a set of selected geographies using their own data. The generation tool uses modern web technologies and runs in any up-to-date web browser, without requiring users to install additional software. Table 3 summarizes the features offered by go-cart.io as compared to existing cartogram generation tools. Cartograms are generated on a cloud server using the fast flow-based method developed by Gastner et al. (2018). Figure 2 provides a screenshot of the go-cart.io interface. Numbers in the figure highlighting the user interface elements of go-cart.io correspond to the steps of the instructions below. To generate a cartogram, users must:
Select the geography for which they want to generate a cartogram from the drop-down list at the top-right. There are currently available geographies on go-cart.io, including countries, sub-country divisions (e.g., states and provinces), and multinational political entities like the European Union and ASEAN countries.
Input numeric data and colors for each region of the selected geography. Users may do this in one of two ways:
Downloading a template spreadsheet in comma-separated values (CSV) format by clicking the “Download CSV Template” button on the top-left. After entering numeric data, as well as colors in hex-code format for each region, users may upload their edited CSV file by clicking the “Upload Data” button on the top-right.
Clicking the “Edit” button at the top-right, then entering the numeric data and color for each region in an editing interface that appears in a pop-up window, shown in panel (b) of Figure 2.
Confirm that their numeric data are appropriate for a cartogram. In most cases, cartograms should only be used for data that add up to an interpretable total (e.g., absolute population or gross domestic product by region, but not gross domestic product per capita; see Tingsheng et al., 2020). To aid users, go-cart.io displays users’ numeric data in a pie chart, as shown in panel (c) of Figure 2. If the pie chart is an acceptable visualization of users’ numeric data, they may proceed to generate the cartogram. Otherwise, they should select different numeric data.
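To make the CSV workflow of step 2 concrete, the sketch below builds and validates a small template-like file before upload. The column names (`Region`, `Data`, `Colour`) and the values are illustrative assumptions, not necessarily go-cart.io’s actual template headers:

```python
import csv
import io

# Hypothetical template layout: one row per region, with a numeric
# value and a hex color code. go-cart.io's real template may use
# different column names; the values below are placeholders.
rows = [
    {"Region": "Iowa", "Data": "3.0", "Colour": "#2166ac"},
    {"Region": "Colorado", "Data": "1.0", "Colour": "#b2182b"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["Region", "Data", "Colour"])
writer.writeheader()
writer.writerows(rows)

# Basic checks before uploading: data values parse as numbers and
# colors are #rrggbb hex codes.
for row in csv.DictReader(io.StringIO(buf.getvalue())):
    float(row["Data"])  # raises ValueError on non-numeric input
    assert row["Colour"].startswith("#") and len(row["Colour"]) == 7
```

Validating the file locally in this way catches the two error classes discussed later (wrong numeric data, malformed colors) before the tool ever sees the upload.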
Once go-cart.io generates a cartogram from the given data, users may preview and interact with it. Following recommendations from Dent (1975), go-cart.io always displays generated cartograms alongside the corresponding conventional map and a square-shaped area-to-value legend as an anchor stimulus, as shown in panel (a) of Figure 2. go-cart.io also provides the following interactive features:
Infotip: Hovering the mouse over a region on the cartogram or conventional map causes an infotip to appear next to the mouse cursor with the region’s name, population, land area, and numeric data used to generate the cartogram.
Linked brushing: Hovering the mouse over a region on the cartogram or conventional map highlights the hovered-over region and the corresponding region on the other map. This feature is implemented by lightening the selected color for the region.
Map-switching animation: Using the drop-down menu above the cartogram, users may switch between the conventional map, population cartogram, and user-generated cartogram. Each time a new map is selected, the currently-selected map morphs into the newly selected map during a one-second animation.
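The highlight used for linked brushing can be illustrated by blending a region’s hex color toward white. This is a hypothetical sketch of such a lightening effect, not go-cart.io’s actual implementation (whose blending method and factor are not documented here):

```python
def lighten(hex_color: str, amount: float = 0.4) -> str:
    """Blend a '#rrggbb' color toward white by the given fraction.

    amount = 0 leaves the color unchanged; amount = 1 yields white.
    """
    r, g, b = (int(hex_color[i:i + 2], 16) for i in (1, 3, 5))

    def mix(c: int) -> int:
        # Move each channel part of the way toward 255 (white).
        return round(c + (255 - c) * amount)

    return "#{:02x}{:02x}{:02x}".format(mix(r), mix(g), mix(b))
```

For example, `lighten("#2166ac")` produces a visibly paler blue of the same hue, which is one simple way to highlight a hovered-over region while keeping it recognizable.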
Users may export or share their generated cartogram by clicking the relevant buttons at the bottom of the page. Cartograms may be downloaded in the Scalable Vector Graphics (SVG) format for inclusion in a report or presentation as a figure, or in GeoJSON format for import into GIS software. Users may also opt to share their generated cartogram on popular social media sites with a unique link.
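To illustrate how an exported GeoJSON file might be consumed downstream, the sketch below parses a minimal FeatureCollection and computes each region’s polygon area with the shoelace formula (in a cartogram export, these areas should be proportional to the data values). The feature properties and geometry are illustrative assumptions, not the actual go-cart.io output schema:

```python
import json

# A minimal GeoJSON FeatureCollection resembling a cartogram export;
# property names and coordinates here are illustrative only.
geojson = json.loads("""{
  "type": "FeatureCollection",
  "features": [{"type": "Feature",
                "properties": {"name": "Iowa"},
                "geometry": {"type": "Polygon",
                             "coordinates": [[[0,0],[1,0],[1,1],[0,1],[0,0]]]}}]
}""")

def shoelace_area(ring):
    # Shoelace formula: area of a simple polygon given a closed ring
    # of [x, y] vertices (first vertex repeated at the end).
    return abs(sum(x1 * y2 - x2 * y1
                   for (x1, y1), (x2, y2) in zip(ring, ring[1:]))) / 2

for feature in geojson["features"]:
    ring = feature["geometry"]["coordinates"][0]
    print(feature["properties"]["name"], shoelace_area(ring))  # → Iowa 1.0
```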
4.1 Cartogram software
When evaluating the usability of a software system, Lewis (2018) recommends performing both norms-based evaluations, in which a system’s usability is judged against standards generally applicable to software systems, and competitive evaluations against similar products. Norms-based evaluations are insufficient alone because they may be unduly harsh or lenient depending on the category of software being considered. To conduct a competitive evaluation of go-cart.io’s usability, we performed a survey of similar cartogram generation tools. At present, the fBlog Online Cartogram Tool is the only such tool that is web-based and does not require users to download any software programs or browser plugins.
Figure 3 shows a screenshot of the fBlog user interface, which has a more linear layout than go-cart.io. To generate a cartogram, users must:
Select whether they would like to generate a cartogram of the United States or Europe.
Input numeric data and colors for each region of the selected geography in the text boxes on the page. Alternatively, users may choose from a few preset numeric data sets, including population and gross domestic product, by clicking the appropriate button at the bottom-right of the numeric input section.
Fill in the captcha and click “Create Map”. The cartogram image may be previewed in the browser and downloaded in Portable Network Graphics (PNG) format, but no interactive analysis tools are provided.
The experiment comprised cartogram generation and analysis tasks. During the generation tasks, participants were instructed to generate a cartogram of a provided data set using one of the two generation tools (i.e., go-cart.io or fBlog). Data sets provided for both tools were for the United States because the United States is one of the two maps available on both generation tools. A different data set was used for each generation tool to avoid a learning effect upon subsequent generation and analysis tasks. Both data sets involve agricultural data by state. During the go-cart.io generation task, participants generated a cartogram of 2018 agricultural output by state (Kassel, 2021), while for the fBlog generation task participants generated a cartogram of 2017 crop sales by state (United States Department of Agriculture, 2021). The usage of similar data sets helped to equalize the difficulty of the subsequent analysis tasks. Additionally, we anticipated most participants would be unfamiliar with these data sets, reducing the likelihood they would rely on their own knowledge of the data sets to complete the analysis tasks.
Participants could not ask the experiment supervisor for help during generation tasks, but they could reference the written tutorial provided by each generation tool on its website. While the fBlog tutorial focuses on how to choose good data and colors for a cartogram, the go-cart.io tutorial provides step-by-step instructions for generating a cartogram once a data set is in hand. Screenshots of both tutorials are available as online supplemental material on the publisher’s website. Participants who could not complete a generation task could skip it and proceed to the tasks for the next generation tool.
Upon completing a generation task, participants were presented with the cartogram they generated and a correct reference cartogram. Participants were asked to compare the two by eye. If they found the two were not identical, participants could reattempt the generation task to correct their mistakes. Otherwise, they proceeded to a set of analysis tasks.
Analysis tasks were designed to simulate how cartograms are used as a visual aid in reports and presentations. For each analysis task, a static figure was generated from the participant’s cartogram that resembled a figure in a report. Figure 4 depicts an example cartogram analysis task and generated figure. Using the figure or any interactive analysis tools provided by the generation tool, participants answered one multiple choice question about the data set. The questions in the analysis tasks were loosely inspired by the task taxonomy adopted by Nusrat et al. (2016) to evaluate the effectiveness of different cartogram types. All analysis tasks given to participants during the experiment are available as supplemental online material to this article.
We recruited participants to complete the experiment. All participants were university students and staff. The majority of participants reported being at least somewhat familiar with computer graphics ( participants), spreadsheet software ( participants), and cartograms ( participants). Half of participants () also indicated that they would at least generally look up unfamiliar locations on a map. The participants ranged in age from to (mean). Participants’ gender was evenly distributed ( female, male, other). All participants received the equivalent of US$ in local currency or one hour of research credit for a college course as compensation for participation.
We administered an Ishihara color blindness test to all participants because completing the analysis tasks required participants to distinguish between map regions by color. Three participants made at least one error during the test. However, the responses of these participants did not differ significantly from the others; thus, we included them in the data analysis.
Participants completed the experiment remotely over Zoom using a single screen. Each participant’s screen was recorded during the session so that their interactions with the generation tools could be analyzed later in more detail. We used Qualtrics XM to display the experiment tasks and collect participants’ answers. The experiment comprised four parts:
Introduction: Participants watched a short introductory video giving a brief overview of cartograms and a description of experiment tasks. They could ask the experiment supervisor for clarification at any time. The video is available as supplemental online material to this article.
Preliminary questions: Participants answered demographic questions about age, gender, and level of education. Then, they indicated their affinity with computer graphics, maps, cartograms, and spreadsheet software on a -point Likert scale. Finally, participants completed an Ishihara color blindness test.
Cartogram tasks: Participants completed one cartogram generation task and six analysis tasks for each generation tool. After each set of tasks, participants indicated the extent to which they relied on a tutorial provided by the generation tool. If they did indicate that they relied on this tutorial, they were also required to indicate the helpfulness of the tutorial on a -point Likert scale. Finally, participants indicated how much they relied on the following sources when completing the analysis tasks:
The numbers in the data set table.
The cartogram they generated.
The interactive analysis tools provided by the generation tool.
Their own knowledge of the data set.
Usability survey: Participants completed an SUS questionnaire for go-cart.io and fBlog. Then, they left written, free-form feedback about their experience using both web tools.
We adopted a within-subject experiment design with one independent variable: the cartogram generation tool (go-cart.io or fBlog). Participants completed one trial for each generation tool. The order in which participants encountered the tools was treated as a blocking factor: participants completed the go-cart.io trial before the fBlog trial, and participants completed the fBlog trial before the go-cart.io trial. Each trial consisted of one generation task followed by six analysis tasks.
Prior to the experiment, we anticipated that features of go-cart.io and fBlog would impact how participants rated their relative usability. Our hypotheses were as follows:
4.6.1 Numeric data input
While go-cart.io allows users to quickly fill out and upload a CSV spreadsheet to input numeric data, fBlog requires users to enter data for each map region manually in its interface.
H1: Participants would find go-cart.io more usable than fBlog because of go-cart.io’s option to upload numeric data as a CSV file.
4.6.2 Interactive analysis tools
While go-cart.io provides interactive analysis tools for generated cartograms (e.g., infotips), fBlog provides none. Duncan et al. (2021) conducted an evaluation of the interactive analysis tools in go-cart.io and found them to improve performance on cartogram reading tasks.
H2: Participants would find go-cart.io more usable than fBlog because go-cart.io’s interactive analysis tools will aid them in completing the cartogram analysis tasks.
4.6.3 User interface layout
While fBlog has a linear top-to-bottom user interface layout, go-cart.io provides no clear indication of where to start and how to proceed with cartogram generation.
H3: Participants would find fBlog more usable than go-cart.io because its user interface layout clearly indicates how users should proceed.
4.7 Data analysis
Because the generation task times were not normally distributed, we used the non-parametric paired Wilcoxon signed-rank test to compare generation task times between generation tools. For this and other tests, we considered p-values significant if they were less than . To avoid overrelying on p-values alone, we also computed confidence intervals. We adopted the method developed by Bauer (1972) to estimate the % confidence interval of the pseudomedian difference in generation task times between the two tools.
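Bauer’s (1972) procedure is built on the Walsh averages of the paired differences; the point estimate it accompanies, the Hodges–Lehmann pseudomedian, can be sketched as follows (the confidence interval is then taken from order statistics of the same averages):

```python
from statistics import median

def pseudomedian(diffs):
    """Hodges-Lehmann estimate of the pseudomedian: the median of all
    Walsh averages (d_i + d_j) / 2 of the paired differences, i <= j."""
    walsh = [(diffs[i] + diffs[j]) / 2
             for i in range(len(diffs))
             for j in range(i, len(diffs))]
    return median(walsh)
```

For paired per-participant differences in task duration, this estimator is robust to the skewed distributions reported below, unlike the mean difference.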
To rate participants’ accuracy on the generation tasks, we separately analyzed mistakes made by entering the wrong color and the wrong numeric data for each region, as well as the total number of errors of all types made by each participant. Similarly, for the analysis tasks we analyzed the error rates for each task, as well as the number of analysis tasks performed correctly for each generation tool. We considered an analysis task to be performed correctly only if all parts of the task were completed correctly. We used a permutation test with random simulations to determine the effect of generation task accuracy on the number of correct analysis tasks for each generation tool.
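A permutation test of this kind can be sketched in a few lines. The two-sided version below compares group means under random label reshuffling; it is a generic illustration, not the exact test specification used in the study:

```python
import random

def permutation_test(group_a, group_b, n_sim=10000, seed=0):
    """Approximate two-sided permutation test for a difference in
    group means: labels are randomly reshuffled n_sim times and the
    p-value is the share of shuffles at least as extreme as observed."""
    rng = random.Random(seed)
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    observed = abs(sum(group_a) / n_a - sum(group_b) / len(group_b))
    hits = 0
    for _ in range(n_sim):
        rng.shuffle(pooled)
        diff = abs(sum(pooled[:n_a]) / n_a
                   - sum(pooled[n_a:]) / (len(pooled) - n_a))
        if diff >= observed:
            hits += 1
    return hits / n_sim
```

Under the null hypothesis that generation task accuracy has no effect, every relabeling of participants is equally likely, so the share of shuffled differences at least as large as the observed one approximates the p-value without any distributional assumptions.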
For the System Usability Scale, we computed the score for each participant using the standard methodology presented by Brooke (1996). We used Cronbach’s alpha with an acceptability range of , as recommended by Lewis (2018), to evaluate the internal consistency of the SUS questions as applied to this experiment. Because SUS scores were approximately normally distributed, we used a paired t-test to evaluate the effect of generation tool on mean SUS score. We also used Welch’s unequal-variances t-test to evaluate the effect of generation task completion on mean SUS score for each tool.
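Brooke’s (1996) scoring procedure is standard: odd-numbered (positively worded) items contribute their rating minus one, even-numbered (negatively worded) items contribute five minus their rating, and the sum is scaled to a 0–100 range. A minimal sketch:

```python
def sus_score(responses):
    """Standard SUS scoring (Brooke, 1996).

    responses: ten Likert ratings from 1 (strongly disagree) to
    5 (strongly agree). Odd-numbered items contribute (rating - 1);
    even-numbered items contribute (5 - rating). The resulting 0-40
    sum is multiplied by 2.5 to give a 0-100 score.
    """
    assert len(responses) == 10
    total = sum(r - 1 if i % 2 == 0 else 5 - r  # i = 0 is item 1
                for i, r in enumerate(responses))
    return total * 2.5
```

For example, a respondent who strongly agrees with every positive item and strongly disagrees with every negative item scores 100, while uniformly neutral responses score 50.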
Finally, we analyzed whether the hypotheses presented in the previous section are supported. For H1, we used Spearman’s ρ, which evaluates how well a monotonic function describes the relationship between two numeric variables, to test whether there was a positive correlation between participants’ indicated affinity with spreadsheet software and their SUS score for go-cart.io. We adopted the bootstrapping method presented by Bishara and Hittner (2017) to compute % confidence intervals for ρ. For H2, we also used Spearman’s ρ to test whether there was a positive correlation between participants’ indicated reliance on go-cart.io’s interface during the go-cart.io analysis tasks and their SUS score for go-cart.io. For H3, we considered reliance on the tutorial provided by a generation tool during the tool’s generation task as an indication that the tool’s interface is unclear. We used a permutation test with simulations to determine whether reliance on the tutorial was significantly higher for go-cart.io than for fBlog.
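The rank correlation and its bootstrap interval can be sketched as follows. This is a simplified percentile-bootstrap illustration, not the specific variant of Bishara and Hittner (2017):

```python
import random

def _ranks(xs):
    # Average ranks (1-based), with ties sharing the mean of their positions.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def _pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def spearman_rho(x, y):
    """Spearman's rho: the Pearson correlation of the rank-transformed data."""
    return _pearson(_ranks(x), _ranks(y))

def bootstrap_ci(x, y, n_boot=2000, seed=0):
    """Percentile bootstrap interval for rho: resample pairs with
    replacement and take the 2.5th and 97.5th percentiles."""
    rng = random.Random(seed)
    n = len(x)
    stats = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        try:
            stats.append(spearman_rho([x[i] for i in idx],
                                      [y[i] for i in idx]))
        except ZeroDivisionError:
            continue  # degenerate resample with no rank variance
    stats.sort()
    return stats[int(0.025 * len(stats))], stats[int(0.975 * len(stats))]
```

Because ρ depends only on ranks, it captures the monotone (but not necessarily linear) relationships hypothesized in H1 and H2, and the bootstrap avoids distributional assumptions for the interval.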
5.1 Generation tasks
Most participants completed both generation tasks. out of participants finished the generation task with fBlog, while participants finished the generation task with go-cart.io. (i.e., % of participants) completed both generation tasks. One participant was not able to complete the go-cart.io generation task due to a technical error with the go-cart.io web application, and another participant mistakenly used fBlog to complete the go-cart.io generation task. Data from these two participants have been excluded from the remaining analysis.
Median generation task duration for fBlog was minutes versus minutes for go-cart.io (% confidence interval for pseudomedian difference in minutes: ). Figure 5 shows the distribution of generation task duration for both generation tools. The distribution of duration for fBlog was roughly uniform, with a minimum time of minutes and a maximum time of minutes. The distribution of duration for go-cart.io was right-skewed, with a minimum time of minutes and a maximum time of minutes. The difference in pseudomedian generation task duration between the two tools was statistically significant ().
Most participants who finished the generation tasks completed them accurately. % of participants who completed the go-cart.io generation task did so with perfect accuracy, as did % of participants who completed the fBlog generation task. Table 5.1.3 provides a breakdown of the accuracy and error rates for the fBlog and go-cart.io generation tasks. Overall, participants’ accuracy in entering state areas was high across both tasks. % of participants completing the go-cart.io generation task entered all region areas correctly, while % of participants completing the fBlog generation task entered all region areas correctly. (fBlog includes Alaska and Hawaii in its map of the United States, whereas the maps on go-cart.io only contain the states in the conterminous United States and Washington, D.C.) For both tasks, the accuracy of region colors was lower. % of participants who completed the go-cart.io generation task entered all region colors correctly, while only % did so for the fBlog generation task.
Among participants who made errors during the fBlog generation task, the number of errors was usually small. Figure 6 shows a distribution of the number of area and color errors for both generation tasks. For the fBlog generation task, out of participants who made an area error made only one such error. Similarly, out of participants who made a color error on the fBlog task made at most two such errors.
However, the opposite is true for the go-cart.io generation task. Both participants who made an area error made at least three such errors, and seven out of eight participants who made a color error made at least three such errors.
5.1.4 Reliance on and helpfulness of tutorial
Participants indicated lower reliance on the tutorial during the fBlog generation task than during the go-cart.io generation task. Figure 7 provides an overview of participants’ indicated reliance on the tutorials, and the tutorials’ helpfulness, during both generation tasks. Considering participants’ responses on an interval scale from (Not at all) to (Very frequently), mean reliance on the tutorial was points higher for go-cart.io than for fBlog (% confidence interval ). Mean reliance on the tutorial for fBlog was , while for go-cart.io it was . This difference was significant ().
Among participants who relied on each tutorial, the mean helpfulness of the go-cart.io tutorial was points higher than the mean helpfulness of the fBlog tutorial (% confidence interval ), on an interval scale from (Not at all) to (Very helpful). Mean helpfulness of the fBlog tutorial was , while mean helpfulness of the go-cart.io tutorial was . This difference was also significant ().
5.2 Analysis tasks
5.2.1 Error rates
Participants found most analysis tasks for both fBlog and go-cart.io to be of moderate difficulty, although some tasks proved unexpectedly difficult. Figure 8 provides an overview of participants’ performance on the analysis tasks. The first analysis task for both generation tools had the highest error rates ( for fBlog and for go-cart.io). This may be partly because the first analysis task had two parts for both generation tools (participants had to name the largest and second-largest region), while all other analysis tasks had only one part.
5.2.2 Effect of generation tool
For both generation tools, the majority of participants completed over half of the analysis tasks correctly (% for fBlog and % for go-cart.io).
5.2.3 Effect of generation task accuracy
Participants who completed the fBlog generation task without error completed on average (% confidence interval ) more fBlog analysis tasks correctly than participants who made at least one error of any type during the generation task. However, fBlog generation task accuracy did not have a significant effect on the number of fBlog analysis tasks completed correctly ().
Participants who completed the go-cart.io generation task without error completed on average (% confidence interval ) more go-cart.io analysis tasks correctly than participants who made at least one generation task error. However, go-cart.io generation task accuracy likewise did not have a significant effect on the number of go-cart.io analysis tasks completed correctly ().
5.2.4 Participants’ indicated methodology
Most participants relied on the figure created from their cartogram while completing the analysis tasks for both generation tools. Figure 9 provides an overview of the sources of information participants indicated they relied on while completing the analysis tasks. % of participants who completed the fBlog analysis tasks reported relying on the generated cartogram figure frequently or very frequently, while % of participants who completed the go-cart.io analysis tasks indicated the same.
A minority of participants relied on the numbers in the data table or their own knowledge to complete the analysis tasks. Only % of participants completing the fBlog analysis tasks reported relying on numbers in the data table more than rarely, and % relied on their own knowledge more than rarely. Similarly, % of participants completing the go-cart.io analysis tasks indicated that they relied on the numbers in the data table more than rarely, and % relied on their own knowledge more than rarely.
Reported reliance on the generation tool interface, including interactive analysis tools, was also low. Only % of participants completing the fBlog analysis tasks reported relying on the fBlog interface, while % of participants indicated the same of go-cart.io.
5.3 System Usability Scale score
The SUS was a highly reliable measure of perceived usability for both fBlog () and go-cart.io ().
The mean SUS score was points higher for go-cart.io than for fBlog (% confidence interval ). While the mean SUS score for fBlog was (standard deviation ), the mean SUS score for go-cart.io was (standard deviation ). The difference in mean SUS score for fBlog and go-cart.io was significant ().
Completion of the generation task for fBlog and go-cart.io was associated with a higher mean SUS score. The mean fBlog SUS score for participants failing to complete the fBlog generation task was , while for participants who completed this task the mean SUS score was (% confidence interval of difference in means: ). Similarly, while the mean go-cart.io SUS score was for participants who did not complete the go-cart.io generation task, the mean SUS score was for participants who did complete this task (% confidence interval of difference in means ). For both fBlog () and go-cart.io (), completion of generation task had a significant effect on mean SUS score.
Participants’ self-reported familiarity with spreadsheet software was not significantly correlated with SUS score for fBlog (, % confidence interval ) or go-cart.io (, % confidence interval ). Reliance on the generation tool interface, including any interactive analysis tools, during the analysis tasks was not significantly correlated with SUS score for fBlog (, % confidence interval ) or go-cart.io (, % confidence interval ) either.
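Confidence intervals for correlations need not assume normally distributed data (cf. Bishara and Hittner, 2017); one distribution-free option is a percentile bootstrap. The sketch below is a minimal stdlib-only illustration of that idea, not the analysis pipeline actually used in this study:

```python
import random
from statistics import mean

def pearson_r(x, y):
    # Pearson correlation coefficient of two equal-length sequences.
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def bootstrap_ci(x, y, n_boot=2000, alpha=0.05, seed=0):
    # Percentile bootstrap CI for the correlation: resample (x, y)
    # pairs with replacement, recompute r each time, and take the
    # empirical alpha/2 and 1 - alpha/2 quantiles.
    rng = random.Random(seed)
    n = len(x)
    rs = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        rs.append(pearson_r([x[i] for i in idx], [y[i] for i in idx]))
    rs.sort()
    return rs[int((alpha / 2) * n_boot)], rs[int((1 - alpha / 2) * n_boot) - 1]
```

A correlation is then judged non-significant at the chosen level when the resulting interval contains zero.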
5.4 Written participant feedback
Participants reported using fBlog to be “troublesome” and “tedious” due to the manual numeric data entry method. Participants indicated that they would have preferred a spreadsheet upload option, as implemented in go-cart.io, or the ability to copy tabular data directly into the web interface.
Several participants indicated that they found the go-cart.io interface to be aesthetically pleasing. While a few participants wrote that the generation tool was easy to use, most indicated that they faced issues using the spreadsheet upload feature to input numeric data for the generation task. Participants complained that the instructions for formatting and saving the spreadsheet were unclear. Some participants were unaware that only CSV spreadsheet files could be uploaded, and expressed frustration that Microsoft Excel spreadsheets could not be uploaded. Many of the participants who eventually succeeded indicated that they relied heavily on go-cart.io’s written tutorial, and had to read it carefully. Participants who gave up trying to use the spreadsheet upload feature and instead used the pop-up editing interface [shown in panel (b) of Figure 2] complained that entering map region colors was difficult because color codes could not be pasted into the editing interface.
5.5.1 Hypothesis 1: Numeric data input
While several participants indicated in their written feedback that they preferred go-cart.io’s spreadsheet upload option to fBlog’s manual input method, most participants struggled to use the spreadsheet upload method. Additionally, we did not find a significant relationship between participants’ familiarity with spreadsheet software and their perceived usability of go-cart.io. For these reasons, H1 is rejected.
5.5.2 Hypothesis 2: Interactive analysis tools
Few participants made use of the interactive analysis tools provided by go-cart.io to complete the go-cart.io analysis tasks, and there was no significant correlation between increasing reliance on go-cart.io’s interactive analysis tools and SUS score. For these reasons, H2 is rejected.
5.5.3 Hypothesis 3: User interface layout
Although the mean SUS score was significantly lower for fBlog than for go-cart.io, participants relied on the tutorial significantly more during the go-cart.io generation task than during the fBlog generation task. This greater reliance implies that more participants were unable to use go-cart.io without guidance than was the case for fBlog. H3 is thus only partially supported.
The majority of participants were able to complete the generation and analysis tasks for go-cart.io with reasonable accuracy. However, participants who made errors on the go-cart.io generation task tended to make many of them, whereas participants who made errors on the fBlog generation task generally made only one or two. We hypothesize that the tools’ differing methods of entering numeric data and colors account for this difference. For the go-cart.io task, participants could copy numeric data and region colors as entire columns and paste them into a CSV spreadsheet for upload. With this method of data entry, a likely source of error, such as transposing spreadsheet columns, produces many errors in the generated map. By contrast, with fBlog’s manual entry method, each typo or incorrect paste affected only one region.
The time savings from using the spreadsheet upload feature as compared to manual entry of numeric and color data for each region were substantial, accounting for the significant difference in median generation time between fBlog and go-cart.io. To help users detect when they have made errors in the spreadsheet they upload to go-cart.io, the application shows uploaded numeric data in pie chart form and asks users to confirm that the data are correct and appropriate for a cartogram before proceeding to the generation phase. Panel (c) of Figure 2 demonstrates the pie chart display.
Yet, these time savings did not translate into a high usability rating of go-cart.io by participants. The mean SUS score of for go-cart.io is slightly lower than the mean SUS score of for desktop GIS software tools found by Davies and Medyckyj-Scott (1994). Like the respondents to their 1994 survey, participants in this experiment complained that go-cart.io was unintuitive to use and had poor error messages. In the following sections, we make recommendations to improve the usability of go-cart.io as a web-based GIS tool. We believe these recommendations will also be informative for the development of other web-based cartogram generation tools.
6.1.1 Data entry
Entering the numeric data and color for each map region during the go-cart.io generation task proved to be one of the most difficult tasks for participants during the experiment. Several factors made this task more difficult than anticipated.
First, while go-cart.io only accepts CSV-format spreadsheets for upload, several participants were unaware of the difference between spreadsheet formats and erroneously assumed they could upload Excel-format spreadsheets.
Recommendation 1: Due to the format’s popularity, web-based cartogram generation tools should accept Excel-format spreadsheet files in addition to CSV-format spreadsheets.
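A minimal sketch of the dispatch logic behind Recommendation 1, using a hypothetical `read_spreadsheet` helper of our own invention; the Excel branch is stubbed because real parsing would require a third-party library such as openpyxl:

```python
import csv
import io
from pathlib import Path

def read_spreadsheet(filename, content):
    """Return an uploaded spreadsheet as a list of rows (lists of
    strings), accepting CSV directly and delegating Excel formats
    to a dedicated reader. Hypothetical helper, for illustration.
    """
    suffix = Path(filename).suffix.lower()
    if suffix == ".csv":
        return list(csv.reader(io.StringIO(content)))
    if suffix in (".xlsx", ".xls"):
        # A real tool would parse the workbook with a library such
        # as openpyxl and convert each worksheet row to strings.
        raise NotImplementedError("Excel parsing requires a spreadsheet library")
    raise ValueError(f"Unsupported spreadsheet format: {suffix or filename}")
```

Dispatching on the extension lets the tool accept whichever format the user happens to have, instead of silently rejecting Excel files.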
Secondly, go-cart.io requires the numeric and color data for each region in the uploaded spreadsheet to be organized in a very particular way. Figure 10 shows a spreadsheet template for the “Conterminous United States” map that participants had to edit, save, and reupload to generate their cartogram using the spreadsheet upload feature. In order for go-cart.io to recognize their data, participants were required to delete the third and fourth columns and replace them with the numeric and color data, respectively, given during the generation task. If participants replaced the “Population” column instead, or inserted another column before or after the “Colour” column, their data were either silently ignored, or the nondescript error message “There was a problem reading your CSV file” was displayed. No participant was able to successfully complete the go-cart.io task using the spreadsheet upload feature without referencing the tutorial, and most made several attempts while referencing the tutorial before they were successful.
Recommendation 2: Web-based cartogram generation tools should attempt to automatically determine which columns in the uploaded spreadsheet contain the numeric information for target areas (e.g., population) and other visual variables (e.g., color) for each region.
Recommendation 3: If the cartogram tool is unable to parse the uploaded spreadsheet, it should give a descriptive error message with hints about possible fixes (e.g., “It looks like you have uploaded a spreadsheet for Austria, even though you have selected a map of the United States.”).
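Recommendations 2 and 3 could be combined in an upload parser that infers column roles from cell contents and fails with actionable messages. The heuristics below (an all-numeric column holds target areas, an all-hex column holds colors) are our assumptions for illustration, not go-cart.io’s actual implementation:

```python
import re

HEX_COLOR = re.compile(r"^#?[0-9a-fA-F]{6}$")

def is_number(cell):
    try:
        float(cell.replace(",", ""))  # tolerate thousands separators
        return True
    except ValueError:
        return False

def detect_columns(rows):
    """Guess which columns hold numeric target areas and colors.

    `rows` is a list of spreadsheet rows (lists of strings) whose
    first row is a header. Returns (data_col, color_col) indices,
    raising ValueError with a descriptive hint on failure.
    """
    if len(rows) < 2:
        raise ValueError("The spreadsheet needs a header row and at "
                         "least one data row.")
    header, body = rows[0], rows[1:]
    data_col = color_col = None
    for col in range(len(header)):
        cells = [row[col] for row in body]
        if all(HEX_COLOR.match(c) for c in cells):
            color_col = col
        elif all(is_number(c) for c in cells):
            data_col = col
    if data_col is None:
        raise ValueError("No column contains a number for every region. "
                         "Check for blank cells or stray text in your "
                         "data column.")
    if color_col is None:
        raise ValueError("No column contains a six-digit hex color "
                         "(e.g., #1b9e77) for every region.")
    return data_col, color_col
```

With content-based detection, inserting or reordering columns no longer breaks the upload, and the error messages tell users what to fix rather than merely reporting that reading failed.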
Finally, although the spreadsheet upload feature is the preferred data entry method for go-cart.io, the pop-up editing interface [shown in panel (b) of Figure 2] remains an important alternative data entry method. Several participants used this interface after they were unable to successfully use the spreadsheet upload feature. However, the pop-up editing interface suffers from several usability challenges. While it is styled to look like a spreadsheet, the interface does not support basic spreadsheet data entry methods. Entire columns cannot be copied and pasted, only individual cells. Additionally, the implementation of the color column using the HTML color input element means that color codes cannot be pasted at all on some browsers.
Recommendation 4: Web-based cartogram generation tools should include a pop-up editing interface as an alternative to uploading a spreadsheet. The interface should support copying and pasting entire columns to and from the clipboard.
6.1.2 Interactive analysis tools
We hypothesized that participants would rely on the interactive analysis tools (infotip, linked brushing, and morphing animations) provided by go-cart.io because Duncan et al. (2021) showed that these tools improve performance on some cartogram tasks. However, this hypothesis (H2) was rejected. Multiple factors may account for participants ignoring the interactive features. First, many of the analysis tasks were relatively simple. Duncan et al. (2021) found that interactive analysis tools were not beneficial for simple tasks; thus, participants in this experiment may have avoided using these tools for some tasks because they were unnecessary.
Secondly, while the go-cart.io website itself provides interactive analysis tools, the SVG graphics that it produces for export are static and do not support any interactivity. Because these SVG graphics were used to generate the figure for the go-cart.io analysis tasks during the experiment, the figure did not support interactivity either. To make use of the interactive analysis tools provided by go-cart.io, participants had to switch back to the go-cart.io tab in their browser during the experiment. Participants were likely disinclined to use go-cart.io’s interactive analysis tools because they were not readily accessible.
Recommendation 5: Web-based cartogram generation tools should include interactive features for analysis (e.g., infotips). The generation tools should support embedding cartograms with interactive features on other websites.
6.1.3 User interface layout
While participants complimented the aesthetics of go-cart.io, many complained that the generation tool interface was unintuitive. Indeed, participants’ reliance on the tutorial during the generation tasks was much higher for go-cart.io than for fBlog. Good written documentation has been shown to improve the perceived usability of a software system (Davies and Medyckyj-Scott, 1994), and most participants rated the go-cart.io tutorial as helpful or very helpful. However, the developers of a web-based tool such as go-cart.io cannot expect users to read an extensive written tutorial before trying the tool. If users’ initial impression is that a web-based tool is difficult or unintuitive to use, they are likely to abandon it quickly.
Recommendation 6: Web-based cartogram generation tools should implement a tutorial overlay that guides users through the cartogram generation process without requiring users to reference a separate written tutorial. Figure 11 depicts a proposed design for this overlay on go-cart.io.
Although cartograms are becoming increasingly popular for visualizing geospatial data, cartogram generation software has historically suffered from poor usability, like many other GIS software tools. While Tingsheng et al. (2019) designed go-cart.io with the explicit goal of creating an easy-to-use web-based cartogram generation tool, the results of our experiment show that go-cart.io has poor usability in key areas, such as data entry. We have made recommendations for the future design of web-based cartogram generation tools to address the usability concerns raised by participants during the experiment. Implementing these recommendations and evaluating their effect remains future work.
The authors would like to acknowledge Venkatkrishna Karumanchi for his help supervising experiment participants.
This work was supported by the Singapore Ministry of Education (AcRF Tier 1 Grant IG18-PRB104, R-607-000-401-114) and capstone funding by Yale-NUS College.
The authors report there are no competing interests to declare.
Data availability statement
The data that support the findings of this study are openly available at https://figshare.com/s/592d9e8dcf5aa933d800.
- Almukhtar, S., Andre, M., Andrews, W., Bloch, M., Bowers, J., Buchanan, L., … Williams, J. (2018). U.S. House Election Results 2018. The New York Times, 2018-11-06. https://www.nytimes.com/interactive/2018/11/06/us/elections/results-house-elections.html. Accessed 2021-11-18.
- Andre, M., Aufrichtig, A., Beltran, G., Bloch, M., Buchanan, L., Chavez, A., … White, I. (2020). Presidential Election Results: Biden Wins. The New York Times, 2020-11-03. https://www.nytimes.com/interactive/2020/11/03/us/elections/results-president.html. Accessed 2021-11-18.
- Andrieu, D., Kaiser, C., & Ourednik, A. (2008). ScapeToad: Not just one metric. https://scapetoad.choros.ch/. Accessed 2018-11-07.
- Bauer, D. F. (1972). Constructing confidence sets using rank statistics. Journal of the American Statistical Association, 67(339), 687–690. 10.1080/01621459.1972.10481279.
- Bishara, A. J., & Hittner, J. B. (2017). Confidence intervals for correlations when data are not normal. Behavior Research Methods, 49(1), 294–309. 10.3758/s13428-016-0702-8.
- Brooke, J. (1996). SUS: A 'quick and dirty' usability scale. In Usability Evaluation in Industry (pp. 189–194). CRC Press.
- Davies, C., & Medyckyj-Scott, D. (1994). GIS usability: Recommendations based on the user's view. International Journal of Geographical Information Systems, 8(2), 175–189. 10.1080/02693799408901993.
- Dent, B. D. (1975). Communication aspects of value-by-area cartograms. The American Cartographer, 2(2), 154–168. 10.1559/152304075784313278.
- Duncan, I. K., Tingsheng, S., Perrault, S. T., & Gastner, M. T. (2021). Task-based effectiveness of interactive contiguous area cartograms. IEEE Transactions on Visualization and Computer Graphics, 27(3), 2136–2152. 10.1109/TVCG.2020.3041745.
- Evershed, N. (2013). Building a Better Election Map. The Guardian, 2013-09-06. https://www.theguardian.com/world/datablog/2013/sep/06/better-election-results-map. Accessed 2021-11-18.
- Gastner, M. T., Seguy, V., & More, P. (2018). Fast flow-based algorithm for creating density-equalizing map projections. Proceedings of the National Academy of Sciences, 115(10), E2156–E2164. 10.1073/pnas.1712674115.
- Hennig, B. D. (2018). Kartogramm zur Reichstagswahl: An early electoral cartogram of Germany. Bulletin of the Society of Cartographers, 52, 15–25. https://societyofcartographers.files.wordpress.com/2019/04/52_hennig-1.pdf. Accessed 2021-11-18.
- Herzog, A. (2003). Developing cartographic applets for the internet. In M. Peterson (Ed.), Maps and the Internet (pp. 117–130). Oxford: Elsevier Science. 10.1016/B978-008044201-3/50009-8.
- International Organization for Standardization. (2018). Ergonomics of human-system interaction — Part 11: Usability: Definitions and concepts (ISO 9241-11:2018). https://www.iso.org/standard/63500.html. Accessed 2021-12-05.
- Kassel, K. (2021). State Fact Sheets. https://www.ers.usda.gov/data-products/state-fact-sheets/. Accessed 2021-11-02.
- Komarkova, J., Jakoubek, K., & Hub, M. (2009). Usability evaluation of web-based GIS: Case study. In Proceedings of the 11th International Conference on Information Integration and Web-based Applications & Services (pp. 557–561). New York, NY, USA: Association for Computing Machinery. 10.1145/1806338.1806443.
- Kronenfeld, B. J. (2018). Manual construction of continuous cartograms through mesh transformation. Cartography and Geographic Information Science, 45(1), 76–94. 10.1080/15230406.2016.1270775.
- Kronenfeld, B. J. (2021). Principles for cartogram design, elicited from manual construction of cartograms for the 50 U.S. states. Abstracts of the ICA, 3, 165. 10.5194/ica-abs-3-165-2021.
- Lewis, J. R. (2018). The System Usability Scale: Past, present, and future. International Journal of Human–Computer Interaction, 34(7), 577–590. 10.1080/10447318.2018.1455307.
- Markowska, A., & Korycka-Skorupa, J. (2015). An evaluation of GIS tools for generating area cartograms. Polish Cartographical Review, 47(1), 19–29. 10.1515/pcr-2015-0002.
- Nusrat, S., Alam, M. J., & Kobourov, S. (2016). Evaluating cartogram effectiveness. IEEE Transactions on Visualization and Computer Graphics, 24(2), 1077–1090. 10.1109/TVCG.2016.2642109.
- Tingsheng, S., Duncan, I. K., Chang, Y. N., & Gastner, M. T. (2020). Motivating good practices for the creation of contiguous area cartograms. In T. Bandrova, M. Konečný, & S. Marinova (Eds.), 8th Int. Conf. Cartography and GIS (Vol. 1, pp. 589–598). Sofia: Bulgarian Cartographic Association.
- Tingsheng, S., Duncan, I. K., & Gastner, M. T. (2019). go-cart.io: A web application for generating contiguous cartograms. Abstracts of the International Cartographic Association, 1, 333. 10.5194/ica-abs-1-333-2019.
- Tobler, W. R. (2004). Thirty five years of computer cartograms. Annals of the Association of American Geographers, 94(1), 58–73. 10.1111/j.1467-8306.2004.09401004.x.
- United States Department of Agriculture. (2021). USDA/NASS QuickStats Ad-hoc Query Tool. https://quickstats.nass.usda.gov/data/printable/CD947B31-C084-357D-8EB9-E2666C9B1129. Accessed 2021-11-21.
- Unrau, R., & Kray, C. (2019). Usability evaluation for geographic information systems: A systematic literature review. International Journal of Geographical Information Science, 33(4), 645–665. 10.1080/13658816.2018.1554813.
- van den Broek, K. (2012). Online Cartogram Tool. http://fblog.dreamhosters.com/. Accessed 2022-01-06.
- Ware, J. A. (1998). Using Animation to Improve the Communicative Aspect of Cartograms. Michigan State University. 10.25335/m5ms3k49w.