Quality Assurance Technologies of Big Data Applications: A Systematic Literature Review

Big data applications are currently used in many application domains, ranging from statistical applications to prediction systems and smart cities. However, the quality of these applications is far from perfect, leading to a large number of issues and problems. Consequently, assuring the overall quality of big data applications plays an increasingly important role. This paper aims at summarizing and assessing existing quality assurance (QA) technologies that address quality issues in big data applications. We conducted a systematic literature review (SLR) by searching major scientific databases, resulting in 83 primary and relevant studies on QA technologies for big data applications. The SLR results reveal the following main findings: 1) the impact of the big data attributes of volume, velocity, and variety on the quality of big data applications; 2) the quality attributes that determine the quality of big data applications, including correctness, performance, availability, scalability, and reliability; 3) the existing QA technologies, including analysis, specification, model-driven architecture (MDA), verification, fault tolerance, testing, monitoring, and fault and failure prediction; 4) the strengths and limitations of each kind of QA technology; 5) the existing empirical evidence for each QA technology. This study provides a solid foundation for research on QA technologies for big data applications. However, many quality-related challenges of big data applications still remain.


1 Introduction

The big data technology market is growing at a 27% compound annual growth rate (CAGR), and big data market opportunities will reach over a billion dollars in 2020 2013Big ; dai2019big . Big data application systems chen2014data ; allam2019big , abbreviated as big data applications, refer to software systems that collect, process, analyze or predict a large amount of data by means of different platforms, tools and mechanisms. Big data applications are now increasingly being used in many areas, such as recommendation systems, monitoring systems, and statistical applications Tao2016Quality ; jan2019deep . Big data applications are associated with the so-called 4V attributes, i.e., volume, velocity, variety and veracity Hilbert2016Big . Due to the large amount of generated data, the fast velocity of arriving data, and the various types of heterogeneous data, the quality of data is far from ideal, which makes the software quality of big data applications far from perfect Laranjeiro2015A . For example, due to the volume and velocity attributes Anagnostopoulos2016Handling ; Gudivada2015Big , the data generated by big data applications are extremely large and grow at high speed, which may affect data accuracy and data timeliness Montagud2012A and consequently lead to software quality problems such as performance and availability issues Montagud2012A ; nguyen2015impact . Due to the huge variety of heterogeneous data Wang2016Towards ; Bagriyanik2016Big , data types and formats are increasingly rich, including structured, semi-structured, and unstructured data, which may affect data accessibility and data scalability and hence lead to usability and scalability problems.

Number Key findings Implications
F1 The three main big data attributes (volume, velocity, and variety) have a direct impact on the quality of the applications. Volume, velocity and variety can affect data quality and thus also affect the quality of big data applications.
F2 Some quality attributes, such as correctness, performance, availability, scalability and reliability, determine the quality of big data applications. Through our research, we identify the most important quality attributes that state-of-the-art works address.
F3 Existing QA technologies include analysis, specification, model-driven architecture (MDA), fault tolerance, verification, testing, monitoring, and fault and failure prediction. We survey and summarize the existing quality assurance technologies for big data applications.
F4 Each kind of QA technology has its own strengths and limitations. Through the systematic review, the strengths and limitations of each kind of QA technology are discussed and compared.
F5 Empirical evidence exists for each kind of QA technology. The identified QA technologies are validated through real cases, providing a reference for big data practitioners.
Table 1: Key Findings and Implications of this Research

In general, quality assurance (QA) is a general way to detect or prevent mistakes or defects in manufactured software/products and to avoid problems when solutions or services are delivered to customers schulmeyer1992handbook . However, compared with traditional software systems, big data applications raise new challenges for QA technologies due to the four big data attributes (for example, the velocity of arriving data and the volume of data) Gao2016Big . Many scholars have illustrated current QA problems for big data applications Lai2016Data , [P39]. For example, it is hard to validate the performance, availability and accuracy of a big data prediction system due to the large-scale data size and the timeliness requirement. Because of the volume and variety attributes, keeping big data recommendation systems scalable is very difficult. Therefore, QA technologies for big data applications are becoming a hot research topic.

Compared with traditional applications, big data applications have the following special characteristics: a) statistical computation based on large-scale data in diverse formats, both structured and unstructured; b) machine learning and knowledge-based system evolution; c) intelligent decision-making under uncertainty; and d) more complex visualization requirements. These new features of big data applications require novel QA technologies to ensure quality. For example, compared with data in traditional applications (such as graphics, images, sounds, documents, etc.), there is a substantial amount of unstructured data in big data applications. These data are usually heterogeneous and lack integration. Consequently, traditional testing processes lack testing methods for unstructured data and cannot adapt to the diversity of data processing requirements. Novel QA technologies are urgently needed to solve these problems.

In the literature, many scholars have investigated the use of different QA technologies to assure the quality of big data applications Zhou2015An ; Juddoo2016Overview ; Gao2016Big ; Zhang2017A . Some papers have presented overviews of quality problems of big data applications. Zhou et al. Zhou2015An presented the first comprehensive study on the quality of big data platforms. For example, they investigated the common symptoms, causes, and mitigation of quality issues, including hardware faults, code defects and so on. Juddoo et al. Juddoo2016Overview systematically studied the challenges of data quality in the context of big data. Gao et al. Gao2016Big conducted in-depth research on big data validation and QA, including the basic concepts, issues, and validation process. They also discussed the big data QA focuses, challenges and requirements. Zhang et al. Zhang2017A introduced big data attributes and quality attributes, and also discussed some quality assurance technologies such as testing and monitoring. Although these authors have proposed a few QA technologies for big data applications, publications on QA technologies for big data applications remain scattered in the literature, which hampers the analysis of advanced technologies and the identification of novel research directions. Therefore, a systematic study of QA technologies for big data applications is still necessary and critical.

In this paper, we provide an exhaustive survey of QA technologies that play a significant role in big data applications, covering 83 papers published from Jan. 2012 to Dec. 2019. The major purpose of this paper is to examine the literature related to QA technologies for big data applications. We also provide a comprehensive reference concerning the challenges of QA technologies for big data applications. In summary, the major contributions of the paper are the following:

  • The elicitation of big data attributes and the quality problems they introduce to big data applications;

  • The identification of the most frequently used big data QA technologies;

  • A discussion of the strengths and limitations of each kind of QA technology;

  • The validation of the identified QA technologies through real cases, which provides a reference for big data practitioners.

Our research results in five overall main findings that are summarized in Table 1.

The findings of this paper provide general information for future research as the quality of big data applications becomes increasingly important. Existing QA technologies have a certain effect on the quality of big data applications; however, some challenges still exist, such as the lack of quantitative models and algorithms.

The rest of the paper is structured as follows. The next section reviews related background and previous studies. Section 3 describes our systematic approach for conducting the review. Section 4 reports the results organized around the five research questions raised in Section 3. Section 5 provides the main findings of the survey and discusses existing research challenges. Section 6 describes threats to the validity of this study. Conclusions and future research directions are given in the final section.

2 Related work

To begin our study, we searched Google Scholar, Baidu Scholar, Bing Academic, IEEE, ACM and other search engines and databases (using the search string: (systematic study OR literature review OR SLR OR SMS OR systematic literature review OR systematic mapping study) AND (big data) AND (application OR system) AND (quality OR performance OR quality assurance OR QA)). We found no systematic literature review (including systematic mapping studies, systematic studies, and literature reviews) that focuses on QA technologies for big data applications. However, quality issues are prevalent in big data informatics5020019 , and the quality of big data applications has attracted attention and been the focus of research in previous studies. In the following, we describe the relevant reviews that are related to the quality of big data applications.

Zhou et al. Zhou2015An first presented a comprehensive study on the quality of big data platforms. They investigated the common symptoms, causes, and mitigation of quality problems. In addition, they showed that big data computing presents different types of problems, including hardware failures, code defects and so on. Their findings are of great significance to the future design and maintenance of big data platforms.

Juddoo et al. Juddoo2016Overview systematically studied the challenges of data quality in the context of big data. They mainly analyze and propose the data quality technologies that would be most suitable for big data in a general context. Their goal is to probe the diverse components and activities forming part of data quality management, including metrics, dimensions, data quality rules, data profiling, and data cleansing. They also note that the volume, velocity, and variety of data may make it impossible to determine data quality rules, and they believe that the measurement of big data attributes is very important to users' decision-making. Finally, they list existing data quality challenges.

Gao and Tao Tao2016Quality ; Gao2016Big first provide detailed discussions of QA problems and big data validation, including the basic concepts and key points. They then discuss how big data applications are influenced by big data features. Furthermore, they discuss big data validation processes, including data collection, data cleaning, data cleansing, data analysis, etc. In addition, they summarize big data QA issues, challenges and needs.

Zhang et al. Zhang2017A further consider the QA of big data applications, combine QA technologies with big data attributes, and explore how existing QA technologies address the 4V attributes of big data.

Liu et al. Liu2016Rethinking point out and summarize the issues faced by big data research in data collection, processing and analysis, including uncertain data collection, incomplete information, big data noise, representability, consistency, reliability and so on.

To sum up, big data applications offer many opportunities to adjust businesses and enhance promotion models. In addition, big data applications can also help governments make accurate predictions, such as forecasting weather, preventing natural disasters, and developing appropriate policies to improve the quality of human life. The survey of existing literature (such as Lai2016Data ; Zhou2015An ; Juddoo2016Overview ; Zhang2017A ; Liu2016Rethinking ) shows that many studies introduce big data QA, but little scientific research has focused on comprehending, defining, classifying and communicating QA technologies for big data applications. Consequently, there is no definite way to address the QA of big data applications. As a result, it is necessary to conduct a systematic study of QA technologies for big data applications.

3 Research Method

In this work, the systematic literature review (SLR) approach proposed by Kitchenham et al. Kitchenham2009Systematic is used to extract QA technologies for big data applications and related questions. Based on the SLR approach and our research problem, the research steps are shown in Fig. 1. Through these steps, we obtain the desired results.


Figure 1: SLR Protocol

3.1 Research Questions

We used the Goal-Question-Metric (GQM) perspectives (i.e., purpose, issue, object, and viewpoint) Kitchenham2004Procedures to formulate the aim of this study. The result of applying the Goal-Question-Metric approach is the specification of a measurement system targeting a given set of problems and a set of rules for interpreting the measurement data Basili1994The . Table 2 provides the purpose, issue, object and viewpoint of the research topic.

Research questions help us to perform an in-depth and purposeful study. This research is driven by five research questions, which we translated from Table 2 and list in Table 3. The research questions and their objectives are detailed below.

RQ1: How do the big data attributes affect the quality of big data applications? Generally, this problem involves all big data attributes.

Objective: Discuss the influence of the big data attributes on the quality of big data applications.

Goal (Purpose, Issue, Object, Viewpoint): Identify, analyze and extract QA technologies for big data applications, and then understand the features and challenges of existing technologies from a researcher's viewpoint.
Table 2: Goal of this Research
ID Research Question
RQ1 How do the big data attributes affect the quality of big data applications?
RQ2 Which kind of important quality attributes do big data applications need to guarantee?
RQ3 Which kinds of technologies are used to guarantee the quality of big data applications?
RQ4 What are the strengths and limitations of the proposed technologies?
RQ5 What are the real cases of using the proposed technologies?
Table 3: Research Questions

RQ2: Which kind of important quality attributes do big data applications need to ensure?

Objective: Identify and classify existing common quality attributes and understand their implications.

RQ3: Which kinds of technologies are used to guarantee the quality of big data applications?

Objective: Identify and classify existing QA technologies and understand their effects.

RQ4: What are the strengths and limitations of the proposed technologies?

Objective: Analyze the strengths and limitations of those QA technologies.

RQ5: What are the real cases of using the proposed technologies?

Objective: Validate the proposed QA technologies through real cases and provide a reference for big data practitioners.

3.2 Search Strategy

The goal of this systematic review is to thoroughly examine the literature on QA technologies for big data applications. The EBSE (evidence-based software engineering) guidelines Kitchenham2004Evidence define three main SLR phases: planning, execution, and reporting of results. The search strategy is an indispensable part of the review and consists of two stages.

Stage 1: Database search.

Before carrying out automatic searches, the first step was the definition and validation of the search string to be used for the automated search. This process started with pilot searches on the seven databases shown in Table 4. We combined different keywords related to the research questions. Table 5 shows the search terms used in the seven databases, and the search string is defined as follows:

Source Address
ACM Digital Library http://dl.acm.org/
IEEE Xplore
Digital Library http://ieeexplore.ieee.org/
Springer Link http://link.springer.com/
Science Direct http://www.sciencedirect.com/
Scopus http://www.scopus.com/
Engineering Village http://www.engineeringvillage.com/
ISI Web of Science http://isiknowledge.com/
Table 4: Studies Resource
Search ID Database Search Terms
a big data
b application
c system
d quality
e performance
f quality assurance
g QA
Table 5: Search Terms

(a AND (b OR c) AND (d OR e OR f OR g)) IN (Title or Abstract or Keyword).

We used a "quasi-gold standard" Zhang2010On to validate the search string. We used the IEEE and ACM libraries as representative search engines to perform the automatic search and refined the search string until all the search items met the requirements and the number of remaining papers was minimal. Then, we used the defined search string to carry out automatic searches. We chose the ACM Digital Library, IEEE Xplore Digital Library, Engineering Village, Springer Link, Scopus, ISI Web of Science and Science Direct because these seven databases are the largest and most complete scientific databases covering computer science. We manually downloaded and searched the proceedings of venues not included in the digital libraries. After the automatic search, a total of 3328 papers were collected.
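
To make the Boolean string concrete, the following minimal sketch applies it to locally exported candidate records. The record fields and the example record are illustrative assumptions, not the query API of any of the listed digital libraries:

```python
# Illustrative only: applies the Boolean search string built from Table 5
# to a local candidate record (field names are assumptions).

GROUP_A = ["big data"]                                   # term a
GROUP_BC = ["application", "system"]                     # terms b, c
GROUP_DEFG = ["quality", "performance", "quality assurance", "qa"]  # d-g

def matches_search_string(record):
    """Return True if (a AND (b OR c) AND (d OR e OR f OR g)) holds in
    the title, abstract, or keywords of the record."""
    text = " ".join([record.get("title", ""),
                     record.get("abstract", ""),
                     " ".join(record.get("keywords", []))]).lower()
    return (all(term in text for term in GROUP_A)
            and any(term in text for term in GROUP_BC)
            and any(term in text for term in GROUP_DEFG))

# Hypothetical record:
record = {"title": "Testing big data applications",
          "abstract": "We study quality assurance for large-scale systems.",
          "keywords": ["big data", "testing"]}
print(matches_search_string(record))  # True
```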

Stage 2: Grey literature.

To cover grey literature, some alternative sources were investigated as follows:

  • Google Scholar

    In order to adapt the search terms to Google Scholar and improve the efficiency of the search process, search terms were slightly modified. We searched and collected 1220 papers according to the following search terms:
    – (big data AND (application OR system) AND ((quality OR performance OR QA) OR testing OR analysis OR verification OR validation))
    – (big data AND (application OR system) AND (quality OR performance OR QA) AND (technique OR method))
    – (big data AND (application OR system) AND (quality OR performance OR QA) AND (problem OR issue OR question))

  • Checking the personal websites of all the authors of primary studies, in search of other related sources (e.g., unpublished or latest progress).

Through two stages, we found 4548 related papers. Only 102 articles met the selection strategy (discussed below) and are chosen in the next stage. Then, we scanned all the related results according to the snowball method Goodman1961Snowball , and we referred to the references cited by the selected paper and include them if they are appropriate. We expanded the number of papers to 121; for example, we used this technique to find P72, which corresponds to our research questions from the references in P10.
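
The backward-snowballing step can be sketched as follows. This is a minimal illustration only; `references_of` and `is_relevant` stand in for the manual inspection of reference lists against the research questions:

```python
# Illustrative backward snowballing: starting from the selected papers,
# follow their reference lists and add any cited paper judged relevant.

def snowball(selected, references_of, is_relevant):
    """One round of backward snowballing over the selected set."""
    found = set(selected)
    for paper in selected:
        for cited in references_of(paper):
            if cited not in found and is_relevant(cited):
                found.add(cited)
    return found

# Hypothetical example: P72 is reached through the references of P10.
refs = {"P10": ["P72", "X1"], "P39": ["X2"]}
grown = snowball(["P10", "P39"],
                 references_of=lambda p: refs.get(p, []),
                 is_relevant=lambda p: p.startswith("P"))
print(sorted(grown))  # ['P10', 'P39', 'P72']
```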

To better manage the paper data, we used NoteExpress 111https://noteexpress.apponic.com/, which is a professional-level document retrieval and management system. Its core functions cover all aspects of “knowledge acquisition, management, application, and mining”. It is a perfect tool for academic research and knowledge management. However, the number of these results is too large and therefore detrimental to the study. Consequently, we filtered the results by using the selection strategy described in the next section.

3.3 Selection Strategy

In this subsection, we focus on the selection of the research literature. The search strategy returns much literature that is not relevant, so it is essential to define selection criteria (inclusion and exclusion criteria) for selecting the related literature. We describe each step of our selection process in the following:

Step 1: Combination and duplicates removal. In this step, we sort out the results that we obtain from stage 1 and stage 2 and remove the duplicate content.

Step 2: Selection of studies. In this step, the main objective is to filter all the selected literature in the light of a set of rigorous inclusion and exclusion criteria. We defined five inclusion and four exclusion criteria, as described below.

Step 3: Exclusion of literature during data extraction. When a study is read carefully, it is finally selected or rejected according to the inclusion and exclusion criteria.

A study is selected when all inclusion criteria are met; it is discarded if any exclusion criterion is met. According to the research questions and research purposes, we identified the following inclusion and exclusion criteria.

Inclusion criteria: a study is chosen only if it satisfies every inclusion criterion.

1) The study focuses on the quality of big data applications or big data systems, in order to be aligned with the theme of our study.

2) One or more of our research questions must be directly answered.

3) The selected literature must be in English.

4) The literature must consist of journal papers or papers published as part of conference or workshop proceedings.

5) Studies are published in or after 2012. From a simple search (using the search item "big data application") in the EI (Engineering Index) search library, we can see that most papers on big data applications or big data systems were published after 2011, as shown in Fig. 2. The literature published before 2012 rarely takes into account the quality of big data applications or systems. By reading the abstracts of the relevant literature, we found that these earlier papers were not relevant to our subject, so we excluded them.

The main objective of our study is to determine the current technologies for ensuring the quality of big data applications and the associated challenges. This means that the content of an article should be related to the research questions of this paper.

Exclusion criteria: a study should be discarded if it satisfies any one of the following exclusion criteria.

1) It is related to big data but not related to the quality of big data applications. Our goal is to study the quality of big data applications or services, rather than the data quality of big data, although data quality can affect application quality.

2) It does not explicitly discuss the quality of big data applications or the quality factors of big data systems.

3) Duplicated literature. Many articles are indexed by different databases, and the search results contain repeated articles. For conference papers that meet our selection criteria but have also been extended to journal publications, we choose the journal version because it is more comprehensive.

4) Studies that are not related to the research questions.


Figure 2: Distribution of Papers in EI until Dec 2019

Figure 3: The Search Process

Inclusion and exclusion criteria are complementary, and both are considered to achieve the desired results. The detailed process of identifying relevant literature is presented in Figure 3. Analyzing all the literature presents a certain degree of difficulty. First, by applying the inclusion and exclusion criteria, two researchers separately read the abstracts of all studies selected in the previous step to avoid bias as much as possible; 488 of the initial studies were selected in this process. Second, for the final selection, we read the full text of these papers and selected 102 studies. Third, we expanded the number of studies to 121 through the snowball method. Conflicts were resolved by extensive discussion. We then excluded papers that were not related, leaving 83 primary studies at the end of this step. Details of the selected papers are shown in Table 17 of Appendix B.

In this process, the first author and the second author worked together to develop the research questions and search strategies, and the second author and four students of the first author executed the search plan together. During the process of finalizing the primary articles, all members of the group discussed in detail whether articles excluded by only a few researchers were in line with our research topic.

3.4 Quality Assessment

After screening the final primary studies by the inclusion and exclusion criteria, the criteria for the quality of the studies were determined according to the guidelines proposed by Kitchenham and Charters Kitchenham07guidelinesfor . The corresponding quality checklist is shown in Table 6. The table includes 12 questions covering four research quality dimensions: design, conduct, analysis, and conclusions. For each quality item we set a value of 1 if the authors provide an explicit description, 0.5 if there is a vague description, and 0 if there is no description at all. The first author and his research assistant applied the quality assessment to each primary article, compared the results, and discussed any differences until a consensus was reached. After reaching agreement, we scored each possible answer for each question and converted the scores into percentages. The quality assessment results indicate that most studies describe the problem and its background in depth, and most studies fully and clearly describe their contributions and insights. Nevertheless, some studies do not describe the specific division of labor in the method introduction, and there is a lack of discussion of the limitations of the proposed methods. However, the total average score of 8.8 out of 12 indicates that the quality of the research reports is good, supporting the validity of the extracted data and the conclusions drawn from them.

ID Question Percentage
Yes Partially No
Design
Q1 Are the aims of the study clearly stated? 100% 0% 0%
Q2 Are the chosen quality attributes distinctly stated and defined? 55.4% 43.2% 0.4%
Q3 Was the sample size reasonable? 32.4% 45.9% 21.7%
Conduct
Q4 Are research methods adequately described? 90.5% 9.5% 0%
Q5 Are the datasets completely described (source, size, and programming languages)? 32.4% 43.2% 24.4%
Q6 Are the observation units or research participants described in the study? 2.7% 0% 97.3%
Analysis
Q7 Is the purpose of the analysis clear? 98.6% 1.4% 0%
Q8 Are the statistical methods described? 14.9% 5.4% 79.7%
Q9 Is the statistical significance of the results reported? 14.9% 5.4% 79.7%
Conclusion
Q10 Are the results compared with other methods? 27.0% 1.4% 71.6%
Q11 Do the results support the conclusions? 100% 0% 0%
Q12 Are validity threats discussed? 18.9% 28.4% 52.7%
Table 6: Quality Assessment Questions and Results
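
The scoring scheme described above (1 for an explicit description, 0.5 for a vague one, 0 for none, with per-question percentages as in Table 6) can be illustrated with a minimal sketch; the per-study answers below are invented for illustration only, not taken from the actual assessment:

```python
# Illustrative scoring for the quality checklist in Table 6.
# Scores: 1 = explicit description, 0.5 = vague description, 0 = none.

def study_score(answers):
    """Total quality score of one study over the 12 checklist questions."""
    assert len(answers) == 12
    assert all(a in (0, 0.5, 1) for a in answers)
    return sum(answers)

def answer_distribution(all_answers, question_index):
    """Percentage of studies answering Yes / Partially / No to one question."""
    column = [a[question_index] for a in all_answers]
    n = len(column)
    return {"yes": 100 * column.count(1) / n,
            "partially": 100 * column.count(0.5) / n,
            "no": 100 * column.count(0) / n}

# Two made-up studies, scored against the 12 questions:
studies = [
    [1, 1, 0.5, 1, 0.5, 0, 1, 0, 0, 0, 1, 0.5],
    [1, 0.5, 0.5, 1, 1, 0, 1, 0.5, 0.5, 1, 1, 0],
]
print([study_score(s) for s in studies])  # [6.5, 8.0]
print(answer_distribution(studies, 0))    # {'yes': 100.0, 'partially': 0.0, 'no': 0.0}
```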

3.5 Data Extraction

The goal of this step is to design forms to identify and collect useful and relevant information from the selected primary studies so that we can answer the research questions proposed in Section 3.1. To carry out an in-depth analysis, we apply the data extraction form to all selected primary studies. Table 7 shows the data extraction form. According to this form, we collect the specific information in an Excel file (https://github.com/QXL4515/QA-techniques-for-big-data-application). In this process, the first author and the second author jointly developed an information extraction strategy to lay the foundation for the subsequent analysis. In addition, the third author validated and confirmed this strategy.
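
As an illustration of how a filled-in extraction sheet can later be summarized, the sketch below tallies primary studies by a chosen column. The CSV layout and column names are assumptions, not the actual schema of the linked repository:

```python
# Illustrative tally over an exported data extraction sheet (CSV with one
# row per primary study). Column names are assumed.
import csv
from collections import Counter

def tally(csv_path, column):
    """Count how many primary studies fall into each value of a column,
    e.g. 'QA technology' or 'Publication year'."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    return Counter(row[column].strip() for row in rows if row.get(column))

# Hypothetical usage (file name and column are illustrative):
# tally("extraction.csv", "QA technology")  -> counts of the kind behind Fig. 8
```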

3.6 Data Synthesis

Data synthesis is used to collect and summarize the data extracted from primary studies. Moreover, the main goal is to understand, analyze and extract current QA technologies for big data applications. Our data synthesis is specifically divided into two main phases.

Phase 1: We analyze the extracted data (most of which are included in Table 7 and some are indispensable in the research process) to determine the trends and collect information about our research questions and record them. In addition, we classify and analyze articles according to the research questions proposed in Section 3.1.

Phase 2: We classify the literature according to different research questions. The most important task is to classify the articles according to the QA technologies through the relevant analysis.

Data Item | Extracted Data | Description | Type
1 | Study title | Reflects the relevant research direction | Whole research
2 | Publication year | Indicates the trend of research | Whole research
3 | Journal/Conference | The type of study: conference or journal | Whole research
4 | Authors | The authors' other relevant studies | Whole research
5 | Context study | Understanding the full text | Whole research
6 | Existing challenges | The limitations of the approaches and the challenges of big data applications | Whole research
7 | Big data attributes | Related 4V attributes | RQ1
8 | Quality requirements | The attributes of the demand | RQ2
9 | Quality attributes | Quality attributes of big data applications | RQ2
10 | Technology | Application technology | RQ3
11 | Quality assurance technologies | Application domain | RQ3
12 | Experimental results | The effectiveness of the methods | RQ3
13 | Strengths | The advantages of the approaches | RQ4
14 | Empirical evidence | Real cases of the methods | RQ5
Table 7: Data Extraction Form

4 Results

This section, by deeply analyzing the primary studies listed in Appendix B, provides answers to the five research questions presented in Section 3. For readability, we use [Px] to refer to a surveyed paper listed in Appendix B.

In addition, Figures 4, 5, and 6 provide some simple statistics. Figure 4 presents how the primary studies are distributed over the years. Figure 5 groups the primary studies according to the type of publication. Figure 6 counts the number of studies retrieved from each database.

While Section 4.1 provides an overview of the main concepts discussed in this section, Sections 4.2 to 4.6 report the answer to the research questions.


Figure 4: Distribution of Papers during Years 2012-2019

Figure 5: Distribution of the Types of Literature

Figure 6: The Number of Papers from Different Databases

4.1 Overview of the main concepts

While answering the five research questions identified in previous sections, we will utilize and correlate three different dimensions: big data attributes, data quality parameters, and software quality attributes. (The output of this analysis is reported in Section 5.2 and Figure 9).

Big data attributes: big data applications are associated with the so-called 4V attributes, i.e., volume, velocity, variety and veracity Hilbert2016Big . In this study, we take into account only three of the 4V big data attributes (excluding veracity) for the following reasons. First, through the initial reading of the literature, we found that many papers are not concerned with veracity. Second, big data currently have multi-V attributes, and only three attributes (volume, variety and velocity) are recognized extensively Aggarwal2016Identification ; Fasel2014Potentials .

Data quality parameters: data quality parameters describe the measures of the quality of data. Since data are an increasingly vital part of applications, data quality becomes an important concern. Poor data quality could affect enterprise revenue, waste company resources, reduce productivity, and even lead to wrong business decisions Gao2016Big . According to the Experian Data Quality global benchmark report (Erin Haselkorn, "New Experian Data Quality research shows inaccurate data preventing desired customer insight", posted on Jan 29, 2015 at http://www.experian.com/blogs/news/2015/01/29/data-quality-research-study/), U.S. organizations claim that 32 percent of their data is wrong on average. Since data quality parameters are not universally agreed upon, we extracted them by analyzing papers Gao2016Big ; becker2015big ; Clarke2016Big .

Software quality attributes: software quality attributes describe the attributes that software systems shall expose. We start from the list provided in the ISO/IEC 25010:2011 standard and select those quality attributes that are most recurrent in the primary studies. The adapted software quality attribute definitions are as follows:

1) A quality model consists of five characteristics (correctness, performance, availability, scalability, and reliability) that relate to the outcome of the interaction when a product is used in a particular context of use. This system model is applicable to the complete human-computer system, including both computer systems in use and software products in use.

2) A product quality model composed of eight characteristics (specification, analysis, MDA, fault tolerance, verification, testing, monitoring, fault and failure prediction) that relate to static attributes of software and dynamic attributes of the computer system. The model is applicable to both computer systems and software products.

4.2 Identify the Effect of Big Data Properties (RQ1)

The goal of this section is to answer RQ1 (how do the big data attributes affect the quality of big data applications?). Table 8 lists the general challenges posed by the big data attributes. These challenges create great difficulties regarding the quality of the data, thus affecting the quality of big data applications.

The volume property poses storage and scale challenges. The size of data sets used in industrial environments is huge, usually measured in terabytes or even exabytes. Such data sets need a huge space to be stored and processed [P40]. Application performance declines as data volume grows, and when the amount of data reaches a certain size, the application may crash and can no longer provide its services [P41]. Therefore, massive amounts of data inevitably affect the processing performance of big data applications.

The velocity property poses fast analysis and processing challenges. With the flood of data generated quickly by smart phones and sensors, the new trend of big data analysis has shifted the focus to "what can we do with the data" article . Rapid analysis and processing of data need to be considered [P52]. Data are generated and processed quickly and are therefore prone to errors. Mapping MapReduce frameworks to cloud architectures has become imperative in recent years because of the need to manage large data sets in a fast, reliable (and as cheap as possible) way [P22].

The variety property poses the heterogeneity challenge, which places higher requirements on the data processing capacity of big data applications Lai2016Data ; Zhou2015An . The increasing number of sensors deployed on the Internet makes the generated data complex. It is impossible for human beings to write a rule for each type of data to identify relevant information. As a result, most of the events in these data are unknown, abnormal and indescribable. The collection, analysis, auditing, management and testing of such a complex amount of data by industry, researchers, government and media has become a major problem [P39].

Properties | Challenge | Description
Volume | Storage/Scale | The data scale has a significant effect on the performance of big data applications.
Velocity | Fast Processing | Data are generated and processed quickly and are prone to errors.
Variety | Heterogeneity | Multitype data place higher requirements on the data processing capacity of big data applications.
Table 8: The Challenges and Descriptions of Big Data Properties
Relation | Paper ID
Volume-Data Correctness | [P39], [P42]
Volume-Data Completeness | [P23], [P54]
Volume-Data Timeliness | [P41], [P53], [P24]
Variety-Data Accuracy | [P42], [P55]
Variety-Data Consistency | [P39], [P55], [P42], [P33]
Velocity-Data Timeliness | [P39], [P65], [P23], [P56]
Velocity-Data Correctness | [P54], [P7]
Table 9: Distribution of the Relations between Big Data Properties and Data Quality Parameters

To answer RQ1, we extracted the relationships existing between the big data attributes and the data quality parameters. Table 9 identifies the primary studies discussing the relationship between pairs of big data attributes and data quality parameters. The relationships that we found in the primary studies are reported in Table 9 and discussed below.

  • Volume-Data Correctness: According to [P39] and [P42], the larger the volume of the data, the greater the probability that the data will be modified, deleted, and so on. In other words, a large volume of data has a high probability of errors in transmission, processing, and storage.

  • Volume-Data Completeness: Data completeness is a quantitative measurement that is used to evaluate how much valid analytical data are obtained compared to the planned number Gao2016Big . Data completeness is usually expressed as a percentage of usable analytical data (a minimal measurement sketch is given after this list). In general, increasing data volume reduces data completeness [P23], [P54].

  • Volume-Data Timeliness: In the era of big data, people are concerned not only with the size of data but also with the way data are processed. Because the amount of data to be processed is very large, business needs and competitive pressures require real-time and effective data processing [P24], and response time is critical in most situations [P53]. Thus, how to deal with a massive amount of data in a very short time is a major challenge. If these data cannot be processed in a timely manner, their value decreases and the original goal of building big data systems is lost [P41].

  • Variety-Data Accuracy: For big data applications, the sources of data are varied, including structured, semi-structured, and unstructured data. Part of these data has no statistical significance, which greatly influences the accuracy of big data application results [P55]. In addition, the contents of a database can become corrupted by erroneous programs storing incorrect values and deleting essential records. It is hard to recognize such quality erosion in large databases, but over time it spreads like a cancerous infection, causing ever-increasing big data system failures. Thus, not only data quality but also the quality of applications suffers under erosion [P42].

  • Variety-Data Consistency: Data consistency is useful to evaluate the consistency of given data sets from different perspectives [P33]. The consistency of various data has a positive impact on the content validity and consistency of big data application systems [P42], [P55]. Unstructured data can produce consistency problems, such as continuous availability and data security issues, in big data applications [P39].

  • Velocity-Data Timeliness: Effectively handling high-speed data transmission and ensuring the timeliness of data processing are very important. These data must be analyzed in time because the velocity of data generation is very high [P65], [P23]. Generally, the greater the velocity with which data can be analyzed, the larger the profit for the organization [P39]. Slow processing may leave big data systems unable to respond effectively to negative changes [P56]. Therefore, velocity creates challenges for data timeliness.

  • Velocity-Data Correctness: It is vital to optimize the use of limited computing resources to transfer data [P7]. High-speed data transmission greatly increases the data failure rate. Abnormal or missing data will affect the correctness and availability of big data applications [P54].
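
The following minimal sketch illustrates how two of the data quality parameters discussed above, data completeness (share of usable records) and data timeliness (share of records processed within a deadline), could be measured over a batch of records; the record fields and the two-second deadline are illustrative assumptions:

```python
# Illustrative measurement of two data quality parameters over a batch of
# records. Record fields and the 2-second deadline are assumptions.

def completeness(records, required_fields):
    """Percentage of records in which every required field is present."""
    usable = sum(1 for r in records
                 if all(r.get(f) not in (None, "") for f in required_fields))
    return 100 * usable / len(records)

def timeliness(records, deadline_seconds):
    """Percentage of records processed within the deadline."""
    on_time = sum(1 for r in records
                  if r["processed_at"] - r["generated_at"] <= deadline_seconds)
    return 100 * on_time / len(records)

batch = [
    {"id": 1, "value": 3.2, "generated_at": 0.0, "processed_at": 1.1},
    {"id": 2, "value": None, "generated_at": 0.0, "processed_at": 0.8},
    {"id": 3, "value": 7.5, "generated_at": 0.0, "processed_at": 4.0},
    {"id": 4, "value": 1.0, "generated_at": 0.0, "processed_at": 1.9},
]
print(completeness(batch, ["id", "value"]))   # 75.0
print(timeliness(batch, deadline_seconds=2))  # 75.0
```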

Overall, as summarized in Table 9, the volume property has a significant impact on all aspects of data quality, including data correctness, data timeliness, data completeness, and so on. Variety affects data consistency, data accuracy, etc. Velocity plays an important role in data timeliness and data correctness.

In summary, we determined the influences of big data attributes on big data applications. The major data quality issues are mostly due to volume, and we identified five major data quality parameters: data timeliness, data completeness, data correctness, data accuracy and data consistency.

4.3 Identify the Important Quality Attributes in Big Data Applications (RQ2)

The goal of this section is to answer RQ2 (which kind of important quality attributes do big data applications need to ensure?). We use ISO/IEC 25010:2011 to extract the quality attributes of big data applications.

Table 11 provides the statistical distribution of the different quality attributes. Based on these statistics, we identify the related quality attributes, as shown in Figure 7. For articles that involve more than one quality attribute, such as [P41] and [P58], we chose the main quality attribute that they convey.

The 83 primary studies discuss several quality attributes, including correctness, performance, availability, scalability, reliability, efficiency, flexibility, robustness, stability, interoperability, and consistency.

From Fig. 7, we can see that five main quality attributes are discussed in the 83 primary studies, whereas fewer articles focus on other attributes, such as stability, consistency, and efficiency. In addition, from the relevant literature, we can infer which technologies can affect the corresponding quality attributes, although there is no explicit statement in the literature. Table 10 shows some of the technologies that may affect correctness, performance, availability, scalability, and reliability. These technologies can help us to further understand these quality attributes. We now focus on the main quality attributes.

Attributes Techniques
Correctness Fault-tolerance mechanism
Performance Parallel architecture, Multicloud cross-layer cloud monitoring framework, Cache, Model-driven architecture, Write buffer, Performance analysis model
Availability Multicloud cross-layer cloud monitoring framework, Fault-tolerance mechanism, BigQueue
Scalability Flexible data analytic framework, distributed storage system, bloat-aware design and so on
Reliability Fault-tolerance mechanism, Heterogeneous NoSQL databases, Condition monitoring
Table 10: Techniques for Addressing Quality Attributes
  • Correctness: Correctness measures the probability that big data applications can 'get things right'. If a big data application cannot guarantee correctness, it has no value at all. For example, a weather forecast system that always provides the wrong weather is obviously not of any use. Therefore, correctness is the first attribute to be considered in big data applications. Papers [P25] and [P23] provide fault tolerance mechanisms to guarantee the normal operation of applications. If a big data application runs incorrectly, it will cause inconvenience or even loss to the user. Papers [P40] and [P43] provide testing methods to check for faults in big data applications to assure correctness.

  • Performance: Performance refers to the ability of big data applications to provide timely services, specifically in three areas: the average response time, the number of transactions per unit time, and the ability to maintain high-speed processing (a minimal measurement sketch is given after this list). The volume, variety and velocity attributes of big data have an impact on all three aspects. Due to the large amounts of data, performance is a key topic in big data applications. Table 11 lists the many relevant papers that refer to the performance of big data applications. The major purpose of focusing on the performance problem is to handle big data with limited resources. To be precise, the processing performance of big data applications under massive data scenarios is their major selling point and breakthrough. According to the relevant literature, common performance optimization technologies for big data applications generally fall into two parts [P57], [P56], [P52], [P26], [P25], [P14]. The first is hardware- and system-level observation to find specific bottlenecks and make hardware or system-level adjustments. The second is to achieve optimization mainly through adjustments to how specific software is used.

  • Availability: Availability refers to the ability of big data applications to run without any issue for a long time. The rapid growth of data has made it necessary for big data applications to manage data streams and handle an impressive volume, and since these data types are complex (variety), the operation process may create different kinds of problems. Consequently, it is important to ensure the availability of big data applications.

  • Scalability: Scalability refers to the ability of big data applications to maintain service quality when the number of users and the data volume increase. The volume attribute of big data inevitably brings about the scalability issue of big data applications. Specifically, the scalability of big data applications includes system scalability and data scalability. For a continuous stream of big data, processing systems, storage systems, etc. should be able to handle the data in a scalable manner. Moreover, big data application systems can be very complex, and to allow improvement, the system must be scalable. Paper [P7] proposes a flexible data analytic framework for big data applications that can flexibly handle big data with scalability.

  • Reliability: Reliability refers to the ability of big data applications to provide the specified functions under the specified conditions and within the specified time. Reliability issues are usually caused by unexpected exceptions in the design and undetected code defects. For example, paper [P58] uses a monitoring technique to monitor the operational status of big data applications in real time, so that failures can be detected in real time and developers can effectively resolve them.
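
The two performance indicators named above, average response time and throughput (transactions per unit time), can be measured with a minimal sketch such as the following; `handle_request` and the workload are placeholders, not a method from any surveyed paper:

```python
# Illustrative measurement of average response time and throughput around
# an arbitrary request handler. `handle_request` is a placeholder workload.
import time

def handle_request(payload):
    time.sleep(0.001)          # stand-in for real big data processing
    return len(payload)

def measure(requests):
    """Return (average response time in seconds, throughput in requests/s)."""
    start = time.perf_counter()
    latencies = []
    for payload in requests:
        t0 = time.perf_counter()
        handle_request(payload)
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    return sum(latencies) / len(latencies), len(requests) / elapsed

avg_latency, throughput = measure(["req"] * 100)
print(f"average response time: {avg_latency * 1000:.2f} ms")
print(f"throughput: {throughput:.1f} requests/s")
```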


Figure 7: The Frequency Distribution of the Quality Attributes
Attributes Studies
Correctness [P40], [P23], [P32], [P33], [P25], [P43], [P28], [P45], [P48], [P50], [P63], [P71], [P72], [P73], [P74]
Performance [P39], [P41], [P55], [P26], [P57], [P66], [P34], [P35], [P8], [P1], [P36], [P4], [P64], [P11], [P19], [P62], [P70], [P75], [P77], [P78]
Availability [P52], [P14], [P27], [P15], [P4], [P6], [P12], [P13], [P17], [P20], [P29], [P37], [P44], [P46], [P47], [P69], [P76], [P79], [P80]
Scalability [P7], [P2], [P59], [P21], [P60]
Reliability [P56], [P58], [P16], [P36], [P10], [P5], [P18], [P22], [P30], [P31], [P38], [P49], [P51], [P61], [P68], [P74]
Others [P24], [P54], [P67], [P9], [P3], [P81], [P82], [P83]
Table 11: Distribution of the Quality Attributes

Although big data applications have many other related quality attributes, the most important ones are the five mentioned above. Therefore, these five quality attributes are critical to ensuring the quality of big data applications and are the main focus of this survey.

4.4 Technologies for Assuring the QA of Big Data Applications (RQ3)

This section answers RQ3 (which kinds of technologies are used to guarantee the quality of big data applications?). We extracted the quality assurance technologies used in the primary studies. Fig. 8 shows the distribution of papers over these different types of QA technologies. These technologies cover the entire development process of big data applications. According to the papers we collected, we identified eight technologies: specification, analysis, model-driven architecture (MDA), fault tolerance, testing, verification, monitoring, and fault and failure prediction.


Figure 8: Distribution of Different QA Technologies

In fact, quality assurance is an activity that applies to the entire big data application life cycle. The development of big data applications is a systems engineering task that includes requirements, analysis, design, implementation, and testing. Accordingly, we mainly divide the QA technologies into design-time and runtime technologies. Based on the papers we surveyed and the development process of big data applications, we divide the quality assurance technologies for big data applications into the eight types above. Table 12 gives short descriptions of the different types of QA technologies for big data applications. Detailed explanations of the eight technologies are provided in Appendix A.

QA technologies Description References
Specification A specification refers to a type of technical standard to ensure the quality in the design time. [P1], [P2], [P3], [P4], [P5], [P6]
Analysis This technique can analyze the performance and other quality attributes of big data applications to determine the main factors that affect their quality. [P7], [P8], [P9], [P10], [P11], [P12], [P13], [P75]
Model-Driven Architecture (MDA) MDA provides a way (through related tools) to standardize a platform-independent application and select a specific implementation platform for the application, transforming application specifications to a specific implementation platform. [P14], [P15], [P16], [P17], [P18], [P19], [P20], [P21], [P22], [P76], [P77], [P78], [P81], [P82], [P83]
Fault tolerance Fault tolerance serves as an effective means to address reliability and availability concerns of big data applications. [P23], [P24], [P25], [P26], [P27], [P28], [P29], [P30], [P31], [P32]
Verification The purpose of verification is to verify that the design of the output ensures that the design phase of the input requirements is met. [P33], [P34], [P35], [P36], [P37], [P38]
Testing The purpose of testing is not only acknowledging application levels of correctness, performance and other quality attributes but also checking the testability of big data applications. [P39], [P40], [P41], [P42], [P43], [P44], [P45], [P46], [P47], [P48], [P49], [P50], [P51], [P79]
Monitoring Monitoring can detect failures or potential anomalies at runtime and is an effective method to guarantee the quality of big data applications. [P52], [P53], [P54], [P55], [P56], [P57], [P58], [P59], [P60], [P61], [P62], [P63], [P64], [P80]
Fault and Failure Prediction The goal is to achieve the prediction of the performance status and potential anomalies of big data applications and to provide the important and abundant information for real-time control. [P65], [P66], [P67], [P68], [P69], [P70], [P71], [P72], [P73], [P74]
Table 12: Identified QA Technologies
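
As a concrete illustration of one runtime-oriented technology from Table 12, the sketch below shows a minimal retry-based fault-tolerance wrapper that keeps a processing step available despite transient failures; the retry policy and the failing task are illustrative assumptions, not a mechanism taken from any surveyed paper:

```python
# Illustrative fault tolerance: retry a flaky processing step a bounded
# number of times before giving up. Retry count and backoff are assumptions.
import time

def with_retries(task, attempts=3, backoff_seconds=0.1):
    """Run `task` and retry on failure, re-raising after the last attempt."""
    for attempt in range(1, attempts + 1):
        try:
            return task()
        except Exception:
            if attempt == attempts:
                raise
            time.sleep(backoff_seconds * attempt)

# Hypothetical flaky step that fails twice before succeeding:
calls = {"n": 0}
def flaky_step():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient node failure")
    return "partition processed"

print(with_retries(flaky_step))  # "partition processed" on the third attempt
```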

4.5 Existing Strengths and Limitations (RQ4)

The main purpose of RQ4 (what are the strengths and limitations of the proposed technologies?) is to comprehend the strengths and limitations of QA approaches. To answer RQ4, we first compare the quality assurance technologies from six aspects. Then, we discuss the strengths and limitations of each technique.

4.5.1 Comparison

We compare the quality assurance technologies from six different aspects, including suitable stage, application domain, quality attribute, effectiveness, usability and efficiency, according to the study published by Patel and Hierons Patel2017A .

Suitable stage refers to the stage at which these quality assurance technologies can be used: design time, runtime or both. Application domain means the specific area that the big data applications belong to, such as recommendation systems and prediction systems. Quality attribute identifies the quality attributes most frequently addressed by the corresponding technologies. Effectiveness refers to the extent to which a QA technology can guarantee the quality of big data applications. Usability refers to the extent to which a QA technology can be applied in the QA environment of big data applications to achieve specific goals with effectiveness, efficiency, and satisfaction. Efficiency refers to how efficiently a QA technology improves the quality of big data applications. For better comparison, we present the results in Table 13 (the quality attributes are those presented in Table 11).

Technologies Stage Application Domain Quality Attributes Effectiveness Usability Efficiency
Specification Design time General Efficiency, Performance, Scalability Establishing detailed functional and behavioral descriptions, performance requirements, and so on to ensure the quality. Ensure that all quality requirements of the user are met during the design phase. Normally, specification can achieve highly efficient results.
Analysis Design time General Performance, Scalability, Flexibility Analyzes the main factors that affect the quality, provides a basis for testing of big data applications. Analyze the quality impact factors of a given application as much as possible According to the analysis results, different levels of efficiency can be achieved.
MDA Design time Hadoop Framework or MapReduce, Database application Reliability, Efficiency, Performance, Availability Use models to guide the design, development, and maintenance of systems, providing ease of big data application quality. The model determines the quality of the follow-up. It can achieve high efficiency due to the big data application framework.
Fault-tolerance Design and Runtime Hadoop, Distributed Storage System, Artificial Intelligence System Performance, Correctness, Reliability, Scalability Allows big data applications to tolerate the occurrence of mistakes within a certain range; even if a minor error occurs, the big data application can still operate stably. Allows the system to make a small number of errors, effectively guaranteeing quality. For any big data application with fault tolerance, its quality is bound to be better.
Verification Design and Runtime Big Data Classification System, MapReduce and so on Correctness, Reliability, Performance Verification is based on the test of the entire system analysis, especially functional analysis. Identify features that should not be implemented and reduce the complexity of big data applications. Depends on the verification approaches, different approaches achieve different efficiency effects.
Testing Design and Runtime Large Databases, Geographic Security Control System and Bioinformatics Software Correctness, Scalability, Performance Under specified conditions, it can operate applications to detect errors, measure software quality, and evaluate whether they meet design requirements. Detects big data application errors created in the development stage. Depends on the testing method.
Monitoring Runtime Hadoop, MapReduce, and General Performance, Availability, Reliability Monitoring the occurrence of errors in big data applications and providing alerts that allow managers to spot errors and fix them. Effectively monitor the occurrence of errors Monitoring is only an approach that provides aid and can also lead to a high load.
Fault and Failure Prediction Runtime Cloud platforms, Automated IT System and so on Reliability, Performance Predicting errors that may occur in the operation of big data applications, so that errors can be prevented in advance. May predict errors that were not discovered before. Based on the prediction results, varying degrees of effectiveness can be achieved.
Table 13: Comparison of QA Technologies

From Table 13, we can see that specification, analysis and MDA are used at design time, monitoring and fault and failure prediction are used at runtime, and fault tolerance, verification and testing cover both design time and runtime. Design-time technologies are commonly applied to MapReduce, which is a great help when designing big data application frameworks. The runtime technologies are usually applied after big data applications have been built, and their application is very extensive, covering intelligent systems, storage systems, cloud computing, etc.

For quality attributes, most technologies address performance and reliability, while some focus on correctness, scalability, etc. To a certain extent, these eight quality assurance technologies assure the quality of big data applications during their application phase, although their effectiveness, usability and efficiency differ. To better illustrate these three parameters, we analyze two quality assurance technologies in more detail. Specification establishes complete descriptions of information, detailed functional and behavioral descriptions, and performance requirements for big data applications to ensure quality. It can therefore guarantee the functional completeness of a big data application, help it achieve its designated goals and ensure satisfaction. Although specification does not guarantee quality at runtime, it guarantees quality in the initial stage. As a vital technology in the QA of big data applications, testing mainly exercises big data applications at runtime. As is well known, the purpose of testing is to detect errors and measure quality, thus ensuring effectiveness and usability. In addition, the efficiency of testing largely depends on the testing methods and tools. Detailed descriptions of the other technologies are given in Table 13.

4.5.2 Strengths and Limitations

  • Specification: Because big data applications generate large amounts of data, suitable specifications can be used to select the most useful data at hand. This technology can effectively improve the efficiency, performance and scalability of big data applications by using UML [P5], ADL [P4] and so on, and the quality of the system can be guaranteed at the design stage. In addition, specification is also used to ensure that the system functions of big data applications can be implemented correctly. However, the surveyed articles each target a specific application or scenario and do not generalize to different types of big data applications [P4], [P2].

  • Analysis: The main factors that affect the quality of big data applications are the size of the data, the speed of data processing and data diversity. Analysis technologies can identify, during the design phase of big data applications, the major factors that may affect the quality of software operation. Current approaches only focus on analyzing performance attributes [P8], [P7]; approaches for analyzing other quality attributes still need to be developed. In addition, it is impossible to analyze all quality impact factors of a big data application, so the specific conditions should be fixed before the analysis.

  • MDA: MDA uses a single model to generate and derive most of the code of a big data application and can greatly reduce human error. To date, MDA metamodels and model mappings target only very specific kinds of systems, e.g., MapReduce [P15], [P14]. Metamodels and model mapping approaches for other kinds of big data applications are urgently needed.

  • Fault tolerance: Fault tolerance is one of the staple metrics of quality of service in big data applications. Fault-tolerant mechanisms permit big data applications to tolerate the occurrence of faults within a certain range: if a minor error occurs, the application can still operate stably [P30], [P24]. Nevertheless, fault tolerance cannot always be optimal. Furthermore, fault tolerance can introduce performance issues, a problem that most current approaches neglect.

  • Verification: Due to the complexity of big data applications, there is no uniform verification technology in general. Verification technologies verify or validate quality attributes by using logical analysis, theorem proving and model checking. There is a lack of formal models and corresponding algorithms to verify the attributes of big data applications [P35], [P34]. Because of the big data attributes, traditional software verification standards no longer meet the quality requirements Hussain2016Collect .

  • Testing: In contrast to verification, testing is always performed during the execution of big data applications. Due to the large amounts of data, automatic testing is an efficient approach for big data applications. Current research often applies traditional testing approaches to big data applications [P45]. However, novel approaches for testing big data attributes are urgently needed, because the focus of big data application testing differs from that of conventional testing. Conventional testing focuses on diverse software errors regarding structures, functions, UI and connections to external systems; big data application testing focuses on intricate algorithms, large-scale data input, complicated models and so on. Furthermore, conventional testing and big data application testing differ in the test input, the test execution and the results. As an example, learning-based testing approaches [P73] can test the velocity attribute of big data applications.

  • Monitoring: Monitoring can obtain accurate status and behavior information of big data applications in a real operating environment. For big data applications running in a complex and variable network environment, the operating environment affects the operation of the software system and produces unexpected problems. Monitoring technologies therefore make it easier to respond in time to emerging anomalies and prevent failures [P59], [P52]. A stable and reliable big data application relies on monitoring technologies that monitor not only whether the service is alive but also the operation of the system and the quality of the data. The high velocity of big data raises monitoring accuracy challenges and may introduce overhead problems for big data applications.

  • Fault and Failure Prediction: Prediction technologies can predict errors that may occur in the operation of big data applications so that errors can be prevented in advance. Due to the complexity of big data applications, the accuracy of prediction is still a substantial problem that needs to be considered in the future. Deep learning-based approaches [P28], [P26], [P25] can be combined with other technologies to improve prediction accuracy, given the large amounts of available data; a minimal illustrative sketch of the basic prediction idea follows.
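As a purely illustrative example (our own sketch, not drawn from any primary study), the following snippet trains a simple logistic-regression classifier on synthetic, hypothetical runtime metrics (CPU utilization, memory utilization, disk I/O wait) to flag task executions that are likely to fail; real big data platforms would use far richer features and, as noted above, deep learning models.

```python
# Minimal, illustrative failure-prediction sketch on synthetic, hypothetical
# runtime metrics; not the approach of any surveyed study.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Hypothetical per-task metrics: [cpu_utilization, memory_utilization, disk_io_wait].
n_tasks = 2000
X = rng.uniform(0.0, 1.0, size=(n_tasks, 3))

# Synthetic ground truth: heavily loaded tasks fail more often.
failure_probability = 1.0 / (1.0 + np.exp(-(6 * X[:, 0] + 4 * X[:, 1] + 3 * X[:, 2] - 8)))
y = rng.binomial(1, failure_probability)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# A simple baseline predictor; deep learning models could replace it when
# large volumes of monitoring data are available.
model = LogisticRegression().fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test), target_names=["ok", "fail"]))
```

In practice, the achievable true and false positive rates depend heavily on the chosen features and on the available monitoring data.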

4.6 Empirical Evidence (RQ5)

The goal of RQ5 (what are the real cases of using the proposed technologies?) is to elicit empirical evidence on the use of QA technologies. We organize the discussion along the QA technologies discussed in Section 4.4.

Ref | Techniques | Case Type | Domain | Metric | Experimental Results
[P4] | Specification | Small case | Traffic forecasting system | Delay time | Rigorous, easy and expressive
[P7] | Analysis | Real-world case | Scientific data compression and remote visualization | Latency, throughput | Obtain two factors
[P8] | Analysis | Large case | MapReduce application | Processing time, job turnaround, hard disk bytes written | Improve the performance
[P15] | MDA | Small case | Analytics-intensive big data applications | Accidental complexities, cycle | The effectiveness of MDD for accidental complexities
[P18] | MDA | Small case | Not mentioned | CPU, memory, network utilization levels | Improve the scalability
[P19] | MDA | Small case | Word Count application | Not mentioned | High degree of automation
[P26] | Fault tolerance | Real-world case | Join bidirectional data streams | Time consumption, recovery ratio | Improve the performance of joining two-way streams
[P28] | Fault tolerance | Real-world case | MapReduce data computing applications | CPU utilization, memory footprint, disk throughput, network throughput | Transparently enable fault tolerance for applications
[P41] | Testing | Small case | Network public opinion monitoring system | Response time | No specific instructions
[P50] | Testing | Small case | Image processing | Error detection rate | Detects all the embedded mutants
[P33] | Verification | Small case | Cell morphology assay | MRs | Its effectiveness for testing ADDA
[P56] | Monitoring | Real-world case | Cloud monitoring system | Insert operation time | Achieves a response time of a few hundred
[P55] | Monitoring | Real-world case | Big data public opinion monitoring platform | Accuracy rate, elapsed time/ms | High accuracy and meeting the requirements of real time
[P53] | Monitoring | Large case | No specific instructions | Throughput, read latency, write latency | Improve the performance by changing various tuning parameters
[P58] | Monitoring | Small case | Big data-based condition monitoring of power apparatuses | No specific instructions | Improve the accuracy of condition monitoring
[P67] | Prediction | Small case | Cloud computing system | False positive rate, true positive rate | Achieve high true positive rate, low false positive rate for failure prediction
[P65] | Prediction | Small case | Different applications | Prediction accuracy | New prediction system is accurate and efficient
Table 14: Experimental summary and statistics
  • Specification. The approach in [P4] is explained through a case study of specifying and modeling a Vehicular Ad hoc NETwork (VANET). The major merit of the proposed method is its capacity to take into consideration big data attributes and cyber-physical system attributes through customized concepts and models in a rigorous, simple and expressive way.

  • Analysis. The experiments in [P7] identify the two factors that most affect the quality of scientific data compression and remote visualization, analyzed in terms of latency and throughput. The experiments in [P8] analyze the connection between the performance measures of several MapReduce applications and performance concepts such as CPU processing time. The results of the performance analysis show that the major performance measures are processing time, job turnaround and so on; to improve the performance of big data applications, these measures must therefore be taken into consideration.

  • MDA. In [P15], the investigators demonstrate the effectiveness of the proposed approach through a case study; the approach can overcome accidental complexities in analytics-intensive big data applications. [P18] conducts a series of tests on Amazon's AWS cloud platform to evaluate the performance and scalability of the observable architecture by considering CPU, memory and network utilization levels. [P19] uses a simple case study of the Word Count application to evaluate the proposed architecture and metamodel.

  • Fault tolerance. The experiments in [P26] show that the DAP architecture can improve the performance of joining two-way streams, as analyzed through time consumption and recovery ratio. In addition, when nonadjacent nodes fail, all data can be recovered as long as the newly started VMs come up within a few seconds; when neighboring nodes fail, only part of the data can be recovered. By analyzing CPU utilization, memory footprint, disk throughput and network throughput, the experiments in [P28] show that the performance of all cases (MapReduce data computing applications) can be significantly improved.

  • Verification. In [P33], the authors use a cell morphology assay (CMA) as an example to describe the design of the framework. Verifying and validating the datasets, software systems and algorithms in CMA demonstrates the effectiveness of the framework.

  • Testing. In [P41], the authors use a number of virtual users to simulate real users and observe the average response time and CPU performance in a network public opinion monitoring system. In [P50], the experiment verifies the effectiveness and correctness of the proposed technique in alleviating the oracle problem in a region-growing program; the testing method successfully detects all the embedded mutants. A minimal illustrative sketch of such an oracle-free, metamorphic-style check is given after this list.

  • Monitoring. The experiments in [P56] show that a large queue can increase the write speed and that the proposed framework sustains a very high throughput within a reasonable amount of time in a cloud monitoring system; the authors also provide comparative tests to show the effectiveness of the framework. In [P55], the comparison experiment shows that the method is reliable and fast, especially as the data volume increases, where its speed advantage becomes obvious.

  • Fault and Failure Prediction. In [P67], the authors implement the proactive failure management system and test the performance in a production cloud computing environment. Experimental results show that the approach can reach a high true positive rate and a low false positive rate for failure prediction. In [P65], the authors provide emulation-based evaluations for different sets of data traces, and the results show that the new prediction system is accurate and efficient.
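To illustrate how testing without a precise oracle (as discussed for [P50] and [P33] above) can work in principle, the following minimal sketch, which is our own simplified example rather than the technique of any primary study, checks a metamorphic relation for a toy word-count aggregation: permuting or re-partitioning the input records must not change the aggregated result.

```python
# Illustrative metamorphic test for a toy word-count aggregation: shuffling or
# re-partitioning the input records must not change the aggregated counts.
import random
from collections import Counter

def word_count(records):
    """Toy aggregation standing in for a distributed word-count job."""
    counts = Counter()
    for record in records:
        counts.update(record.split())
    return counts

def test_shuffle_and_repartition_invariance():
    source = ["spark hadoop spark", "flink spark", "hadoop hdfs", "kafka flink kafka"]
    baseline = word_count(source)

    shuffled = source[:]
    random.shuffle(shuffled)                                      # follow-up input 1: permuted records
    repartitioned = [w for line in source for w in line.split()]  # follow-up input 2: one word per record

    assert word_count(shuffled) == baseline
    assert word_count(repartitioned) == baseline

if __name__ == "__main__":
    test_shuffle_and_repartition_invariance()
    print("metamorphic relations hold")
```

A faulty implementation that, for instance, silently drops one partition would violate these relations and be detected without knowing the expected counts in advance.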

5 Discussion

The key findings have already been provided in Table 1. In this section, we mainly discuss the cross-cutting findings and the existing challenges identified by this review.

5.1 Cross-cutting Findings

This subsection discusses some cross-cutting findings deduced from the key findings.

  • Relations between big data attributes and quality attributes. The collected results show that big data attributes sometimes have contradictory impacts on quality attributes: some big data attributes improve certain quality attributes while weakening others. These findings lead to the conclusion that big data attributes do not always improve all quality attributes of big data applications. To a certain extent, this conclusion matches how most researchers characterize big data attributes when studying the challenges and benefits that these attributes bring to quality attributes in practice.

  • Relations among big data attributes, quality attributes and big data applications. In this study, researchers have proposed quality attributes to effectively assess the impact of big data attributes on applications. We therefore believe it is incorrect to limit research on the quality of big data applications to a single big data property, obtain some negative results, and then draw the general conclusion that comprehensive consideration of big data attributes weakens the quality of big data applications. For example, because the data that a system needs to process has a large volume, fast velocity and huge variety, a number of companies have built sophisticated monitoring and analysis tools that go far beyond simple resource utilization reports. The continuous improvement of such monitoring and analysis tools causes big data application systems to occupy more resources, which means longer response times and lower performance [P53], and most big data applications that take big data attributes into account consequently become more complex. Nevertheless, a general negative conclusion should not be drawn from this, not least because it ignores other quality attributes such as reliability, scalability and correctness.

  • Relations between quality attributes and QA technologies. It is important to note that researchers may use different QA technologies when considering the same quality attributes; that is, empirical experience is often relied upon in practice. It can be inferred that the relations between quality attributes and QA technologies are not one-to-one. For example, correctness can be achieved through a variety of QA technologies, including testing, fault tolerance, verification, monitoring, and fault and failure prediction, as analyzed from Tables 11 and 12. On the other hand, when using the same QA technology, different researchers design different methods and evaluation indicators. Therefore, when a study finds a negative or a positive relation between a quality attribute and a QA technology, we cannot draw a specific, general conclusion about the relation between them; the problem needs to be investigated for various types of big data applications. For example, for big data applications concerned with privacy, monitoring technologies can improve the reliability of the system, but in some common big data applications, an excessive emphasis on monitoring may degrade performance. Therefore, the specification of QA technologies and the relationship between QA technologies and quality attributes need to be studied further.

5.2 Existing Challenges

Based on the aforementioned key findings and cross-cutting findings, we discuss some research challenges in this subsection.

  • Challenge 1: The existing problems brought by big data attributes.

Although many technologies have been proposed to address big data attributes, existing technologies cannot provide adequate scalability and still face major difficulties. Based on the SLR results, Table 15 summarizes the challenges and possible solutions for the 3V attributes. For example, distributed file systems have excellent characteristics such as high fault tolerance and high throughput. They can use multiple storage servers to share the storage load, store a large amount of data and support linear expansion; when the storage space is insufficient, storage devices can be added via hot swapping, so the system expands conveniently. These capabilities address the storage and scalability challenges caused by the volume attribute. Many studies Elkafrawy2017HDFSX ; Radha2014Efficient show that distributed file systems can handle large-scale data very well.

For large-scale optimization and high-speed data transmission in big data applications, a decomposition-based distributed parallel programming algorithm Ke2016On has been proposed, together with an online algorithm that dynamically adjusts data partitioning and aggregation. Dobre et al. Dobre2014Parallel review various parallel and distributed programming paradigms, analyze how they fit into the big data era, and present modern emerging paradigms and frameworks. Consequently, parallel programming is particularly effective in big data applications, especially for addressing the velocity of data, as illustrated by the sketch after Table 15. In addition, NoSQL databases Reniers2017On were created to address the challenges brought by the multiple data types of large-scale data collections, especially in big data applications. NoSQL's flexible storage structure fundamentally solves the problem of storing varied and unstructured data [P23]. At the same time, distributed file systems solve the problem of data storage and greatly reduce costs. These technologies can therefore be combined with existing QA technologies for big data applications in the future.

Properties | Challenge | Possible Solutions
Volume | Storage/Scale | Distributed File Systems
Velocity | Fast Processing | Parallel Programming
Variety | Heterogeneity | NoSQL Databases
Table 15: Properties, Challenges and Technologies
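As a simplified illustration of the parallel-programming direction in Table 15 (our own sketch under assumed toy data, not taken from Ke2016On or Dobre2014Parallel), the snippet below partitions an incoming batch of records and processes the partitions in parallel before aggregating the partial results, mirroring the partition-and-aggregate pattern of MapReduce-style frameworks.

```python
# Simplified partition-and-aggregate processing of a high-velocity batch of
# records with Python's standard multiprocessing pool (assumed toy data).
from collections import Counter
from multiprocessing import Pool

def process_partition(records):
    """Map step: count events per type within one partition."""
    counts = Counter()
    for record in records:
        counts[record["event"]] += 1
    return counts

def partition(data, n_parts):
    """Split an incoming batch into roughly equal partitions."""
    return [data[i::n_parts] for i in range(n_parts)]

if __name__ == "__main__":
    batch = [{"event": "click" if i % 3 else "view"} for i in range(100_000)]
    with Pool(processes=4) as pool:
        partial_counts = pool.map(process_partition, partition(batch, 4))
    total = sum(partial_counts, Counter())   # reduce step: merge partial results
    print(total)
```

Production systems would of course rely on frameworks such as MapReduce or Spark rather than a local process pool, but the partition, map and reduce roles remain the same.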
  • Challenge 2: Lack of awareness and good understanding of QA technologies for big data applications.

As mentioned in Section 5.1, because practitioners differ in professional skills and understanding of the field, they tend to use different QA technologies when considering the same quality attributes; the QA technologies chosen purely from experience may therefore not be the most appropriate ones. Moreover, an incorrect application of QA technologies can cause extensive losses. For example, an incorrect trading algorithm in the electronic trading system of the United States financial company KCG (Knight Capital Group) led to the purchase of 150 different stocks at unfavorable prices, causing the company a loss of 440 million US dollars, with its shares falling 62% that day (https://dealbook.nytimes.com/2012/08/02/knight-capital-says-trading-mishap-cost-it-440-million/). Therefore, a clear understanding of QA technologies can reduce the implementation of incorrect algorithms and technologies in big data applications, thus avoiding huge losses. Nevertheless, the variety and diversity of big data applications make it difficult to establish a theory of QA technologies that normalizes them, which creates the challenge of a lack of awareness of QA technologies. In general, fully understanding the capabilities and limitations of QA technologies can address the specific needs of big data applications. Consequently, researchers are advised to fill this gap by deepening theoretical research, considering more mature QA technologies, and making use of the studies frequently applied in practice.

  • Challenge 3: Lack of quantitative models and algorithms to measure the relations among big data attributes, data quality parameters and software quality attributes.

The SLR results show that big data attributes are related to the quality of software. However, big data attributes first affect multiple data quality parameters, and these data quality parameters in turn affect the quality of the software. Figure 9 shows our preliminary view of the relations among big data attributes, data quality parameters and software quality attributes. However, a change in one attribute is often accompanied by changes in several others, so more detailed theories, models and algorithms are needed to precisely understand the different kinds of relations. To specify quality requirements in the context of big data applications, [P1] presents a novel approach that addresses some of the unique engineering challenges of big data: it intersects big data attributes with software quality attributes and then identifies the system quality requirements that apply to the intersection. Nevertheless, the approach is still in an early stage and has not been applied in the development environment of big data applications. Hence, this remains a considerable challenge and a trending research issue.


Figure 9: Relations among Big Data Properties, Data Quality Parameters and Software Quality Attributes
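To indicate what such a quantitative model might look like, one deliberately simple, hypothetical formulation (our own assumption, not taken from any primary study) chains the two effects depicted in Figure 9: each data quality parameter is a function of the 3V attributes, and each software quality attribute is approximated as a weighted combination of the data quality parameters.

```latex
% Hypothetical two-stage model:
% 3V attributes -> data quality parameters -> software quality attributes.
\begin{align}
  d_j &= f_j\bigl(v_{\mathrm{volume}},\, v_{\mathrm{velocity}},\, v_{\mathrm{variety}}\bigr),
        & j &= 1, \dots, m,\\
  q_k &\approx \sum_{j=1}^{m} w_{kj}\, d_j,
        & k &= 1, \dots, n,
\end{align}
```

Here $d_j$ denotes a data quality parameter (e.g., accuracy or timeliness), $q_k$ a software quality attribute (e.g., performance or reliability), and the functions $f_j$ and weights $w_{kj}$ would have to be estimated from measurements of real big data applications.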
3V Properties | Software Quality Attributes | Technologies
Velocity, Variety, Volume | Reliability | Specification
Velocity, Volume | Performance | Analysis
Volume | Performance, Scalability | MDA
Variety, Volume | Performance, Scalability | Fault tolerance
Volume, Variety | Performance, Reliability | Verification
Variety, Velocity | Availability, Performance | Testing
Variety, Velocity | Performance, Real time | Monitoring
Variety | Performance, Dependability | Fault and Failure Prediction
Table 16: 3V Properties, Software Quality Attributes and Techniques of Big Data Application
  • Challenge 4: Lack of mature tools for QA technologies for big data applications.

In Section 4.4, we summed up eight QA technologies for big data applications based on the 83 selected primary studies. Nevertheless, many authors discussed existing limitations and needed improvements, so existing technologies can only solve quality problems to a certain extent. From Table 16, we can see that the 3V properties result in software quality issues and that the corresponding technologies can partially address those problems.

However, with the wide application of machine learning in the field of big data, new non-functional attributes of big data applications, such as fairness and interpretability, are gradually emerging. Processing a large amount of data consumes more resources of the application system, so the performance of big data applications is an urgent concern, and a fair distribution of resources can greatly improve application quality. For example, in distributed cloud computing, storage and bandwidth resources are usually limited and very expensive, so collaborative users need to use resources fairly [P81]; in [P82], an elastic online scheduling framework is proposed to guarantee fairness for big data applications. Another attribute is interpretability. Interpretability is a subjective concept on which it is hard to reach a consensus, involving the two different dimensions of semantics and complexity. When dealing with big data, researchers often focus on performance indicators of big data applications, such as accuracy, but these indicators only explain part of the behavior, and the black-box part of big data applications remains difficult to explain clearly. As the amount of data to be processed rapidly increases over time, the structure of big data applications gradually becomes more complex, which makes them harder to interpret; studying the interpretability of big data applications is therefore very important [P83]. However, we have collected few papers on these non-functional attributes so far; the research is still in its infancy and lacks mature tools or technologies.

In addition, although there are many mature QA tools for traditional software systems, none of the surveyed approaches discusses mature tools dedicated to big data applications. Indeed, if practitioners want to apply QA technologies to big data applications today, they would have to implement their own tools, as no publicly available and maintained ones exist. This is a significant obstacle to the widespread use of QA technologies for big data applications, both in empirical research and in practice.

6 Threats to Validity

Several threats were encountered in the design of this study. As in all SLR studies, a common threat to validity concerns the coverage of all relevant studies. In the following, we discuss the main threats to our study and the ways in which we mitigated them.

External validity: In the data collection process, most of the data were collected by three researchers, which may lead to incomplete data collection, as some related articles may be missing. Although all authors reduced this threat by identifying unclear questions and discussing them together, the threat still exists. In addition, each researcher may be biased and inflexible when extracting data, so at each stage of the study we ensured that at least two other reviewers reviewed the work. Another potential threat is that only studies published in English were considered; however, since English is the main language used for academic papers, this threat is considered reasonably minimal.

Internal validity: This SLR may have missed some related novel research papers. To alleviate this threat, we searched for papers in big data-related journals, conferences and workshops; in total, 83 primary studies were selected using the SLR process. A possible threat is that the QA technologies are not clearly stated in some of the selected primary studies. In addition, we endeavored to extract as much information as possible when analyzing each article, which helps to avoid missing important information and minimizes this threat.

Construct validity: This concept relates to the validity of the obtained data and the research questions. For a systematic literature review, it mainly concerns the selection of the primary studies and how well they represent the population relevant to the research questions. We took several steps to reduce this threat; for example, the automatic search was performed on several electronic databases to avoid potential biases.

Reliability: Reliability focuses on ensuring that the same results would be obtained if our review were conducted again. Different researchers who participated in the survey may be biased in collecting and analyzing data. To address this threat, two researchers extracted and analyzed the data in parallel, strictly following the screening strategy, and further discussed differences of opinion in order to enhance the objectivity of the results. Nevertheless, the background and experience of the researchers may have produced some prejudices and introduced a certain degree of subjectivity in some cases. This threat is also related to replicating the same findings, which in turn affects the validity of the conclusions.

7 Conclusions and Future Work

In this paper, we conducted a systematic literature review on QA technologies for big data applications, discussing the state-of-the-art technologies used to ensure the quality of big data applications. Based on this goal, we presented five research questions. We applied a database search approach to identify the most relevant studies on the topic and selected 83 primary studies. Finally, we analyzed the data collected from these studies to answer the research questions presented in Section 3.1.

Using the SLR, a list of eight QA technologies has been identified. These technologies not only play an important role in research on big data applications but also impact actual big data applications. Although researchers have proposed these technologies to ensure quality, research on big data quality is still in its infancy, and quality problems still exist in big data applications. The results of this study are useful for future research on QA technologies for big data applications. Based on our discussions, the following topics may be part of our future work:

  • Considering quality attributes with big data properties together to ensure the quality of big data applications.

  • Understanding and tapping into the limitations, advantages and applicable scenarios of QA technologies.

  • Researching quantitative models and algorithms to measure the relations among big data properties, data quality attributes and software quality attributes.

  • Developing mature tools to support QA technologies for big data applications.

References


  • (1) IDC, https://www.i-scoop.eu/big-data-action-value-context/big-data-2020-future-growth-challenges-big-data-industry/.
  • (2) H.-N. Dai, R. C.-W. Wong, H. Wang, Z. Zheng, A. V. Vasilakos, Big data analytics for large-scale wireless networks: Challenges and opportunities, ACM Computing Surveys (CSUR) 52 (5) (2019) 1–36.
  • (3) C. P. Chen, C.-Y. Zhang, Data-intensive applications, challenges, techniques and technologies: A survey on big data, Information Sciences 275 (2014) 314–347.
  • (4) Z. Allam, Z. A. Dhunny, On big data, artificial intelligence and smart cities, Cities 89 (2019) 80–91.
  • (5) C. Tao, J. Gao, Quality assurance for big data applications: Issues, challenges, and needs, in: The Twenty-Eighth International Conference on Software Engineering and Knowledge Engineering, 2016.
  • (6) B. Jan, H. Farman, M. Khan, M. Imran, I. U. Islam, A. Ahmad, S. Ali, G. Jeon, Deep learning in big data analytics: a comparative study, Computers & Electrical Engineering 75 (2019) 275–287.
  • (7) M. Hilbert, Big data for development: A review of promises and challenges, Development Policy Review 34 (1) (2016) 135–174.
  • (8) N. Laranjeiro, S. N. Soydemir, J. Bernardino, A survey on data quality: Classifying poor data, in: IEEE Pacific Rim International Symposium on Dependable Computing, 2015.
  • (9) I. Anagnostopoulos, S. Zeadally, E. Exposito, Handling big data: research challenges and future directions, Journal of Supercomputing 72 (4) (2016) 1494–1516.
  • (10) V. N. Gudivada, R. Baezayates, V. V. Raghavan, Big data: Promises and problems, Computer 48 (3) (2015) 20–23.
  • (11) S. Montagud, S. Abrahão, E. Insfran, A systematic review of quality attributes and measures for software product lines, Software Quality Journal 20 (3-4) (2012) 425–486.
  • (12) A. Nguyen-Duc, D. S. Cruzes, R. Conradi, The impact of global dispersion on coordination, team performance and software quality–a systematic literature review, Information and Software Technology 57 (2015) 277–294.
  • (13) H. Wang, Z. Xu, H. Fujita, S. Liu, Towards felicitous decision making: An overview on challenges and trends of big data, Information Sciences 367-368 (2016) 747–765.
  • (14) S. Bagriyanik, A. Karahoca, Big data in software engineering: A systematic literature review 6 (1).
  • (15) G. G. Schulmeyer, J. I. McManus, Handbook of software quality assurance, Van Nostrand Reinhold Co., 1992.
  • (16) J. Gao, C. Xie, C. Tao, Big data validation and quality assurance – issuses, challenges, and needs, in: Service-Oriented System Engineering, 2016, pp. 433–441.
  • (17) S. T. Lai, F. Y. Leu, Data Preprocessing Quality Management Procedure for Improving Big Data Applications Efficiency and Practicality, Springer International Publishing, 2016.
  • (18) H. Zhou, J. G. Lou, H. Zhang, H. Lin, H. Lin, T. Qin, An empirical study on quality issues of production big data platform, in: IEEE/ACM IEEE International Conference on Software Engineering, 2015, pp. 17–26.
  • (19) S. Juddoo, Overview of data quality challenges in the context of big data, in: International Conference on Computing, Communication and Security, 2016.
  • (20) P. Zhang, X. Zhou, J. Gao, C. Tao, A survey on quality assurance techniques for big data applications, in: IEEE BigDataService 2017 - International Workshop on QUALITY ASSURANCE AND VALIDATION FOR BIG DATA APPLICATIONS, 2017.
  • (21) M. Ge, V. Dohnal, Quality management in big data, Informatics 5 (2).
  • (22) J. Liu, J. Li, W. Li, J. Wu, Rethinking big data: A review on the data quality and usage issues, Isprs Journal of Photogrammetry & Remote Sensing 115 (2016) 134–142.
  • (23) B. Kitchenham, R. Pretorius, D. Budgen, O. P. Brereton, M. Turner, M. Niazi, S. Linkman, Systematic literature reviews in software engineering: A tertiary study, Information & Software Technology 51 (1) (2009) 7–15.
  • (24) B. Kitchenham, Procedures for performing systematic reviews, Keele 33.
  • (25) V. R. Basili, G. Caldiera, H. D. Rombach, The goal question metric approach, Encyclopedia of Software Engineering.
  • (26) B. A. Kitchenham, T. Dyba, M. Jorgensen, Evidence-based software engineering, in: International Conference on Software Engineering, 2004. ICSE 2004. Proceedings, 2004, pp. 273–281.
  • (27) H. Zhang, M. Ali Babar, On searching relevant studies in software engineering, in: International Conference on Evaluation and Assessment in Software Engineering, 2010.
  • (28) L. A. Goodman, Snowball sampling, Annals of Mathematical Statistics 32 (1) (1961) 148–170.
  • (29) B. Kitchenham, S. Charters, Guidelines for performing systematic literature reviews in software engineering.
  • (30) A. Aggarwal, Identification of quality parameters associated with 3v’s of big data, in: International Conference on Computing for Sustainable Global Development, 2016.
  • (31) D. Fasel, Potentials of big data for governmental services, in: First International Conference on Edemocracy and Egovernment, 2014.
  • (32) D. Becker, T. D. King, B. McMullen, Big data, big data quality problem, in: Big Data (Big Data), 2015 IEEE International Conference on, IEEE, 2015, pp. 2644–2653.
  • (33) R. Clarke, Big data, big risks, Information Systems Journal 26 (1) (2016) 77–90.
  • (34) I. IT center, Big data in the cloud: Converging technologies.
  • (35) K. Patel, R. M. Hierons, A mapping study on testing non-testable systems, Software Quality Journal (6) (2017) 1–41.
  • (36) M. Hussain, M. B. Al-Mourad, S. S. Mathew, Collect, scope, and verify big data – a framework for institution accreditation, in: International Conference on Advanced Information Networking and Applications Workshops, 2016.
  • (37) P. M. Elkafrawy, A. M. Sauber, M. M. Hafez, Hdfsx: Big data distributed file system with small files support, in: Computer Engineering Conference, 2017, pp. 131–135.
  • (38) k. R. Radha, S. Karthik, Efficient handling of big data volume using heterogeneous distributed file systems, International Journal of Computer Trends and Technology 15 (4).
  • (39) H. Ke, P. Li, S. Guo, M. Guo, On traffic-aware partition and aggregation in mapreduce for big data applications, IEEE Transactions on Parallel and Distributed Systems 27 (3) (2016) 818–828.
  • (40) C. Dobre, F. Xhafa, Parallel programming paradigms and frameworks in big data era, International Journal of Parallel Programming 42 (5) (2014) 710–738.
  • (41) V. Reniers, D. V. Landuyt, A. Rafique, W. Joosen, On the state of nosql benchmarks, in: Acm/spec on International Conference on PERFORMANCE Engineering Companion, 2017, pp. 107–112.