Role of Apache Software Foundation in Big Data Projects

by Aleem Akhtar, et al.
SEECS Orientation

With the increase in the amount of Big Data being generated each year, the tools and technologies developed for storing, processing and analyzing Big Data have also improved. Open-source software has been an important factor in the success and innovation in the field of Big Data, and the Apache Software Foundation (ASF) has played a crucial role in this success by providing a number of state-of-the-art projects, free and open to the public. The ASF has classified its projects into different categories. In this report, the projects listed under the Big Data category are analyzed in depth and discussed with reference to one of seven defined sub-categories. Our investigation has shown that many of the Apache Big Data projects are autonomous, but some are built on top of other Apache projects, and some work in conjunction with other projects to improve and ease development in the Big Data space.




1 Introduction

The last decade has seen an explosion of data. Huge amounts of data are being produced at a very high rate from Internet sites, government records, scientific experiments, sensor networks, and many other sources such as online transactions, images, audio, videos, posts, health records, emails, logs, click streams, social networks, mobile phones, and their apps [41][59]. Such data cannot be managed or processed in a reasonable amount of time by the traditional set of database tools; therefore, the term Big Data was introduced for it. Until 2005, 5 exabytes of data had been generated in total, but now 2.5 quintillion bytes of data are produced in a single day [55]. The digital world had generated 2.72 zettabytes of data by 2012; doubling every year, it reached 8 zettabytes in 2015 [44], and by the end of 2020 it is expected to reach 44 zettabytes, or 44 trillion gigabytes [67]. As per a SINTEF report in 2013, 90% of this data had been produced in just the preceding two years [34][60]. The genome decryption process used to take nearly 10 years; now it is done in less than a week [33]. Multimedia data increased by 7% by 2013 [54]. With millions of servers, Google is the largest Internet company. More than 10 billion text messages are sent by 7 billion mobile subscribers every day. Movie-sharing platforms are expected to have nearly 50 billion movies connected by the end of this year.

This amount of information is expected to increase 50-fold in the next decade, while the number of technology experts available to keep up with all this data is expected to increase only 1.5-fold [64]. This huge amount of data is increasing on a daily basis with no end in sight. The need to store, process, and analyse this data is stronger than ever, and many tools specifically built for these tasks are being developed.

1.1 Big Data

Big Data requires a revolutionary step away from traditional analytics and is defined by three main components, called the three V's of Big Data: volume, variety and velocity, as shown in Fig. 1 [45][37][59][60].

Figure 1: Big Data Three V’s
  • Volume: It defines the size of the data, generally larger than terabytes. Traditional storage and analysis techniques are outstripped by this grand scale of data [41][53].

  • Variety: It defines how data elements are related to each other. In structured data, tags are present so data elements can easily be separated, whereas unstructured data, due to its randomness, is very difficult to analyze. Semi-structured data does not have fixed fields but can easily be separated [41][60].

  • Velocity: It defines the speed at which data is being generated. Data can arrive in real time, as streams, or in batches [41][53].

Much of the literature also discusses a fourth component, 'verification': given the sheer volume of information, security features are required, as controlling such large amounts of data is not easy.

1.2 Open Source Software

Open-source software is very important in the Big Data field, and many Big Data projects are being made open and free to the general public. Open-source software dominates the industry of Big Data solutions, and giants like IBM, Oracle and Microsoft are now following in these footsteps by releasing their proprietary software as open source. The rapid innovation in the Big Data field and its solutions is largely due to development in open-source software.

Richard Stallman started the open-source movement in 1983 with the development of the GNU project [62]. Open-source software makes up a good portion of information science research, and open-source communities use development methods that have proven quite successful. A very important factor in the success of any open-source project is the community around it, which drives most of the project's development and innovation; software solutions that are diverse and robust are generally supported by well-functioning, diverse communities.

2 Apache Software Foundation

The Apache Software Foundation's history is connected to the Apache HTTP Server, which began in February 1995. A team of eight developers – later known as the Apache Group – started working to extend the NCSA HTTPd daemon. The Apache Software Foundation was established on March 25, 1999 [43]. On April 13, 1999, the Apache Software Foundation's first official meeting was held. The early members of the Apache Software Foundation were: Miguel Gonzales, Ken Coar, Brian Behlendorf, Mark Cox, Ralf S. Engelschall, Paul Sutton, Marc Slemko, Lars Eilebrecht, Dean Gaudet, Sameer Parekh, Roy T. Fielding, Cliff Skolnick, Jim Jagielski, Ben Hyde, Alexei Kosut, Martin Kraemer, Doug MacEachern, Ben Laurie, Aram Mirzadeh, William (Bill) Stoddard, Dirk-Willem van Gulik, and Randy Terbush [65]. Board members were elected after a series of further meetings. After dealing with other legal issues related to the formation of the organization, June 1, 1999 was set as the effective date of the Apache Software Foundation [66].

Software development activities at Apache are divided into semi-autonomous areas known as "top-level projects", some of which comprise sub-projects; each is overseen by a "Project Management Committee" under the bylaws [25]. Unlike other organizations hosting free and open-source projects, a project must be licensed to the ASF through contributor or grant agreements before it is hosted at Apache [18]. As such, the ASF acquires the intellectual-property rights needed to develop and distribute all of its projects [2].

2.1 Powered By Apache

Apache software is used in every Internet-connected country of the world. ASF projects serve as the backbone for some of the world's most widely used and visible applications, in fields such as Big Data, Deep Learning & Artificial Intelligence, Cloud Computing, DevOps, build management, IoT and edge computing, content management, servers, and mobile and web frameworks, among many others [58]. The list of applications that are "Powered by Apache" includes:

  • NASA: powering Ocean Science and Big Earth data analytics;

  • NASA Jet Propulsion Laboratory: accessing content across multi-mission, multi-instrument science data systems;

  • Panama Papers: document, search and library management tools used in the nearly 3TB Pulitzer Prize-winning investigation;

  • IBM Watson: advancing semantic capabilities and data intelligence to win the first-ever "Man vs. Machine" competition on Jeopardy!;

  • Facebook: requests processing at 300PB data warehouse, connecting more than 2 billion active users;

  • Twitter: processing and analyzing more than 200 billion annual tweets, amounting to zettabytes of data;

  • Adobe: powering core of Experience Manager and I/O Runtime;

  • Netflix: data ingestion pipeline and stream processing 3 trillion events each day;

  • Minecraft: bundling libraries for modifications of the second most popular video game of all time;

  • Amazon Music: 16M+ subscribers and tuning recommendations;

  • AOL: ingesting more than 20 terabytes of data daily;

  • Formula 1, Daimler, and Audi: real time data streaming in vehicles;

  • Pinterest: processing more than 800 billion daily events;

  • Uber: handling 1M writes per second for 99.99% availability to users and drivers;

  • Mobile app developers: unifying mobile application development across iOS, Android, Windows Mobile and Blackberry operating systems;

  • European Space Agency: powering next-generation simulators infrastructure and new mission control system;

  • US Federal Aviation Administration: system-wide information management to enable every airplane to take off and land in US airspace.

3 ASF Big Data Projects

The Apache Software Foundation lists projects in three main categories: active projects are those which are currently available for download and constantly being updated; projects which are no longer provided with Apache support but are still governed by the Apache license are sent to the attic and retired; finally, the Apache Incubator provides an entry path for codebases and projects desiring to become part of the Apache Software Foundation [26]. Currently there are 50 projects listed under the Big Data category on the Apache website, with 44 of them active, 3 retired and 3 in incubation [24]. Table 1 presents the list of Big Data projects.

Active: Accumulo, Airavata, Ambari, Avro, Beam, Bigtop, BookKeeper, Calcite, Camel, CarbonData, CouchDB, Crunch, Drill, Flink, Flume, Fluo, Fluo Recipes, Fluo YARN, Giraph, Hama, Helix, Ignite, Kafka, Kibble, Knox, Kudu, Lens, MetaModel, OODT, Oozie, ORC, Parquet, Phoenix, PredictionIO, REEF, Samza, Spark, Sqoop, Storm, Tajo, Tez, Trafodion, VXQuery, Zeppelin

Retired: Apex, DirectMemory, Falcon

Incubating: Daffodil, DataFu, Edgent

Table 1: List of ASF Big Data Projects

These projects are further divided into sub-categories based on the services they provide. After analyzing each project in detail, a sub-category list was prepared in Fig. 2, followed by a brief explanation of each sub-category.

Figure 2: Sub-Categories of Apache Big Data Projects

3.1 Frameworks

The Apache Software Foundation is full of open-source projects that serve as frameworks to efficiently manage resources, jobs, workflows, and applications running on clusters. Apache Airavata [56] is a software framework based on a micro-service architecture for managing and executing computational workflows and jobs on distributed computing resources, including commercial clouds, local clusters, national grids, supercomputers and academic clouds. The prevailing use of Airavata is to build web-based science gateways, assisting in the composition, monitoring, execution and management of large-scale applications and workflows wrapped in or composed of web services.

Apache Hama [61] is a scalable and efficient general-purpose Bulk Synchronous Parallel (BSP) computing engine used in Big Data analytics to speed up a diverse set of compute-intensive analytics applications. Apache Helix [13] is a general-purpose cluster management framework used to automatically manage resources that are replicated, partitioned and distributed over a cluster of nodes. In the event of cluster expansion, node failure and recovery, or cluster reconfiguration, Helix automatically reassigns resources. The Apache Retainable Evaluator Execution Framework, or simply Apache REEF [39], is a development framework that simplifies the development of Big Data applications on cloud platforms supporting a resource-manager service such as Apache Hadoop YARN or Apache Mesos.
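The automatic reassignment performed by a cluster manager like Helix can be illustrated with a minimal sketch (hypothetical pure-Python code, not the Helix API): partitions are mapped to live nodes, and when a node fails the assignment is simply recomputed over the remaining nodes.

```python
# Illustrative sketch of cluster-manager-style rebalancing (hypothetical,
# not the Helix API): partitions are assigned round-robin to live nodes,
# and reassigned when the membership changes.

def assign(partitions, nodes):
    """Map each partition to a live node, round-robin."""
    return {p: nodes[i % len(nodes)] for i, p in enumerate(partitions)}

partitions = [f"p{i}" for i in range(6)]
nodes = ["node1", "node2", "node3"]

mapping = assign(partitions, nodes)        # balanced: 2 partitions per node

# Simulate a node failure: the manager recomputes the assignment so that
# every partition still has a live owner.
nodes.remove("node2")
mapping = assign(partitions, nodes)
assert all(owner in nodes for owner in mapping.values())
```

A real manager would also move replicas incrementally rather than recomputing from scratch, but the contract is the same: after any membership change, every partition ends up on a live node.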

3.2 Tools

A variety of tools has been developed around Big Data projects to facilitate the processing and management of large amounts of data. Apache Ambari initially started as a sub-project of Hadoop to provide system administrators with provisioning, monitoring and management of Hadoop clusters. Apache Ambari [69] is now a top-level project managed by its own community to facilitate the integration of Hadoop with existing enterprise infrastructure. Apache Flume [47] and Apache Sqoop [68] are two reliable, distributed and available systems for efficiently collecting, aggregating and moving huge chunks of log data from a number of sources to a centralized data store.

Apache Fluo YARN [10] is a sub-project of Apache Fluo in the form of a tool to run Apache Fluo applications on Apache Hadoop YARN. Apache Kibble [16] is a suite of tools to collect, aggregate and visualise software project activity; Apache Zeppelin [38] is a web-based tool for data scientists with similar features. For REST interactions with Hadoop clusters, Apache Knox [50] provides a REST API gateway with important features that help control, monitor, integrate and automate an enterprise's critical analytical and administrative requirements. Apache Lens [51] integrates traditional data warehouses and Hadoop to provide a single view of data across an optimal execution environment and multi-tiered data stores. To schedule Apache Hadoop jobs such as Java MapReduce, Hive, and Pig, along with system-specific jobs like shell scripts and Java programs, Apache Oozie [48] is integrated with the Hadoop stack.

Apache PredictionIO [22] and Apache VXQuery [36] are two very useful tools, the former providing machine-learning services and the latter being an XML query processor. PredictionIO lets developers deploy and manage production-ready predictive services for machine-learning jobs. VXQuery uses a cluster to evaluate queries on huge sets of comparatively small XML documents.

3.3 Programming Model

The Apache Software Foundation provides a handful of very useful programming models and runtimes that make it possible to run data-processing jobs on distributed and diverse execution engines. For running both stream and batch data processing, Apache Beam [31] provides a unified programming model. Users develop application programs in the form of pipelines using the Apache Beam SDKs, and Beam's supported processing back-ends, such as Apache Spark, Apache Flink, and Apache Apex, execute those pipelines. Apache Edgent [42] is an incubating project, not yet a top-level project, but the concept behind it is to provide a micro-kernel-style runtime and programming model that can easily be embedded in edge devices to enable real-time, local analytics on continuous streams of data originating from vehicles, equipment, appliances, systems, sensors and devices of all kinds, such as smartphones and the Raspberry Pi.
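The separation between describing a pipeline and executing it on a back-end can be sketched in a few lines of hypothetical pure Python (this is an illustration of the idea, not the Beam SDK): the pipeline is just a chain of transforms, and any "runner" that knows how to walk the chain can execute it.

```python
# Illustrative sketch of the unified-pipeline idea (hypothetical code,
# not the Apache Beam SDK): a pipeline is a chain of transforms, and the
# same pipeline can be handed to any back-end that knows how to run it.
from collections import Counter

class Pipeline:
    def __init__(self):
        self.transforms = []

    def apply(self, fn):
        self.transforms.append(fn)
        return self

def run_locally(pipeline, data):
    """A trivial direct runner; Spark, Flink or Apex would play this role."""
    for fn in pipeline.transforms:
        data = fn(data)
    return data

# A word-count pipeline, described once, runnable by any runner.
word_count = (Pipeline()
              .apply(lambda lines: [w for line in lines for w in line.split()])
              .apply(Counter))

result = run_locally(word_count, ["big data", "big deal"])
assert result == Counter({"big": 2, "data": 1, "deal": 1})
```

In Beam itself the transforms are typed `PTransform`s and the runners are full distributed engines, but the division of labour is the same: the user describes *what* to compute, and the back-end decides *how*.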

3.4 Big Data Management

Big Data management is an important aspect of the Big Data field, and the Apache Software Foundation hosts open-source projects that provide comprehensive sets of Big Data management features, either by integrating on top of a Hadoop cluster or simply by providing query transformation rules. Apache Bigtop [5] and Apache Trafodion [30] are two main Big Data management projects that support the development of applications running on the Hadoop ecosystem. Trafodion extends Hadoop to provide transactional integrity for new applications, whereas Bigtop enables the packaging and testing of Hadoop-related projects. Apache Phoenix [1] is another project that integrates easily into the Hadoop ecosystem and with other Apache products such as Flume, Spark, MapReduce and Pig.

Apache Calcite [32] and Apache Tajo [63] are somewhat similar in providing frameworks to process web-scale data sets. Calcite uses transformation rules to convert relational algebra queries into an efficient executable form without requiring upfront cost models. The primary goal of Tajo, meanwhile, is to use progressive query optimization and cost-based optimization techniques to provide dynamic load balancing and fault tolerance for long-running queries. Apache Ignite [14] uses a caching platform and an in-memory database to deliver high performance.
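The rule-based rewriting that Calcite performs can be illustrated with a toy example (hypothetical code, not Calcite's actual API): a query plan is a tree, and an optimization rule pattern-matches one shape of sub-tree and replaces it with a cheaper equivalent.

```python
# Illustrative sketch of rule-based plan rewriting in the spirit of
# Calcite (hypothetical representation, not its API): plans are nested
# tuples, and a rule rewrites Filter-over-Scan into a combined
# FilteredScan that can evaluate the predicate while reading.

def push_filter_into_scan(plan):
    """Rule: ('filter', pred, ('scan', table)) -> ('filtered_scan', pred, table)."""
    if plan[0] == "filter" and plan[2][0] == "scan":
        _, pred, (_, table) = plan
        return ("filtered_scan", pred, table)
    return plan                      # rule does not match: leave plan unchanged

plan = ("filter", "price > 10", ("scan", "orders"))
optimized = push_filter_into_scan(plan)
assert optimized == ("filtered_scan", "price > 10", "orders")
```

A real optimizer fires many such rules repeatedly over the whole tree, with cost estimates deciding which rewrites to keep; the sketch shows only the pattern-match-and-replace core of the idea.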

3.5 Libraries

The list of library projects providing different kinds of services at the ASF is very long, and the set of libraries specifically designed for Big Data projects is also quite comprehensive; however, only the most widely used libraries are discussed here. Apache BookKeeper [6] is a highly available and scalable replicated log service that can turn any individual service into a replicated service. Based on Enterprise Integration Patterns, Apache Camel is another great integration library.

To parse fixed-format data, Apache Daffodil uses DFDL data specifications and outputs the data as an infoset in JSON or XML form, which other technologies working on JSON or XML can easily utilize. Daffodil can also serialize, or reverse-parse, a JSON or XML infoset back into the fixed data format. Apache MetaModel [19] provides a query API and uniform connectors to many datastore types, including JSON files, XML files, CSV files, fixed-width files, Excel spreadsheets, Apache Cassandra, Apache HBase, Apache CouchDB, MongoDB, relational (JDBC) databases, SugarCRM, Plain Old Java Objects (POJOs) and ElasticSearch. Computations that run at regular intervals generally involve unnecessary repetition; Apache DataFu [8] makes such computations more efficient, reducing computational resources by up to 95%.
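The incremental idea behind that saving can be sketched in pure Python (a hypothetical illustration, not the DataFu API): when a rolling window is recomputed each day, per-day partial results are cached so that only the newly arrived day is actually processed.

```python
# Illustrative sketch of incremental window computation (hypothetical,
# not the DataFu API): per-day partial sums are cached, so a rolling
# total reuses yesterday's work instead of rescanning the whole window.

daily_events = {"mon": [3, 1], "tue": [2], "wed": [5, 5]}

partials = {}                        # cache: day -> partial sum

def window_total(days):
    total = 0
    for day in days:
        if day not in partials:      # each day's data is scanned at most once
            partials[day] = sum(daily_events[day])
        total += partials[day]
    return total

assert window_total(["mon", "tue"]) == 6
assert window_total(["tue", "wed"]) == 12   # "tue" is reused; only "wed" is new work
```

The cached partials are exactly what a periodic batch job would persist between runs; the quoted 95% saving corresponds to the fraction of the window that never needs rescanning.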

3.6 Database/Data Format

For fast processing of data, efficient databases and data storage formats are very important, and the ASF maintains a comprehensive list of such projects. There is a complete range of Apache databases for different use cases; Apache CouchDB [40], for example, works efficiently with both web and mobile apps. Effective distribution of data using incremental replication is one of Apache CouchDB's main features. Apache Accumulo [3] and Apache Drill [9] are two key projects connected with Google products, with Accumulo based on Google's BigTable design and Drill partially based on Google's Dremel. Accumulo was built on top of Apache Hadoop, Apache Thrift and Apache ZooKeeper with improvements on the BigTable design. A variety of NoSQL databases and file systems are supported by Drill, including HDFS, MapR-DB, MapR-FS, MongoDB, HBase, Azure Blob Storage, Amazon S3, NAS, Swift, Google Cloud Storage and local files. Unlike Accumulo, Apache Avro [4] was developed within Hadoop to provide data serialization and a row-oriented remote procedure call framework. Data types and protocols are defined using JSON and serialized in a compact binary format.
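Avro's approach of declaring the schema in JSON and letting it drive a compact binary encoding can be sketched with the standard library (a hypothetical illustration of the idea, not the Avro library or its wire format).

```python
# Illustrative sketch of JSON-declared, schema-driven binary rows
# (hypothetical; real Avro uses its own variable-length encoding).
import json
import struct

schema = json.loads('{"name": "user", "fields": [["id", "int"], ["score", "double"]]}')
FORMATS = {"int": "i", "double": "d"}      # schema type -> struct format code

def serialize(record, schema):
    fmt = "<" + "".join(FORMATS[t] for _, t in schema["fields"])
    return struct.pack(fmt, *(record[name] for name, _ in schema["fields"]))

def deserialize(blob, schema):
    fmt = "<" + "".join(FORMATS[t] for _, t in schema["fields"])
    values = struct.unpack(fmt, blob)
    return dict(zip((name for name, _ in schema["fields"]), values))

blob = serialize({"id": 7, "score": 9.5}, schema)
assert len(blob) == 12                     # 4-byte int + 8-byte double, no field names
assert deserialize(blob, schema) == {"id": 7, "score": 9.5}
```

Because the schema travels separately from the data, each row carries only values, which is what makes the binary form compact compared with self-describing formats like JSON.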

Apache CarbonData [7], Apache Kudu [17], Apache ORC [20] and Apache Parquet [21] are some of the open-source projects that work with columnar storage file formats. CarbonData uses advanced indexing, encoding and compression techniques to improve computing efficiency. Kudu was developed as a columnar storage manager to support the Apache Hadoop platform, whereas ORC was designed as a type-aware columnar file format to efficiently handle large streaming workloads on Hadoop. Apache Parquet is another Hadoop-supported generic columnar storage format, which can be used with any data model, processing framework, or programming language.
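Why these projects all favour a columnar layout can be shown with a small sketch (illustrative pure Python, not any of their file formats): an analytic query over one column touches a single contiguous array instead of every field of every row.

```python
# Sketch of row-oriented vs. columnar layout (illustrative only):
# averaging one column is cheaper when that column is stored contiguously.

rows = [{"id": 1, "city": "Oslo",  "temp": 3},
        {"id": 2, "city": "Rome",  "temp": 18},
        {"id": 3, "city": "Cairo", "temp": 29}]

# Row-oriented layout: averaging "temp" still walks whole records.
row_avg = sum(r["temp"] for r in rows) / len(rows)

# Columnar layout: each column is stored contiguously, so "temp" can be
# read alone; a column of one type also compresses and encodes well.
columns = {key: [r[key] for r in rows] for key in rows[0]}
col_avg = sum(columns["temp"]) / len(columns["temp"])

assert row_avg == col_avg == 50 / 3
```

On disk the difference is I/O rather than dictionary lookups: a columnar file lets the reader skip the `id` and `city` bytes entirely, which is the property ORC, Parquet, Kudu and CarbonData all exploit.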

3.7 Data Processing

Big Data analytics is one of the most important objectives sought after by academic institutions, researchers, scientists, and companies. For this purpose, the data processing projects developed and maintained by Apache are front-runners and are used by many large enterprises. As specified in the introduction, one component of Big Data is velocity: data can be produced in batches, in streams or in real time, and a data processing framework must be able to process it efficiently for best results. Accordingly, some Apache projects process data only in batches, some in streams, and some in both forms. Apache Flink [35] works with data in large batches, and it combines the programming flexibility and scalability of distributed MapReduce-like platforms with the query optimization, out-of-core execution, and efficiency capabilities found in parallel databases. Apache Fluo [11] is another distributed batch processing system, built on Apache Accumulo. With Fluo, new data can easily be joined with large existing data sets without reprocessing all of the data. Apache Fluo Recipes is built on Apache Fluo but with additional features, and is maintained separately with independent releases.

Apache Spark [70], Apache Samza [23], Apache Storm [28] and Apache Kafka [15] are all open-source stream processing platforms, with Spark providing batch processing features as well. Spark provides high-level APIs in Scala, Java, R and Python for fast data processing, along with libraries for graph analytics, machine learning and stream processing. Kafka, developed at LinkedIn and donated to the ASF, provides a low-latency, high-throughput, unified platform for handling real-time feeds. Samza processes Kafka stream data through the use of messages. Apache Storm provides general primitives for processing real-time data.
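The abstraction at the heart of Kafka, and the one Samza consumes, can be sketched in a few lines (hypothetical pure Python, not the Kafka client API): an append-only log from which independent consumers read at their own offsets.

```python
# Minimal sketch of a Kafka-style append-only log (hypothetical, not the
# Kafka API): producers append records, and each consumer tracks its own
# offset, so independent consumers replay the stream at their own pace.

class Log:
    def __init__(self):
        self.records = []
        self.offsets = {}                  # consumer name -> next offset to read

    def append(self, record):
        self.records.append(record)

    def poll(self, consumer):
        """Return all records this consumer has not yet seen."""
        start = self.offsets.get(consumer, 0)
        batch = self.records[start:]
        self.offsets[consumer] = len(self.records)
        return batch

log = Log()
log.append("click")
log.append("view")

assert log.poll("analytics") == ["click", "view"]
log.append("click")
assert log.poll("analytics") == ["click"]                  # resumes from its offset
assert log.poll("billing") == ["click", "view", "click"]   # independent consumer
```

Real Kafka partitions this log across brokers and persists offsets durably, but the decoupling shown here — producers never wait for consumers, and consumers never interfere with each other — is what makes it a unified platform for real-time feeds.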

Apache Giraph [12] and Apache Tez [29] are two graph processing systems used for data processing. The social graph formed by users at Facebook is analyzed using Giraph, whereas Tez is widely used to process complex directed acyclic graphs (DAGs) of data-processing tasks.
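Giraph follows the Pregel superstep model, which a small sketch can illustrate (hypothetical pure Python, not the Giraph API; for brevity updates are applied in place, whereas a real BSP engine exchanges messages and synchronizes at a barrier between supersteps): each vertex repeatedly adopts the largest label among itself and its neighbours, and when a superstep changes nothing, the connected components have converged.

```python
# Sketch of superstep-style label propagation for connected components
# (Pregel-style idea, hypothetical code): each vertex takes the max label
# it can see; a superstep with no changes means convergence.

edges = {"a": ["b"], "b": ["a", "c"], "c": ["b"], "d": []}
label = {v: v for v in edges}              # start: every vertex labels itself

changed = True
while changed:                             # one loop iteration ~ one superstep
    changed = False
    for v, neighbours in edges.items():
        best = max([label[v]] + [label[n] for n in neighbours])
        if best != label[v]:
            label[v] = best
            changed = True

# a-b-c form one component (labelled by their max vertex "c"); d is alone.
assert label == {"a": "c", "b": "c", "c": "c", "d": "d"}
```

In Giraph the same per-vertex compute function runs in parallel across workers, with the framework handling message delivery and the barrier between supersteps.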

4 Discussion

The Apache Software Foundation (ASF) [27] is a non-profit open-source software foundation, considered a very important organisation in the Big Data space. Software development at the foundation is diverse, it hosts many widely used software projects, and its user community extends all over the world. A key reason behind the success of the ASF is the importance it places on community above other requirements. Because of this, the ASF provides an agile and flexible environment for the development of open-source projects. To maintain successful projects, a legal framework and infrastructure are also provided. In the last few years, a large share of successful Big Data projects have been attracted to the ASF.

GitHub [46] is another leading open-source software platform for Big Data projects. Unlike the ASF, GitHub is a git-based code repository and does not provide an organisational or legal framework. GitHub is an ad-hoc platform, and for successful projects, communities form around projects in an ad-hoc manner. GitHub is used for hosting open-source projects by universities, by foundations like the ASF, and by companies like Netflix and LinkedIn. LinkedIn [52] and Netflix [57] were among the first companies to open their code to the public. Large software companies like Facebook, Yahoo and Twitter create projects and donate them to the public through open-source software foundations. Both the original software developer and the community benefit from this process. Products evolve very quickly and mature fast when software creators expose their code to diverse communities, and a product becomes resilient by being battle-tested in all kinds of scenarios for free. The trust and high credibility created among peer developers for the leaders of open-source software projects is one of the most rewarding things about making software open source.

The Apache Software Foundation is an all-volunteer community with more than 700 individual members and nearly 7,000 committers working on 200+ million lines of code in more than 350 open-source projects, all provided free to the public and used by billions of users and developers across the globe. Nearly 30 million page visits per week by developers and users are recorded at the ASF's official website and its sub-domains. Excluding convenience binaries, source code from the Apache mirrors has been downloaded more than 9 million times.

The first Big Data project, Apache Hadoop, was launched in January 2008, while the first Big Data project to retire was Apache DirectMemory, in July 2015. Fig. 3 presents a timeline of the evolution of Apache committees, with the Apache HTTP Server being the first committee, launched in 1995, and Apache Druid being the latest, launched in the last month of 2019. Fig. 4 presents the evolution of Apache incubating projects.

Figure 3: Timeline of Apache Committees Evolution
Figure 4: Timeline of Apache Incubating Projects Evolution

Fig. 5 presents the language distribution of Apache projects, with Java being the major language: nearly 58% of projects are developed in Java. Many projects are provided in more than one language, such as Apache Spark, which is available in Java, Scala and Python. Fig. 6 gives a brief distribution of project categories, with nearly 21% of Apache projects being libraries, followed by Big Data [27].

Figure 5: Apache Projects Language Distribution

Figure 6: Projects Categories

5 Conclusion, Related Work, and Future Work

5.1 Conclusion

In this report, we deeply investigated the Apache Software Foundation's open-source projects listed under the Big Data category. Each project was studied to determine its sub-category. Seven sub-categories were identified, and the projects were investigated to understand the relationships among them. Frameworks, Tools, Programming Models, Big Data Management, Libraries, Database/Data Format, and Data Processing are the main sub-categories studied and presented in this report. Our investigation showed that many projects work independently, while some either utilize the services of other Apache projects or provide services to other projects for better performance and ease of use.

Some of the projects discussed in this report have the support of large, competing technology organizations. Even so, these projects use and complement each other, and co-exist to provide an exceptional open development environment in the Big Data space for advanced, state-of-the-art projects. Many successful and important open projects are now permanent members of Apache, and newer projects are attracted to Apache at an increasing pace.

5.2 Related Work

There is a lot of research on individual Apache projects, especially on data processing frameworks, identifying performance, use cases, potential issues, and future targets. There is also research that compares and highlights the performance of two or more similar Apache projects. However, not much research covers the complete list of Big Data projects. Kamburugamuve provided a similar research report as part of a PhD qualifying exam, in which he presented Apache Big Data projects in the form of a layered architecture [49]. That report was presented in 2013, and many new Big Data projects have been launched since; in this report, we cover a bigger set of projects.

5.3 Future Work

Stephen O'Grady, Principal Analyst at RedMonk, praised the ASF by saying: "The Apache Software Foundation has been one of the few institutions that have been crucial for the growth and advancement of open-source projects in the last two decades. A neutral environment is provided to developers with different backgrounds to work together, which has played a very important role in open-source success, and the ASF looks determined to continue playing a similar role in the next decade." In this report, we only covered projects that are listed under the Big Data tag on the Apache official projects website. Future work will include a more detailed analysis of other project categories that overlap with the Big Data field but are not listed under the Big Data category. Another future research direction is to investigate data processing models and frameworks in depth and provide a comprehensive comparison based on performance, ease of use, and other key metrics.


  • [1] S. Akhtar and R. Magham (2016) Pro apache phoenix: an sql driver for hbase. Apress. External Links: Link Cited by: §3.4.
  • [2] K. S. Amant (2008) Handbook of research on open source software: technological, economic, and social perspectives. CHOICE 45 (8). Cited by: §2.
  • [3] (2020-01) Apache accumulo. Note: Last Accessed: [15 Jan, 2020] External Links: Link Cited by: §3.6.
  • [4] (2020-01) Apache avro. Note: Last Accessed: [15 Jan, 2020] External Links: Link Cited by: §3.6.
  • [5] (2020-01) Apache bigtop. Note: Last Accessed: [15 Jan, 2020] External Links: Link Cited by: §3.4.
  • [6] (2020-01) Apache bookkeeper. Note: Last Accessed: [15 Jan, 2020] External Links: Link Cited by: §3.5.
  • [7] (2020-01) Apache carbondata. Note: Last Accessed: [15 Jan, 2020] External Links: Link Cited by: §3.6.
  • [8] (2020-01) Apache datafu. Note: Last Access: [15 Jan, 2020] External Links: Link Cited by: §3.5.
  • [9] (2020-01) Apache drill. Note: Last Accessed: [15 Jan, 2020] External Links: Link Cited by: §3.6.
  • [10] (2020-01) Apache fluo-yarn. Note: Last Accessed: [15 Jan, 2020] External Links: Link Cited by: §3.2.
  • [11] (2020-01) Apache fluo. Note: Last Accessed: [15 Jan, 2020] External Links: Link Cited by: §3.7.
  • [12] (2020-01) Apache giraph. Note: Last Accessed: [15 Jan, 2020] External Links: Link Cited by: §3.7.
  • [13] (2020-01) Apache helix. Note: Last Accessed: [15 Jan, 2020] External Links: Link Cited by: §3.1.
  • [14] (2020-01-15)(Website) Note: Last Accessed: [15 Jan, 2020] External Links: Link Cited by: §3.4.
  • [15] (2020-01) Apache kafka. Note: Last Accessed: [15 Jan, 2020] External Links: Link Cited by: §3.7.
  • [16] (2020-01) Apache kibble. External Links: Link Cited by: §3.2.
  • [17] (2020-01) Apache kudo. Note: Last Accessed: [15 Jan, 2020] External Links: Link Cited by: §3.6.
  • [18] (2020-01) APACHE licenses. Note: Last Accessed: [15 Jan, 2020] External Links: Link Cited by: §2.
  • [19] (2020-01) Apache metamodel. Note: Last Accessed: [15 Jan, 2020] External Links: Link Cited by: §3.5.
  • [20] (2020-01) Apache orc. Note: Last Accessed: [15 Jan, 2020] External Links: Link Cited by: §3.6.
  • [21] (2020-01) Apache parquet. Note: Last Accessed: [15 Jan, 2020] External Links: Link Cited by: §3.6.
  • [22] (2020-01) Apache predictionio. Note: Last Accessed: [15 Jan, 2020] External Links: Link Cited by: §3.2.
  • [23] (2020-01) Apache samza. Note: Last Accessed: [15 Jan, 2020] External Links: Link Cited by: §3.7.
  • [24] (2020-01) Apache software foundation big data projects. Note: Last Accessed: [15 Jan, 2020] External Links: Link Cited by: §3.
  • [25] (2020-01) Apache software foundation bylaws. Note: Last Accessed: [15 Jan, 2020] External Links: Link Cited by: §2.
  • [26] (2020-01) Apache software foundation projects list. Note: Last Accessed: [15 Jan, 2020] External Links: Link Cited by: §3.
  • [27] (2020-01) Apache software foundation. External Links: Link Cited by: §4, §4.
  • [28] (2020-01) Apache storm. Note: Last Accessed: [15 Jan, 2020] External Links: Link Cited by: §3.7.
  • [29] (2020-01) Apache tez. Note: Last Accessed: [15 Jan, 2020] External Links: Link Cited by: §3.7.
  • [30] (2020-01) Apache trafodion. External Links: Link Cited by: §3.4.
  • [31] A. Beam (2017) Apache beam programming guide. External Links: Link Cited by: §3.3.
  • [32] E. Begoli, J. Camacho-Rodríguez, J. Hyde, M. J. Mior, and D. Lemire (2018) Apache calcite: a foundational framework for optimized query processing over heterogeneous data sources. In Proceedings of the 2018 International Conference on Management of Data, pp. 221–230. External Links: Link Cited by: §3.4.
  • [33] P. Bhardwaj, A. Gupta, M. Sharma, M. Gupta, and S. Singhal (2016) A survey on comparative analysis of big data tools. International Journal of Computer Science and Mobile Computing 5 (5), pp. 789–793. Cited by: §1.
  • [34] P. B. Brandtzæg Big data, for better or worse: 90% of world's data generated over last two years. Cited by: §1.
  • [35] P. Carbone, A. Katsifodimos, S. Ewen, V. Markl, S. Haridi, and K. Tzoumas (2015) Apache Flink: stream and batch processing in a single engine. Bulletin of the IEEE Computer Society Technical Committee on Data Engineering 36 (4). Cited by: §3.7.
  • [36] E. P. Carman Jr, T. Westmann, V. R. Borkar, M. J. Carey, and V. J. Tsotras (2015) Apache VXQuery: a scalable XQuery implementation. arXiv preprint arXiv:1504.00331. External Links: Link Cited by: §3.2.
  • [37] Intel IT Center (2012) Planning guide: getting started with Hadoop. Steps IT Managers Can Take to Move Forward with Big Data Analytics. Cited by: §1.1.
  • [38] Y. Cheng, F. C. Liu, S. Jing, W. Xu, and D. H. Chau (2018) Building big data processing and visualization pipeline through Apache Zeppelin. In Proceedings of the Practice and Experience on Advanced Research Computing, pp. 57. External Links: Link Cited by: §3.2.
  • [39] B. Chun, T. Condie, Y. Chen, B. Cho, A. Chung, C. Curino, C. Douglas, M. Interlandi, B. Jeon, J. S. Jeong, et al. (2017) Apache REEF: retainable evaluator execution framework. ACM Transactions on Computer Systems (TOCS) 35 (2), pp. 5. External Links: Link Cited by: §3.1.
  • [40] Apache CouchDB. Cited by: §3.6.
  • [41] J. Dean and S. Ghemawat (2008) MapReduce: simplified data processing on large clusters. Communications of the ACM 51 (1), pp. 107–113. Cited by: 1st item, 2nd item, 3rd item, §1.
  • [42] Apache Edgent (2017) v1.1.0. External Links: Link Cited by: §3.3.
  • [43] R. T. Fielding (1999-03)(Website) Note: Last Accessed [15 Jan, 2020] External Links: Link Cited by: §2.
  • [44] D. Garlasu, V. Sandulescu, I. Halcu, G. Neculoiu, O. Grigoriu, M. Marinescu, and V. Marinescu (2013) A big data implementation based on grid computing. In 2013 11th RoEduNet International Conference, pp. 1–4. Cited by: §1.
  • [45] B. Gerhardt, K. Griffin, and R. Klemann (2012) Unlocking value in the fragmented world of big data analytics. Cisco Internet Business Solutions Group 7. Cited by: §1.1.
  • [46] (2020-01) GitHub. Note: Last Accessed: [15 Jan, 2020] External Links: Link Cited by: §4.
  • [47] S. Hoffman (2013) Apache Flume: distributed log collection for Hadoop. Packt Publishing Ltd. External Links: Link Cited by: §3.2.
  • [48] M. K. Islam and A. Srinivasan (2015) Apache Oozie: the workflow scheduler for Hadoop. O'Reilly Media, Inc. External Links: Link Cited by: §3.2.
  • [49] S. Kamburugamuve, G. Fox, D. Leake, and J. Qiu (2013) Survey of Apache big data stack. Indiana University, Tech. Rep. Cited by: §5.2.
  • [50] Apache Knox (2019) REST API and application gateway for the Apache Hadoop ecosystem. External Links: Link Cited by: §3.2.
  • [51] K. Koitzsch (2017) Relational, NoSQL, and graph databases. In Pro Hadoop Data Analytics, pp. 63–76. External Links: Link Cited by: §3.2.
  • [52] (2020-01) LinkedIn data. Note: Last Accessed: [15 Jan, 2020] External Links: Link Cited by: §4.
  • [53] S. Madden (2012) From databases to big data. IEEE Internet Computing 16 (3), pp. 4–6. Cited by: 1st item, 3rd item.
  • [54] J. Manyika (2011) Big data: the next frontier for innovation, competition, and productivity. MGI Research Technology and Innovation. Cited by: §1.
  • [55] B. Marr, Forbes (Ed.) (2018-05-21)(Website) External Links: Link Cited by: §1.
  • [56] S. Marru, L. Gunathilake, C. Herath, P. Tangchaisin, M. Pierce, C. Mattmann, R. Singh, T. Gunarathne, E. Chinthaka, R. Gardler, et al. (2011) Apache Airavata: a framework for distributed applications and computational workflows. In Proceedings of the 2011 ACM workshop on Gateway computing environments, pp. 21–28. External Links: Link Cited by: §3.1.
  • [57] (2020-01) Netflix open source software center. Note: Last Accessed: [15 Jan, 2020] External Links: Link Cited by: §4.
  • [58] Sally (2019-03) The Apache Software Foundation celebrates 20 years of community-led development "The Apache Way". Note: Last Accessed: [15 Jan, 2020] External Links: Link Cited by: §2.1.
  • [59] R. Schneider (2012) Custom Hadoop for dummies, special edition. John Wiley & Sons Incorporated. Cited by: §1.1, §1.
  • [60] S. Seo, E. J. Yoon, J. Kim, S. Jin, J. Kim, and S. Maeng (2010) HAMA: an efficient matrix computation with the MapReduce framework. In 2010 IEEE Second International Conference on Cloud Computing Technology and Science, pp. 721–726. Cited by: 2nd item, §1.1, §1.
  • [61] K. Siddique, Z. Akhtar, E. J. Yoon, Y. Jeong, D. Dasgupta, and Y. Kim (2016) Apache Hama: an emerging bulk synchronous parallel computing framework for big data applications. IEEE Access 4, pp. 8879–8887. External Links: Link Cited by: §3.1.
  • [62] R. M. Stallman (1986) GNU Emacs manual. Free Software Foundation. Cited by: §1.2.
  • [63] Apache Tajo (2013) A big data warehouse system on Hadoop. External Links: Link Cited by: §3.4.
  • [64] C. Tankard (2012) Big data security. Network Security 2012 (7), pp. 5–8. Cited by: §1.
  • [65] (1999-04) The apache software foundation board of directors meeting minutes. Note: Last Accessed: [15 Jan, 2020] Cited by: §2.
  • [66] (1999-06) The apache software foundation board of directors meeting minutes. Note: Last Accessed: [15 Jan, 2020] Cited by: §2.
  • [67] V. Turner, J. F. Gantz, D. Reinsel, and S. Minton (2014) The digital universe of opportunities: rich data and the increasing value of the internet of things. IDC Analyze the Future 16. Cited by: §1.
  • [68] D. Vohra (2016) Using apache sqoop. In Pro Docker, pp. 151–183. External Links: Link Cited by: §3.2.
  • [69] S. Wadkar and M. Siddalingaiah (2014) Apache Ambari. In Pro Apache Hadoop, pp. 399–401. External Links: Link Cited by: §3.2.
  • [70] M. Zaharia, R. S. Xin, P. Wendell, T. Das, M. Armbrust, A. Dave, X. Meng, J. Rosen, S. Venkataraman, M. J. Franklin, et al. (2016) Apache Spark: a unified engine for big data processing. Communications of the ACM 59 (11), pp. 56–65. Cited by: §3.7.