Auto Tuning of Hadoop and Spark parameters

11/04/2021
by Tanuja Patanshetti, et al.

Data on the order of terabytes, petabytes, or beyond is known as Big Data. This data cannot be processed using traditional database software, and hence Big Data Platforms are needed. By combining the capabilities and features of various big data applications and utilities, a Big Data Platform forms a single solution: a platform that helps to develop, deploy, and manage the big data environment. Hadoop and Spark are two open-source Big Data Platforms provided by Apache. Both platforms have many configuration parameters, which can have unforeseen effects on execution time, accuracy, etc. Manual tuning of these parameters is tiresome, and hence automatic methods are needed to tune them. After studying and analyzing various previous works on automating the tuning of these parameters, this paper proposes two algorithms: Grid Search with Finer Tuning and Controlled Random Search. The performance indicator studied in this paper is execution time. These algorithms help to tune the parameters automatically. Experimental results have shown a reduction in execution time of about 70% and 50% with Grid Search with Finer Tuning and Controlled Random Search, respectively.
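The abstract names the two search strategies but not their implementation details. Below is a minimal, hypothetical sketch of how such a tuning loop might look for Spark: a coarse grid search and a budgeted random search (standing in for the paper's Controlled Random Search) over a small set of illustrative configuration parameters, scoring each candidate by the job's wall-clock execution time. The job command, the parameter grid, and the trial budget are assumptions for illustration only, and the paper's "finer tuning" refinement step (presumably narrowing the grid around the best coarse configuration) is not reproduced here.

```python
import itertools
import random
import subprocess
import time

# Hypothetical Spark job; the paper's actual workloads are not given in the abstract.
JOB = ["spark-submit", "--master", "yarn", "wordcount.py", "hdfs:///data/input"]

# Illustrative search space over real Spark settings; the paper's parameter set may differ.
SEARCH_SPACE = {
    "spark.executor.memory":        ["2g", "4g", "8g"],
    "spark.executor.cores":         [1, 2, 4],
    "spark.sql.shuffle.partitions": [100, 200, 400],
}

def run_job(config):
    """Launch the job with the given --conf overrides and return its wall-clock time."""
    cmd = list(JOB[:3])
    for key, value in config.items():
        cmd += ["--conf", f"{key}={value}"]
    cmd += JOB[3:]
    start = time.time()
    subprocess.run(cmd, check=True)
    return time.time() - start

def grid_search(space):
    """Coarse grid search: evaluate every combination and keep the fastest one."""
    best_cfg, best_time = None, float("inf")
    keys = list(space)
    for values in itertools.product(*(space[k] for k in keys)):
        cfg = dict(zip(keys, values))
        elapsed = run_job(cfg)
        if elapsed < best_time:
            best_cfg, best_time = cfg, elapsed
    return best_cfg, best_time

def controlled_random_search(space, trials=20, seed=0):
    """Budgeted random search over the same space, as a stand-in for the paper's method."""
    rng = random.Random(seed)
    best_cfg, best_time = None, float("inf")
    for _ in range(trials):
        cfg = {k: rng.choice(v) for k, v in space.items()}
        elapsed = run_job(cfg)
        if elapsed < best_time:
            best_cfg, best_time = cfg, elapsed
    return best_cfg, best_time

if __name__ == "__main__":
    cfg, elapsed = grid_search(SEARCH_SPACE)
    print(f"Best grid-search config: {cfg} ({elapsed:.1f}s)")
```

In practice the random-search variant trades coverage of the grid for a fixed evaluation budget, which matters here because each evaluation is a full Spark job run.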
