Reducing Honeypot Log Storage Capacity Consumption – Cron Job with Perl-Script Approach

11/06/2019
by Iman Hazwam Abd Halim, et al.

A honeypot is a decoy computer system used to attract and monitor attackers' activities in a network. The honeypot aims to collect information from attackers in order to build a more secure system. However, the log file generated by a honeypot can grow very large when heavy traffic occurs in the system, such as during a Distributed Denial of Service (DDoS) attack. Such a large log file is difficult for the network administrator to process and analyze, as it requires considerable time and resources. Therefore, in this paper, we propose an approach to decrease the log size by using a cron job that runs a Perl script. This approach periodically parses the collected data into a database, reducing the size of the log file. Three DDoS attack cases were conducted in this study to show the growth of the log size; each case sent a different number of packets per second for 8 hours. The results show that by utilizing the cron job with the Perl script, the log size is significantly reduced and the disk space used in the system also decreases. Consequently, this approach is capable of speeding up the process of parsing the log file into the database and thus improves overall system performance. This study contributes a pathway to reducing honeypot log storage using a cron job with a Perl script.
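The abstract describes the approach only at a high level, so the following is a minimal sketch of what such a cron-driven Perl script could look like: it parses honeypot log entries into a database table, then truncates the log file to reclaim disk space. The log path, line format, SQLite schema, and script name below are illustrative assumptions, not the authors' actual implementation.

    #!/usr/bin/perl
    # parse_honeypot_log.pl -- illustrative sketch, not the authors' script.
    # Parses honeypot log lines into a database, then truncates the log.
    use strict;
    use warnings;
    use DBI;

    # Assumed paths; a real deployment would differ.
    my $log_file = '/var/log/honeypot/honeypot.log';
    my $dbh = DBI->connect(
        'dbi:SQLite:dbname=/var/lib/honeypot/honeypot.db',
        '', '', { RaiseError => 1, AutoCommit => 0 });

    # Assumed schema: one row per logged connection attempt.
    $dbh->do(q{CREATE TABLE IF NOT EXISTS entries (
        timestamp TEXT, src_ip TEXT, dst_port INTEGER, protocol TEXT)});
    my $sth = $dbh->prepare(q{INSERT INTO entries
        (timestamp, src_ip, dst_port, protocol) VALUES (?, ?, ?, ?)});

    open my $fh, '<', $log_file or die "Cannot open $log_file: $!";
    while (my $line = <$fh>) {
        chomp $line;
        # Assumed line format: "2019-06-11T10:15:00 192.0.2.7 23 tcp"
        my ($ts, $ip, $port, $proto) = split /\s+/, $line;
        next unless defined $proto;    # skip malformed lines
        $sth->execute($ts, $ip, $port, $proto);
    }
    close $fh;
    $dbh->commit;
    $dbh->disconnect;

    # Truncate the log so it does not keep growing between runs.
    # (A production script would rotate the file first to avoid losing
    # entries written while the parse was running.)
    open my $out, '>', $log_file or die "Cannot truncate $log_file: $!";
    close $out;

A crontab entry along these lines would run the script periodically; the hourly schedule is chosen here only for illustration, as the abstract does not state the interval used:

    # m h dom mon dow  command
    0 * * * *  /usr/bin/perl /usr/local/bin/parse_honeypot_log.pl

Moving each entry into a database row keeps the flat log file small between runs, which is what drives the reductions in log size and disk usage reported above.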

Related research

07/08/2023 – Compression Performance Analysis of Different File Formats
In data storage and transmission, file compression is a common technique...

12/01/2018 – A Big Data Architecture for Log Data Storage and Analysis
We propose an architecture for analysing database connection logs across...

04/10/2023 – Extension of Dictionary-Based Compression Algorithms for the Quantitative Visualization of Patterns from Log Files
Many services today massively and continuously produce log files of diff...

03/02/2023 – Improved Algorithms for Monotone Moldable Job Scheduling using Compression and Convolution
In the moldable job scheduling problem one has to assign a set of n jobs...

12/03/2018 – Hoard: A Distributed Data Caching System to Accelerate Deep Learning Training on the Cloud
Deep Learning system architects strive to design a balanced system where...

06/06/2021 – Towards Logging Noisiness Theory: quality aspects to characterize unwanted log entries
Context: Logging tasks track the system's functioning by keeping records...

06/15/2023 – A Multi-Level, Multi-Scale Visual Analytics Approach to Assessment of Multifidelity HPC Systems
The ability to monitor and interpret hardware system events and behav...
