Resource Management in Edge and Fog Computing using FogBus2 Framework

08/02/2021
by   Mohammad Goudarzi, et al.

Edge/Fog computing is a novel computing paradigm that provides resource-limited Internet of Things (IoT) devices with scalable computing and storage resources. Compared to cloud computing, edge/fog servers have fewer resources, but they can be accessed with higher bandwidth and less communication latency. Thus, integrating edge/fog and cloud infrastructures can support the execution of diverse latency-sensitive and computation-intensive IoT applications. Although some frameworks attempt to provide such integration, there are still several challenges to be addressed, such as dynamic scheduling of different IoT applications, scalability mechanisms, multi-platform support, and supporting different interaction models. FogBus2, as a new python-based framework, offers a lightweight and distributed container-based framework to overcome these challenges. In this chapter, we highlight key features of the FogBus2 framework alongside describing its main components. Besides, we provide a step-by-step guideline to set up an integrated computing environment, containing multiple cloud service providers (Hybrid-cloud) and edge devices, which is a prerequisite for any IoT application scenario. To obtain this, a low-overhead communication network among all computing resources is initiated by the provided scripts and configuration files. Next, we provide instructions and corresponding code snippets to install and run the main framework and its integrated applications. Finally, we demonstrate how to implement and integrate several new IoT applications and custom scheduling and scalability policies with the FogBus2 framework.


1 Introduction

The rapid advancements of hardware, software, and communication technologies enable the Internet of Things (IoT) to offer a wide variety of intelligent solutions in every single aspect of our lives. Therefore, IoT-enabled systems such as smart healthcare, transportation, agriculture, and entertainment, just to mention a few, have been attracting ever-increasing attention in academia and industry. IoT applications generate a massive amount of data that requires processing and storage, while IoT devices often lack sufficient processing and storage resources. Cloud computing offers infrastructure, platform, and software services for IoT-enabled systems, through which IoT applications can process, store, and analyze their generated data in surrogate Cloud Servers (CSs) [1, 2]. There are different Cloud Service Providers (CSPs) with a wide variety of services, where each CSP provides a particular set of services, such as computing, database, and data analysis, in an optimized way. Hence, no single CSP can satisfy the full functional requirements of different IoT applications in an optimized manner [3]. As a result, each IoT application can be serviced by a specific CSP or simultaneously by different CSPs, which is often called hybrid cloud computing [3]. Although a hybrid cloud computing platform provides IoT devices with virtually unlimited and diverse computing and storage resources, CSs reside multiple hops away from IoT devices, which incurs high propagation and queuing latency. Thus, CSs alone cannot provide the best possible services for latency-critical and real-time IoT applications (e.g., intelligent transportation, smart healthcare, emergency, and real-time control systems) [4, 5]. Besides, forwarding the huge amount of data generated by distributed IoT devices to CSs for processing and storage may overload the CSs [6]. To overcome these issues, edge and fog computing has emerged as a novel distributed computing paradigm.

In edge and fog computing environments, geographically distributed heterogeneous Edge Servers (ESs) (e.g., access points, smartphones, Raspberry Pis), situated in the vicinity of IoT devices, can be used for the processing and storage of IoT devices’ data. These ESs can be accessed with lower latency, which makes them potential candidates for latency-critical IoT applications, and they reduce the traffic on the network’s backbone [7]. However, the computing and storage resources of ESs are limited compared to CSs, so they cannot efficiently execute computation-intensive tasks. Therefore, to satisfy the resource and Quality of Service (QoS) requirements of diverse IoT-enabled systems, a seamlessly integrated computing environment with heterogeneous edge/fog and different cloud infrastructures is required, as depicted in Figure 1.

Figure 1: Heterogeneous Computing Environment containing Multiple Cloud Servers, Edge/Fog Servers, and IoT Devices

The computing and storage resources in such an integrated environment are highly heterogeneous in terms of their architecture, processing speed, RAM capacity, communication protocols, access bandwidth, and latency, just to mention a few. Furthermore, there are a wide variety of IoT-enabled systems with various QoS and resource requirements. Accordingly, to satisfy the requirements of IoT applications in such an integrated environment, scheduling and resource management techniques are required to dynamically place incoming requests of IoT applications on appropriate servers for processing and storage [8]. In order to develop, test, deploy, and analyze different IoT applications and scheduling and resource management techniques in real-world scenarios, lightweight and easy-to-use frameworks are required for both researchers and developers. There are some existing frameworks for integrating IoT-enabled systems with edge and fog computing, such as [9, 10, 8, 11, 12, 13, 14]. However, they only focus on one aspect of IoT-enabled systems in edge and fog computing, such as scheduling, implementation of a new type of IoT application, or resource discovery. In this chapter, we provide a tutorial on the FogBus2 framework [15], which offers IoT developers a suite of containerized IoT applications, scheduling and scalability mechanisms, and different resource management policies in an integrated environment consisting of multiple cloud service providers, edge and fog computing servers, and IoT devices. Furthermore, we extend this framework with new resource management techniques, such as a new scheduling policy. In addition, new types of IoT applications, either real-time or non-real-time, are implemented and integrated with the FogBus2 framework.

The rest of the chapter is organized as follows: We start with a discussion of the FogBus2 framework, its main components, and the respective communication protocols. Next, we describe how to install and run the current functionalities of FogBus2, considering different IoT applications. Finally, we provide a guideline presenting how to develop and integrate new IoT applications and new policies into the FogBus2 framework.

2 FogBus2 Framework

FogBus2 [15] is a new Python-based framework whose components run as Docker containers. To enable the integration of various IoT application scenarios in highly heterogeneous computing environments, FogBus2’s components can be simultaneously executed on one or multiple distributed servers in any computing layer. This feature significantly helps researchers and developers in the development and testing phases, because they can develop, test, and debug their desired IoT applications, scheduling, and resource management policies on one or a small number of servers. Then, in the deployment phase, they can run and test their IoT applications, scheduling, and resource management techniques on an unlimited number of servers.

2.1 Main Components

FogBus2 consists of five containerized components, namely User, Master, Actor, Task Executor, and Remote Logger. Among these components, the User should run on IoT devices or any servers that directly interact with users’ sensory or input data. The rest of the components can run on any servers with sufficient resources. Each of these containerized components contains several sub-components (sub-Cs) with specific functionalities. Figure 2 presents FogBus2’s main components and their respective sub-Cs. Since the components of FogBus2 can run on geographically distributed servers, a message handler sub-C is embedded in each component to handle the sending and receiving of messages. In what follows, we briefly describe the main functionalities and sub-Cs of each component.

Figure 2: FogBus2 Main Components, Sub-components, and their Interactions
  • User: This component controls the IoT device’s requests for surrogate resources and contains two main sub-Cs, namely the sensor and the actuator, alongside the message handler. The sensor is responsible for capturing and managing raw sensory data and for configuring sensing intervals based on the IoT application scenario. The actuator’s main function is collecting the incoming processed data and executing a respective action. The actuator can be configured by its users to perform real-time actions based on incoming processed data or periodic actions based on a batch of processed data. Researchers and developers can configure the sensor and actuator to implement different application scenarios.

  • Remote Logger: The main functionality of this component is to collect and store the contextual information of servers, IoT devices, IoT applications, and networking. It contains the logger manager sub-C, which connects to different databases, receives logs from other components, and stores them in persistent storage. By default, the Remote Logger stores logs in databases, which are easier to manage and maintain; however, logs can be stored in files as well.

  • Master: In a real-world computing environment, one or multiple Master components may exist. This component contains four main sub-Cs, called profiler, scheduler scaler, registry, and resource discovery, alongside the message handler. When the Master starts, the resource discovery sub-C periodically searches the network to find available Remote Logger, Master, and Actor components. If new components are found, the resource discovery advertises the Master to those components so that they can send a request and register themselves with this Master. If the Master receives a registration or placement request from IoT devices (i.e., User components), the registry sub-C is called. This sub-C records the information of IoT devices and other components and assigns them a unique identifier. Besides, when the incoming message is a placement request from a User component, it initiates the scheduler scaler sub-C. This sub-C receives the placement request from the registry sub-C, and the contextual profiling information of all available servers and the networking information from the profiler sub-C. Next, if the Master has enough resources to run the scheduling policy and its placement queue is not too large (the queue size is configurable), it runs one of the scheduling policies implemented in the FogBus2 framework to assign the tasks/containers of the IoT application to different servers for execution. According to the outcome of the scheduling policy, the Master component forwards the required information to the selected Actors to execute the tasks/containers of the IoT application. Currently, three scheduling policies are embedded in the FogBus2 framework, namely Non-dominated Sorting Genetic Algorithm 2 (NSGA2) [16], Non-dominated Sorting Genetic Algorithm 3 (NSGA3) [17], and Optimized History-based Non-dominated Sorting Genetic Algorithm (OHNSGA) [15]. If for any reason the Master component cannot run its scheduling policy, it runs the scalability mechanism to forward the placement request to other available Master components, or it initiates a new Master component on one of the available servers. In the rest of this chapter, we describe how to use the current scheduling and scalability policies. Furthermore, we also present how to implement new scheduling and scalability policies and integrate them into the FogBus2 framework.

  • Actor: The main responsibility of this component is to start different Task Executor components on the server on which it is running. To illustrate, available surrogate servers in the environment should run the Actor component. These Actor components are then automatically discovered and registered by one or several Master components in the environment. The Actor component profiles the hardware and networking condition of the server on which it is running using the profiler sub-C. Besides, when a Master component assigns a task of an IoT application to an Actor for execution, the Actor calls the task executor initiator sub-C, which initiates different Task Executor components on the server according to the IoT application. This sub-C also defines the destination to which the result of each Task Executor should be forwarded, based on the dependency model of the IoT application. Finally, in order to scale Master components in the environment, each Actor is embedded with a master initiator sub-C. When an Actor receives a scaling message from one of the available Master components in the environment, the master initiator sub-C is called. This sub-C starts a Master component on the server, which can independently serve incoming IoT application requests. Note that each server can simultaneously run different components (e.g., Master, Actor, Task Executor) and play different roles.

  • Task Executor: IoT applications can be represented as a set of dependent or independent tasks or services. In the rest of this chapter, tasks and services are used interchangeably. In the dependent model, the execution of tasks is constrained: each task can be executed only when its predecessor tasks have been properly executed. In FogBus2, each Task Executor component is responsible for the execution of a specific task; i.e., each task or service can be containerized as a Task Executor. To illustrate, an IoT application with three decoupled tasks should have three separate Task Executor components, so that each Task Executor corresponds to one of the IoT application’s tasks. Considering the granularity level (e.g., task, service) of IoT applications in FogBus2, an application can be deployed on distributed servers for execution. The Task Executor consists of two sub-Cs, called executor and local logger. The former initiates the execution of one task and forwards the results to the next Task Executor components if the IoT application is developed using the dependent model. In the independent model, the results are forwarded to the Master component for aggregation or directly to the corresponding User component. Besides, the local logger sub-C records the contextual information of the task, such as its execution time.
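The dependent task model described above can be sketched as a small dependency graph in which each task runs only once all of its predecessors have produced results. The following is an illustrative sketch, not FogBus2's actual Task Executor code; the task names and processing functions are hypothetical.

```python
# Illustrative sketch of the dependent task model: each task runs only
# after all of its predecessor tasks have produced their results.
# Task names and processing functions are hypothetical, not FogBus2 code.

def run_dependent_tasks(tasks, dependencies, app_input):
    """tasks: {name: callable}; dependencies: {name: [predecessor names]}."""
    results = {}
    pending = set(tasks)
    while pending:
        # Pick every task whose predecessors have all finished.
        ready = [t for t in pending
                 if all(p in results for p in dependencies.get(t, []))]
        if not ready:
            raise ValueError("cyclic or unsatisfiable dependencies")
        for t in ready:
            preds = [results[p] for p in dependencies.get(t, [])]
            # Root tasks consume the application input; others consume
            # the intermediate data forwarded by their predecessors.
            results[t] = tasks[t](app_input if not preds else preds)
            pending.remove(t)
    return results

# Example: a three-task application (decode -> analyse -> act).
tasks = {
    "decode": lambda x: x * 2,
    "analyse": lambda preds: preds[0] + 1,
    "act": lambda preds: preds[0] * 10,
}
deps = {"analyse": ["decode"], "act": ["analyse"]}
print(run_dependent_tasks(tasks, deps, 3))
# {'decode': 6, 'analyse': 7, 'act': 70}
```

In a deployment, each of the three callables would correspond to one containerized Task Executor, and the forwarding between them would happen via messages rather than in-process calls.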

2.2 Interaction Scenario

Figure 3: FogBus2 Sequence Diagram

Considering that the FogBus2 framework is in a ready state, Figure 3 depicts the interaction of IoT users with the framework as a sequence diagram. The IoT device runs a specific User component for each IoT application, configuring and controlling the sensing interval and the aggregation of sensory data. The User component sends a placement request to the Master component. The Master checks the IoT device and the requested IoT application, assigns it a unique identifier, and registers it in its records. Next, the Master calls its scheduler scaler sub-C to handle the current placement request. The scheduler scaler sub-C has the contextual information of the available Actors, Task Executor components, the IoT application, and the networking condition. Accordingly, it runs the scheduling and scaling policies to find the best possible configuration of the constituent parts of the IoT application. Based on the outcome of the scheduler scaler, two scenarios may happen. In the first scenario, if there are no available Task Executor components to be reused for this new request, the Master sends the placement request to the Actor components selected by the scheduling mechanism. Then, the Actor components that receive this message initiate the corresponding Task Executor components on the servers to which they are assigned. In the second scenario, where some corresponding Task Executor components are in the cooling-off period, the Master directly reuses those Task Executor components, which reduces the service ready time of the IoT application. When all corresponding Task Executor components of the IoT application are ready, the Master sends a ready message to the User component. This message states that the service is ready and the IoT device can start sending data. Hence, the User component sends the sensory data to the Master, which forwards it to the corresponding Task Executor components. After the Task Executor components finish their execution, the results are forwarded to the Master. Finally, the Master component sends the respective logs to the Remote Logger component and forwards the results to the User component.
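The two placement scenarios above (starting fresh Task Executors versus reusing ones in the cooling-off period) can be sketched as follows. This is a simplified illustration under assumed data structures, not FogBus2's actual Master code; all names are hypothetical.

```python
# Sketch of the Master's placement decision: reuse Task Executors that are
# in their cooling-off period when possible; otherwise, ask Actors (chosen
# by the scheduling policy) to start fresh ones. Names are illustrative.

def handle_placement(requested_tasks, cooling_off_pool, schedule):
    """requested_tasks: list of task names.
    cooling_off_pool: {task name: warm executor id} (second scenario).
    schedule: callable mapping remaining task names to Actor ids."""
    reused, to_start = {}, []
    for task in requested_tasks:
        if task in cooling_off_pool:
            reused[task] = cooling_off_pool.pop(task)  # reuse warm executor
        else:
            to_start.append(task)                      # needs a new executor
    placements = schedule(to_start) if to_start else {}
    return reused, placements

pool = {"ocr": "executor-7"}
reused, new = handle_placement(
    ["ocr", "blur"], pool,
    lambda tasks: {t: "actor-1" for t in tasks})  # stand-in scheduling policy
print(reused, new)  # {'ocr': 'executor-7'} {'blur': 'actor-1'}
```

Reuse avoids container startup entirely, which is why the second scenario shortens the service ready time.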

In addition, if the current Master component cannot handle the placement request, the request will be forwarded to other Master components in the environment, or a new Master component will be initiated on a new server. The rest of the steps for handling the placement request are the same as the above-mentioned process.
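As a rough illustration of this fallback, the scale-up decision can be modelled as a check against a configurable queue size. The threshold, function names, and return values below are illustrative assumptions, not FogBus2's actual implementation.

```python
# Sketch of the queue-based scale-up trigger: when the Master's placement
# queue exceeds a configurable threshold, the request is forwarded to
# another Master, or a new Master is initiated via an Actor. Names and the
# threshold value are illustrative assumptions.

def should_scale(queue_length, max_queue_size=5):
    return queue_length > max_queue_size

def handle_request(queue, request, max_queue_size=5):
    if should_scale(len(queue), max_queue_size):
        # Scalability mechanism: forward, or ask an Actor to init a Master.
        return "forward-or-init-new-master"
    queue.append(request)
    return "scheduled-locally"

queue = list(range(6))                      # 6 pending requests, threshold 5
print(handle_request(queue, "req-7"))       # forward-or-init-new-master
```

With a short queue, the same call returns "scheduled-locally" and the request is simply enqueued for the local scheduling policy.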

2.3 Communication Protocol

Different components of the FogBus2 framework communicate with each other by passing messages. Therefore, understanding the communication protocol of FogBus2 is important, especially for developers. The communication protocol of FogBus2 is implemented in JSON format, and messages contain eight main elements, as depicted in Figure 4.

Figure 4: FogBus2 Communication Protocol Format

The source and destination are JSON objects containing the metadata of the source and destination of a message, respectively. The sentAtSourceTimestamp and receivedAtLocalTimestamp elements are embedded to calculate the networking delay. Furthermore, each message can carry any type of information, stored in data. Besides, there are three other elements, namely type, subType, and subSubType, which are used to categorize messages. There are 10 types of messages in the current version of the FogBus2 framework, shown in Figure 4, where each type can be further divided into 41 subTypes and 5 subSubTypes. Hence, the type, subType, and subSubType elements provide a logical hierarchical structure for the categorization of messages. Due to the page limit, we cannot describe all the messages here; however, the most important messages and their respective descriptions are provided in Table 1. Also, Code Snippet 1 presents a sample FogBus2 message used for sending log information (type log) about server resources (subType hostResources) from an Actor component (source) to the Remote Logger component (destination). Accordingly, the message carries the resource information in the data element.

Sender → Receiver | Type / SubType / SubSubType | Description
Master → Actor | placement / runTaskExecutor / - | Master has finished the scheduling and sends this message in a no-reuse scenario
TaskExecutor → Master | placement / lookup / - | Task Executor requests the addresses of its children Task Executors (in the dependent model)
Master → TaskExecutor | placement / lookup / - | Master responds to the lookup message of Task Executors
TaskExecutor → Master | acknowledgement / ready / - | Task Executor has received its children's information and acknowledges to the Master that it is ready
Master → User | acknowledgement / serviceReady / - | The service is ready and the User can start sending sensory data
User → Master | data / sensoryData / - | Sensory data forwarded from the User
Master → TaskExecutor | data / intermediateData / - | Master sends sensory data to Task Executor(s) for processing
TaskExecutor → TaskExecutor | data / intermediateData / - | Task Executor finishes its execution and sends intermediate data to other Task Executor(s)
TaskExecutor → Master | acknowledgement / waiting / - | Task Executor asks the Master whether it can go into the cooling-off period
Master → TaskExecutor | acknowledgement / wait / - | Master asks the Task Executor to start its cooling-off period immediately
Master → TaskExecutor | placement / reuse / - | Master has finished the scheduling and sends this message in a reuse scenario
TaskExecutor → Master | data / finalResult / - | Task Executor sends the final results to the Master
Master → User | data / finalResult / - | Master sends the final results to the User
Master A → Master B | scaling / getProfiles / - | Master A sends a request to get profiles from Master B
Master B → Master A | scaling / profilesInfo / - | Master B sends its profiles to Master A
Master → Actor | scaling / initNewMaster / - | Master asks the Actor to initiate a new Master
RemoteLogger → Master | log / allResourcesProfiles / - | Sent in response to the requestProfiles message of the Master
Master A → Master B | resourcesDiscovery / requestActorsInfo / - | Master A asks Master B for the information of Actors registered at Master B for further advertisement
Master B → Master A | resourcesDiscovery / actorsInfo / - | Master B sends its registered Actors' information to Master A
Master → Actor | resourcesDiscovery / advertiseMaster / - | Master advertises itself to the Actor
Any component → Any component | resourcesDiscovery / probe / try | Any component receiving a probe message should report its role (e.g., Master, Actor) to the sender
Any component → Any component | resourcesDiscovery / probe / result | The response to a probe message received from one component

Table 1: Important Communication Messages
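The probe exchange in the last two rows of Table 1 can be sketched as a simple message handler: a component that receives a resourcesDiscovery/probe/try message replies with a probe/result message carrying its role. The field names follow Section 2.3; the handler itself is an illustrative sketch, not FogBus2's actual code.

```python
# Sketch of the resourcesDiscovery probe/result exchange from Table 1.
# Any component receiving a probe "try" message answers with its role.

def handle_probe(message, my_role):
    if (message["type"], message["subType"], message["subSubType"]) == (
            "resourcesDiscovery", "probe", "try"):
        return {
            "type": "resourcesDiscovery",
            "subType": "probe",
            "subSubType": "result",
            "data": {"role": my_role},  # e.g., Master, Actor, RemoteLogger
        }
    return None  # not a probe message; handled elsewhere

probe = {"type": "resourcesDiscovery", "subType": "probe", "subSubType": "try"}
print(handle_probe(probe, "Actor")["data"]["role"])  # Actor
```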
# Message type is log and subType is hostResources; thus, data contains the host's resources.
{'data': {'resources': {'cpu': {'cores': 8,
                                'frequency': 2400.0,
                                'utilization': 0.052,
                                'utilizationPeak': 1.0},
                        'memory': {'maximum': 17179869184,
                                   'utilization': 0.075,
                                   'utilizationPeak': 1.0}}},
 'destination': {'addr': ['127.0.0.1', 5000],
                 'componentID': '?',
                 'hostID': 'HostID',
                 'name': 'RemoteLogger-?_127.0.0.1-5000',
                 'nameConsistent': 'RemoteLogger_HostID',
                 'nameLogPrinting': 'RemoteLogger-?_127.0.0.1-5000',
                 'role': 'RemoteLogger'},
 'receivedAtLocalTimestamp': 0.0,
 'sentAtSourceTimestamp': 1625572932123.89,
 'source': {'addr': ['127.0.0.1', 50000],
            'componentID': '2',
            'hostID': '127.0.0.1',
            'name': 'Actor',
            'nameConsistent': 'Actor_127.0.0.1',
            'nameLogPrinting': 'Actor-2_127.0.0.1-50000_Master-?_127.0.0.1-5001',
            'role': 'Actor'},
 'subSubType': '',
 'subType': 'hostResources',
 'type': 'log'}
Code Snippet 1: An Example of FogBus2 Message Format
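A message of this shape can be assembled and its networking delay computed from the two timestamps, as sketched below. Field names follow Code Snippet 1 and Section 2.3; the helper functions and example values are illustrative assumptions, not FogBus2's actual API.

```python
import time

# Illustrative helpers for the eight-element FogBus2-style message format.
# Field names follow Code Snippet 1; the functions themselves are a sketch.

def build_message(source, destination, msg_type, sub_type, data, sub_sub_type=""):
    return {
        "source": source,
        "destination": destination,
        "type": msg_type,
        "subType": sub_type,
        "subSubType": sub_sub_type,
        "data": data,
        "sentAtSourceTimestamp": time.time() * 1000,  # ms, stamped by sender
        "receivedAtLocalTimestamp": 0.0,              # filled in by receiver
    }

def network_delay_ms(message):
    # Delay is the gap between the sender's and receiver's timestamps.
    return message["receivedAtLocalTimestamp"] - message["sentAtSourceTimestamp"]

msg = build_message({"role": "Actor"}, {"role": "RemoteLogger"},
                    "log", "hostResources",
                    {"resources": {"cpu": {"cores": 8}}})
msg["receivedAtLocalTimestamp"] = msg["sentAtSourceTimestamp"] + 12.5  # receiver stamps
print(network_delay_ms(msg))  # approximately 12.5 ms
```

Note that both timestamps must use the same clock basis (milliseconds here) for the subtraction to be meaningful across machines.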

2.4 Main Capabilities

In this section, we briefly describe the main capabilities of the FogBus2 framework.

  • Container-enabled: All components of the FogBus2 framework, alongside IoT applications, are containerized. Not only does this feature enable fast deployment of IoT applications, but it also leads to faster deployment of the framework’s components. Also, it brings fast portability as the containerized IoT applications and components of the framework can run smoothly on different servers.

  • Multi platform support: In a highly heterogeneous computing environment, a wide variety of servers and IoT devices with different platforms (e.g., Intel x86, AMD, ARM) exist. To fully utilize the potential of heterogeneous servers in the cloud and/or at the edge, the containerized framework should be compatible with different platforms. To achieve this, the FogBus2 framework uses multi-arch images. Such images are built and pushed to registries with multiple variants of operating systems or CPU architectures while the image name is the same for all. Accordingly, pulling images on a server with specific architecture results in a compatible image variant for that server.

  • Scheduling: Considering the available resources of heterogeneous servers and the various types of IoT applications with different levels of resource requirements, the scheduling of incoming IoT application requests is of paramount importance. As a result, the Master component of the FogBus2 framework is embedded with a scheduler scaler sub-C, which is integrated with different scheduling policies. Researchers and developers can either use the integrated policies or develop their own scheduling policies and integrate them with the scheduler scaler sub-C.

  • Dynamic scalability: The number of IoT devices and incoming requests varies over time. If the number of incoming requests increases, the framework may become a bottleneck, as the queuing time of incoming requests, which require scheduling and processing, increases. Hence, a dynamic scalability mechanism is embedded in the Master component of the FogBus2 framework to dynamically scale up the Master components as the number of incoming requests increases, which significantly reduces the queuing time of incoming requests from IoT applications. FogBus2 users can use the integrated scalability policy or develop their own scalability policies.

  • Dynamic resource discovery: Highly heterogeneous and integrated computing environments, as depicted in Figure 1, are considerably dynamic. New servers may join or leave the environment for different reasons. Furthermore, each server may run different components of the FogBus2 framework at a specific time. Hence, the FogBus2 framework offers a dynamic resource discovery mechanism to discover the available servers in the environment and the containers they are running. This feature ensures that up-to-date information about the available servers and their functionalities is always accessible.

  • Supporting different topology models for communication: IoT applications require different communication models such as client-server and peer-to-peer (P2P), just to mention a few. Accordingly, to efficiently manage the inter-component communications for different IoT applications, each containerized component of the FogBus2 framework contains a message handler sub-C which is responsible for sending and receiving messages to/from other components. Therefore, based on the distributed message handling mechanism of the FogBus2 framework, researchers and developers can implement different communication topology models based on their IoT application scenarios.

  • Virtual Private Network (VPN) support: In a highly heterogeneous computing environment, servers with both public and private IP addresses exist. Servers with public addresses can bi-directionally communicate with each other; however, servers with private addresses cannot be reached directly, which prevents bi-directional communication with other servers. As a result, FogBus2 puts forward a P2P VPN script, based on WireGuard (https://www.wireguard.com/), as an optional feature for researchers and developers to set up a VPN among all desired servers. Among the most common VPN tools, WireGuard has the least overhead, making it a suitable option for IoT applications, specifically real-time and latency-critical ones.

  • Supporting heterogeneous IoT applications: The FogBus2 framework supports various types of IoT applications, ranging from latency-critical and real-time IoT applications to highly computation-intensive ones. Besides, it provides several ready-to-use containerized and modularized IoT applications for its users. Hence, users can simply run the currently embedded IoT applications, extend them by modifying their modules or integrating new ones, or define their desired IoT applications from scratch.

  • Distributed multi-database platform support: The FogBus2 framework is currently integrated with two different databases. First, it uses the containerized version of MariaDB (https://mariadb.org/), an open-source MySQL-based database developed by the original developers of MySQL. Second, it is integrated with the Oracle Autonomous Database (AutoDB) (https://www.oracle.com/au/autonomous-database/), an intelligent cloud-based database. AutoDB uses machine learning to automate database tuning, security, backups, updates, and other routine management tasks without human intervention.

  • Reusability: Containerization significantly helps to reduce the deployment time of IoT applications compared to the traditional deployment techniques. However, as the number of incoming requests from different IoT devices increases, the startup time of containers, serving the IoT requests, may negatively affect the service ready time. Accordingly, the FogBus2 framework offers a configurable cooling-off period for the Task Executor components, during which containers keep waiting for the next incoming request of the same type before stopping. This feature significantly helps to reduce the service ready time of IoT applications, specifically when the environment is crowded.

  • Usability: FogBus2 offers a default setting for users, by which they can easily run the embedded IoT applications and test the functionality of all framework’s components. Besides, users can play with several embedded options to configure IoT applications and the framework’s components according to their desired scenario. In the rest of this chapter, we explain the most important options of this framework so that the users can efficiently configure the framework.
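The cooling-off period described under Reusability above can be sketched as a simple wait-for-reuse loop: after finishing a task, a warm container waits up to a configurable interval for another request of the same type before stopping. The function and parameter names below are illustrative assumptions, not FogBus2's actual implementation.

```python
import time

# Sketch of the configurable cooling-off period: a warm Task Executor waits
# up to `cool_off_seconds` for a matching request before shutting down.
# Names and the polling approach are illustrative assumptions.

def wait_for_reuse(incoming, cool_off_seconds=0.2, poll=0.01):
    """incoming: callable returning True once a matching request has arrived."""
    deadline = time.monotonic() + cool_off_seconds
    while time.monotonic() < deadline:
        if incoming():
            return "reused"   # serve the new request with the warm container
        time.sleep(poll)
    return "stopped"          # cooling-off expired; container shuts down

print(wait_for_reuse(lambda: False, cool_off_seconds=0.05))  # stopped
print(wait_for_reuse(lambda: True))                          # reused
```

Skipping container startup in the "reused" path is what shortens the service ready time when the environment is crowded.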

3 Installation of FogBus2 Framework

FogBus2 is a new containerized framework developed by the Cloud Computing and Distributed Systems (CLOUDS) Laboratory at the University of Melbourne. As the FogBus2 framework targets both users and developers, we provide two ways to obtain the Docker images: 1) building them from scratch and 2) pulling them from Docker Hub. Both offer a straightforward way to install the framework.

3.1 Building From Scratch

IoT developers may want to extend and configure the FogBus2 framework and define their applications on top of it. Hence, they need to know how to build the images from scratch. In what follows, we describe this process, which has been tested on Ubuntu 18.04, Ubuntu 20.04, Ubuntu 21.04, and macOS Big Sur.

  1. Prepare the prerequisites:

    1. Install python3.9+

    2. Install pip3

    3. Install docker engine

    4. Install docker compose

  2. Clone/download the source code of FogBus2 framework from https://github.com/Cloudslab/FogBus2 to any desired location.

  3. Go to the FogBus2 folder:

    $ cd fogbus2
    $ pwd
    /home/ubuntu/fogbus2
  4. Install the required dependencies:

    1    $ python3.9 -m pip install -r containers/user/sources/requirements.txt
  5. Prepare and configure the database:

    1    $ cd containers/database/mariadb/
    2    $ python3.9 configure.py --create --init
  6. Build all docker images:

    1    $ pwd
    2    /home/ubuntu/fogbus2/demo
    3    $ python3.9 demo.py --buildAll

The demo.py script automatically builds all docker images to simplify this process. Building may take a long time depending on the server on which the images are built. Besides, after developers change the code, the images must be rebuilt. Moreover, in distributed application scenarios, where different components run on different servers, the component images must be created on (or migrated to) those servers. To this end, demo.py can be configured through command-line options to build only specific images rather than all of them. Finally, developers interested in extending the framework or defining new applications can use this file to understand how to create and configure their images.
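For developers who script their own builds, the selective-build idea that demo.py automates can be sketched with the plain Docker CLI. This is a hypothetical helper, not demo.py's actual code; the image tags and directories follow the repository layout used in this chapter.

```python
import subprocess

# Hypothetical selection of components to rebuild; paths are relative
# to the repository root (see the directory layout in this chapter).
COMPONENTS = {
    "fogbus2-master": "containers/master",
    "fogbus2-actor": "containers/actor",
}


def build_commands(components: dict) -> list:
    """Return one `docker build` command per selected component."""
    return [
        ["docker", "build", "-t", tag, path]
        for tag, path in components.items()
    ]


cmds = build_commands(COMPONENTS)
# To actually build (requires a running Docker daemon):
# for cmd in cmds:
#     subprocess.run(cmd, check=True)
```

Building only the images a server actually runs avoids rebuilding the whole suite after every code change.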

3.2 Pulling From Docker Hub

To simply use and test the latest features of the FogBus2 framework, multi-arch images of its components are available on Docker Hub. Although pulling is the faster and simpler way to run and test the FogBus2 framework and its integrated applications, users who want to extend or modify the framework should build the images from scratch. In what follows, we describe the steps required to install the FogBus2 framework using the images uploaded to Docker Hub.

  1. Prepare the prerequisites:

    1. Install docker engine

  2. Pull the docker images of Master, Actor, User, and RemoteLogger on desired servers using the following commands:

    1    $ docker pull cloudslab/fogbus2-remote_logger && docker tag cloudslab/fogbus2-remote_logger fogbus2-remote_logger
    2    $ docker pull cloudslab/fogbus2-master && docker tag cloudslab/fogbus2-master fogbus2-master
    3    $ docker pull cloudslab/fogbus2-actor && docker tag cloudslab/fogbus2-actor fogbus2-actor
    4    $ docker pull cloudslab/fogbus2-user && docker tag cloudslab/fogbus2-user fogbus2-user
  3. Install any desired application by pulling the respective docker images (i.e., Task Executor components) of that application. As an example:

    1. Install the video-OCR application:

      1        $ docker pull cloudslab/fogbus2-ocr && docker tag cloudslab/fogbus2-ocr fogbus2-ocr

The video-OCR application consists of one Task Executor, called fogbus2-ocr. Other integrated applications, however, contain several dependent Task Executor components each. For such applications, all dependent Task Executor components must be pulled for the application to execute properly.

4 Sample FogBus2 Setup

In this section, we describe how to configure the FogBus2 framework to run some of the currently integrated applications. We suppose that docker images are properly built or pulled on the servers, and they are ready to use.

Our sample integrated computing environment consists of six Cloud Servers (CSs), tagged A to F, two Edge Servers (ESs), tagged G and H, and a device playing the role of an IoT device, tagged I. We have used three Oracle Ampere A1 instances (https://www.oracle.com/au/cloud/compute/arm/; to reproduce this setup, you can use up to 4 Oracle Ampere A1 instances in the always-free Oracle Cloud Free Tier) and three Nectar instances (ARDC's Nectar Research Cloud, an Australian federated research cloud) to set up a multi-cloud environment. As ESs, we used a Raspberry Pi 4B (https://www.raspberrypi.org/products/raspberry-pi-4-model-b/) and an Nvidia Jetson Nano (https://www.nvidia.com/en-au/autonomous-machines/embedded-systems/jetson-nano/) to set up an edge computing layer with heterogeneous resources. Our CSs have public IP addresses, while the ESs do not. Consequently, to integrate the ESs and CSs in the FogBus2 framework, a VPN connection is required; we provide a guideline and a script to establish a P2P VPN among all participating servers, whether at the edge or in the cloud. Naturally, if all servers have public IP addresses, or all components run on one server, the VPN is not required. Table 2 lists the servers, their computing layer, public IP addresses, private IP addresses after the VPN connection is established, and the FogBus2 components running on each server. In the rest of this section, we describe how to set up the P2P VPN, assign private IP addresses to these servers, and run the FogBus2 components on each server. As a prerequisite, make sure the required ports are open on all servers:

1    # Required Ports for FogBus2 Components
2    REMOTE_LOGGER_PORT_RANGE=5000-5000
3    MASTER_PORT_RANGE=5001-5010
4    ACTOR_PORT_RANGE=50000-50100
5    USER_PORT_RANGE=50101-50200
6    TASK_EXECUTOR_PORT_RANGE=50201-60000
7
8    # Required Port for Wireguard
9    WG_PORT=4999
10
11    # Required Port for MariaDB Database
12    PORT=3306
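Before starting components, it can be useful to verify that a server's ports are reachable (e.g., the RemoteLogger on 5000 or the Master on 5001). Below is a minimal sketch using only the Python standard library; the port_open helper is ours and not part of FogBus2.

```python
import socket


def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


# Demo: a throwaway local listener stands in for a component such as
# the Master on port 5001.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))  # pick any free port
listener.listen(1)
demo_port = listener.getsockname()[1]
reachable = port_open("127.0.0.1", demo_port)
listener.close()
```

Running such a probe from each server against its peers quickly reveals firewall misconfigurations before the components themselves fail to connect.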

4.1 P2P VPN Setup

We have used Wireguard to set up a lightweight P2P VPN connection among all servers. In what follows, we describe how to install and configure the VPN, assuming all servers run Ubuntu as their operating system:

Tag | Server Name     | Layer | Public IP Address | Private IP Address | Port                   | Component Role        | Environment Preparation
A   | Oracle1         | Cloud | 168.138.9.91      | 192.0.0.1          | 5000                   | RemoteLogger, Actor_1 | docker and docker-compose
B   | Oracle2         | Cloud | 168.138.10.94     | 192.0.0.2          | automatically assigned | Actor_2               | docker and docker-compose
C   | Oracle3         | Cloud | 168.138.15.110    | 192.0.0.3          | automatically assigned | Actor_3               | docker and docker-compose
D   | Nectar1         | Cloud | 45.113.235.222    | 192.0.0.4          | automatically assigned | Actor_4               | docker and docker-compose
E   | Nectar2         | Cloud | 45.113.232.187    | 192.0.0.5          | automatically assigned | Actor_5               | docker and docker-compose
F   | Nectar3         | Cloud | 45.113.232.245    | 192.0.0.6          | automatically assigned | Actor_6               | docker and docker-compose
G   | RPi 4B 4GB      | Edge  | -                 | 192.0.0.7          | automatically assigned | Actor_7               | docker and docker-compose
H   | Jetson Nano 4GB | Edge  | -                 | 192.0.0.8          | 5001                   | Master                | docker and docker-compose
I   | VM on a Laptop  | IoT   | -                 | 192.0.0.9          | automatically assigned | User                  | Python3.9
Table 2: Sample Configuration of Servers in Integrated Computing Environment
  1. Install Wireguard on all servers:

    1    $ sudo apt update
    2    $ sudo apt install wireguard
    3    $ wg --version
    4    wireguard-tools v1.0.20200513 - https://git.zx2c4.com/wireguard-tools/
  2. Configure Wireguard on the servers using our auto-generating script:

    1. Specify server information on hostIP.csv:

      1        $ pwd
      2        /home/ubuntu/fogbus2
      3        $ cd config/host/
      4        $ cat hostIP.csv
      5        hostname, publicIP
      6        oracle1, 168.138.9.91
      7        oracle2, 168.138.10.94
      8        oracle3, 168.138.15.110
      9        nectar1, 45.113.235.222
      10        nectar2, 45.113.232.187
      11        nectar3, 45.113.232.245
      12        rpi-4B-2G,
      13        JetsonNano-4G,
      14        VM-laptop,
    2. Automatically generate Wireguard configuration files:

      1        $ pwd
      2        /home/ubuntu/fogbus2
      3        $ cd scripts/wireguard/
      4        $ python3.9 generateConf.py
      5        ...
      6        ==========================================
      7        hostname WireguardIP
      8        oracle1 192.0.0.1
      9        oracle2 192.0.0.2
      10        oracle3 192.0.0.3
      11        nectar1 192.0.0.4
      12        nectar2 192.0.0.5
      13        nectar3 192.0.0.6
      14        rpi-4B-2G 192.0.0.7
      15        JetsonNano-4G 192.0.0.8
      16        VM-laptop 192.0.0.9
      17        ==========================================
      18        [*] Generated Wireguard config for oracle1: /path/to/proj/output/wireguardConfig/oracle1/wg0.conf
      19        ...
      20        ==========================================
    3. Copy obtained configuration files to /etc/wireguard/wg0.conf of each server, respectively.

    4. Run Wireguard on each server:

      1        $ sudo wg-quick up /etc/wireguard/wg0.conf && sudo wg
  3. Test the P2P VPN connection using the ping command and the private IP addresses.

  4. If the ping command does not work properly, make sure the configured Wireguard port is open on all servers. In FogBus2, the default Wireguard port is UDP 4999, which can be changed in /home/ubuntu/fogbus2/config/network.env.

    1    #Install, enable and configure Firewalld
    2    $ sudo apt update
    3    $ sudo apt install firewalld
    4    $ sudo systemctl enable firewalld
    5    $ sudo firewall-cmd --state
    6    $ sudo firewall-cmd --permanent --zone=public  --add-port=22/tcp --add-port=53/tcp --add-port=3306/tcp --add-port=4999/udp --add-port=5000-5010/tcp --add-port=5000-60000/tcp
    7    $ sudo firewall-cmd --reload
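The address assignment printed by generateConf.py above follows a simple pattern: hosts listed in hostIP.csv are numbered sequentially under 192.0.0.0/24. The following is a hypothetical re-implementation of just that mapping step, not the script's actual source code.

```python
import csv
import io


def assign_wireguard_ips(csv_text: str, base: str = "192.0.0.") -> dict:
    """Map each hostname in a hostIP.csv-style file to a sequential
    private address, mirroring the output format shown above."""
    rows = csv.reader(io.StringIO(csv_text))
    next(rows)  # skip the "hostname, publicIP" header row
    return {row[0].strip(): f"{base}{i}" for i, row in enumerate(rows, start=1)}


# Edge hosts may leave the publicIP column empty, as in the sample file.
hosts = "hostname, publicIP\noracle1, 168.138.9.91\noracle2, 168.138.10.94\nrpi-4B-2G,\n"
mapping = assign_wireguard_ips(hosts)
```

Because the order of rows determines the addresses, keeping hostIP.csv stable across regenerations keeps every server's private IP stable as well.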

4.2 Running FogBus2 Components

As shown in Table 2, the FogBus2 components run on different servers. A server may also run several components simultaneously and thus play several roles, as server A does. In what follows, we describe how to run these components and provide the respective commands.

  1. Starting RemoteLogger component:

    1. Configure database credentials in containers/remoteLogger/sources/.mysql.env

      1        $ pwd
      2        /home/ubuntu/fogbus2
      3        $ cat containers/remoteLogger/sources/.mysql.env
      4        HOST=192.0.0.1
      5        PORT=3306
      6        USER=root
      7        PASSWORD=passwordForRoot
    2. Run RemoteLogger component on server A

      1        $ pwd
      2        /home/ubuntu/fogbus2
      3        $ cd containers/remoteLogger
      4        $ docker-compose run --rm --name TempContainerName fogbus2-remote_logger --bindIP 192.0.0.1 --containerName TempContainerName
  2. Starting the Master component on server H. The schedulerName option specifies the scheduling policy used by this Master component. Hence, in computing environments with multiple Master components, each one can be configured separately to run a different scheduling policy:

    1. On server H, configure database credentials:

      1            $ pwd
      2            /home/ubuntu/fogbus2/containers/master/sources/
      3            $ cat .mysql.env
      4            HOST=192.0.0.1
      5            PORT=3306
      6            USER=root
      7            PASSWORD=passwordForRoot
    2. Run Master component on the server H:

      1        $ pwd
      2        /home/ubuntu/fogbus2
      3        $ cd containers/master
      4        $ docker-compose run --rm --name TempContainerName fogbus2-master --bindIP 192.0.0.8 --bindPort 5001 --remoteLoggerIP 192.0.0.1 --remoteLoggerPort 5000 --schedulerName OHNSGA --containerName TempContainerName
  3. Starting Actor components:

    1. Run Actor component on server A

      1        $ pwd
      2        /home/ubuntu/fogbus2
      3        $ cd containers/actor
      4        $ docker-compose run --rm --name TempContainerName fogbus2-actor --bindIP 192.0.0.1  --remoteLoggerIP 192.0.0.1 --remoteLoggerPort 5000 --masterIP 192.0.0.8 --masterPort 5001 --containerName TempContainerName
    2. Run Actor components on servers B to G using the above command. For each server, change the bindIP option to that server's private IP address. For instance, to run the Actor component on server B:

      1        $ pwd
      2        /home/ubuntu/fogbus2
      3        $ cd containers/actor
      4        $ docker-compose run --rm --name TempContainerName fogbus2-actor --bindIP 192.0.0.2 --remoteLoggerIP 192.0.0.1 --remoteLoggerPort 5000 --masterIP 192.0.0.8 --masterPort 5001 --containerName TempContainerName
  4. Depending on the IoT application, the User component can run with different applicationName options. The current version of FogBus2 comes with two integrated IoT applications, called VideoOCR [15] and GameOfLifeParallelized [15], while in the rest of this book chapter we design and implement more IoT applications to describe how new IoT applications are defined.

    1. Run the User component for the VideoOCR application on server I. The videoPath option specifies the path of the input video to feed into the VideoOCR algorithm.

      1        $ pwd
      2        /home/ubuntu/fogbus2
      3        $ cd containers/user/sources
      4        $ python3.9 user.py --bindIP 192.0.0.9 --masterIP 192.0.0.8 --masterPort 5001 --remoteLoggerIP 192.0.0.1 --remoteLoggerPort 5000 --applicationName VideoOCR --applicationLabel 720 --videoPath /path/to/video.mp4
    2. Run User component for GameOfLifeParallelized application on server I.

      1        $ pwd
      2        /home/ubuntu/fogbus2
      3        $ cd containers/user/sources
      4        $ python3.9 user.py --bindIP 192.0.0.9 --masterIP 192.0.0.8 --masterPort 5001 --remoteLoggerIP 192.0.0.1 --remoteLoggerPort 5000 --applicationName GameOfLifeParallelized --applicationLabel 48
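All of the component start-ups above read database credentials from .mysql.env files, whose format is plain KEY=VALUE. FogBus2 itself relies on python-dotenv (listed in requirements.txt) for this; the sketch below is our own stdlib-only illustration of how such a file is parsed.

```python
def parse_env(text: str) -> dict:
    """Parse simple KEY=VALUE lines, skipping blanks and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env


# Sample content matching the .mysql.env files shown in this section.
sample = "HOST=192.0.0.1\nPORT=3306\nUSER=root\nPASSWORD=passwordForRoot\n"
creds = parse_env(sample)
```

Keeping credentials in these small env files means the same container image can be pointed at a different database host simply by editing one file, without rebuilding.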

5 Extending FogBus2 Framework and New IoT Applications

In this section, we first describe how to implement and integrate new IoT applications in the FogBus2 framework. Then, we put forward a new scheduling algorithm and demonstrate its integration with the framework.

5.1 Implementation of New IoT Applications

Any containerized IoT application can be implemented and integrated with the FogBus2 framework. Alongside the implementation of a new IoT application, several steps are required to integrate it with the FogBus2 framework, such as building docker images and defining the dependencies between tasks. In what follows, we describe a straightforward mathematical application, how to implement it, and how to integrate it with the FogBus2 framework.

Figure 5: A Logical Model of a New Application

Figure 5 shows a new application to be implemented with the FogBus2 framework. This mathematical application contains three tasks, called Part 0, Part 1, and Part 2, that can be executed in parallel, and it requires three inputs: a, b, and c. To integrate this application with the FogBus2 framework, the three tasks should be dockerized and prepared as Task Executor components. Besides, we need a User component to receive inputs (using the Sensor sub-component) and show outputs (using the Actuator sub-component). The inputs are forwarded to the Master component of the framework, which forwards them to the corresponding Task Executor components based on the outcome of the scheduling algorithm. The following steps demonstrate how to implement and integrate the new application with the FogBus2 framework:
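For reference, stripped of the framework, the three parallel tasks of Figure 5 reduce to three pure functions; this standalone sketch in plain Python mirrors the expressions used in the NaiveFormula task code and is not FogBus2 code itself.

```python
def part0(a: float, b: float, c: float) -> float:
    # Task "Part 0": simple sum of the three inputs
    return a + b + c


def part1(a: float, b: float, c: float) -> float:
    # Task "Part 1": ratio of squares
    return a * a / (b * b + c * c)


def part2(a: float, b: float, c: float) -> float:
    # Task "Part 2": weighted reciprocals
    return 1 / a + 2 / b + 3 / c


# The three parts share the same inputs and can run independently,
# which is why they can be scheduled on different Task Executors.
results = {
    "resultPart0": part0(1, 2, 3),
    "resultPart1": part1(1, 2, 3),
    "resultPart2": part2(1, 2, 3),
}
```

Since no part depends on another's output, the scheduler is free to place each on a different server, which is exactly what makes this a useful parallelism example.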

  1. Create three python files, one per task, following the desired naming convention. We name these files naiveFormula0.py, naiveFormula1.py, and naiveFormula2.py; they contain the logic of tasks Part 0, Part 1, and Part 2, respectively.

    1        $ pwd
    2        /home/ubuntu/fogbus2
    3        $ cd containers/taskExecutor/sources/utils/taskExecutor/tasks
    4        $ > naiveFormula0.py
    5        $ > naiveFormula1.py
    6        $ > naiveFormula2.py
  2. Edit the corresponding python file of each task and insert the required logic. Each task requires a unique taskID identifier.

    1. The logic of task naiveFormula0.py:

      1        $ nano naiveFormula0.py
      2            from .base import BaseTask
      3
      4            class NaiveFormula0(BaseTask):
      5                def __init__(self):
      6                    super().__init__(taskID=108, taskName='NaiveFormula0')
      7
      8                def exec(self, inputData):
      9                    a = inputData['a']
      10                    b = inputData['b']
      11                    c = inputData['c']
      12
      13                    result = a + b + c
      14                    inputData['resultPart0'] = result
      15
      16                    return inputData
    2. The logic of task naiveFormula1.py:

      1        $ nano naiveFormula1.py
      2            from .base import BaseTask
      3
      4            class NaiveFormula1(BaseTask):
      5                def __init__(self):
      6                    super().__init__(taskID=109, taskName='NaiveFormula1')
      7
      8                def exec(self, inputData):
      9                    a = inputData['a']
      10                    b = inputData['b']
      11                    c = inputData['c']
      12
      13                    result = a * a / (b * b + c * c)
      14                    inputData['resultPart1'] = result
      15
      16                    return inputData
    3. The logic of task naiveFormula2.py:

      1        $ nano naiveFormula2.py
      2            from .base import BaseTask
      3
      4            class NaiveFormula2(BaseTask):
      5                def __init__(self):
      6                    super().__init__(taskID=110, taskName='NaiveFormula2')
      7
      8                def exec(self, inputData):
      9                    a = inputData['a']
      10                    b = inputData['b']
      11                    c = inputData['c']
      12
      13                    result = 1 / a + 2 / b + 3 / c
      14                    inputData['resultPart2'] = result
      15                    return inputData
    4. The return values of the exec functions in the above tasks are managed by the Task Executor. If the return value is None, it is ignored; otherwise, it is forwarded to the next Task Executor components based on the specified dependencies among tasks.

  3. Configure arguments:

    1. Configure __init__.py:

      1        $ pwd
      2        /home/ubuntu/fogbus2/containers/taskExecutor/sources/utils/taskExecutor/tasks
      3        $ nano __init__.py
      4
      5        from .base import BaseTask
      6        ...
      7        from .naiveFormula0 import NaiveFormula0
      8        from .naiveFormula1 import NaiveFormula1
      9        from .naiveFormula2 import NaiveFormula2
      10        ...
    2. Configure initTask.py:

      1        $ pwd
      2        /home/ubuntu/fogbus2
      3        $ nano containers/taskExecutor/sources/utils/taskExecutor/tools/initTask.py
      4
      5        from typing import Union
      6        from ..tasks import *
      7
      8        def initTask(taskName: str) -> Union[BaseTask, None]:
      9            task = None
      10            if taskName == 'FaceDetection':
      11                task = FaceDetection()
      12            ...
      13            elif taskName == 'NaiveFormula0':
      14                task = NaiveFormula0()
      15            elif taskName == 'NaiveFormula1':
      16                task = NaiveFormula1()
      17            elif taskName == 'NaiveFormula2':
      18                task = NaiveFormula2()
      19
      20            return task
  4. Prepare docker images:

    1. Prepare the required libraries:

      1            $ pwd
      2            /home/ubuntu/fogbus2/containers/taskExecutor/sources
      3            $ cat requirements.txt
      4
      5            psutil
      6            docker
      7            python-dotenv
      8            pytesseract
      9            editdistance
      10            six
    2. Create Dockerfiles: for each of the tasks, a Dockerfile should be created. Considering NaiveFormula0:

      1            $ pwd
      2            /home/ubuntu/fogbus2/containers/taskExecutor/dockerFiles/NaiveFormula0
      3
      4            $ nano Dockerfile
      5
      6            # Base
      7            FROM python:3.9-alpine3.14 as base
      8            FROM base as builder
      9
      10            ## Dependencies
      11            RUN apk update
      12            RUN apk add --no-cache \
      13                build-base clang clang-dev ninja cmake ffmpeg-dev \
      14                freetype-dev g++ jpeg-dev lcms2-dev libffi-dev \
      15                libgcc libxml2-dev libxslt-dev linux-headers \
      16                make musl musl-dev openjpeg-dev openssl-dev \
      17                zlib-dev curl freetype gcc6 jpeg libjpeg \
      18                openjpeg tesseract-ocr zlib unzip openjpeg-tools
      19
      20            RUN python -m pip install --retries 100 --default-timeout=600  --no-cache-dir --upgrade pip
      21            RUN python -m pip install --retries 100 --default-timeout=600  numpy --no-cache-dir
      22
      23            ## OpenCV Source Code
      24            WORKDIR /workplace
      25            RUN cd /workplace/ \
      26                && curl -L "https://github.com/opencv/opencv/archive/4.5.1.zip" -o opencv.zip \
      27                && curl -L "https://github.com/opencv/opencv_contrib/archive/4.5.1.zip" -o opencv_contrib.zip \
      28                && unzip opencv.zip \
      29                && unzip opencv_contrib.zip \
      30                && rm opencv.zip opencv_contrib.zip
      31
      32            ## Configure
      33            RUN cd /workplace/opencv-4.5.1 \
      34                && mkdir -p build && cd build \
      35                && cmake \
      36                    -DOPENCV_EXTRA_MODULES_PATH=../../opencv_contrib-4.5.1/modules \
      37                    -DBUILD_NEW_PYTHON_SUPPORT=ON \
      38                    -DBUILD_opencv_python3=ON \
      39                    -DHAVE_opencv_python3=ON \
      40                    -DPYTHON_DEFAULT_EXECUTABLE=$(which python) \
      41                    -DBUILD_TESTS=OFF \
      42                    -DWITH_FFMPEG=ON \
      43                    ../
      44
      45            ## Compile
      46
      47            RUN cd /workplace/opencv-4.5.1/build && make -j $(nproc)
      48            RUN cd /workplace/opencv-4.5.1/build && make install
      49
      50            ## Python libraries
      51            COPY ./sources/requirements.txt /install/requirements.txt
      52            RUN python -m pip install --retries 100 --default-timeout=600  \
      53                --prefix=/install \
      54                --no-cache-dir \
      55                -r /install/requirements.txt
      56
      57            ## Copy files
      58            FROM base
      59            COPY --from=builder /install /usr/local
      60            COPY ./sources/ /workplace
      61
      62            ## Install OpenCV
      63            COPY  --from=builder /usr/local/ /usr/local/
      64            COPY --from=builder /usr/lib/ /usr/lib/
      65
      66            # Hostname
      67            RUN echo "NaiveFormula0" > /etc/hostname
      68
      69            # Run NaiveFormula0
      70            WORKDIR /workplace
      71            ENTRYPOINT ["python", "taskExecutor.py"]
    3. Create docker files for NaiveFormula1 and NaiveFormula2 similar to NaiveFormula0, as described in step (b).

    4. Create docker-compose files: For each of the tasks, a docker-compose file should be created. Considering NaiveFormula0:

      1            $ pwd
      2            /home/ubuntu/fogbus2/containers/taskExecutor/dockerFiles/NaiveFormula0
      3            $ nano docker-compose.yml
      4
      5            version: '3'
      6
      7            services:
      8
      9              fogbus2-naive_formula0:
      10                image: fogbus2-naive_formula0
      11                build:
      12                  context: ../../
      13                  dockerfile: dockerFiles/NaiveFormula0/Dockerfile
      14                environment:
      15                  PUID: 1000
      16                  PGID: 1000
      17                  TZ: Australia/Melbourne
      18                network_mode:
      19                  host
    5. Create docker-compose files for NaiveFormula1 and NaiveFormula2 similar to NaiveFormula0, as described in step (d).

    6. Build docker images: The docker images corresponding to the tasks of the new application can be built using the provided automated script (demo.py), similar to step 6 in section 3.1.

      1            $ pwd
      2            /home/ubuntu/fogbus2/demo
      3            $ python3.9 demo.py --buildAll
    7. Verify new docker images:

    1        $ docker images
    2
    3        REPOSITORY                TAG       IMAGE ID       CREATED              SIZE
    4        ...
    5        fogbus2-naive_formula1    latest    5e9ad6999801   2 minutes ago        xxx
    6        fogbus2-naive_formula0    latest    74cfbb128699   2 minutes ago        xxx
    7        fogbus2-naive_formula2    latest    924d6bc0f281   3 minutes ago        xxx
    8        ...
  5. Prepare User side code:

    1        $ pwd
    2        /home/ubuntu/fogbus2/containers/user/sources/utils/user/applications
    3        $ nano naiveFormulaParallelized.py
    4
    5        from time import time
    6        from pprint import pformat
    7        from .base import ApplicationUserSide
    8        from ...component.basic import BasicComponent
    9
    10
    11        class NaiveFormulaParallelized(ApplicationUserSide):
    12
    13            def __init__(
    14                    self,
    15                    videoPath: str,
    16                    targetHeight: int,
    17                    showWindow: bool,
    18                    basicComponent: BasicComponent):
    19                super().__init__(
    20                    appName='NaiveFormulaParallelized',
    21                    videoPath=videoPath,
    22                    targetHeight=targetHeight,
    23                    showWindow=showWindow,
    24                    basicComponent=basicComponent)
    25
    26            def prepare(self):
    27                pass
    28
    29            def _run(self):
    30                self.basicComponent.debugLogger.info(
    31                    'Application is running: %s', self.appName)
    32
    33                # get user input of a, b, and c
    34                print('a = ', end='')
    35                a = int(input())
    36                print('b = ', end='')
    37                b = int(input())
    38                print('c = ', end='')
    39                c = int(input())
    40
    41                inputData = {
    42                    'a': a,
    43                    'b': b,
    44                    'c': c
    45                }
    46
    47                # put it into the data uploading queue
    48                self.dataToSubmit.put(inputData)
    49                lastDataSentTime = time()
    50                self.basicComponent.debugLogger.info(
    51                    'Data has been sent (a, b, c): %.2f, %.2f, %.2f', a, b, c)
    52
    53                # wait for all the 4 results
    54                while True:
    55                    result = self.resultForActuator.get()
    56
    57                    responseTime = (time() - lastDataSentTime) * 1000
    58                    self.responseTime.update(responseTime)
    59                    self.responseTimeCount += 1
    60
    61                    if 'finalResult' in result:
    62                        break
    63
    64                for key, value in result.items():
    65                    result[key] = '%.4f' % value
    66                self.basicComponent.debugLogger.info(
    67                    'Received all the 4 results: \r\n%s', pformat(result))
  6. Define the dependencies among the tasks of the new application in the database. As an example, consider MariaDB running on 192.0.0.1:

    1. Connect to the database:

      1            $ mysql -h 192.0.0.1 -uroot -p
      2            Enter password:
    2. The entryTasks column contains the root tasks of this application, to which the sensory data should be forwarded.

      1            mysql> SELECT entryTasks FROM FogBus2_Applications.applications WHERE name='NaiveFormulaParallelized';
      2
      3            [
      4                "NaiveFormula0",
      5                "NaiveFormula1",
      6                "NaiveFormula2"
      7            ]
    3. The tasksWithDependency column contains the dependencies among tasks. For each task, we define arrays of parents and children, representing predecessor and successor tasks.

      1            mysql> SELECT tasksWithDependency FROM FogBus2_Applications.applications WHERE name='NaiveFormulaParallelized';
      2
      3            {
      4            "NaiveFormula0": {
      5                "parents": [
      6                    "Sensor"
      7                ],
      8                "children": [
      9                    "Actuator"
      10                ]
      11            },
      12            "NaiveFormula1": {
      13                "parents": [
      14                    "Sensor"
      15                ],
      16                "children": [
      17                    "Actuator"
      18                ]
      19            },
      20            "NaiveFormula2": {
      21                "parents": [
      22                    "Sensor"
      23                ],
      24                "children": [
      25                    "Actuator"
      26                ]
      27            }
      28            }
    4. With the FogBus2 framework running, the NaiveFormulaParallelized application can be executed using the following command:

      1            $ pwd
      2            /home/ubuntu/fogbus2/containers/user/sources
      3
      4            $ python3.9 user.py --bindIP 192.0.0.9 --masterIP 192.0.0.8 --masterPort 5001 --remoteLoggerIP 192.0.0.1 --remoteLoggerPort 5000 --applicationName NaiveFormulaParallelized
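Because the dependency records are free-form JSON, a typo in a task name only surfaces at runtime. Below is a small standalone sketch (ours, not part of FogBus2) that cross-checks entryTasks against tasksWithDependency before the records are inserted.

```python
import json


def validate_app(entry_tasks_json: str, deps_json: str):
    """Return (missing, malformed): entry tasks absent from the dependency
    map, and dependency entries lacking parents/children arrays."""
    entry = json.loads(entry_tasks_json)
    deps = json.loads(deps_json)
    missing = [t for t in entry if t not in deps]
    malformed = [t for t, d in deps.items()
                 if "parents" not in d or "children" not in d]
    return missing, malformed


# The records shown above for NaiveFormulaParallelized.
entry = '["NaiveFormula0", "NaiveFormula1", "NaiveFormula2"]'
deps = json.dumps({
    t: {"parents": ["Sensor"], "children": ["Actuator"]}
    for t in ["NaiveFormula0", "NaiveFormula1", "NaiveFormula2"]
})
missing, malformed = validate_app(entry, deps)
```

Running such a check as part of application registration catches inconsistent task graphs before the Master component ever tries to schedule them.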

Table 3 lists all applications currently implemented and integrated with the FogBus2 framework. The VideoOCR and GameOfLifePyramid applications were implemented in the main paper, while FaceDetection, ColorTracking, GameOfLifeSerialized, GameOfLifeParallelized, NaiveFormulaSerialized, and NaiveFormulaParallelized are implemented and integrated as an extension in this book chapter. Due to the page limit, we only described one of these applications (i.e., NaiveFormulaParallelized). The steps for defining and integrating the other applications are similar to those described in this section, while the logic of each application differs.

Application Name         | Description                                                                              | Tasks
FaceDetection            | Detecting human faces in a video stream, either real-time or from recorded files         | face_detection
ColorTracking            | Tracking colors in a video stream, either real-time or from recorded files; the target color can be dynamically configured via a GUI | color_tracking
VideoOCR                 | Recognizing text in a video file; it automatically picks key frames                      | blur_and_p_hash, ocr
GameOfLifeSerialized     | Conway's Game of Life; tasks process grids (of different sizes) one by one               | GameOfLife0 to GameOfLife62
GameOfLifeParallelized   | Conway's Game of Life; tasks process grids (of different sizes) in parallel              | GameOfLife0 to GameOfLife62
GameOfLifePyramid        | Conway's Game of Life; tasks process grids (of different sizes) in a pyramid dependency  | GameOfLife0 to GameOfLife62
NaiveFormulaSerialized   | A naive formula; tasks process different parts of the formula one by one                 | naive_formula0, naive_formula1, naive_formula2, naive_formula3
NaiveFormulaParallelized | A naive formula; tasks process different parts of the formula in parallel                | naive_formula0, naive_formula1, naive_formula2
Table 3: The List of All Implemented and Integrated Applications with FogBus2

5.2 Implementation of New Scheduling Policy

One of the most important challenges for resource management in edge and cloud data centers is the proper scheduling of incoming application requests. FogBus2 provides a straightforward mechanism for scheduling various types of IoT applications. Different scheduling policies can be implemented and integrated with the FogBus2 framework with different scheduling goals, such as optimizing application response time, energy consumption, or the monetary cost of resources, or a combination of these goals. As a guideline, we put forward a new scheduling policy and describe how to integrate it with the FogBus2 framework.

To simplify the integration of a new policy, a BaseScheduler class is provided in containers/master/sources/utils/master/scheduler/base.py. Users should inherit from the BaseScheduler class and override the _schedule method according to their desired goals. Besides, if the utilization of the current Master component, which is responsible for scheduling IoT applications, goes beyond a threshold, the new application request should be forwarded to another Master component. The getBestMaster method handles this process and can be overridden with different policies for the selection of another Master. Finally, users who are interested in adding scaling features to their technique can implement a scaling policy using the prepareScaler method. The following steps describe how to define and integrate a new scheduling policy:

  1. Navigate to containers/master/sources/utils/master/scheduler/policies, and create a new file named schedulerRankingBased.py:

    1        $ pwd
    2        /home/ubuntu/fogbus2/containers/master/sources/utils/master/scheduler/policies
    3        $ > schedulerRankingBased.py
  2. Implement the policy in schedulerRankingBased.py. The _schedule method contains the logic of the scheduling policy.

    1        $ cat schedulerRankingBased.py
    2
    3        from random import randint
    4        from time import time
    5        from typing import List
    6        from typing import Union
    7
    8        from ..base import BaseScheduler as SchedulerPolicy
    9        from ..baseScaler.base import Scaler
    10        from ..baseScaler.policies.scalerRandomPolicy import ScalerRandomPolicy
    11        from ..types import Decision
    12        from ...registry.roles.actor import Actor
    13        from ...registry.roles.user import User
    14        from ....types import Component
    15
    16
    17        class SchedulerRankingBased(SchedulerPolicy):
    18            def __init__(
    19                    self,
    20                    isContainerMode: bool,
    21                    *args,
    22                    **kwargs):
    23                """
    24                :param isContainerMode: Whether this component runs in a container
    25                :param args:
    26                :param kwargs:
    27                """
    28                super().__init__('RankingBased', isContainerMode, *args, **kwargs)
    29
    30            def _schedule(self, *args, **kwargs) -> Decision:
    31                """
    32                :param args:
    33                :param kwargs:
    34                :return: A decision object
    35                """
    36                user: User = kwargs['user']
    37                allActors: List[Actor] = kwargs['allActors']
    38                # Get what tasks are required
    39                taskNameList = user.application.taskNameList
    40
    41                startTime = time()
    42                indexSequence = ['' for _ in range(len(taskNameList))]
    43                indexToHostID = {}
    44
    45                # Rank the tasks belonging to an application
    46                rankedTasksList = self.rankApplicationTasks(
    47                    indexSequence, **kwargs)
    48                indexToHostID = self.tasksAssignment(
    49                    rankedTasksList, allActors, **kwargs)
    50
    51                schedulingTime = (time() - startTime) * 1000
    52
    53                # Create a decision object
    54                decision = Decision(
    55                    user=user,
    56                    indexSequence=rankedTasksList,
    57                    indexToHostID=indexToHostID,
    58                    schedulingTime=schedulingTime
    59                )
    60                # A simple example of cost estimation
    61                decision.cost = self.estimateCost(decision, **kwargs)
    62                return decision
    63
    64            @staticmethod
    65            def estimateCost(decision: Decision, **kwargs) -> float:
    66                # You may develop your own with the following used values
    67                from ..estimator.estimator import Estimator
    68                # Get the necessary params from the keyword args
    69                user = kwargs['user']
    70                master = kwargs['master']
    71                systemPerformance = kwargs['systemPerformance']
    72                allActors = kwargs['allActors']
    73                isContainerMode = kwargs['isContainerMode']
    74                # Init the estimator
    75                estimator = Estimator(
    76                    user=user,
    77                    master=master,
    78                    systemPerformance=systemPerformance,
    79                    allActors=allActors,
    80                    isContainerMode=isContainerMode)
    81                indexSequence = [int(i) for i in decision.indexSequence]
    82                # Estimate the cost
    83                estimatedCost = estimator.estimateCost(indexSequence)
    84                return estimatedCost
    85
    86            def getBestMaster(self, *args, **kwargs) -> Union[Component, None]:
    87                """
    88
    89                :param args:
    90                :param kwargs:
    91                :return: A Master that the user is asked to request when this Master is busy
    92                """
    93                user: User = kwargs['user']
    94                knownMasters: List[Component] = kwargs['knownMasters']
    95                mastersNum = len(knownMasters)
    96                if mastersNum == 0:
    97                    return None
    98                return knownMasters[randint(0, mastersNum - 1)]
    99
    100            def prepareScaler(self, *args, **kwargs) -> Scaler:
    101                # Create a scaler object and return it
    102                scaler = ScalerRandomPolicy(*args, **kwargs)
    103                return scaler

    First, we retrieve the information of the user and of all available actors, allActors (lines 36-37). Then, the tasks corresponding to the requested application are retrieved and stored in taskNameList (line 39). The rankApplicationTasks method considers the dependency model of the tasks (if any) and satisfies the dependencies among tasks while defining an order for the tasks that can be executed in parallel. Several ranking policies can be defined for this method; in this version, however, we use the average execution time of tasks on different servers as the ranking criterion. Hence, among tasks that can be executed in parallel, the tasks with higher execution time receive higher priority, which eventually helps to reduce the overall response time of the application (lines 46-47). Next, the tasksAssignment method receives the ordered rankedTasksList and assigns a proper actor to each task to minimize its execution time (lines 48-49). According to the scheduling decision, a decision object is created, storing the ordered list of the application's tasks, the task-to-host mapping, the scheduling time, and the cost of scheduling, to be returned (lines 54-59). To illustrate how the execution cost of each task and the overall response time of an application can be estimated, an estimateCost method is defined (lines 65-84). As mentioned above, the getBestMaster and prepareScaler methods can also be defined in schedulerRankingBased.py. To reduce complexity, these methods work based on a random policy.
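    The ranking and assignment idea described above can be sketched in isolation as follows. This is a hedged illustration only: the task names, execution times, actor data, and the internals of the two helper functions are hypothetical stand-ins, not FogBus2's actual rankApplicationTasks and tasksAssignment implementations.

```python
# Illustrative sketch of ranking-by-average-execution-time scheduling.
avg_exec_time = {          # assumed avg execution time (ms) across servers
    'ocr': 850.0,
    'blur_and_p_hash': 120.0,
}
actor_speedup = {          # assumed relative speed factor of each actor
    'edge-actor': 1.0,
    'cloud-actor': 2.5,
}

def rank_tasks(tasks):
    # Among tasks that may run in parallel, longer tasks come first,
    # so they start as early as possible and reduce overall response time.
    return sorted(tasks, key=lambda t: avg_exec_time[t], reverse=True)

def assign_tasks(ranked):
    # Greedy assignment: map each task to the actor with the lowest
    # estimated execution time for that task.
    return {t: min(actor_speedup,
                   key=lambda a: avg_exec_time[t] / actor_speedup[a])
            for t in ranked}

ranked = rank_tasks(['blur_and_p_hash', 'ocr'])
mapping = assign_tasks(ranked)
```

In the actual framework, the equivalent ordered list and task-to-host mapping populate the Decision object's indexSequence and indexToHostID fields.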

  3. The new scheduling policy can be made available as a schedulerName option by extending the initSchedulerByName method in containers/master/sources/utils/master/scheduler/tools/initSchedulerByName.py. This method lists the names of the scheduling policies currently integrated with the FogBus2 framework; the name of the new scheduling policy can be added after them.

    1            $ pwd
    2            /home/ubuntu/fogbus2/containers/master/sources/utils/master/scheduler/tools
    3            $ nano initSchedulerByName.py
    4
    5            def initSchedulerByName(
    6                    knownMasters: Set[Address],
    7                    minimumActors: int,
    8                    schedulerName: str,
    9                    basicComponent: BasicComponent,
    10                    isContainerMode: bool,
    11                    parsedArgs,
    12                    **kwargs) -> Union[BaseScheduler, None]:
    13                if schedulerName == 'OHNSGA':
    14                    # hidden to save space
    15                    pass
    16                elif schedulerName == 'NSGA2':
    17                    # hidden to save space
    18                    pass
    19                elif schedulerName == 'NSGA3':
    20                    # hidden to save space
    21                    pass
    22                # Newly added block
    23                elif schedulerName == 'RankingBased':
    24                    from ..policies.schedulerRankingBased import \
    25                        SchedulerRankingBased
    26                    scheduler = SchedulerRankingBased(isContainerMode=isContainerMode)
    27                    return scheduler
    28
    29                return None
  4. The Master component can be executed using the following command, where the --schedulerName option specifies the name of the selected scheduling policy:

    1            $ pwd
    2            /home/ubuntu/fogbus2/containers/master
    3            $ docker-compose run --rm --name TempContainerName fogbus2-master --containerName TempContainerName --bindIP 192.0.0.8 --schedulerName RankingBased

5.3 Evaluation Results

To evaluate the performance of the FogBus2 framework, an integrated computing environment consisting of multiple cloud instances and edge/fog servers is prepared. Table 2 depicts the full configuration of the servers and the corresponding running FogBus2 components.

Figure 6: Average Docker Image size of FogBus2 Components

Figure 6 represents the average Docker image size of FogBus2 components in compressed and uncompressed formats. The compressed Docker image size is obtained from the average size of the Docker images stored on Docker Hub for multiple architectures, while the uncompressed Docker image size is obtained from the average size of the extracted Docker images on instances. The compressed image sizes show that FogBus2 components are lightweight to download on different platforms, ranging from a few megabytes to roughly 100 MB at maximum. Besides, the uncompressed image sizes show that FogBus2 components are not resource-hungry and do not occupy much storage. The image sizes of the User and Task Executor components are not provided because they heavily depend on the IoT applications.

Figure 7: Average Run-time RAM usage of FogBus2 Components

Figure 7 represents the average run-time RAM usage of FogBus2 components on different architectures. It illustrates that the average memory usage of the FogBus2 components on different architectures is low, ranging from 25 MB to 45 MB.

Figure 8: Average Startup Time of FogBus2 Components

Figure 8 demonstrates the average startup time of FogBus2 components on different architectures. It measures the time required to start the containers until they reach a fully functional state for serving incoming requests. The FogBus2 framework only requires a few seconds to enter its fully functional state. This significantly helps IoT developers in the development and testing phases, as they need to re-initiate the framework several times to test and debug their applications. Furthermore, in the deployment phase, it greatly helps in scenarios where scalability is important.

Figure 9: Average Response Time of IoT Applications

Figure 9 depicts the average response time of some of the recently implemented IoT applications in the FogBus2 framework.

6 Summary

In this chapter, we highlighted the key features of the FogBus2 framework and described its main components. Besides, we described how to set up an integrated computing environment, containing multiple cloud service providers and edge devices, and how to establish a low-overhead communication network among all resources. Next, we provided instructions and corresponding code snippets to install and run the main framework and its integrated applications. Finally, we demonstrated how to implement and integrate new IoT applications and custom scheduling policies with this framework.

Software Availability

The source code of the FogBus2 framework, together with the newly implemented IoT applications and scheduling policies, is accessible from the CLOUDS Laboratory GitHub webpage: https://github.com/Cloudslab/FogBus2.
