The world is often stricken by catastrophic disasters such as earthquakes, hurricanes, and tsunamis [13, 8]. Telecommunication services become unavailable in disaster-stricken areas because of the destruction of facilities such as fiber cables or the disruption of the energy supply. Quickly recovering telecommunication networks in such areas is a significant issue for disaster-response and life-saving operations. Although recovery schemes that use surviving facilities by establishing wireless bypass routes have been proposed [5, 6, 7], it is difficult to employ such approaches if a massive power outage occurs. A promising solution for post-disaster monitoring is the use of drone-mounted wireless networks, because many drones can be flexibly deployed on demand in disaster-stricken areas. On-demand drone-mounted networks are suitable for grasping the current situation in the target area and thus support disaster-response operations such as finding injured people.
The concept of drone-empowered wireless networks has become a hot research topic for deploying public safety networks [11, 10]. A stochastic geometry based design of flying cellular networks was proposed for post-disaster situations. The effectiveness of flying networks was demonstrated in this work; the ground coverage can be enhanced by optimally selecting the number of drones and their corresponding altitudes. Drone-mounted LTE femtocell base stations were also investigated to supplement saturated existing ground infrastructure. The authors presented initial results showing the validity of flying base stations, although a large number of drones is required to cover all users in a city. The concept of visible light communication (VLC) on drones was studied to simultaneously provide flexible communication and illumination. The locations of the nodes and the cell associations were optimized to minimize the power consumption under illumination and communication constraints. However, these existing drone-mounted wireless networks did not focus on constructing ad hoc networks with many drones for post-disaster monitoring.
To address this problem, this paper proposes the concept of, and presents preliminary results for, a visible light ad hoc network using multiple drones and an image sensor. The idea behind the proposed approach is to utilize on-board LED lights for both lighting and communication in a blackout area. Multiple drones are deployed on demand in a disaster-stricken area to monitor the ground and continuously send image data to a camera over image sensor-based visible light communication (VLC) links. This paper also proposes a positioning algorithm for multiple drones to avoid interference among the VLC links. This is because the camera receives optical signals from multiple drone-mounted LEDs, and thus the drones must move to avoid overlap while satisfying the requirements for filming, i.e., the recognizable range of the drones is determined by conditions such as the focal length of the camera. The proposed idea is demonstrated with a proof-of-concept (PoC) implemented with drones equipped with LED panels and a 4K camera.
The rest of this paper is organized as follows. Section II introduces related work on VLC and the contribution of this work. Section III describes the proposed scheme. Section IV presents the proposed positioning algorithm and simulation results. Section V describes the experimental results with the implemented PoC. Finally, the conclusion is provided in Section VI.
II. Related Work
Free space optical (FSO) communication using drones has been intensively studied in recent years. For rapid event response and flexible deployment, an edge-computing-empowered radio access network architecture with drones was proposed in which the fronthaul and backhaul links are established with FSO communication between drones and ground nodes. Turbulence-induced signal fades in an ad hoc mesh FSO network were experimentally measured to characterize the attenuation and phase distortion of the atmospheric channel. An FSO-based drone-assisted mobile access network was investigated for disaster-response purposes; the deployment of the drones and the mobile user associations were jointly optimized to maximize the number of served users in a disaster-stricken area. However, a significant difficulty in FSO-based communication is configuring the light axis between the transmitter and the receiver: if the light axis is mismatched, the received optical power decreases drastically. This characteristic makes FSO suitable for communication between fixed ground nodes. Since drones move unstably in the air due to wind, it is difficult to dynamically adjust the light axis.
To address this issue, the concept of a camera system receiving the optical signal has attracted much attention. A VLC-based vehicle-to-vehicle (V2V) communication system was developed in which LED transmitters and video cameras are mounted on vehicles. As regards VLC between drone-mounted LED lights and a camera, PoC test results using a single drone and a ground base station have been reported. However, the major limitations of that work are that 1) the signal quality was measured between only one drone and one camera, and 2) the position of the drone in the image was not recognized. Therefore, the contribution of this paper is to propose the concept of one-to-many image sensor-based VLC between a camera and multiple drone-mounted LED lights, together with a drone-positioning algorithm to avoid interference among the VLC links. We also present preliminary results for the proposed scheme.
III. Proposed Scheme
The conceptual system architecture of the VLC between drone-mounted LED lights and a camera is depicted in Fig. 1. Multiple drones are deployed in a disaster-stricken area to compose an ad hoc network for post-disaster monitoring. Each drone films the ground with an on-board camera. The drones transfer the recorded image to a ground camera with VLC using on-board LED lights. The camera films the drones and the LED lights mounted on them and sends the data to an edge server. The edge server detects the drones from received images and demodulates the signals of LED lights. Note that the edge server can demodulate the signals of LED lights only after the drone-detection.
The goal of the proposed drone positioning algorithm is to achieve efficient data transmission over the VLC links. The trajectories of the drones are determined so as to avoid overlap while the requirements for filming are satisfied and the recognizable range of the drones is ensured under external conditions. The drone positioning model for VLC is defined in the following, and the positioning algorithm is proposed based on this model. The proposed scheme enables real-time monitoring of the disaster-stricken area to support recovery operations. The use of visible light is suitable for post-disaster monitoring because of the wide coverage of the target area and the multi-purpose use of the lights.
III-B. Image Processing Sequence
This section introduces the image-processing sequence at the edge server. Fig. 2 depicts an example of an image filmed by the camera. The edge server first detects the drones with deep neural networks (DNNs). Since multiple drones are captured in the image, it then crops the image to extract each drone from the original file. For each cropped image, the signals of the drone-mounted LED lights are detected and demodulated. Note that the demodulation process can be executed only if the drone detection succeeds; otherwise, the edge server cannot distinguish on-board LEDs from other light sources such as street lights. A drone located too far away can appear too small to be recognized.
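The detect-then-crop-then-demodulate sequence above can be sketched as follows. This is a minimal illustration with hypothetical `detect` and `demodulate` callables, not the authors' implementation; its point is that demodulation is gated on successful detection, so only pixels inside detected bounding boxes are ever decoded.

```python
from typing import Callable, List, Optional, Sequence, Tuple

# A bounding box as (x, y, width, height) in pixel coordinates.
Box = Tuple[int, int, int, int]

def process_frame(
    frame: Sequence[Sequence[int]],
    detect: Callable[[Sequence[Sequence[int]]], List[Box]],
    demodulate: Callable[[List[List[int]]], Optional[bytes]],
) -> List[Optional[bytes]]:
    """Detect drones in a frame, crop one sub-image per detection,
    and demodulate each crop. With no detections, nothing is decoded,
    so stray light sources outside the boxes are never demodulated."""
    results = []
    for (x, y, w, h) in detect(frame):
        crop = [list(row[x:x + w]) for row in frame[y:y + h]]
        results.append(demodulate(crop))
    return results
```

With a stub detector returning one box, `process_frame` yields exactly one demodulation result; with no detections, it yields an empty list.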
III-C1. Variable Definition
The variables used in the proposed model are summarized in Table I. The details of each variable are explained in the following. In the proposed model, we assume that a drone is approximated as a sphere without loss of generality.
|$\mathcal{D}$|Set of drones|
|$i, j$|Drone identifiers in $\mathcal{D}$|
|$(x_i, y_i, z_i)$|Position of drone $i$ in space|
|$d_i$|Distance between the $i$th drone and the camera|
|$r$|Radius of a drone|
|$z$|Optical zoom magnification of the camera|
|$f$|Focal length of the camera|
III-C2. Coordinates of Drones
Let $\mathcal{D}$ denote the set of drones, and let $i$ and $j$ denote identifiers for them. We assume that each drone measures its current position $(x_i, y_i, z_i)$ using sensors such as Global Positioning System (GPS) sensors. The origin represents the position of the camera, and the y-axis denotes the direction of the camera. The distance $d_i$ is calculated from $(x_i, y_i, z_i)$ as $d_i = \sqrt{x_i^2 + y_i^2 + z_i^2}$.
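The camera-to-drone distance follows directly from the reported coordinates with the camera at the origin. A minimal sketch (the function name is ours, not the paper's):

```python
import math

def camera_distance(position):
    """Euclidean distance from the camera at the origin (0, 0, 0)
    to a drone at (x, y, z), where the y-axis points along the
    camera's viewing direction."""
    x, y, z = position
    return math.sqrt(x * x + y * y + z * z)
```

For example, a drone at (2, 3, 6) meters is exactly 7 meters from the camera.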
III-C3. Detectable Range
Let us define the optical zoom magnification of the camera as $z$. Let $w$ and $h$ denote the width and the height of a drone, assuming that all the drones in $\mathcal{D}$ are of the same size. With $z$, we have

$$w' = \frac{z b}{a} w, \qquad h' = \frac{z b}{a} h, \qquad (1)$$

where $w'$ and $h'$ denote the width and height of the imaged drone in the camera, respectively, and $a$ and $b$ are the lens distances defined below.
Here, we have the general equation for a lens:

$$\frac{1}{a} + \frac{1}{b} = \frac{1}{f}, \qquad (2)$$

where $a$ is the distance between the drone and the lens, $b$ is the distance between the imaging point and the lens, and $f$ is the focal length of the camera. Fig. 3 shows the relationship between them. Using (1), (2) is transformed as

$$w' = \frac{z f w}{a - f}, \qquad h' = \frac{z f h}{a - f}. \qquad (3)$$
The ranges of $w'$ and $h'$ are determined by the specification of the camera. With (3), the distance between the drone and the lens is formulated as

$$a = f\left(1 + \frac{z w}{w'}\right). \qquad (4)$$
Thus, the range of focusable distance is determined by the parameters $f$ and $z$. Note that a drone located within a certain range around this distance can also be recognized. This detectable range is defined as $[d_{\min}, d_{\max}]$, i.e., a drone located at a distance $d_i$ with $d_{\min} \le d_i \le d_{\max}$ can be detected.
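The relation above can be sketched numerically. This is an illustrative snippet assuming only the standard thin-lens equation $1/a + 1/b = 1/f$, with object distance $a$, focal length $f$, optical zoom $z$, drone width $w$, and imaged width $w'$; the function names and the example values are ours, not the paper's:

```python
def imaged_width(w, a, f, z):
    """Width of the drone on the sensor: thin-lens magnification
    f / (a - f), scaled by the optical zoom z."""
    return z * f * w / (a - f)

def focusable_distance(w, w_img, f, z):
    """Invert the relation above: object distance a at which a drone
    of physical width w appears with width w_img on the sensor."""
    return f * (1.0 + z * w / w_img)
```

For instance, with f = 10 mm, z = 2, and a 0.3 m wide drone imaged at 2 mm, the object distance comes out to 3.01 m, and plugging that distance back into `imaged_width` recovers the 2 mm image width.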
III-C4. Overlap of Drones
Here we model the overlap of drones in the filmed image, because overlapping drones cannot be separately recognized. The following model is described in the x-z plane, because the y-axis represents the direction of the camera. We define a no-entry area for avoiding the overlap of drones. Fig. 4 depicts the no-entry area for the $i$th drone generated by the $j$th drone. The coordinates of the $j$th drone projected onto the $i$th drone's x-z plane are formulated as

$$x'_j = \frac{y_i}{y_j} x_j, \qquad z'_j = \frac{y_i}{y_j} z_j. \qquad (5)$$
The radius of the projected $j$th drone is computed as

$$r'_j = \frac{y_i}{y_j} r. \qquad (6)$$
The distance between the $i$th drone and the projected $j$th drone in the $i$th drone's x-z plane is calculated as

$$d_{ij} = \sqrt{(x_i - x'_j)^2 + (z_i - z'_j)^2}. \qquad (7)$$
Thus, the constraint that the $i$th and $j$th drones do not overlap in the filmed image is described as

$$d_{ij} \ge r + r'_j. \qquad (8)$$
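The projection-based overlap check above can be sketched as follows. Names are illustrative; we assume the camera sits at the origin looking along the y-axis, drones are spheres of radius r flying in front of the camera (y > 0), and the farther drone is perspective-projected onto the nearer drone's x-z plane:

```python
import math

def no_overlap(pi, pj, r):
    """True if drones i and j at positions pi, pj (each (x, y, z))
    do not overlap in the camera image."""
    xi, yi, zi = pi
    xj, yj, zj = pj
    if yi > yj:                       # make j the farther drone
        return no_overlap(pj, pi, r)
    s = yi / yj                       # perspective scale factor
    xj_p, zj_p = s * xj, s * zj       # projected center of drone j
    rj_p = s * r                      # projected radius of drone j
    dist = math.hypot(xi - xj_p, zi - zj_p)
    return dist >= r + rj_p           # disks must not intersect
```

For example, two drones nearly aligned with the camera axis (one at twice the depth of the other) fail the check, while shifting the farther drone a few meters sideways satisfies it.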
III-C5. Constraints for Drone Positioning
Based on the above model, the constraints for drone positioning are summarized as follows.
All the drones in $\mathcal{D}$ are within the detectable range:

$$d_{\min} \le d_i \le d_{\max}, \qquad \forall i \in \mathcal{D}.$$
All the drones in $\mathcal{D}$ are mutually non-overlapping in the image:

$$d_{ij} \ge r + r'_j, \qquad \forall i, j \in \mathcal{D},\ i \ne j.$$
IV. Drone Positioning Algorithm
This section describes the proposed drone positioning algorithm and evaluation results with computer simulations.
This paper makes the following assumptions on the movement of the drones.
The destination of each drone is given; each drone moves to its own destination for the monitoring purpose.
The moving distance is preferably short, considering the energy consumption.
Each drone has the information on the destination coordinates and the current coordinates of itself and other drones.
The goal of the proposed algorithm is to reduce the moving distance of the drones while satisfying the constraints formulated in Section III-C5. The flowchart of the proposed algorithm is shown in Fig. 5. The destination coordinates are given for the drones. Each drone obtains its current coordinates using GNSS and reports them to the others via wireless communication. Then, the no-entry areas are updated with the constraints described in Section III-C5 based on the current coordinates. Basically, a drone moves along the shortest path to its destination. When a drone is about to enter a no-entry area, it identifies the opponent with which it would overlap. Let $y$ and $y'$ denote the y-coordinates of this drone and the opponent, respectively. If $y < y'$ is satisfied, the drone moves to avoid the no-entry area, which reduces the length of the bypass route. Otherwise, the drone stops and waits for the opponent to avoid the no-entry area. It then restarts moving once the no-entry area no longer blocks the shortest path. If the drone has to wait more than a certain period of time, it starts to move while avoiding the no-entry area. As a result, all the drones can arrive at their destinations without overlapping in the filmed images.
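The per-drone decision in the flowchart can be sketched as a small rule. This is an illustrative reading under two assumptions stated in the text: the drone nearer to the camera takes the detour (its projected no-entry area is smaller, so its bypass is shorter), and a wait timeout prevents deadlock; names and units are ours:

```python
def next_action(my_y, opponent_y, waited_steps, max_wait):
    """Decide what a drone does when its shortest path enters a
    no-entry area: 'detour' around the area or 'wait' for the
    opponent to move away."""
    if my_y < opponent_y:
        return "detour"       # nearer drone takes the shorter bypass
    if waited_steps >= max_wait:
        return "detour"       # waited too long: break the deadlock
    return "wait"             # let the opponent clear the area first
```

Note that two drones at exactly equal depth would both wait until the timeout fires; the paper's flowchart resolves such ties through the wait-timeout branch.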
IV-C. Simulation Results
The performance of the proposed algorithm was confirmed with a self-developed simulator written in Python 3. We deployed the drones in a square area. They moved at a constant speed from start positions to destinations, which were both randomly determined, and the flying heights were varied within a fixed range. The simulations were iterated a number of times.
Fig. 6 shows the distribution of the increase in the moving distance of the drones. Note that only the drones that moved to avoid no-entry areas were counted. The increased distance was under one meter in most cases. Fig. 7 summarizes the wait times of the drones. The increase in wait time was within one second in most cases. From these results, it was confirmed that the drones efficiently move to avoid overlap under the proposed algorithm.
Furthermore, we demonstrate an example of the avoidance movement. The proposed algorithm was implemented in Unity 2019.4.12. Fig. 8 shows an example case of two drones, where each sphere represents a drone. The drones moved straight toward each other. Then, one drone avoided the no-entry area while the other stopped to wait. Finally, the stopped drone restarted its movement. As a consequence, overlap in the filmed image was successfully avoided.
V. Experimental Results
This section provides the experimental results of drone recognition. We obtained datasets of drone-mounted LED lights in different environments. Then, the recognition accuracy was evaluated with a convolutional neural network (CNN).
V-A. Experimental Conditions
Here we explain the experimental conditions. We employed two Mavic Pro drones produced by DaJiang Innovations Science and Technology (DJI) Co., Ltd. Each drone was approximated as a sphere, as in the model of Section III-C. As regards the LED lights, we used WS2812B serial LED panels produced by Worldsemi Co., Limited, each carrying LED lights placed in a square grid. Nine serial LED panels were mounted on each drone as the transmitter, and an Arduino UNO microcontroller was connected to the serial LED panels to control them. The experimental device is shown in Fig. 9. The brightness of the serial LEDs was set to the maximum value to ensure a sufficient transmission distance. On the receiver side, a Sony Xperia XZ Premium camera was employed; it recorded 4K video, and the camera height, focal length, and zoom magnification were fixed during the experiments.
The transmitter mounted on a drone sent a continuous light signal to be filmed by the camera. The recorded video was sent to an edge server, where it was divided into a series of static pictures. We employed YOLOv3, a well-known CNN-based machine learning model, to recognize the transmitter in the received pictures at the edge server.
To ensure the detection accuracy of the drone-mounted LED lights, we collected pictures as training data and marked the positions of the drones. In this process, the pictures were resized to a fixed resolution. Also, the datasets were obtained while changing the distance between the camera and the drones to improve the recognition accuracy under different conditions. This is because the number of pixels representing a drone decreases as the distance increases.
The results for recognition accuracy are summarized in Table II. Each value is the mean of the recognition rates calculated over the test images for each distance. It was confirmed from these results that drone-mounted LED lights can be accurately and stably recognized regardless of the distance between the camera and the drone. We also confirmed the feasibility of demodulating data signals from the transmitter.
The validity of the proposed positioning algorithm was confirmed with two drones. Fig. 10 depicts an example recognition result of two drones in a picture. In this example, the distances from the camera to the two drones were equal. Based on the constraints formulated in Section III-C5, the no-entry area was computed to avoid overlap. Since these constraints were satisfied in Fig. 10, the two drones were separately recognized. Thus, the feasibility of the proposed model and algorithm was confirmed by this result.
| distance [m] | accuracy [%] |
The color of the LED lights was set to red in the experiment, as shown in Fig. 9. This choice resulted from consideration of the trees and sky in the background. Since the color that is easiest to identify depends on the outer environment, a robust color setting should be investigated. Also, experiments using more drones are left for further study.
VI. Conclusion

This paper proposed the concept of one-to-many image sensor-based VLC between a camera and multiple drone-mounted LED lights. The proposed idea enables a one-to-many VLC system using unstably moving drones. With the proposed scheme, multiple drones are deployed on demand in a disaster-stricken area to monitor the ground and continuously send image data to a ground camera via VLC links. The on-board LED lights can be utilized for both lighting and communication in a blackout area. In this paper we also presented a drone-positioning algorithm to avoid interference among the VLC links. This is because the camera receives optical signals from multiple drones, and thus the drones must move to avoid overlap within the detectable range determined by the size of the drones and the focal length of the camera. The performance of the proposed algorithm was confirmed with computer simulations. Furthermore, the feasibility of the proposed system was demonstrated with the PoC implemented with drones equipped with LED panels and a 4K camera. Evaluating the performance of the proposed idea through experiments with many drones constitutes future work.
This work was supported by JST, ACT-I, Grant Number JPMJPR18UL, and the GMO Foundation, Japan.
- (2020) Evaluating LED-camera communication for drones. In Proceedings of the Workshop on Light Up the IoT, pp. 18–23.
- (2016) Emergency ad-hoc networks by using drone mounted base stations for a disaster scenario. In 2016 IEEE 12th International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob), pp. 1–7.
- (2018) An edge computing empowered radio access network with UAV-mounted FSO fronthaul and backhaul: key challenges and approaches. IEEE Wireless Communications 25 (3), pp. 154–160.
- (2016) Drone empowered small cellular disaster recovery networks for resilient smart cities. In 2016 IEEE International Conference on Sensing, Communication and Networking (SECON Workshops), pp. 1–6.
- (2016) Wired and wireless network cooperation for quick recovery. In 2016 IEEE International Conference on Communications (ICC), pp. 1–6.
- (2017) Wired and wireless network cooperation for wide-area quick disaster recovery. IEEE Access 6, pp. 2410–2424.
- (2018) Recovery node layout planning for wired and wireless network cooperation for disaster response. In 2018 IEEE International Conference on Communications Workshops (ICC Workshops), pp. 1–6.
- (2016) Comparison of functional damage and restoration processes of utility lifelines in the 2016 Kumamoto earthquake, Japan with two great earthquake disasters in 1995 and 2011. JSCE Journal of Disaster FactSheets FS2016-L-0005, pp. 1–9.
- (2014) Experimental characterization and mitigation of turbulence induced signal fades within an ad hoc FSO network. Optics Express 22 (3), pp. 3208–3218.
- (2018) A drone fleet model for last-mile distribution in disaster relief operations. International Journal of Disaster Risk Reduction 28, pp. 107–112.
- (2015) Drone applications for supporting disaster management. World Journal of Engineering and Technology 3 (3), pp. 316.
- (2014) Optical vehicle-to-vehicle communication system using LED transmitter and camera receiver. IEEE Photonics Journal 6 (5), pp. 1–14.
- (2005) Hurricane Katrina: social-demographic characteristics of impacted areas. CRS Report for Congress RL33141.
- (2019) An FSO-based drone assisted mobile access network for emergency communications. IEEE Transactions on Network Science and Engineering.
- (2019) Power efficient visible light communication with unmanned aerial vehicles. IEEE Communications Letters 23 (7), pp. 1272–1275.