Elephant-Human Conflict Mitigation: An Autonomous UAV Approach


1 Abstract

Elephant–human conflict (EHC) is one of the major problems in many African and Asian countries. As humans overexploit natural resources for development, elephants' habitat continues to shrink; elephants therefore invade human living areas and raid crops more frequently, costing millions of dollars annually. To mitigate EHC, in this paper we propose an original solution comprising three parts: a compact, custom low-power GPS tag installed on the elephants; a receiver stationed in the human living area that detects the elephants' presence near a farm; and an autonomous unmanned aerial vehicle (UAV) system that tracks and herds the elephants away from the farms. By utilizing a proportional–integral–derivative (PID) controller and machine learning algorithms, we obtain accurate tracking trajectories at a real-time processing speed of 32 FPS. Our proposed autonomous system saves over 68% in cost compared with human-controlled UAVs in mitigating EHC.

2 Introduction

Elephant–human conflict (EHC) is one of the most significant problems in many African and Asian countries. EHC is extremely prevalent because nearly 1.2 billion people worldwide live in African and Asian elephant range countries [9]. Crop raiding is one of the most common forms of EHC [10]. Mackenzie and Ahabyona found that elephants around Kibale National Park, Uganda, damage crops such as maize, beans, and sweet potatoes, causing over US$3,500 of total economic loss in a village of 145 households over six months [6]. These crops are the main source of income for the villagers, whose median household capital asset wealth was only US$5,033. Crop raiding inflicts heavy economic losses not only on smallholder farms but also on large plantations: Riau, the largest palm-oil-producing province in Indonesia, loses millions of dollars to crop raiding [7]. Beyond the tremendous economic loss, EHC also causes casualties among both humans and elephants; in India, it leads to approximately 400 human deaths and 100 elephant deaths annually [2].

Traditional methods of mitigating EHC are limited to human guarding, fire, beating drums, scare shooting, dogs, and so on [11]. These methods are neither cost-effective nor efficient because almost all of them require expensive human labor. In addition, elephants habituate to these deterrents after repeated exposure. Recently, researchers in [8] found that UAVs can mimic honeybees' humming sound, which is known to annoy elephants; these small bees love to sting elephants' sensitive areas such as the eyes, ears, and trunk [8]. N. Hahn et al. have also demonstrated the effectiveness of drones by hiring rangers to manually control UAVs that herd elephants away from farms [3]. In this paper, we build on this idea and propose an autonomous UAV system.

(a) Elephant at the border
(b) Elephant invades the farm
(c) UAV navigates to the elephant
(d) Camera turns on
(e) Elephant herded away
Figure 1: Our Autonomous UAV Illustration

2.1 Paper Contributions

Our contribution in this paper can be summarized as follows:

  • Our solution comprises three parts: a compact custom low-power GPS tag installed on the elephants, a receiver stationed in the human living area that detects the elephants' presence near a farm or village, and an autonomous UAV system that tracks and herds the elephants away from the farms.

  • By utilizing a PID controller and machine learning techniques, we obtain accurate tracking trajectories at a real-time processing speed of 32 FPS.

  • Our proposed autonomous system saves over 68% in cost compared with human-controlled UAVs in mitigating EHC.

3 Overall Approach

Our autonomous UAV solution can be summarized as follows:

  • To begin with, we tranquilize the elephant and attach a GPS tag to its ankle or neck. A ground base is then set up on the farm that needs protection from elephant incursions. The ground base emits a signal that the GPS tag picks up once the signal is strong enough, so the range of the ground base's signal defines the perimeter of the protected area we want to keep elephants out of.

  • When the tagged elephant enters the range of the ground base's signal, the GPS tag wakes from sleep mode and starts sending out the elephant's real-time coordinates.

  • Once the UAV (stationed at the farm) receives the coordinates of the elephant, it navigates to the elephant's location.

  • When the UAV approaches the elephant's coordinates, its onboard camera is enabled, and the live video feed is processed by vision algorithms on an NVIDIA Jetson Nano to track the elephant effectively.

  • Finally, the drone herds the elephant away from the ground base, preventing crop damage, since the UAV emits a sound similar to honeybees, which elephants fear. Elephants tend to stay away from bees because these small bees love to sting elephants' sensitive areas, such as the eyes and ears [12].

Figure 1 illustrates the process of our proposed autonomous UAV system in more detail. The yellow shaded circle represents the signal coverage of the ground station, depicted as the three small houses at its center; within this region, the signal from the base station triggers and wakes the GPS tag on the elephant. Initially, when the elephant is outside the yellow region, i.e., outside the protected area of the farm, the drone sits on the ground, ready to take off.

When the elephant enters the protected region around the ground base, the Xbee module on the GPS tag receives the signal from the ground base, which triggers and wakes up the tag. The tag's Xbee module then sends out the elephant's GPS coordinates, which are received by the Xbee module on the drone. The drone immediately takes off and uses waypoint navigation to fly to the elephant's coordinates.

When the drone is close enough to the position of the elephant, the onboard camera will turn on. GPS signals only provide a coarse location of the elephant; the vision algorithm enables the drone to track the elephant’s movement in a more fine-grained fashion. Finally, the drone maneuvers and herds the elephants until they are outside the protected area.
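The tag's wake/report protocol described above can be sketched as a small state machine. This is an illustrative sketch only: the driver objects (`radio`, `gps`) and their method names stand in for the Xbee and GPS module interfaces and are not from the paper.

```python
# Hypothetical sketch of the GPS tag's duty cycle: sleep until the
# ground base's beacon is detected, then stream real-time GPS fixes.
class GpsTag:
    def __init__(self, radio, gps):
        self.radio = radio    # stand-in for the Xbee RF module
        self.gps = gps        # stand-in for the GPS module
        self.awake = False

    def step(self):
        """One iteration of the tag's main loop."""
        if not self.awake:
            # Low-power sleep until the ground base's beacon is
            # strong enough to be decoded.
            self.awake = self.radio.beacon_detected()
            return
        # Inside the protected perimeter: stream real-time fixes.
        lat, lon = self.gps.read_fix()
        self.radio.send((lat, lon))
        if not self.radio.beacon_detected():
            self.awake = False  # elephant left the perimeter; sleep again
```

On real hardware this loop would run on the ATmega328P with the radio interrupt waking the MCU, but the state transitions are the same.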

4 Hardware Design

Figure 2: System Block Diagram

In this section, we discuss our hardware building blocks shown in Figure 2, which include the custom UAV and the GPS tag installed on elephants.

Our UAV is a standard hexacopter with six motors and electronic speed controllers (ESC) complemented by an onboard computer, a camera, and a communication module.

We select the Jetson Nano as the onboard processor due to its low power consumption and compactness. Further, its integrated NVIDIA GPU allows us to perform neural network inference with hardware acceleration.

Figure 3(a) shows our completed UAV prototype, with the DJI N3 flight controller, Xbee RF module, and camera mounted. The DJI N3 has multiple built-in sensors, such as GPS and an IMU, that support stable waypoint navigation and control. The Xbee RF module is responsible for receiving the elephant locations sent by the GPS tag. To ensure our UAV can complete the round-trip herding task, we use a high-capacity 12000 mAh LiPo battery to power the entire UAV system, which allows up to 20 minutes of flight time and lets the UAV travel up to 4 miles without recharging.

As shown in Figure 3(b), our GPS tags are made of four major components: an Xbee RF module, a GPS module, an SD card reader, and an ATmega328P microcontroller. We also include a button cell for the GPS module, which prevents the module from shutting down completely; after a complete shutdown, the GPS module takes more than 10 minutes to acquire a position fix. The button cell thus helps the GPS module obtain a fixed, accurate location quickly.

The GPS tag is a 4-layer board measuring 83 mm × 83 mm and weighing only 26.3 g, so elephants will barely notice the extra weight of an installed tag. The tag's power consumption is 10.5 mW. Powered by a 10000 mAh battery bank, our GPS tags can last for about 200 days on the elephants.
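As a sanity check on the ~200-day battery-life figure, a back-of-the-envelope calculation works out if we assume the 10000 mAh bank delivers its charge at a nominal 5 V output (typical for USB power banks; the voltage is our assumption, not stated above):

```python
# Back-of-the-envelope battery-life check for the GPS tag.
capacity_mah = 10_000    # battery bank capacity
voltage_v = 5.0          # ASSUMED nominal output voltage
tag_power_mw = 10.5      # measured tag power draw

energy_mwh = capacity_mah * voltage_v     # 50,000 mWh of stored energy
runtime_h = energy_mwh / tag_power_mw     # ~4,762 hours of operation
runtime_days = runtime_h / 24             # ~198 days
print(round(runtime_days))                # 198, i.e. "about 200 days"
```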

(a) UAV Prototype
(b) GPS Tag PCB
Figure 3: Hardware Design

5 Algorithm Design

In this section, we discuss the algorithms that run onboard the UAV and how we meet the real-time processing constraint. Our onboard processing has two major subcomponents, visual tracking and control. When we launch the control program, it spawns the tracker as a separate process and communicates with it through a socket. The tracker sends the bounding-box coordinates of the elephant to the control program at each frame; upon receiving them, the control program generates control signals for the UAV.
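The tracker/control split can be sketched as two endpoints exchanging one bounding box per frame over a socket. The JSON-lines message format below is our assumption for illustration; the paper does not specify its wire format.

```python
import json
import socket
import threading

def tracker_proc(conn, boxes):
    """Tracker side: send one bounding box per frame, then close."""
    for box in boxes:
        conn.sendall((json.dumps(box) + "\n").encode())
    conn.close()

def control_proc(conn):
    """Control side: turn each received bbox into a control quantity
    (here, just the horizontal centroid that would drive yaw)."""
    signals = []
    for line in conn.makefile():
        x1, y1, x2, y2 = json.loads(line)
        signals.append((x1 + x2) / 2)  # bbox centroid x-coordinate
    return signals

# Demonstrate the exchange with a connected socket pair and two frames.
a, b = socket.socketpair()
t = threading.Thread(target=tracker_proc,
                     args=(a, [[0, 0, 10, 10], [2, 0, 12, 10]]))
t.start()
out = control_proc(b)
t.join()
print(out)  # [5.0, 7.0]
```

In the real system the two endpoints are separate processes on the Jetson Nano rather than threads, but the per-frame message flow is the same.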

5.1 Detection

The rapid advancement of convolutional neural networks (CNNs) has brought object detection to a new level. SORT [1], a framework similar to ours, adopts Faster R-CNN, an accurate yet expensive two-stage model. To deal with our compute constraint, we instead utilize SSD [5], a single-stage and highly efficient detector, and select Inception V2 as the backbone. An advantage of SSD is that, when quantized using TensorRT, it achieves near real-time inference speed (around 25 FPS) in our environment. Since we are interested in persons, vehicles, and wildlife, we trained the model on COCO [4], a widely used detection dataset.

Our experiments found that even though SSD has accuracy comparable to two-stage detectors on large and medium-sized objects, it struggles to detect small objects consistently. The reason is that SSD regresses bounding boxes only on the downsampled feature maps in the last few layers of the CNN backbone. However, increasing the input size or tweaking the SSD architecture is impractical given our hardware constraint.

Therefore, we utilize tiling as a workaround: we divide the video frame into 6 overlapping tiles, with the overlap covering objects near the tile edges. Ideally, we would process all 6 tiles in parallel through batching, but we are once again limited by the compute constraint of our environment. As a result, we process the tiles sequentially and introduce a simple attention mechanism that weights tiles containing many objects more heavily than empty tiles. To prevent starvation, we also adopt an aging mechanism that tracks the number of frames since each tile was last processed. Lastly, we set a confidence threshold on SSD's outputs to filter out possible false positives.
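The attention-plus-aging scheduling heuristic might look like the following sketch; the weights are illustrative, not the paper's.

```python
# Pick which tile the detector should process next: favor tiles with
# more tracked objects (attention), but let long-unvisited tiles win
# eventually (aging), so no tile is starved.
def pick_next_tile(object_counts, ages, count_weight=1.0, age_weight=0.5):
    """Return the index of the highest-priority tile."""
    scores = [count_weight * c + age_weight * a
              for c, a in zip(object_counts, ages)]
    return max(range(len(scores)), key=scores.__getitem__)

# A crowded tile wins despite a modest age...
print(pick_next_tile([0, 1, 4, 0, 0, 0], [3, 1, 2, 4, 2, 1]))   # 2
# ...but an empty tile starved long enough wins eventually.
print(pick_next_tile([0, 1, 4, 0, 0, 0], [3, 1, 2, 20, 2, 1]))  # 3
```

After a tile is processed, its age resets to zero while all other tiles' ages increment by one frame.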

5.2 Tracking

(a) Our tracking module
(b) Our detection module
Figure 4: Tracker Design

SORT [1] assumes that the detector runs at every frame and relies only on the Kalman filter to process the detections, which is not realistic on an edge device. Furthermore, because of sequential tile processing, each tile is revisited by the detector only a fraction of the time on average, or less often when the detector cannot achieve real-time inference. Therefore, we need to track objects from frame to frame whenever SSD has not processed a specific tile for a large number of frames.

Most correlation- and CNN-based trackers are unfit for this purpose because their speed scales poorly with the number of objects. Instead, we utilize optical flow, a lightweight classical algorithm that tracks feature points on the objects. For each object, we detect Shi–Tomasi corners and sample a fixed density of them inside the bounding box. We model the bounding-box transformation as affine and compute a transformation matrix from the optical-flow feature matches to estimate a new bounding box for the next frame. We also estimate a homography from background feature matches, which helps compensate for camera motion during Kalman filtering [13].
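A minimal sketch of the affine bounding-box update follows, assuming the optical-flow point matches between two frames are already given (the paper obtains them with Shi–Tomasi corners and optical flow):

```python
import numpy as np

def estimate_affine(prev_pts, curr_pts):
    """Least-squares 2x3 affine A such that curr ≈ A @ [x, y, 1]."""
    P = np.hstack([prev_pts, np.ones((len(prev_pts), 1))])  # N x 3
    X, *_ = np.linalg.lstsq(P, curr_pts, rcond=None)        # 3 x 2
    return X.T                                              # 2 x 3

def transform_bbox(bbox, A):
    """Map the two bbox corners through the affine transform."""
    x1, y1, x2, y2 = bbox
    corners = np.array([[x1, y1, 1.0], [x2, y2, 1.0]])
    (nx1, ny1), (nx2, ny2) = corners @ A.T
    return [nx1, ny1, nx2, ny2]

# Feature points translated by (+5, -2): the box should shift the same way.
prev = np.array([[0, 0], [10, 0], [0, 10], [10, 10]], float)
curr = prev + np.array([5.0, -2.0])
A = estimate_affine(prev, curr)
print(transform_bbox([0, 0, 10, 10], A))  # approximately [5, -2, 15, 8]
```

A real implementation would add outlier rejection (e.g. RANSAC) since flow matches on a moving animal are noisy.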

5.3 Motion Model

We adopt a standard Kalman filter with a constant-velocity model to alleviate optical flow's undesirable performance when the targets are partially occluded. Since the Kalman filter models the motion of objects, the next bounding-box locations can be roughly predicted from velocity when track measurements are incorrect (occluded). Our Kalman filter state for each object is defined on the eight-dimensional state space

    s = (x1, y1, x2, y2, ẋ1, ẏ1, ẋ2, ẏ2),    (1)

where x1, y1 represent the horizontal and vertical pixel position of the top-left corner of the bounding box, x2, y2 represent the horizontal and vertical pixel position of the bottom-right corner, and ẋ1, ẏ1, ẋ2, ẏ2 are the corresponding velocities. The track bounding boxes are used as measurements to update the state of the Kalman filter. Note that our measurement space consists only of the pixel positions (x1, y1, x2, y2).

Using only the pixel positions of the bounding-box corners, instead of its width and height as in [1] and [14], enables us to easily transform the Kalman filter state to compensate for camera motion. However, the two corners of the bounding box are modeled independently, which can cause the bounding box to drift away from its ground-truth size over time. To tackle this issue, we update the positions of the two corners jointly when applying the constant-velocity prediction:

    x1 ← x1 + Δt((1 − γ)ẋ1 + γẋ2),    (2)
    x2 ← x2 + Δt((1 − γ)ẋ2 + γẋ1),    (3)

where γ is a coupling factor that measures the correlation between the two corners. We update y1 and y2 in a similar fashion and set γ through empirical experiment.
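One plausible reading of this coupled constant-velocity update can be sketched as follows; the coupling value is illustrative, and the exact coupling form is our interpretation of the description above.

```python
# Toy constant-velocity prediction with corner coupling: each corner's
# motion is a blend of its own velocity and the other corner's, weighted
# by a coupling factor g, which discourages the box from drifting apart.
def predict(state, g=0.3, dt=1.0):
    x1, y1, x2, y2, vx1, vy1, vx2, vy2 = state
    x1 += dt * ((1 - g) * vx1 + g * vx2)
    x2 += dt * ((1 - g) * vx2 + g * vx1)
    y1 += dt * ((1 - g) * vy1 + g * vy2)
    y2 += dt * ((1 - g) * vy2 + g * vy1)
    return [x1, y1, x2, y2, vx1, vy1, vx2, vy2]

# With equal corner velocities the box translates rigidly,
# regardless of the coupling factor.
s = predict([0, 0, 10, 10, 2, 0, 2, 0], g=0.3)
print(s[:4])  # [2.0, 0.0, 12.0, 10.0]
```

When the corner velocities disagree (e.g. due to a noisy flow estimate on one corner), the coupling pulls both corners toward their average motion, so the box size changes more slowly than the raw velocities suggest.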

Figure 5: Unrolled Tracker Pipeline
Figure 6: PID Control Errors

5.4 Data Association

To associate detections with tracks, we make use of both the Kalman filter state and its estimated covariance. We partially follow the approach of Deep SORT [14] by computing the squared Mahalanobis distance between the Kalman filter state and each detection:

    d(i, j) = (dj − yi)ᵀ Si⁻¹ (dj − yi),    (4)

where yi is the i-th track state projected onto the measurement space with covariance Si, and dj is the j-th bounding-box detection. The Mahalanobis distance factors in state uncertainty by measuring how many standard deviations the detection lies from the mean track position.

We filter out unlikely associations with a large Mahalanobis distance and threshold the Intersection over Union (IOU) at 0.3; IOU measures the percentage of overlap between the detection and track bounding boxes. The Hungarian algorithm is then used to obtain the optimal assignment. Finally, the remaining unassociated detections are registered as new tracks.
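The gating logic can be sketched as follows. Note that the paper uses the Hungarian algorithm for the final assignment; for brevity, this simplified sketch matches greedily, keeping only the 0.3 IOU gate.

```python
def iou(a, b):
    """Intersection over Union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def associate(tracks, detections, iou_gate=0.3):
    """Greedily match detections to tracks; unmatched detections
    become new tracks. Returns (matches, new_track_indices)."""
    matches, used = [], set()
    for j, det in enumerate(detections):
        best, best_iou = None, iou_gate
        for i, trk in enumerate(tracks):
            if i not in used and iou(trk, det) > best_iou:
                best, best_iou = i, iou(trk, det)
        if best is not None:
            matches.append((best, j))
            used.add(best)
    matched = {j for _, j in matches}
    new_tracks = [j for j in range(len(detections)) if j not in matched]
    return matches, new_tracks

tracks = [[0, 0, 10, 10], [50, 50, 60, 60]]
dets = [[1, 1, 11, 11], [100, 100, 110, 110]]
print(associate(tracks, dets))  # ([(0, 0)], [1])
```

The full system would gate on the Mahalanobis distance of Eq. (4) before the IOU check, which the greedy sketch omits.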

Item                   Qty    Unit Cost (US$)    Annual Cost (US$)
Custom UAV Kits
  Camera               ×3     23.5               70.5
  Flight Controller    ×3     419                1257
  Jetson Nano          ×3     100                300
  Frame                ×3     170                510
  Motors               ×3     180                540
  ESCs                 ×3     126                378
  Propellers           ×3     60                 180
  LiPo Battery         ×3     83                 249
  Subtotal (Kits)             1161.5             3484.5
Custom GPS Tags
  GPS Module           ×10    13                 130
  Xbee Module          ×10    25                 250
  SD Card              ×10    14                 140
  ATmega328P           ×10    2.5                25
  Battery Bank         ×10    15                 150
  Subtotal (GPS Tags)         69.5               695
Elephant Anesthesia                              500
Total                                            4679.5
Table 1: Cost Summary

5.5 Technical Approach

In this section, we provide a detailed overview of our visual tracking algorithm. Figure 4(a) shows our tracking module, which runs at almost every frame. For each target, optical flow takes as input the previous frame, the current frame, and the Kalman filter's last state to estimate a bounding box for the target in the current frame, which it outputs as a new track. Each track carries a track ID that identifies it. After the Kalman filter predicts a new state based on the constant-velocity model, the new track is used to update (correct) this prediction and produce the final state output.

Figure 4(b) shows our detection module, which runs only every N frames (where N is the detection interval) due to its large latency. The SSD detector takes the current frame as input and outputs a set of detections. Each detection is associated with a track using the approach in Section 5.4. Similarly, each track output by the association is used to update the Kalman filter's predicted state and produce the final state output.

Figure 5 shows our unrolled tracker pipeline combining both modules, with the detection module inserted every N frames starting from the first frame.

5.6 Control

We incorporate proportional–integral–derivative (PID) controllers for the yaw, pitch, and roll velocities based on the 2D bounding boxes output by our tracker. In Figure 6, we define three control errors: ex and ey, the horizontal and vertical distances between the centroid of the bounding box and that of the entire video frame, and ea, the area difference between the current bounding box and a reference bounding box (blue) with a fixed area, which roughly indicates a change in spatial distance. We use ex to control yaw and roll, while ey and ea are used to perform PID control on the pitch of the UAV.
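A minimal PID controller of the kind described, driving a single axis from the horizontal bounding-box error (the distance from the box centroid to the frame center), can be sketched as follows; the gains are illustrative, not the paper's.

```python
# Textbook discrete PID: output = Kp*e + Ki*integral(e) + Kd*d(e)/dt.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = None

    def update(self, err, dt=1.0):
        self.integral += err * dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# If the elephant's bbox centroid sits right of the frame center, the
# controller commands a positive yaw rate; as the error shrinks, the
# derivative term damps the command.
yaw = PID(kp=0.5, ki=0.01, kd=0.1)
print(yaw.update(40.0))  # 20.4
print(yaw.update(10.0))  # 2.5
```

A production controller would add output saturation and integral anti-windup, which this sketch omits.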

6 Results

The proposed tracker achieves high tracking precision at 32 FPS on the Jetson Nano, fast enough to control the UAV in real time. To evaluate the control algorithm, we simulate flight trajectories in the DJI Flight Simulator (Figure 7). To retrieve the ground-truth 3D trajectory, we project the 2D image coordinates of the elephant of interest (green bounding box) onto the ground plane. The resulting UAV trajectory matches the ground-truth trajectory 80% of the time within an error margin of a 1-meter radius.

Figure 7: Simulation Result

N. Hahn et al. investigated the cost of human-controlled UAVs on the borders of Tanzanian parks, which is about US$15,000 annually [3]. To cover the area of interest with our solution, we estimate that 10 GPS tags and 3 UAVs are required on average per year. According to Table 1, our proposed autonomous system costs only US$4,679.5 annually, saving over 68% compared with human-controlled UAVs in mitigating EHC.
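The headline saving follows directly from the two annual cost figures:

```python
# Annual cost of human-controlled UAVs [3] vs. our system (Table 1).
human_cost = 15_000.0
autonomous_cost = 4_679.5

saving = 1 - autonomous_cost / human_cost
print(f"{saving:.1%}")  # 68.8%, i.e. "over 68%"
```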

7 Conclusions

In this paper, we proposed an autonomous system that uses telemetry to track and herd elephants away from villages. We designed and implemented all three parts of our solution: a compact custom low-power GPS tag, a receiver stationed in the human living area, and an autonomous UAV. Our real-time tracking algorithm achieves 32 FPS on a mobile edge device, the NVIDIA Jetson Nano, and our proposed autonomous system is highly cost-effective compared with human-controlled solutions, saving more than 68% per year.

References

  • [1] A. Bewley, Z. Ge, L. Ott, F. Ramos, and B. Upcroft (2017) Simple online and realtime tracking. Technical report External Links: 1602.00763v2, Link Cited by: §5.1, §5.2, §5.3.
  • [2] S. Gulati, K. K. Karanth, N. A. Le, and F. Noack (2021) Human casualties are the dominant cost of human–wildlife conflict in india. Proceedings of the National Academy of Sciences 118 (8). Cited by: §2.
  • [3] N. Hahn, A. Mwakatobe, J. Konuche, N. de Souza, J. Keyyu, M. Goss, A. Chang’a, S. Palminteri, E. Dinerstein, and D. Olson (2017) Unmanned aerial vehicles mitigate human–elephant conflict on the borders of tanzanian parks: a case study. Oryx 51 (3), pp. 513–516. Cited by: §2, §6.
  • [4] T. Lin, M. Maire, S. Belongie, L. Bourdev, R. Girshick, J. Hays, P. Perona, D. Ramanan, C. L. Zitnick, and P. Dollár. Microsoft COCO: Common Objects in Context. Technical report External Links: 1405.0312v3, Link Cited by: §5.1.
  • [5] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C. Fu, and A. C. Berg (2015-12) SSD: Single Shot MultiBox Detector. External Links: Document, 1512.02325, Link Cited by: §5.1.
  • [6] C. A. Mackenzie and P. Ahabyona (2012) Elephants in the garden: financial and social costs of crop raiding. Ecological economics 75, pp. 72–82. Cited by: §2.
  • [7] B. Perera (2009) The human-elephant conflict: a review of current status and mitigation methods. Gajah 30, pp. 41–52. Cited by: §2.
  • [8] D. Raya Islam, A. Stimpson, and M. Cummings (2017) Small uav noise analysis. Technical report Tech. rep., Humans and Autonomy Laboratory, Durham, NC, USA. Cited by: §2.
  • [9] L. J. Shaffer, K. K. Khadka, J. Van Den Hoek, and K. J. Naithani (2019) Human-elephant conflict: a review of current management strategies and future directions. Frontiers in Ecology and Evolution 6, pp. 235. Cited by: §2.
  • [10] N. W. Sitati, M. J. Walpole, R. J. Smith, and N. Leader-Williams (2003) Predicting spatial aspects of human–elephant conflict. Journal of applied ecology 40 (4), pp. 667–677. Cited by: §2.
  • [11] N. W. Sitati and M. J. Walpole (2006) Assessing farm-based measures for mitigating human-elephant conflict in transmara district, kenya. Oryx 40 (3), pp. 279–286. Cited by: §2.
  • [12] F. Vollrath and I. Douglas-Hamilton (2002) African bees to control african elephants. Naturwissenschaften 89 (11), pp. 508–511. Cited by: 5th item.
  • [13] J. H. White and R. W. Beard The Homography as a State Transformation Between Frames in Visual Multi-Target Tracking. Technical report External Links: Link Cited by: §5.2.
  • [14] N. Wojke, A. Bewley, and D. Paulus Simple online and real time tracking with a deep association metric. Technical report External Links: 1703.07402v1 Cited by: §5.3, §5.4.