Assisted Control for Semi-Autonomous Power Infrastructure Inspection using Aerial Vehicles

04/06/2018
by Aaron McFadyen, et al.

This paper presents the design and implementation of an assisted control technology for a small multirotor platform performing aerial inspection of fixed energy infrastructure. Sensor placement is supported by a theoretical analysis of expected sensor performance and constrained platform behaviour to speed up implementation. The optical sensors provide relative position information between the platform and the asset, which enables human operator inputs to be autonomously adjusted to ensure safe separation. The assisted control approach is designed to reduce operator workload during close proximity inspection tasks, with collision avoidance and safe separation managed autonomously. The energy infrastructure includes single vertical wooden poles with a crossarm and attached overhead wires. Simulated and real experimental results are provided.

I Introduction

The manual operation of unmanned aerial vehicles (UAVs) for close proximity inspection of infrastructure is becoming increasingly common. However, precise manual piloting of these platforms during inspections is challenging, as the pilot must maintain situational awareness and precise offset control of the UAV relative to the infrastructure. This requires highly skilled pilots located in close proximity to the asset. These aspects limit the efficiency and impede the potential value that could be realised from a UAV-based approach to infrastructure inspection. To address pilot limitations, attention is shifting towards robotic solutions, mainly to provide obstacle avoidance and path planning capabilities.

The majority of approaches to collision avoidance focus on full autonomy. In practice, however, an assisted autonomy solution is more palatable to regulators and more desirable to end users. From the regulatory point of view, an assisted autonomous control system keeps a human-in-the-loop, which defines responsibilities and acts as a fail-safe against failures of fully autonomous control. From the end user point of view, it allows an expert operator to focus on hard-to-detect defects in the infrastructure without worrying about collisions.

This work addresses the task of inspecting electrical pole infrastructure using an unmanned aerial vehicle (UAV), Fig. 1. The aim is to maintain a safety buffer zone around the electrical pole in order to increase the safety of the operation, improve the quality of the collected data, reduce the pilot's cognitive load and reduce the level of pilot skill required for close proximity pole inspection.

The contributions of this paper are:

  1. A theoretical analysis of expected sensor performance and constrained platform behaviour.

  2. A lightweight, vision-only, computationally efficient approach to electrical pole detection and avoidance.

Fig. 1: Example Flight trial showing the multirotor (DJIS800) and sensor rig (Intel NUC, Two RealSense cameras) inspecting an unenergized power pole, crossarm and wire configuration in Queensland, Australia.

This paper is structured as follows. Section II provides some background on UAVs for autonomous inspection. Section III presents the operational concept for assisted inspection of power infrastructure. Section IV provides an overview of the hardware and software selection for perception. Section V describes the assisted control approach whilst Section VI provides simulated and real experimental results for a multirotor inspecting a fixed power pole. Section VII offers concluding remarks.

II Background

The majority of work on infrastructure inspection focuses on the autonomous aspect of the problem, as shown in the recent survey by Máthé et al. [1]. The assumption is that data recorded during autonomous operation can be processed offline by experts, as in the case of railway semaphore inspection [2], wind turbine inspection [3] and power line inspection [Li2008KnowledgebasedPL]. However, autonomous inspection requires an expert system onboard the UAV that replaces the human expert and can steer the platform towards potentially defective parts of the asset. This is possible in the case of simple structures, such as power lines [4], but not always possible when the structure is complex or the type of defect is unknown. In these cases a human expert is needed to steer the platform.

Assisted autonomy for UAVs focuses on reducing the possibility of collisions during missions. This can be done in a passive or an active way. Passively, assistance is provided by directing the pilot using external forces through a haptic control device, as demonstrated in [5, 6, 7]. The active approach works by altering the direction and magnitude of the operator's input velocity [8], usually to ensure collision-free motion when operating in close proximity to obstacles (e.g. power infrastructure in the case of this work). Typically, the motivation is for the UAV to maintain smooth and continuous motion during operation, leading to improved data collection.

However, in an assisted inspection mission, the operator often wants to position the platform such that a particular view of the infrastructure can be examined clearly, without accidentally causing a collision. In this case it is undesirable to alter the direction of the controller input; instead, we simply modify its magnitude such that the robot never breaches a predefined inner collision boundary. This paper proposes an approach to achieve this type of assisted autonomy for infrastructure inspection.
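To make the idea concrete (using our own symbols for illustration, not the paper's notation), the assisted command can be viewed as a pure magnitude scaling of the requested velocity,

$$\mathbf{v} = \alpha\,\tilde{\mathbf{v}}, \qquad \alpha \in [0, 1],$$

where $\tilde{\mathbf{v}}$ is the operator's requested velocity and $\alpha$ decreases towards zero as the platform approaches the inner collision boundary, so the direction of the input is preserved while its magnitude is attenuated.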

III Preliminaries

III-A Notation

A position vector is denoted by a single point , where , and are the scalar displacements of the point from an earth fixed reference coordinate frame with origin . A tilde is used to denote a requested control input from the operator such that and represent vector and scalar control inputs respectively. Unless otherwise stated, SI units are assumed.

The bounding polygon of a finite point set is the set of points in the closure of , not belonging to the interior of . The bounding polyhedra of a finite point set is the set of points in the closure of , not belonging to the interior of .
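Written out explicitly (our notation, shown only for illustration), this definition says the bounding polygon of a planar point set $S$ is the topological boundary of the region it encloses,

$$\partial S = \mathrm{cl}(S) \setminus \mathrm{int}(S),$$

with the bounding polyhedron of a set in three dimensions defined analogously.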

Fig. 2: 3D pole, crossarm and wire () geometry and parameters including example protection zones () and arbitrarily located platform.

III-B Infrastructure Parameters

Assumptions regarding the infrastructure are as follows:

  1. Each pole has a height and radius such that and . If a pole is not cylindrical, it can be modelled as a cylinder. The approximating cylinder encloses all edges of the pole and provides an upper bound of the pole size. Each pole is located at the position which denotes the point where the centreline of the approximating cylinder intersects the ground plane. Pole tilt is no more than from the axis in .

  2. Each pole has a single crossarm attached to the top of the pole at point such that the centre of the crossarm and the pole coincide. The crossarm has a radius measured from to either end of the crossarm such that . The crossarm resides at height and has an orientation measured from the axis of . The crossarm is assumed to be perpendicular to the pole with unknown orientation.

  3. Each pole has a circular crossarm region centred on with radius where the crossarm may be located. Crossarm and pole detection regions are defined as the intersection of the sensor field of view and the circular crossarm region .

  4. Incoming and outgoing linear wires where are attached to the crossarm. The wires run parallel to each other such that for and perpendicular to the crossarm such that . Incoming wires have the same outgoing direction and are located in the same plane as and .

  5. Each pole, crossarm and wire configuration has a restricted volume or protection zone . The protection zone consists of two convex volumes: a regular cylinder with radius centred on the pole, and a rectangular polyhedron of infinite length, height and width . The resulting non-convex volume encloses the pole, crossarm and wires such that a vertical cross-section resembles the letter ‘T’ (see Figs. 2 and 3). A simple membership test for such a zone is sketched below this list.
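As a concrete, purely illustrative reading of this geometry, the sketch below tests whether a platform position lies inside a 'T'-shaped protection zone built from a vertical cylinder around the pole and an infinite horizontal slab around the crossarm and wires. All symbols, the axis convention (z measured upwards here) and the dimensions are our own placeholder assumptions, not the values used in the paper.

```python
import numpy as np

def in_protection_zone(p, pole_base, pole_zone_radius, crossarm_height,
                       slab_half_height, slab_half_width):
    """Return True if point p = (x, y, z) lies inside the T-shaped zone.

    pole_base        : (x, y) of the pole centreline at ground level
    pole_zone_radius : radius of the cylindrical zone around the pole
    crossarm_height  : altitude of the crossarm above ground
    slab_half_height : half-height of the slab enclosing crossarm and wires
    slab_half_width  : half-width of the slab across the wire direction
    """
    x, y, z = p
    # Cylindrical zone around the pole, assumed to extend from the ground up
    # to the crossarm (the vertical bar of the 'T').
    d_xy = np.hypot(x - pole_base[0], y - pole_base[1])
    in_cylinder = d_xy <= pole_zone_radius and 0.0 <= z <= crossarm_height
    # Slab around the crossarm and wires: infinite along the (assumed) x wire
    # direction, bounded in altitude and in y (the horizontal bar of the 'T').
    in_slab = (abs(z - crossarm_height) <= slab_half_height and
               abs(y - pole_base[1]) <= slab_half_width)
    return in_cylinder or in_slab

# Example: a point 1 m from the pole at crossarm height lies inside the zone.
print(in_protection_zone((1.0, 0.0, 10.0), (0.0, 0.0), 3.0, 10.0, 1.5, 2.0))
```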

Fig. 3: 2D (-planar view) pole, crossarm and wire parameters including example protection zones () and crossarm region ().

III-C Platform Parameters

Assumptions regarding the platform are as follows:

  1. The platform is a modified six-rotor DJI S800 with a maximum takeoff weight (MTOW) of less than 10 kg including payload. The platform payload consists of a computer (Intel NUC), application sensors (RealSense cameras), an autopilot (including GPS receiver, IMU and barometer) and all supporting telemetry and control links.

  2. The platform position defines the location of the platform centre of mass in . The platform position relative to the infrastructure is defined by the horizontal distance to the pole (or projection of the pole centreline to ) and the vertical distance from the -plane located at the same altitude as the crossarm where is below and is above the crossarm.

  3. The platform is controlled using velocity commands in the platform's , , and (yaw) directions. The platform axes assume an NED system, whereby the axis runs along the platform arm that intersects the FOV of both onboard camera sensors (see Remark 1 in Section IV-A). The platform axis points down from the platform centre of mass such that its direction is aligned with the gravity vector in stable hover only.

IV Perception

To ensure adequate perception for the assisted control, both hardware and software aspects are tackled. The hardware aspect involves sensor choice and placement, whilst the software aspect considers algorithms for detection and data extraction.

Fig. 4: Example crossarm region coverage for , and . The RealSense field of view () using and crossarm region () are shown along with the sensor coverage of the -plane at pole altitude (), sensor coverage of the crossarm region () and sensor coverage of the pole region ().

IV-A Sensor Configuration

Sensor choice and placement are important considerations for inspection tasks. The location of the sensors should be such that the probability of detecting the target object is maximised, regardless of the platform orientation and before the application of any additional processing. The analysis conducted in this section informs the sensor choice and placement on the platform, by considering a theoretical treatment of the expected operation and associated constraints. The first analysis evaluates the crossarm detection probability using typical sensor field of view constraints to derive suitable protection zone boundaries. The second analysis uses a modified detection probability metric to enhance the inspection performance through flight control limitations and sensor placement parameters.

For the analysis presented in this section, the following inspection scenario is used. Consider a pole located at with and . The crossarm and wire configuration is such that with and . The crossarm and wire orientation is unknown. Consider also a platform located at with orientation , and . The optical sensor(s) are rigidly attached to the platform such that the origin of the sensor(s) coincides with the platform centre of mass. The platform moves in the and direction such that and with . At each location, the platform may take a maximum (or minimum) control input corresponding to . The sensor(s) field of view is such that degrees horizontally and degrees vertically with a detection range of .

Fig. 5: Example crossarm region coverage for , , with (clockwise from upper left) and RealSense field of view . The crossarm region () is shown along with the sensor coverage of the crossarm region () and sensor coverage of the pole region ().

To analyse the probability of detecting the pole, crossarm and wires using optical sensors, the coverage of the crossarm and detection disc is analysed at each and with . First, the bounding polygon that defines the intersection points between the -plane at crossarm height and the spherical section defining the field of view is found. Second, the bounding polygons that define the intersection between and the crossarm and detection regions are found. Third, a crossarm coverage area metric and pole area metric are found, where , and are the areas of the bounding polygons defined by , and respectively. Example sensor coverage results and parameters for a single location are shown in Fig. 4, with results for multiple locations shown in Fig. 5 - 7.
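A simple way to reproduce this kind of coverage metric numerically, although not the authors' exact polygon-based computation, is to sample the crossarm disc at crossarm height and count the fraction of samples that fall inside the sensor's field-of-view frustum. The frame convention (z up), symbol names and parameter values below are our own assumptions:

```python
import numpy as np

def coverage_fraction(sensor_pos, yaw, pitch, h_half_fov, v_half_fov, max_range,
                      pole_xy, crossarm_height, crossarm_radius, n_samples=20000):
    """Monte Carlo estimate of the fraction of the circular crossarm region
    (a disc of radius crossarm_radius at crossarm_height) visible to a sensor
    with a rectangular field of view. Angles in radians, z measured upwards."""
    gen = np.random.default_rng(0)
    # Uniform samples over the disc at crossarm altitude.
    r = crossarm_radius * np.sqrt(gen.random(n_samples))
    a = 2.0 * np.pi * gen.random(n_samples)
    pts = np.column_stack((pole_xy[0] + r * np.cos(a),
                           pole_xy[1] + r * np.sin(a),
                           np.full(n_samples, crossarm_height)))
    # Express samples in the sensor frame (x forward, y left, z up):
    # undo yaw about z, then pitch about y.
    d = pts - np.asarray(sensor_pos, dtype=float)
    cy, sy = np.cos(-yaw), np.sin(-yaw)
    cp, sp = np.cos(-pitch), np.sin(-pitch)
    x1 = cy * d[:, 0] - sy * d[:, 1]
    y1 = sy * d[:, 0] + cy * d[:, 1]
    z1 = d[:, 2]
    xs = cp * x1 + sp * z1
    zs = -sp * x1 + cp * z1
    # Inside the frustum: in front of the sensor, within both half-angles
    # and within the detection range.
    in_range = np.linalg.norm(d, axis=1) <= max_range
    inside = ((xs > 0) & in_range
              & (np.abs(np.arctan2(y1, xs)) <= h_half_fov)
              & (np.abs(np.arctan2(zs, xs)) <= v_half_fov))
    return inside.mean()

# Example: platform 6 m from the pole, level, at the crossarm altitude.
print(coverage_fraction((6.0, 0.0, 12.0), yaw=np.pi, pitch=0.0,
                        h_half_fov=np.radians(30), v_half_fov=np.radians(23),
                        max_range=10.0, pole_xy=(0.0, 0.0),
                        crossarm_height=12.0, crossarm_radius=1.5))
```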

Using these results, a candidate protection zone can then be defined such that , and metres. The regions where the crossarm and wires are more likely to be detected given sensor constraints are and . The important point to note is the relative distance between the assumed infrastructure parameters and the protection zones. Altering the infrastructure parameters will change the protection zones accordingly, so knowledge of the exact parameters provides a solid starting point from which to derive specific protection zones for each application or flight.

Using the same set of coverage results, the sensor location on the platform can be determined. If a single forward facing (non-gimballed) sensor is used with its optical axis aligned with the platform's axis, the field of view is obviously limited and the possibility of missed detection increases at safety critical times during inspection. As the platform moves toward the pole, crossarm and wire detection becomes difficult as they leave the field of view. These effects are shown in Fig. 6 and 7, where the notion of tilting is captured by rotating the sensor downward in pitch. An obvious insight is then to use two optical sensors, one upward facing and the other downward facing, with overlapping coverage near the axis. The result is an increased probability of detection when moving inward, moving outward or hovering near the infrastructure, removing the need for a gimbal and providing some robustness to missed detections (i.e. only one camera is required to detect the asset).

Fig. 6: Detection area metric with , , for and . As the detection area metric goes from zero () to unity (), the sensor covers more of the pole region such that the probability of detecting the crossarm increases, regardless of the crossarm orientation .

Remark 1: Using the above analysis:

  1. Two Intel RealSense cameras are mounted such that their optical axes are offset from the platform axis by and . The field of view of each camera is horizontally and vertically. Combining the two cameras gives a sensor with a vertical field of view of with a gap of in the middle.

Fig. 7: Detection area metric with , , for , and . Notice the lack of coverage and reduced probability of detecting the crossarm when pitching up (moving away from the crossarm) and operating near the pole.

Remark 2: The maximum permissible pitch angle is 20 degrees whilst the maximum permissible roll angle is assumed to be 0 degrees. These constraints are based on the assumption that the platform will be oriented such that the platform's axis points toward the pole centre at all times. For the initial analysis presented here, the control input at each location is such that so and so in Fig. 6 and 7 respectively.

Remark 3: The above analysis can be repeated for any sensor field of view, infrastructure parameters and platform motion. The results are to be used as a guide to design the concept of operation, and can be modified for other platform constraints and refined after a practical assessment of the sensor performance.

IV-B Detection Algorithm/Approach

The input to the pole detection algorithm consists of two point clouds captured using the on-board RealSense cameras, and the output is the horizontal distance and the relative angles to the pole.

The detection process starts by combining the two clouds using the static rigid transform between the centre of the UAV and the two cameras. The resulting combined cloud is then passed through a noise reduction stage that applies a voxel grid filter followed by radius outlier removal. Radius outlier removal calculates the number of neighbours within a given radius for each point; outliers are all points that have fewer than a given number of neighbours.

The pole detection algorithm works under the assumption that the pole is the longest vertical structure in the current view. Based on this assumption, the points in the filtered point cloud are used to fill a 2D histogram with a bin size of cm cm. This is done by projecting the points onto the horizontal plane. The centre of the pole is taken as the bin with the highest point count. Given the detected centre of the pole in the reference frame of the UAV, the horizontal distance and the relative angles to the pole are calculated and returned to the assisted control system.
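A minimal sketch of this detection pipeline is given below using only NumPy and SciPy (the authors' onboard implementation is not specified here). The bin size, filter radii and thresholds are illustrative assumptions, and the two input clouds are assumed to have already been transformed into the UAV body frame:

```python
import numpy as np
from scipy.spatial import cKDTree

def voxel_filter(points, voxel=0.05):
    """Downsample by keeping the centroid of each occupied voxel."""
    keys = np.floor(points / voxel).astype(np.int64)
    uniq, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.ravel()
    counts = np.bincount(inv).astype(float)
    out = np.zeros((len(uniq), 3))
    for k in range(3):
        out[:, k] = np.bincount(inv, weights=points[:, k]) / counts
    return out

def radius_outlier_removal(points, radius=0.3, min_neighbours=5):
    """Drop points with too few neighbours inside the given radius."""
    tree = cKDTree(points)
    counts = np.array([len(idx) - 1 for idx in tree.query_ball_point(points, radius)])
    return points[counts >= min_neighbours]

def detect_pole(cloud_a, cloud_b, bin_size=0.1):
    """Merge two camera clouds (already in the UAV body frame), filter them,
    and vote for the pole centre in a 2D horizontal histogram."""
    cloud = np.vstack((cloud_a, cloud_b))
    cloud = radius_outlier_removal(voxel_filter(cloud))
    # Project onto the horizontal plane and histogram the x-y coordinates.
    ij = np.floor(cloud[:, :2] / bin_size).astype(np.int64)
    uniq, inv = np.unique(ij, axis=0, return_inverse=True)
    best = uniq[np.bincount(inv.ravel()).argmax()]
    centre_xy = (best + 0.5) * bin_size               # bin centre as pole centre
    distance = np.linalg.norm(centre_xy)              # horizontal range to the pole
    bearing = np.arctan2(centre_xy[1], centre_xy[0])  # relative angle to the pole
    return distance, bearing
```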

Fig. 8 shows the filtered cloud after voxelisation and outlier removal. It also shows a vector between the centre of the UAV and the pole based on the 2D histogram voting described above. The gap in the cloud is explained in Section IV-A, Remark 1.

Fig. 8: The filtered cloud with the pole detected. The red dashed line is the vector between the UAV and the detected center of the pole.

V Assisted Control

The platform has two modes of operation, or flight modes: Manual Mode and Inspect Mode.

V-A Manual Mode

The manual mode allows the platform to be controlled manually and is typically used before and after the inspection task. It can also be instigated during inspection by the operator, or in the case of missed infrastructure detections, and is used to start, pause (hover) or return from the inspection task. No assisted control is included, such that

(1)

Automation in manual mode is inherited from the onboard autopilot functionality which includes the ability to hover at a fixed location and heading when no control input is given.

V-B Inspect Mode

The inspect mode incorporates the safety features through assisted control, altering any manual control inputs that would place the platform inside the protected zones around the infrastructure. Due to the sensing constraints outlined earlier, the inspect mode operates in four states or sub-modes. Each state depends on where the platform is located with respect to the crossarm and wires, and on the detection status. A description of each state and the associated control input is provided below, with the state transition topology depicted in Fig. 9. In this paper, we only consider control in the horizontal -plane (3DOF) such that in all states.

 
State 1 The inspection mode has been initiated but infrastructure has not yet been detected, so assisted control cannot be applied. The platform is essentially in manual mode such that , but can automatically switch to states 2-4 depending on the relative geometry and sensing performance. This is not the same as manual mode, and can be seen as an arming of the inspect mode. This state is the only state that can be entered from manual mode. If the inspection is paused, this state is re-entered when inspection resumes, but may (almost) immediately switch to states 2-4 subject to the relative geometry and requested control input.

 
State 2 The platform has detected the infrastructure and the safety system attempts to orientate the platform such that the platform's axis always points toward the pole within a specific deadband, via proportional-derivative yaw control such that:

(2)

where and are the proportional and derivative gain terms respectively, and is the relative heading angle between the quad -axis and the pole centre. The controller allows manual yaw corrections based on the value of τ, where τ is the angle between and the axis of the platform. The requested control input is such that there is no collision threat, allowing

(3)

 
State 3 The platform has detected the infrastructure and the safety system attempts to re-orientate the platform in the same manner as state 2. The requested control input is such that there is a potential collision threat. The safety system ensures that the platform cannot enter the protected zone surrounding the pole such that . The operator remains in control of the inspection task, with the requested control modified such that:

(4)
(5)

where is the angle between the requested lateral velocity vector and the pole centre, measured from the quad centre of mass, and is the angle between the vector tangential to the inner collision boundary and the pole, measured from the platform.

Fig. 9: Flight mode transition architecture where one and two-way mode transitions are shown as well as the inspect sub-modes ().

 
State 4 The platform has detected the infrastructure and the safety system attempts to re-orientate the platform in the same manner as state 2. The position of the platform is such that there is an immediate collision threat, as the inner boundary has been breached. The safety system ensures that the platform exits the protected zone such that . Operator input is modified such that:

(7)
(8)

forcing the platform outside the inner boundary regardless of the controller input. Importantly, this control design also guards against collision in the event the operator input is incorrect due to disorientation, panic or otherwise.
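The sketch below illustrates one way the four inspect-mode states could be combined into a single command-shaping step. The state logic, gains, thresholds, deadband and the assumption that the body x-axis points at the pole are our own illustrative choices; they are not the paper's exact control laws, which are given by the equations above.

```python
import numpy as np

def assisted_command(v_req, yaw_rate_req, range_to_pole, bearing, bearing_rate,
                     detected, inner_boundary=3.0, slow_zone=5.0,
                     kp=1.0, kd=0.2, exit_speed=0.5, deadband=np.radians(10)):
    """Shape the operator's request given the latest pole detection.

    v_req        : requested horizontal velocity [vx, vy] in the body frame,
                   with +x toward the pole when correctly oriented
    bearing      : relative angle from the body x-axis to the pole centre
    bearing_rate : rate of change of the bearing (for the derivative term)
    """
    if not detected:                                  # State 1: armed only
        return v_req, yaw_rate_req, 1
    # States 2-4: PD yaw control keeps the body x-axis pointing at the pole;
    # manual yaw corrections are passed through inside the deadband.
    if abs(bearing) < deadband:
        yaw_rate = yaw_rate_req
    else:
        yaw_rate = kp * bearing + kd * bearing_rate
    if range_to_pole < inner_boundary:                # State 4: boundary breached
        v = np.array([-exit_speed, v_req[1]])         # push back out, keep lateral input
        return v, yaw_rate, 4
    if v_req[0] > 0.0 and range_to_pole < slow_zone:  # State 3: potential threat
        # Attenuate only the closing component so it reaches zero at the inner
        # boundary; the remaining components are left untouched.
        scale = (range_to_pole - inner_boundary) / (slow_zone - inner_boundary)
        v = np.array([v_req[0] * scale, v_req[1]])
        return v, yaw_rate, 3
    return v_req, yaw_rate, 2                         # State 2: no threat

# Example: the operator pushes toward the pole at 1 m/s from 3.5 m away;
# the forward speed is attenuated to 0.25 m/s and the system reports state 3.
v, r, state = assisted_command(np.array([1.0, 0.0]), 0.0, 3.5, 0.05, 0.0, True)
print(state, v)
```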

VI Infrastructure Inspection

To create a working platform capable of conducting real inspection tasks, the perception system is coupled with the assisted control design in a nested, multi-loop control architecture operating at different frequencies. The pilot input defines the reference control input which, depending on the state, is modified through the high-level controller and then sent as a reference to the low-level attitude controller (Pixhawk). The high-level controller runs at Hz and the low-level controller runs at Hz. The same control architecture/topology is used both in simulation and on the real platform. Both simulated and real experiments are conducted to evaluate the safety system for use in live inspection tasks.
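For completeness, a minimal sketch of the nested-loop idea follows. The loop rate, the interface callables and the `assisted_control` module name (wrapping the previous sketch) are hypothetical placeholders; the actual low-level attitude loop runs on the Pixhawk autopilot, not in this process.

```python
import time

from assisted_control import assisted_command  # hypothetical module wrapping the sketch above

HIGH_LEVEL_RATE_HZ = 20.0   # assumed rate, not the paper's figure

def high_level_loop(get_pilot_request, get_detection, send_velocity_setpoint):
    """Read the pilot request and the latest pole detection, shape the command,
    and forward the velocity setpoint; the Pixhawk closes the attitude loop at
    a higher rate."""
    period = 1.0 / HIGH_LEVEL_RATE_HZ
    while True:
        v_req, yaw_rate_req = get_pilot_request()
        rng, bearing, bearing_rate, detected = get_detection()
        v, yaw_rate, _state = assisted_command(v_req, yaw_rate_req, rng,
                                               bearing, bearing_rate, detected)
        send_velocity_setpoint(v, yaw_rate)
        time.sleep(period)
```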

VI-A Simulated Results

Simulations are conducted using a simple quadrotor model (hector_quadrotor [9]) running in ROS and using Gazebo with an appropriate pole model to provide the feedback for the assisted control. The manual control uses hardware in the loop via a real RC transmitter. A simulation is conducted whereby the platform initially moves toward the pole, descends, orbits the infrastructure, moves toward the pole again and then ascends. The platform state, control input, flight path and relative distance (range) to the pole are depicted in Fig. 10(a)-(d) respectively.

Unlike the real experiment detailed later, the perception system in the simulation is essentially perfect, so missed detections are not observed. Thresholds are placed on the relative yaw orientation and distance to the pole such that the assisted control can become active during flight. The simulation results demonstrate two main features. First, the assisted control successfully alters the control input and consequently stops the operator from moving beyond the inner protection zone boundary. This is indicated by the platform moving into state 3 and the control subsequently converging to zero, even as the operator attempts to move further toward the pole. Second, the yaw control applied in states 1 through 3 ensures that the platform points toward the pole, resulting in the orbit-like behaviour despite no control input in the direction (s until s).

VI-B Real Experimental Results

Flight trials are conducted using real (de-energised) electrical infrastructure used for training purposes in the power industry. All perception and control is performed onboard (via an Intel NUC), with an offboard laptop required only for setup and two RC controllers used for safety. The master RC transmitter is always active and used to switch between manual and inspect modes. The second transmitter is used during inspect mode and is directly linked to the perception and assisted control systems. A flight is conducted whereby the platform starts near the infrastructure and attempts to move as close as possible on two consecutive occasions before retreating away from the asset. The platform state, control input, flight path and relative distance (range) to the pole are depicted in Fig. 11(a)-(d) respectively.

Since the platform starts close to the infrastructure, the perception system immediately detects the pole and initiates automatic yaw control to re-orient the platform toward it. The experimental results support the observations from the simulated results, but also demonstrate a key safety feature. Due to missed detections and measurement error, the platform moves beyond the protection zone boundary on two occasions as the operator attempts to inspect the infrastructure ( and ). The assisted control responds accordingly, rejecting the operator input and pushing the platform back outside the protection zone. This is indicated by the platform moving into state 4, with the control input taking the opposite polarity to that requested until the operator no longer attempts to move toward the pole. At that point, the control polarity is retained but attenuated accordingly to ensure non-aggressive behaviour.

Remark: The negative range value in Fig. 11(d) at s occurs when the pole moves out of detection range (state 1), and the platform is essentially in manual mode.

Table I lists the parameters of the system for both the simulation and the real experiment case.

Parameter (units)    Simulated    Experimental
(s)                  40           8
(s)                  0.1          0.05
(s)                  0.1          0.05
(-)                  3            3
(-)                  4            4
(kg)                 1            4.62
(m)                  (-2,1,-3)    -
TABLE I: System Parameters

Fig. 10: Simulated inspection flight with , , . (a) System state; (b) horizontal () control; (c) flight path, pole and protection zone; (d) range to the pole. Notice the platform state transitions when operating near and away from the pole, and the automatic control attenuation if requesting an unsafe input when operating near the pole.

Fig. 11: Real infrastructure inspection flight with , . (a) System state; (b) longitudinal () control; (c) flight path with start and end points; (d) range to the pole. Notice the platform state transition to state 4 and the subsequent control polarity reversal to move the platform away from the infrastructure.

VII Conclusions

This paper presented a full theoretical analysis of expected sensor performance and constrained platform behaviour for the task of assisted control of a small multirotor platform performing aerial inspection of fixed energy infrastructure. We show how the theoretical analysis informs the design of the sensor rig and the choice of optical sensor placement. Using the measurements from the onboard optical sensors, the relative position between the platform and the asset (the electrical pole in our case) is estimated and used to autonomously adjust the human operator's inputs to ensure safe separation.

The development time required from acquisition to implementation and successful flight results was less than three weeks, owing to the appropriate modelling and simulation. No incidents occurred during flight, including no control failures, component failures or inadequate sensing, highlighting the importance of the theoretical analysis during the development stages.

Acknowledgement

This work has been supported by the Cooperative Research Centre for Spatial Information (CRC-SI), whose activities are funded by the Business Cooperative Research Centres Programme.
This work is part of the joint CRC-SI, Ergon and QUT project Aerial Robotics for Close Proximity Infrastructure Inspection - Project Number 2.24.

References

  • [1] K. Máthé and L. Busoniu, “Vision and control for uavs: A survey of general methods and of inexpensive platforms for infrastructure inspection,” in Sensors, 2015.
  • [2] K. Máthé, L. Busoniu, L. Barabás, C.-I. Iuga, L. Miclea, and J. Braband, “Vision-based control of a quadrotor for an object inspection scenario,” 2016 International Conference on Unmanned Aircraft Systems (ICUAS), pp. 849–857, 2016.
  • [3] M. Stokkeland, K. Klausen, and T. A. Johansen, “Autonomous visual navigation of unmanned aerial vehicle for wind turbine inspection,” 2015 International Conference on Unmanned Aircraft Systems (ICUAS), pp. 998–1007, 2015.
  • [4] O. Araar and N. Aouf, “Visual servoing of a quadrotor uav for autonomous power lines inspection,” 22nd Mediterranean Conference on Control and Automation, pp. 1418–1424, 2014.
  • [5] T. M. Lam, H. W. Boschloo, M. Mulder, and R. van Paassen, “Artificial force field for haptic feedback in uav teleoperation,” IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, vol. 39, pp. 1316–1330, 2009.
  • [6] A. M. Brandt and M. B. Colton, “Haptic collision avoidance for a remotely operated quadrotor uav in indoor environments,” 2010 IEEE International Conference on Systems, Man and Cybernetics, pp. 2724–2731, 2010.
  • [7] X. Hou, R. E. Mahony, and F. Schill, “Representation of vehicle dynamics in haptic teleoperation of aerial robots,” 2013 IEEE International Conference on Robotics and Automation, pp. 1485–1491, 2013.
  • [8] I. Sa, S. Hrabar, and P. I. Corke, “Inspection of pole-like structures using a visual-inertial aided vtol platform with shared autonomy,” in Sensors, 2015.
  • [9] J. Meyer, A. Sendobry, S. Kohlbrecher, U. Klingauf, and O. von Stryk, “Comprehensive simulation of quadrotor uavs using ros and gazebo,” in 3rd Int. Conf. on Simulation, Modeling and Programming for Autonomous Robots (SIMPAR), 2012, p. to appear.