Autonomous Driving without a Burden: View from Outside with Elevated LiDAR

08/26/2018
by   Nalin Jayaweera, et al.

The current autonomous driving architecture places a heavy signal-processing burden on the graphics processing units (GPUs) in the car. This translates directly into battery drain and lower energy efficiency, crucial factors in electric vehicles. The burden stems from the high bit rate of the captured video and other sensing inputs, chiefly the Light Detection and Ranging (LiDAR) sensor mounted on top of the car, which is an essential feature in today's autonomous vehicles. LiDAR is needed to obtain a high-precision map from which the vehicle's AI can make relevant decisions. However, this is still quite a restricted view from the car, and the same holds for cars without a LiDAR, such as Tesla's. Existing LiDARs and cameras have limited horizontal and vertical fields of view, and in all cases it can be argued that precision is lower given the smaller map generated. This also results in the accumulation of a huge amount of data, on the order of several TB per day, whose storage becomes challenging. If we are to reduce the effort for the processing units inside the car, we need to uplink the data to the edge or to an appropriately placed cloud. However, the required data rates, on the order of several Gbps, are difficult to meet even with the advent of 5G. Therefore, we propose a coordinated set of LiDARs placed outside the vehicle at an elevation, which can provide an integrated view with a much larger field of view (FoV) to a centralized decision-making body; that body then sends the required control actions to the vehicles at a lower bit rate on the downlink and with the required latency. Our calculations, based on industry-standard equipment from several manufacturers, show that this is not just a concept but a feasible system that can be implemented.
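To give a sense of the Gbps-scale uplink and multi-TB-per-day storage figures mentioned above, the short Python sketch below runs a back-of-the-envelope estimate. The sensor parameters (points per second, bits per point, camera bit rates) are illustrative assumptions, not values taken from the paper; actual numbers depend on the specific equipment.

```python
# Rough estimate of raw per-vehicle sensing load (assumed, illustrative parameters).

POINTS_PER_SECOND = 2_400_000    # assumption: high-end 360-degree automotive LiDAR
BITS_PER_POINT = 48              # assumption: x, y, z coordinates plus intensity/timestamp
CAMERA_BITRATE_BPS = 4 * 500e6   # assumption: four raw camera feeds at ~500 Mb/s each

lidar_bps = POINTS_PER_SECOND * BITS_PER_POINT
total_bps = lidar_bps + CAMERA_BITRATE_BPS

print(f"LiDAR alone:      {lidar_bps / 1e6:.0f} Mb/s")
print(f"LiDAR + cameras:  {total_bps / 1e9:.2f} Gb/s")

# Storage accumulated over one day of continuous sensing.
SECONDS_PER_DAY = 24 * 3600
tb_per_day = total_bps * SECONDS_PER_DAY / 8 / 1e12
print(f"Raw data per day: {tb_per_day:.1f} TB")
```

Under these assumptions the combined stream sits in the low-Gbps range and accumulates tens of TB per day, which is consistent with the abstract's claim that continuous uplinking is difficult even over 5G and motivates moving the sensing itself off the vehicle.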

