A Novel Deep Neural Network Architecture for Mars Visual Navigation

08/25/2018 ∙ by Jiang Zhang, et al. ∙ Beijing Institute of Technology

In this paper, emerging deep learning techniques are leveraged to deal with the Mars visual navigation problem. Specifically, to achieve precise landing and autonomous navigation, a novel deep neural network architecture with double branches and a non-recurrent structure is designed, which can represent both global and local deep features of Martian environment images effectively. By employing this architecture, the Mars rover can determine the optimal navigation policy to the target point directly from original Martian environment images. Moreover, compared with the existing state-of-the-art algorithm, the training time is reduced by 45.8%. Experiments demonstrate that the proposed deep neural network architecture achieves better performance and faster convergence than the existing ones and generalizes well to unknown environments.


I Introduction

Generally, there are three basic phases during Mars exploration missions—entry, descent and landing (EDL) [1], where the landing phase finally determines whether Mars rovers land on the Martian surface safely and precisely. Due to the large uncertainties and dispersions arising from the Martian environment, existing algorithms used for the EDL phase cannot guarantee that Mars rovers land precisely on the target point. Moreover, after landing, Mars rovers are usually required to move to new target points constantly in order to carry out new exploration tasks. Hence, in future Mars missions, autonomous navigation algorithms are essential for Mars rovers to avoid risky areas (such as craters, high mountains and rocks) and reach target points precisely and efficiently (Fig. 1).

Fig. 1: Visual navigation phase after landing.

Currently, one of the most significant methods for Mars navigation is visual navigation [2]. The two main modes of Mars visual navigation are blind drive and autonomous navigation with hazard avoidance (AutoNav) [3]. In blind drive, all commands for the Mars rover are determined by engineers on Earth before the mission starts, which greatly reduces the efficiency and flexibility of exploration missions. By contrast, AutoNav lets the Mars rover execute missions autonomously, which is better aligned with the increasing demand for autonomy and intelligence in future Mars rovers.

Classical algorithms for AutoNav such as Dijkstra [4, 5], A* [6, 7] and D* [8, 9] have been widely researched in the past decades. It is noteworthy that these algorithms have to search the optimal path iteratively on cellular grid maps, which is both time and memory consuming [10]. When map dimensions become large and computation resources are limited, these algorithms may fail to offer the optimal navigation policy. To overcome this dimension explosion problem, intelligent algorithms such as neural networks [11], genetic algorithms [12] and particle swarm algorithms [13] were extended to the planetary navigation problem. However, prior knowledge about the obstacles in maps is a prerequisite for these algorithms to work.

To provide the optimal navigation policy directly from natural Martian scenes, effective feature representation algorithms are required. That is, these algorithms have to understand deep features of the input image, such as the shape and location of obstacles, and then determine the navigation policy according to these deep features. In recent years, Deep Convolutional Neural Networks (DCNNs) have received wide attention in the computer vision field for their superior feature representation capability [14]. Notably, although the training process of DCNNs consumes massive time and computation resources, it is completed offline. When a trained DCNN is applied online to represent deep features of images, it costs little time and computation resources. Therefore, DCNNs have been widely applied in a variety of visual tasks such as image classification [15], object detection [16], visual navigation [17] and robotic manipulation [18].

Inspired by the state-of-the-art performance of DCNNs in the computer vision field, planetary visual navigation algorithms based on deep neural networks have been researched. In [20], a 3-dimensional DCNN was designed to create a safety map for autonomous landing zone detection from terrain data. In [19], a DCNN was trained to predict a rover's position from terrain images for Lunar navigation. Though these algorithms are capable of extracting deep features from raw images, they are unable to provide the optimal policy for navigation directly. To solve this problem, the Value Iteration Network (VIN) was first proposed in [21] to plan paths directly from images and was applied to the Mars visual navigation problem successfully. Then, in [22], the Memory Augmented Control Network was proposed to find the optimal path for rovers in partially observable environments. Both of these networks employ the Value Iteration Module for visual navigation. However, training them takes massive time.

In this paper, an efficient algorithm that determines the optimal navigation policy directly from original Martian images is investigated. Specifically, a novel DCNN architecture with double branches is designed. It can represent both global and local deep features of the input image and thereby achieve precise navigation efficiently. The main contributions of this paper are summarized as follows:

  • Emerging deep learning techniques (deep neural networks) are leveraged to deal with the Mars visual navigation problem.

  • The proposed DCNN architecture with double branches and non-recurrent structure can find the optimal path to the target point directly from global Martian environment images, and prior knowledge about risky areas in the images is not required.

  • Compared with the widely acknowledged Value Iteration Network (VIN), the proposed DCNN architecture achieves better performance on Mars visual navigation and the training time is reduced by 45.8%.

  • The accuracy and efficiency of this novel architecture are demonstrated through experiment results and analysis.

The rest of this paper is organized as follows. Section II provides preliminaries of this paper. Section III describes the novel DCNN architecture for Mars visual navigation. Experimental results and analysis are illustrated in Section IV, followed by discussion and conclusions in Section V.

II Preliminaries

II-A Markov Decision Process

Mars visual navigation can be formulated as a Markov Decision Process (MDP), since the next state of the Mars rover is completely determined by its current state and action. A standard MDP for sequential decision making is composed of the action space $\mathcal{A}$, the state space $\mathcal{S}$, the reward $r$, the transition probability distribution $p$ and the policy $\pi$. At time step $t$, the agent obtains its state $s_t \in \mathcal{S}$ from the environment and then chooses its action $a_t \in \mathcal{A}$ satisfying the distribution $a_t \sim \pi(a_t \mid s_t)$. After that, its state transits into $s_{t+1}$ and the agent receives the reward $r_t$ from the environment, where $s_{t+1}$ satisfies the transition probability distribution $p(s_{t+1} \mid s_t, a_t)$. The whole process is shown in Fig. 2.

Denote the discount factor of reward by $\gamma \in (0,1)$. A policy $\pi_{\theta}$ is defined as optimal if and only if its parameter $\theta$ satisfies

$\theta^{*} = \arg\max_{\theta}\, \mathbb{E}\big[\textstyle\sum_{t=0}^{\infty}\gamma^{t} r_{t} \,\big|\, \pi_{\theta}\big]$   (1)

Fig. 2: Markov decision process.

To measure the expected accumulative reward of state $s$ and state-action pair $(s, a)$ under policy $\pi_{\theta}$, the state value function and the action value function are defined respectively as

$V^{\pi_{\theta}}(s) = \mathbb{E}\big[\textstyle\sum_{k=0}^{\infty}\gamma^{k} r_{t+k} \,\big|\, s_{t}=s, \pi_{\theta}\big]$   (2)

$Q^{\pi_{\theta}}(s, a) = \mathbb{E}\big[\textstyle\sum_{k=0}^{\infty}\gamma^{k} r_{t+k} \,\big|\, s_{t}=s, a_{t}=a, \pi_{\theta}\big]$   (3)

By substituting Eq. (2) into Eq. (1), the following equation is derived:

$\theta^{*} = \arg\max_{\theta}\, \mathbb{E}_{s}\big[V^{\pi_{\theta}}(s)\big]$   (4)

By solving Eq. (4), the optimal policy is determined such that the objective of the MDP is achieved. However, since both the state value function and the action value function are unknown beforehand, Eq. (4) cannot be solved directly. Therefore, the state value function and the action value function have to be estimated in order to solve the MDP problem.

II-B Value Function Estimation

Value iteration is a typical method for estimating value functions and thus addressing the MDP problem [25]. Denote the estimated state value function at step $n$ by $V_{n}(s)$, the estimated action value function at step $n$ by $Q_{n}(s, a)$, and the policy at step $n$ by $\pi_{n}$. Then, the value iteration process can be expressed as

$\pi_{n}(s) = \arg\max_{a} Q_{n}(s, a), \quad Q_{n}(s, a) = r(s, a) + \gamma \textstyle\sum_{s'} p(s' \mid s, a)\, V_{n}(s')$   (5)

$V_{n+1}(s) = \max_{a} Q_{n}(s, a)$   (6)

Through iteration, the policy and the value functions converge to the optimum $\pi^{*}$, $V^{*}$ and $Q^{*}$ simultaneously.
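To make the iteration above concrete, the following is a minimal tabular value-iteration sketch of Eqs. (5)-(6) in Python; the grid size, rewards and random transition model are illustrative assumptions, not the paper's setup.

```python
# Minimal tabular value iteration (Eqs. (5)-(6)); sizes and model are assumed.
import numpy as np

n_states, n_actions, gamma = 16, 4, 0.95
rng = np.random.default_rng(0)
R = rng.uniform(-1.0, 0.0, (n_states, n_actions))            # reward r(s, a)
P = rng.dirichlet(np.ones(n_states), (n_states, n_actions))  # p(s' | s, a)

V = np.zeros(n_states)
for _ in range(200):                       # value iteration loop
    Q = R + gamma * P @ V                  # Q_n(s, a) from Eq. (5)
    V_new = Q.max(axis=1)                  # Eq. (6): V_{n+1}(s) = max_a Q_n(s, a)
    if np.max(np.abs(V_new - V)) < 1e-6:   # stop once the values have converged
        V = V_new
        break
    V = V_new
policy = Q.argmax(axis=1)                  # greedy policy pi(s) = argmax_a Q(s, a)
```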

However, since it is difficult to determine explicit representations of the reward and transition model (especially when the dimension of the state is high), VIN is applied to approximate this process successfully. Specifically, VIN is designed around a Value Iteration Module, which consists of recurrent convolutional layers [21]. As illustrated in Fig. 3, the value function layer is stacked with the reward layer and then filtered by a convolutional layer and a max-pooling layer recurrently. Through this module, navigation information including the global environment and the target point can be conveyed to each state in the final value function layer. Experiments demonstrate that this architecture performs well in navigation tasks. However, it takes a lot of time and computation resources to train such a recurrent convolutional neural network when the number of recurrent iterations becomes large. Therefore, replacing the Value Iteration Module with a more efficient, non-recurrent architecture without losing its excellent navigation performance is the focus of this paper.

Fig. 3: Value iteration module.
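For illustration, a rough PyTorch sketch of a Value Iteration Module in the spirit of Fig. 3 is given below: the value layer is stacked with the reward layer, convolved, and max-pooled over the action channel recurrently. The channel counts and the number of iterations are illustrative assumptions, not the exact VIN configuration.

```python
# Sketch of a recurrent value-iteration module; hyperparameters are assumed.
import torch
import torch.nn as nn

class ValueIterationModule(nn.Module):
    def __init__(self, n_actions=8, k_iterations=36):
        super().__init__()
        self.k = k_iterations
        # maps the stacked [reward, value] layers (2 channels) to one Q channel per action
        self.q_conv = nn.Conv2d(2, n_actions, kernel_size=3, padding=1, bias=False)

    def forward(self, reward_map):               # reward_map: (B, 1, H, W)
        value = torch.zeros_like(reward_map)     # initial value layer V_0
        for _ in range(self.k):                  # recurrent value iteration
            q = self.q_conv(torch.cat([reward_map, value], dim=1))
            value, _ = torch.max(q, dim=1, keepdim=True)   # max over the action channel
        return value
```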

II-C Learning-Based Algorithms

Typically, there exist two learning-based approaches for training DCNNs in value function estimation: Reinforcement learning [25] and Imitation learning [26]. In Reinforcement learning, no prior knowledge is required and the agent can find the optimal policy in a complex environment by trial and error [27]. However, the training process of Reinforcement learning is computationally inefficient. In Imitation learning, when an expert dataset is given, the training process turns into supervised learning with higher data efficiency and fitting accuracy.

Considering that an expert dataset $\mathcal{D} = \{(s_{i}, a_{i}^{*})\}_{i=1}^{M}$ for global visual navigation is available ($a_{i}^{*}$ is the optimal action at state $s_{i}$ and $M$ is the number of samples), the Imitation learning method is applied in this paper to find the optimal navigation policy.

III Model Description

III-A Mars Visual Navigation Model

In this subsection, the formulation of Mars visual navigation as an MDP is presented. More precisely, the state $s_t$ is composed of the Martian environment image, the target point and the current position of the Mars rover at time step $t$. The action $a_t$ represents the moving direction of the Mars rover at time step $t$ (0: east, 1: south, 2: west, 3: north, 4: southeast, 5: northeast, 6: southwest, 7: northwest). After taking action $a_t$, the current location of the Mars rover changes and the state transits into $s_{t+1}$. If the Mars rover reaches the target point precisely, a positive reward is obtained; otherwise, the Mars rover receives a negative reward.
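As a concrete illustration of this formulation, the sketch below encodes the eight moving directions and a transition-and-reward step; the reward magnitudes, the step penalty and the obstacle test are assumptions for illustration only.

```python
# Minimal sketch of the navigation MDP described above; values are assumed.
import numpy as np

# action index -> (row, col) move: 0:E 1:S 2:W 3:N 4:SE 5:NE 6:SW 7:NW
MOVES = [(0, 1), (1, 0), (0, -1), (-1, 0), (1, 1), (-1, 1), (1, -1), (-1, -1)]

def step(position, action, target, risky_mask):
    """One MDP transition: returns (next_position, reward, done).
    Bounds checking is omitted for brevity."""
    r, c = position
    dr, dc = MOVES[action]
    nxt = (r + dr, c + dc)
    if risky_mask[nxt]:            # ran into a crater / rock: negative reward
        return nxt, -1.0, True
    if nxt == target:              # reached the target point: positive reward
        return nxt, +1.0, True
    return nxt, -0.01, False       # small step penalty (assumed)
```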

Furthermore, the output vector of the proposed DCNN at state $s$ is defined as the estimated action distribution $\hat{\pi}(s) \in \mathbb{R}^{8}$. Then the training loss is defined in cross-entropy form with an $\ell_{2}$ norm as [28]

$L(\theta) = -\frac{1}{M}\sum_{i=1}^{M} \mathbf{y}_{i}^{\top} \log \hat{\pi}(s_{i}) + \lambda \lVert \theta \rVert_{2}^{2}$   (7)

where $M$ is the number of training samples, $\mathbf{y}_{i}$ is the one-hot vector [29] of the optimal action $a_{i}^{*}$ and $\lambda$ is the hyperparameter adjusting the effect of the $\ell_{2}$ norm on the loss function.

By minimizing the loss function $L(\theta)$, the optimal parameter $\theta^{*}$ of the navigation policy is determined as follows:

$\theta^{*} = \arg\min_{\theta} L(\theta)$   (8)
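A hedged sketch of one imitation-learning training step under Eqs. (7)-(8) is shown below: cross-entropy between the network's action distribution and the expert's one-hot action, with the $\ell_{2}$ term realized as weight decay. The placeholder network, learning rate and weight-decay value are assumptions and do not represent DB-Net itself.

```python
# Imitation-learning training step sketch; the network is a placeholder.
import torch
import torch.nn as nn

policy_net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 128 * 128, 256),
                           nn.ReLU(), nn.Linear(256, 8))        # 8 actions
criterion = nn.CrossEntropyLoss()                                # cross-entropy term of Eq. (7)
optimizer = torch.optim.Adam(policy_net.parameters(),
                             lr=1e-3, weight_decay=1e-4)         # L2 term via weight decay (lambda assumed)

def train_step(images, expert_actions):
    """images: (B, 3, 128, 128); expert_actions: (B,) integer action labels."""
    logits = policy_net(images)
    loss = criterion(logits, expert_actions)   # expert one-hot targets, Eq. (7)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                           # gradient step toward Eq. (8)
    return loss.item()
```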

III-B The Novel Deep Neural Network Architecture

In this subsection, the novel deep neural network architecture, DB-Net, with double branches for deep feature representation and value function estimation, is elaborated. The principal design idea of DB-Net is to replace the Value Iteration Module of VIN with a non-recurrent convolutional network structure. Firstly, the reprocessing layers of DB-Net compress the input Martian environment image into a smaller feature map. Then, the global deep feature and the local deep feature are extracted from this feature map by branch one and branch two respectively. By fusing the two features, the final deep feature (value function estimation) of the Martian environment image is derived. Then, the optimal navigation policy can be determined through Eq. (5).

The diagram of DB-Net is illustrated in Fig. 4, where Conv, Pool, Res, Fc and S are short for convolutional layer, max-pooling layer, residual convolutional layer, fully-connected layer and softmax layer respectively. More specific explanations of DB-Net are given as follows.

Fig. 4: The diagram of DB-Net.

(1) The reprocessing layers comprise two convolutional layers (Conv-00, Conv-01) and two max-pooling layers (Pool-00, Pool-01). After compressing the original image, the navigation policy is generated area by area instead of point by point (each area covers several pixels of the original image). Thus, the efficiency of visual navigation is greatly enhanced.

(2) Branch one consists of one convolutional layer (Conv-10), three residual convolutional layers (Res-11, Res-12, Res-13), four max-pooling layers (Pool-10, Pool-11, Pool-12, Pool-13) and two fully connected layers (Fc-1, Fc-2). Notably, the residual convolutional layer (Fig. 5) is a kind of convolutional layer proposed in [30], which not only increases the training accuracy of convolutional neural networks with deep feature representations, but also makes them generalize well to testing data. Considering that DB-Net is required to represent deep features of Martian images and to achieve high precision on unknown Martian environment images, residual convolutional layers are employed in DB-Net. The deep feature represented by this branch is a global guidance to the Mars rover, containing abstract information about the global Martian environment and the target point.

(3) Branch two is composed of two convolutional layers (Conv-20, Conv-21) and four residual convolutional layers (Res-21, Res-22, Res-23, Res-24). The deep feature represented by this branch depicts the local value distribution of the Martian environment image with respect to the target point, and acts as a local guidance to the Mars rover.

Fig. 5: Residual convolutional layer

(4) The final deep feature is obtained by fully connecting the global and local deep features through Fc-3, with each output corresponding to the value of one action at the current state. Hence, following Eq. (5), the optimal visual navigation policy is determined.

Compared with VIN, not only is the depth of DB-Net reduced significantly (since it is non-recurrent), but both global and local information of the image is also retained and represented effectively. Detailed parameters of DB-Net are given in TABLE I.

Reprocessing layers Conv-00 kernels with stride 1
Pool-00 kernels with stride 2
(A=12) Conv-01 kernels with stride 1
Pool-01 kernels with stride 2
Branch one Conv-10 kernels with stride 1
Pool-10 kernels with stride 1
Res-11 kernels with stride 1
Pool-11 kernels with stride 2
Res-12 kernels with stride 1
Pool-12 kernels with stride 2
(B=10) Res-13 kernels with stride 1
Pool-13 kernels with stride 1
Fc-1 192 nodes
Fc-2 10 nodes
Branch two Conv-20 kernels with stride 1
Res-21 kernels with stride 1
Res-22 kernels with stride 1
Res-23 kernels with stride 1
(C=10) Res-24 kernels with stride 1
Res-25 kernels with stride 1
Conv-21 kernels with stride 1
Output layers Fc-3 8 nodes
S-1 8 nodes
TABLE I: Detailed parameters of DB-Net
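The following is a simplified double-branch skeleton in the spirit of DB-Net. Since TABLE I omits the kernel dimensions, all channel widths, kernel sizes and the exact number of residual blocks are assumptions; only the overall topology (reprocessing layers, a global branch with pooling and fully connected layers, a local branch of residual blocks, and a fusion layer over 8 actions) follows the description above.

```python
# Simplified double-branch skeleton; layer sizes are assumptions, not TABLE I.
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Residual convolutional layer (Fig. 5): y = relu(x + F(x))."""
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(x + self.conv2(self.relu(self.conv1(x))))

class DoubleBranchNet(nn.Module):
    def __init__(self, n_actions=8, ch=16):
        super().__init__()
        # reprocessing layers: compress the 128x128 input to a 32x32 feature map
        self.reprocess = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        # branch one: global deep feature via residual blocks, pooling and FC layers
        self.branch1 = nn.Sequential(
            ResBlock(ch), nn.MaxPool2d(2), ResBlock(ch), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(ch * 8 * 8, 192), nn.ReLU(), nn.Linear(192, 10))
        # branch two: local deep feature via residual blocks (resolution preserved)
        self.branch2 = nn.Sequential(
            ResBlock(ch), ResBlock(ch), nn.Conv2d(ch, 1, 1), nn.Flatten(),
            nn.Linear(32 * 32, 10))
        # fusion layer (Fc-3 analogue): one value per action, followed by softmax
        self.fuse = nn.Linear(20, n_actions)

    def forward(self, x):                       # x: (B, 3, 128, 128)
        f = self.reprocess(x)
        g, l = self.branch1(f), self.branch2(f)
        return torch.softmax(self.fuse(torch.cat([g, l], dim=1)), dim=1)
```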

IV Experiments and Analysis

In this section, DB-Net and VIN are firstly trained and tested on a Martian image dataset derived from HiRISE [31]. The dataset consists of 10000 high-resolution Martian images, each of which has 7 optimal trajectory samples (generated randomly). The training set and the testing set consist of 6/7 and 1/7 of the dataset respectively. Then, the navigation accuracy and training efficiency of DB-Net and VIN are compared. Finally, a detailed analysis of DB-Net is made through model ablation experiments. More precisely, the following questions are investigated:

  • Could DB-Net provide the optimal navigation policy directly from original Martian environment images?

  • Could DB-Net outperform the best framework—VIN in accuracy and efficiency?

  • Could DB-Net keep its performance after ablating some of its components?

IV-A Experiment Results on Martian Images

In this subsection, the process of training and testing DB-Net and VIN on the Martian image dataset is described. The input image has a size of 128×128 with 3 channels, consisting of the gray image of the original Martian environment, the edge image generated by the Canny algorithm [32] and the target image (Fig. 6). Then, the training accuracy and testing accuracy of DB-Net and VIN are measured as the proportion of optimal actions taken at each step. To compare the navigation performance of DB-Net and VIN, the success rate on both training images and testing images is counted. It is worth noting that a navigation process is considered successful if and only if the Mars rover reaches the target point from the start point without running into any risky areas.

(a) Channel1
(b) Channel2
(c) Channel3
Fig. 6: The input image of DB-Net. (Channel1 is the gray image. Channel2 is the edge image. Channel3 is the target image.)
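A hedged sketch of how the three input channels in Fig. 6 could be assembled is given below: a grayscale Martian image, its Canny edge map and a target-point map. The image size matches TABLE II, while the file path and Canny thresholds are illustrative assumptions.

```python
# Assembling the 3-channel input; thresholds and paths are assumed.
import cv2
import numpy as np

def build_input(image_path, target_rc, size=128):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)      # channel 1: gray image
    gray = cv2.resize(gray, (size, size))
    edges = cv2.Canny(gray, 100, 200)                        # channel 2: Canny edge image
    target = np.zeros((size, size), dtype=np.uint8)          # channel 3: target image
    target[target_rc] = 255                                  # mark the target point
    x = np.stack([gray, edges, target]).astype(np.float32) / 255.0
    return x                                                 # shape: (3, size, size)
```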

As illustrated in Fig. 7, both the training loss and the training error of DB-Net converge faster than those of VIN. After 200 training epochs, DB-Net achieves 96.4% training accuracy and 95.6% testing accuracy, outperforming VIN significantly in precision (as shown in TABLE II). Moreover, compared with VIN, the average time cost of DB-Net in one training epoch is reduced by 45.8%, markedly exceeding VIN in efficiency. Finally, DB-Net achieves a high success rate on both training data and testing data. Remarkably, the Martian environment images in the testing data are totally unknown to DB-Net, since the training data differs from the testing data. Therefore, even if the environment is unknown beforehand, DB-Net can still achieve high-precision visual navigation. By contrast, VIN exhibits a poor success rate, which is less than 80% on testing data.

Examples of successful navigation processes are demonstrated in Fig. 8. It can be seen that the rover avoids craters of varying sizes precisely under the guidance of DB-Net. Furthermore, the trajectories are nearly optimal. It is worth noting that no prior knowledge of the craters is provided, so DB-Net has to understand deep representations of the original Martian images directly. Therefore, the performance of DB-Net is remarkable.
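For clarity, the success-rate criterion used above can be sketched as follows: a run counts as successful only if the rover reaches the target without entering a risky area. The `policy_fn` callback, the episode cap and the reuse of the hypothetical `step` transition sketched in Section III-A are assumptions.

```python
# Success-rate evaluation sketch; helpers and limits are assumed.
def success_rate(episodes, policy_fn, max_steps=200):
    """episodes: list of (start, target, risky_mask) tuples."""
    wins = 0
    for start, target, risky_mask in episodes:
        pos = start
        for _ in range(max_steps):
            pos, reward, done = step(pos, policy_fn(pos, target), target, risky_mask)
            if done:
                wins += reward > 0        # success only if the target was reached
                break
    return wins / max(len(episodes), 1)
```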

(a) Training loss
(b) Training error
Fig. 7: Training results of DB-Net and VIN
Architectures DB-Net VIN
Training accuracy 96.4% 90.0%
Testing accuracy 95.6% 89.8%
Training success rate 96.0% 81.1%
Testing success rate 93.3% 79.4%
Average time cost (each epoch) 52.8s 97.5s
TABLE II: Results on 128x128 Martian image
(a) Successful examples
(b) Failed examples
Fig. 8: Experiments on Martian Images. (Green points are landing points. Blue points are the target points. Navigation trajectories are red.)

IV-B Model Ablation Analysis of DB-Net

In this subsection, model ablation experiments are conducted to test whether DB-Net can keep its performance after some of its components are ablated. Define DB-Net without branch one as B1-Net. Then, derive B2-Net by replacing the residual convolutional layers of B1-Net with normal convolutional layers. As illustrated in Fig. 9 and TABLE III, without global deep features, the navigation accuracy and success rate of B1-Net drop sharply compared with DB-Net. Moreover, with only normal convolutional layers, the training loss and error of B2-Net remain at high levels, making it unable to provide a reliable navigation policy for the Mars rover. Therefore, both the double-branch architecture and the residual convolutional layers make indispensable contributions to the final performance of DB-Net.

(a) Training loss
(b) Training error
Fig. 9: Training results of DB-Net, B1-Net and B2-Net
28x28 grid map DB-Net B1-Net B2-Net
Training accuracy 96.4% 86.2% 13.8%
Testing accuracy 95.6% 85.8% 12.8%
Training success rate 96.0% 63.2% 1.1%
Testing success rate 93.3% 63.2% 1.3%
TABLE III: Results of model ablation experiments

Moreover, to explore the inner mechanism of DB-Net, the final value function layers of DB-Net, B1-Net and VIN are compared in a visualized way. The value function layer estimates the action value distribution over the current Martian image and target point. After visualization, locations close to the target point should be lighter (larger value) while locations far from the target point or near risky areas should be darker (smaller value). As demonstrated in Fig. 10, the value functions estimated by DB-Net coincide with the original Martian images better than those of the other networks. It is clear from Fig. 10 that in the value function layers generated by DB-Net, risky areas are darker and the lighter locations are around the target points. By contrast, B1-Net, without global deep features, cannot estimate the value function as precisely as DB-Net, and VIN also evidently fails to recognize risky areas of the Martian images. Therefore, DB-Net indeed has a remarkable capability of representing deep features and estimating the value distribution of the current Martian environment.

(a) Original Martian images
(b) Value functions estimated by DB-Net
(c) Value functions estimated by B1-Net
(d) Value functions estimated by VIN
Fig. 10: Visualization of value function layers.

V Conclusions

In this paper, a novel deep neural network architecture, DB-Net, with double branches and a non-recurrent structure, is designed for the Mars visual navigation problem. DB-Net is able to determine the optimal navigation policy to the target point directly from original Martian environment images without any prior knowledge. Moreover, compared with the existing best architecture, VIN, DB-Net achieves higher precision and efficiency. Most significantly, the average training time of DB-Net is reduced by 45.8%. In future research, more effective deep neural network architectures will be explored and the robustness of the architecture will be studied further.

VI Acknowledgement

This work was supported by the National Key Research and Development Program of China under Grant 2018YFB1003700, the Beijing Natural Science Foundation under Grant 4161001, the National Natural Science Foundation Projects of International Cooperation and Exchanges under Grant 61720106010, and by the Foundation for Innovative Research Groups of the National Natural Science Foundation of China under Grant 61621063.

References

  • [1] Braun, R., Manning, R.: ‘Mars exploration entry, descent and landing challenges’, Journal of Spacecraft Rockets, 2007, 44 (2), pp.310-323.
  • [2] Matthies, L., Maimone, M., Johnson, A., et al.: ‘Computer Vision on Mars’, Internal Journal of Computer Vision, 2007, 75 (1), pp.67-92.
  • [3] Joseph, C., Arturo, R., Dave, F.: ‘Global path planning on board the Mars exploration rovers’, Aerospace Conference, 2007, pp.1-11.
  • [4] Sakuta, M., Takanashi, S., and Kubota, T.: ‘An image based path planning scheme for exploration rover’, IEEE International Conference on Robotics and Biomimetics, 2011, pp.385-388.
  • [5] Guo, Q., Zhang, Z., Xu, Y.: ‘Path-planning of automated guided vehicle based on improved Dijkstra algorithm’, Control and Decision Conference, 2017, pp.7138-7143.
  • [6] Chiang, C.H., Chiang, PJ., Fei, C.C., et al.: ‘A comparative study of implementing Fast Marching Method and A* search for mobile robot path planning in grid environment: Effect of map resolution’, IEEE Workshop on Advanced Robotics and Its Social Impacts, 2007, pp.1-6.
  • [7] Jeddisaravi, K., Alitappeh, R.J., Guimaraes, F.G.: ‘Multi-objective mobile robot path planning based on A* search’, International Conference on Computer and Knowledge Engineering, 2017, pp.7-12.
  • [8] Ferguson, D., Stentz, A.: ‘Using interpolation to improve path planning: The Field D* algorithm’, Journal of Field Robotics, 2006, 23 (2), pp.79-101.
  • [9] Shi, J., Liu, C., Xi, H.: ‘A framed-quadtree based on reversed D* path planning approach for intelligent mobile Robot ’, Journal of Computers, 2012, 7 (2), pp.464-469.
  • [10] Wooden, D.T.: ‘Graph-based path planning for mobile robots’, thesis, Georgia Institute of Technology, 2006.
  • [11] Bassil Y.: ‘Neural network model for path-planning of robotic rover systems’, International Journal of Science and Technology, 2012, 2 (2), pp.94-100.
  • [12] Zeng, C., Zhang, Q., Wei, X.: ‘Robotic global path-planning based modified genetic algorithm and A* algorithm’, International Conference on Measuring Technology and Mechatronics Automation, 2011, pp.167-170.
  • [13] Kang, H.I., Lee, B., Kim, K.: ‘Path planning algorithm using the particle swarm optimization and the improved Dijkstra algorithm’, Workshop on Computational Intelligence and Industrial Application, 2009, 17 (4), pp.1002-1004.
  • [14] Gu, J., Wang, Z., Kuen, J., et al.: ‘Recent advances in convolutional neural networks ’, arXiv preprint arXiv:1512.07108, 2015.
  • [15] Krizhevsky, A., Sutskever, I., Hinton, G.E.: ‘Imagenet classification with deep convolutional neural networks’, Advances in Neural Information Processing Systems, 2012, pp.1097-1105.
  • [16] Huang, J., Guadarrama, S., Murphy, K., et al.: ‘Speed/accuracy trade-offs for modern convolutional object detectors’, arXiv preprint arXiv:1611.10012, 2016.
  • [17] Zhu, Y., Mottaghi, R., Kolve, E., et al.: ‘Target-driven visual navigation in indoor scenes using deep reinforcement learning’, Proceedings of the International Conference on Robotics and Automation, 2017, pp.3357-3364.
  • [18] Levine, S., Finn, C., Darrell, T., et al.: ‘End-to-end training of deep visuomotor policies’, Journal of Machine Learning Research, 2015, 17 (1), pp.1334-1373.
  • [19] Tanner, C., Roberto, F., Richard, L., et al.: ‘A deep learning approach for optical autonomous planetary relative terrain navigation’, AAS/AIAA Spaceflight Mechanics Meeting, 2017, pp.329-338.
  • [20] Maturana, D., Scherer, S.: ‘3D convolutional neural networks for landing zone detection from LiDAR’, International Conference on Robotics and Automation, 2015, pp.3471-3478.
  • [21] Tamar, A., Wu, Y., Thomas, G., et al.: ‘Value iteration networks’, In Advances in Neural Information Processing Systems, 2016, pp.2146-2154.
  • [22] Khan, A., Zhang, C., Atanasov, N., et al.: ‘Memory augmented control networks’, arXiv preprint arXiv:1709.05706, 2017
  • [23] Bellman, R.: ‘Dynamic programming’, Princeton University Press, 1957.
  • [24] Bertsekas, D.P.: ‘Dynamic programming and optimal control’, Athena Scientific, 4th edition, 2012.
  • [25] Sutton, R.S., Barto, A.G.: ‘Reinforcement learning: An introduction’, MIT Press, 1998.
  • [26] Li, Y.: ‘Deep reinforcement learning: An overview’, arXiv preprint arXiv:1701.07274, 2017.
  • [27] Attia, A., Dayan, S.: ‘Global overview of Imitation Learning’, arXiv preprint arXiv:1801.06503, 2018.
  • [28] Goodfellow, I., Bengio, Y., and Courville, A., et al: ‘Deep Learning’, MIT Press, 2016.
  • [29] Harris, D., Harris, S.: ‘Digital design and computer architecture’, China Machine Press, 2014.
  • [30] He, K., Zhang, X., Ren, S., et al.: ‘Deep residual learning for image recognition’, IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp.770-778.
  • [31] McEwen, S.A., Eliason, M.E., Bergstrom, W.J., et al.: ‘Mars Reconnaissance Orbiter’s High Resolution Imaging Science Experiment (HiRISE)’, Journal of Geophysical Research Planets, 2007, 112(E05S02), pp.1-40.
  • [32] Canny, J.: ‘A Computational Approach To Edge Detection’, IEEE Transaction on Pattern Analysis and Machine Intelligence, 1986, 8(6), pp.679-698.