GAN Path Finder: Preliminary results

08/05/2019 ∙ by Natalia Soboleva, et al. ∙ Higher School of Economics

2D path planning in a static environment is a well-known problem, and one of the common ways to solve it is to 1) represent the environment as a grid and 2) perform a heuristic search for a path on it. At the same time, a 2D grid closely resembles a digital image, which suggests an appealing idea: treat the problem as an image generation task and solve it utilizing recent advances in deep learning. In this work we make an attempt to apply a generative neural network as a path finder and report preliminary results, convincing enough to claim that this direction of research is worth further exploration.



1 Introduction

Grids composed of blocked and free cells are commonly used to represent the static environment of a mobile agent. They appear naturally in game development [22] and are widely used in robotics [4], [24]. When the environment is represented by a grid, heuristic search algorithms, e.g. A* [11], are typically used for path planning. These algorithms iteratively explore the search space guided by a heuristic function such as the Euclidean or octile distance. When obstacles block the way, such guidance leads to unnecessary exploration of the areas surrounding the obstacles. This issue can be mitigated to a certain extent by weighting the heuristic [2], using random jumps [15] or skipping portions of the search space by exploiting grid-induced symmetries [10]. At the same time, keeping in mind that grids closely resemble digital images and that convolutional neural networks have recently demonstrated tremendous success in various image processing tasks, an orthogonal idea can be proposed: to plan entirely in the image domain using state-of-the-art deep learning techniques, thus avoiding unnecessary state-space exploration by construction. In this work we leverage this idea and report preliminary results on path finding as image generation. We describe a generative adversarial network that generates a path image in response to a context input, i.e. an image of the grid-map with start and goal. We demonstrate empirically that the proposed model can successfully handle previously unseen instances.

2 Related work

(a) grid
(b) ground truth
(c) input
(d) output
Figure 1: a) A grid and a path on it; b) corresponding image; c) image-input for the generator; d) image-output of the generator. For b), c), d) image pixels are depicted as squares for illustrative purposes.

The line of research most relevant to our work is deep learning (DL) for path/motion planning. A wide variety of works, e.g. [5], [3], focus on motion planning for manipulators. Unlike these works, we are interested in path planning for mobile agents. DL approaches to navigation in 3D environments that rely on first-person imagery input are considered in [9], [25], [16]. In contrast to these papers, we focus on 2D path finding when a top-down view, i.e. a grid-map, is given as the input. In [23] such a task was considered, among others, when Value Iteration Networks (VINs) were presented. Evaluation was carried out on small grids; we are targeting notably larger maps. In [14] it was shown that VINs are “often plagued by training instability, oscillating between high and low performance between epochs” and suffer from other drawbacks. Instead, Gated Path Planning Networks (GPPNs) were proposed, but, again, the evaluation was carried out only on small grids. The most recent work on VINs [21] proposes a pathway to using them on larger maps via an abstraction mechanism: the value iteration network is applied to coarser feature maps. Unlike VINs or other approaches based on reinforcement learning (e.g. [18]), this work i) is not rooted in modeling path planning as a Markov decision process, and ii) considers quite large grids as the indecomposable input.

3 Problem statement

Consider a 2D grid composed of blocked and unblocked cells with two distinguished cells – start and goal. A path on the grid is a sequence of adjacent unblocked cells connecting start and goal (we assume 8-connected grids in this work) – see Figure 1a. The task of the path planner is to find such a path. Often the length of the path is the cost objective to be minimized, but in this work we do not aim at finding shortest paths.

Commonly the task is solved by converting the grid to an undirected graph and searching for a path on this graph. Instead, we represent the grid as an image and, given that image, generate a new one that implicitly depicts the path – see Figure 1.
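For reference, the conventional search-based pipeline can be sketched as follows. This is a minimal illustration of A* on an 8-connected grid assuming unit step costs and a Chebyshev heuristic; the exact supervisor configuration (e.g. octile costs) is not specified here.

```python
import heapq

def astar_8(grid, start, goal):
    """A* on an 8-connected grid; grid[y][x] == 1 means blocked.
    Cells are (x, y) tuples. Returns a list of cells from start to
    goal, or None when no path exists. Unit step costs for brevity."""
    h = lambda p: max(abs(p[0] - goal[0]), abs(p[1] - goal[1]))  # Chebyshev
    open_heap = [(h(start), 0, start, None)]
    came_from, g_score = {}, {start: 0}
    while open_heap:
        _, g, cur, parent = heapq.heappop(open_heap)
        if cur in came_from:          # already expanded with a better g
            continue
        came_from[cur] = parent
        if cur == goal:               # walk parents back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        x, y = cur
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                nxt = (x + dx, y + dy)
                if (dx, dy) == (0, 0):
                    continue
                if not (0 <= nxt[0] < len(grid[0]) and 0 <= nxt[1] < len(grid)):
                    continue
                if grid[nxt[1]][nxt[0]]:
                    continue
                if g + 1 < g_score.get(nxt, float("inf")):
                    g_score[nxt] = g + 1
                    heapq.heappush(open_heap, (g + 1 + h(nxt), g + 1, nxt, cur))
    return None
```

The paths returned by such a search serve as the ground truth that the generative model is trained against.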

4 GAN Path Finder

4.0.1 Grid-to-image conversion

To convert the grid to an image we use a straightforward approach: we distinguish 3 classes of cells – free, blocked and path (incl. start and goal) – and assign a unique color to each of them. Although we do not use loss functions based on pixel distance later on, we intuitively prefer this distance to be maximal between the free pixels and the path pixels. Thus, free cells become white pixels of the grayscale image, path cells (including start and goal) – black, blocked cells – gray (as depicted in Figure 1b,c,d).
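This encoding can be sketched in a few lines. The exact gray level used for obstacles is our assumption; the text only fixes white for free, black for path and an intermediate gray for blocked.

```python
import numpy as np

# Grayscale encoding: free = white, path (incl. start/goal) = black,
# blocked = gray, so that free and path pixels are maximally far apart.
FREE, BLOCKED, PATH = 255, 128, 0   # 128 for obstacles is an assumption

def grid_to_image(grid, path_cells=()):
    """grid[y][x] == 1 means blocked; path_cells is an iterable of (x, y)."""
    img = np.full((len(grid), len(grid[0])), FREE, dtype=np.uint8)
    img[np.asarray(grid, dtype=bool)] = BLOCKED   # paint obstacles
    for x, y in path_cells:                       # paint path on top
        img[y, x] = PATH
    return img
```

Omitting `path_cells` produces the input image (start and goal only would be passed); supplying the A* path produces the ground-truth image.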

4.0.2 Types of images

Three types of images depicting grids and paths are to be distinguished. First, the image that depicts the grid with only start and goal locations – see Figure 1c. This is the input. Second, the generated image that is the output of the neural network we are about to construct – see Figure 1d. Third, the ground truth image that is constructed by rendering the input image with the A* path on it – see Figure 1b. Ground truth images are extensively used for training, i.e. they serve as examples of what “good” images look like. At the same time, from the path-finding perspective we should not penalize output images that differ from the corresponding ground-truth image but still depict a correct path from start to goal. We will cover this aspect later on.

4.0.3 Architectural choices

Convolutional neural networks (CNNs) [13] are the natural choice when it comes to image processing tasks. As there may exist a few feasible paths on a given grid, we prefer to use Generative Adversarial Nets (GANs) [6] for path planning, as we want our network to learn some general notion of a feasible path rather than forcing it to construct the path in exactly the same way the supervisor (e.g. A*) does. A GAN is composed of 2 sub-networks – a generator and a discriminator. The generator tries to generate the path-image, while the discriminator is a classifier that tries to tell whether the generated image is “fake”, i.e. does not come from the distribution of ground-truth images. Both networks are CNNs trained simultaneously.

In this work we are, obviously, not interested in generating images that depict some random grid with a correct path on it, but rather the image that depicts the solution of the given path finding instance (encoded as the input image). This leads us to the so-called conditional GANs (cGANs) [17]. Conditioning here means that the output of the network should be conditioned on the input (and not just generated out of random noise as in vanilla GANs). We experimented with two prominent cGAN architectures – Context Encoders (CE) [19] and pix2pix [12]. CE is a model tailored to image inpainting, i.e. filling a missing region of the image with pixels that look correct. In our case we considered all free-space pixels as missing and made CE inpaint them. Pix2pix is a more general cGAN that is not tailored to a particular generation task but rather solves the general “from pixels to pixels” problem. In path finding we want some free pixels to become path pixels. We experimented with both CE and pix2pix, and the latter showed more convincing results, which is quite foreseeable as pix2pix is a more complex model utilizing residual blocks. Thus, we chose pix2pix as the starting point for our model.

(a) ground truth
(b) generated
(c) ground truth
(d) generated
Figure 2: Examples of generated solutions for which the MSE metric is misleading, as the generated paths do not match the ground truth but are still feasible.

4.0.4 Generator

The general structure of the generator is borrowed from pix2pix [12]. The difference is that in the original work the authors suggested two slightly different types of generator: one with separated encoder/decoder parts and residual blocks in the bottleneck, and one utilizing skip-connections through all layers, following the general shape of the “U-Net” architecture [20]. We experimented with both variants, and the latter appeared to be more suitable for the considered task.

The original pix2pix generator’s loss function is a weighted sum of two components. The first component penalizes the generator based on how close the generated image is to the ground truth one; the second – based on the discriminator’s response (adversarial loss). We kept the second component unchanged, while we modified the first one to be the cross-entropy rather than the L1 pixel-distance. The rationale is that in the considered case we prefer not to generate a color for each pixel but rather to classify whether it belongs to the “free”, “blocked” or “path” class. This turned out to provide a substantial gain in solution quality.
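The per-pixel classification view can be made concrete with a small NumPy sketch of the 3-class cross-entropy term. This stands in for the framework’s built-in loss; the weighting against the adversarial term is omitted, and the generator producing per-class logits (rather than a single grayscale channel) is our reading of the setup.

```python
import numpy as np

def per_pixel_cross_entropy(logits, target):
    """logits: (C, H, W) unnormalized scores for the 3 classes
    (free / blocked / path); target: (H, W) integer class ids.
    Returns the mean per-pixel cross-entropy, used here in place
    of pix2pix's L1 pixel distance."""
    z = logits - logits.max(axis=0, keepdims=True)          # stabilize softmax
    log_probs = z - np.log(np.exp(z).sum(axis=0, keepdims=True))
    h, w = target.shape
    # pick the log-probability of the correct class at every pixel
    picked = log_probs[target, np.arange(h)[:, None], np.arange(w)[None, :]]
    return -picked.mean()
```

Unlike L1 on pixel intensities, this loss does not care how far apart the class colors are; misclassifying a path pixel as free costs the same as misclassifying it as blocked.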

(a) pix2pix
(b) + cross-entropy
(c) GAN-finder
Figure 3: Generator and Discriminator losses (in blue and orange respectively) on the grids with 20% of rectangular obstacles.

4.0.5 Discriminator

The task of the discriminator is to detect whether the generated image comes from the distribution of the ground-truth images or not. We opt to focus the discriminator only on the path component of the image, i.e. to train it to detect fake paths; thus we modify the input of the discriminator to be a one-channel image which contains path pixels only. Such a “simplification” is evidently beneficial at the learning phase, as otherwise the discriminator’s loss converges too fast and has no impact on the loss of the generator (see Figure 3 on the left). Another reason for the discriminator to focus on the path apart from obstacles is that displacements of path pixels (e.g. putting them inside obstacles) are penalized by the supervised part of the generator (i.e., via the cross-entropy loss); the discriminator should rather detect how well the sequence of cells resembles the path pattern in general (are all cells adjacent, are there no loops/gaps, etc.). Such an approach also naturally aligns with the idea that there may exist a handful of equivalent and plausible (and even optimal) paths from start to goal, while the ground truth image depicts only one of them.

In contrast to [12], we achieved the best performance when training the discriminator unconditionally (without using the input image as a condition). Implementing a gradient penalty using the Wasserstein distance [8] for training the discriminator also yielded better results.

4.0.6 Image post-processing

To make sure that all obstacles remain in place, we transfer all blocked pixels from the original image to the generated one. Another technique we use is gap-filling. We noticed that a generated path often misses some segments. We use the Bresenham line-drawing algorithm [1] to draw them. If the line segment would cross an obstacle, the gap remains.
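This gap-filling step can be sketched as follows. `fill_gap` is a hypothetical helper name, and the detection of which dangling path endpoints to connect is omitted; the sketch only shows the draw-unless-blocked rule described above.

```python
def bresenham(p0, p1):
    """Integer-grid line from p0 to p1 inclusive (all-octant Bresenham)."""
    (x0, y0), (x1, y1) = p0, p1
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx, sy = (1 if x0 < x1 else -1), (1 if y0 < y1 else -1)
    err, cells = dx + dy, []
    while True:
        cells.append((x0, y0))
        if (x0, y0) == (x1, y1):
            return cells
        e2 = 2 * err
        if e2 >= dy:
            err += dy; x0 += sx
        if e2 <= dx:
            err += dx; y0 += sy

def fill_gap(path_pixels, blocked, a, b):
    """Connect two dangling path endpoints a, b with a Bresenham segment;
    if the segment crosses an obstacle, leave the gap as-is."""
    segment = bresenham(a, b)
    if any(p in blocked for p in segment):
        return path_pixels            # gap remains
    return path_pixels | set(segment)
```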

4.0.7 Success metrics

In the image domain, one of the most common metrics used to measure how well a model generates images is the per-pixel mean-squared error (MSE). The path finding scenario is different in the sense that generated path pixels may be placed differently than in the ground truth image, so MSE will be high even though the result is plausible, as it simply depicts some alternative path – see Figure 2. We have already accounted for this fact by forcing the discriminator to focus on the path structure in general rather than on path pixels being placed exactly as in the ground truth image. We now want to account for it at test time, so we introduce an additional metric called “gaps”, which measures how many gaps are present in the generated path before post-processing. Finally, after an attempt to remove the gaps, we count the result as a “success” if none remain and the path truly connects start with goal (thus the path finding instance is solved).
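The “success” check then amounts to a connectivity test over the path pixels of the post-processed image. A minimal sketch (extraction of path pixels from the grayscale image is omitted):

```python
from collections import deque

def is_success(path_pixels, start, goal):
    """True when the path pixels form an 8-connected chain of cells
    linking start to goal, i.e. the instance counts as solved."""
    if start not in path_pixels or goal not in path_pixels:
        return False
    frontier, seen = deque([start]), {start}
    while frontier:                      # BFS restricted to path pixels
        x, y = frontier.popleft()
        if (x, y) == goal:
            return True
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                nxt = (x + dx, y + dy)
                if nxt in path_pixels and nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return False
```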

5 Experimental evaluation

5.0.1 Dataset

We evaluated the proposed GAN on grids representing outdoor environments with obstacles, as such grids were used in previous research on applying deep learning techniques to path finding (see [23], [14] for example). Start was always close to the left border, goal – to the right (as any grid can be rotated to fit this template). We used two approaches to placing obstacles. In the first approach, rectangular obstacles of random size and orientation were put at random positions on the grid until the obstacle density reached 20 % (we also used maps with 30 % density for evaluation but not for learning). In the second approach, obstacles of rectangular, diamond and circular shape of random size were put on the map until a randomly chosen obstacle density was reached. Each dataset was divided into train – 75 %, test – 15 % and validation – 10 %. For each input we built a ground-truth image depicting the path.

               20 % density            30 % density            Random
               MSE     Gaps    Success MSE     Gaps    Success MSE     Gaps    Success
pix2pix [12]   0.0336  19.54   65%     0.13    27.22   57%     0.2     27.56   32%
GAN-finder     0.014   1.4916  91.4%   0.164   2.71    73.1%   0.045   3.142   65.1%
Table 1: Success metrics on different types of data.
(a) 20 % density
(b) 30 % density
(c) Random data evaluation
Figure 4: From left to right: (1) input, (2) ground truth, (3) baseline pix2pix, (4) pix2pix using cross-entropy, (5) GAN-finder output, (6) GAN-finder output post-processed.

5.0.2 Evaluation

Figure 3 illustrates the training process for (a) the baseline pix2pix GAN, (b) pix2pix trained with cross-entropy and (c) GAN-finder. It is clearly seen that GAN-finder converges much faster and in a more stable fashion. Examples of the paths found by various modifications of the considered GANs are shown in Figure 4.

Success metrics for the 20 % density maps (the test part of the dataset) are shown in Table 1 on the left. We also evaluated the trained model on the 30 % density maps (maps of such density were not part of the training) – the results are shown in Table 1 in the middle. Observing these results, one can claim that GAN-finder adapts well to unseen instances with the same obstacle density (success rate exceeds 90 %). It is also capable of adapting to unseen instances of higher obstacle density. Although in this case the success rate is notably lower (around 73 %), it does not degrade to near-zero values, which means that the model has indeed learned some general path finding techniques. One may also notice that GAN-finder significantly reduces the number of gaps (by up to an order of magnitude) compared to the baseline. The results achieved on the random dataset are shown in the right column of Table 1. Again, GAN-finder performs much better than the baseline. At the same time, the success rate is now lower compared to the 20 % density maps. We believe this is due to the more complex structure of the environments. One possible way to increase the success rate in this case might be to use more samples for training; another – to use attention/recurrent blocks as in [7]. Overall, the results of the evaluation are convincing enough to claim that the suggested approach, i.e. using GANs for path planning, is worth further investigation.

6 Conclusion and Future Work

In this work we suggested a generative adversarial network – GAN-finder – capable of solving path finding problems via image generation. The obtained results, preliminary in nature, demonstrate that the suggested approach has potential for further development, as the neural net has clearly learned certain path finding basics. We plan to extend this work in the following directions.

First, we want to study GAN-finder’s behaviour in more complex domains (e.g. ones populated with complex-shaped obstacles, with highly varying obstacle densities, etc.). We also need to fairly compare our method with other learning-based approaches, such as Value Iteration Networks [23].

Second, we wish to further enhance the model to make it a more versatile tool for path planning. One such enhancement is modifying the generator’s loss in line with the idea that multiple paths from start to goal are possible. E.g., we could ignore path pixels when computing the cross-entropy loss and introduce an extra semi- or non-supervised loss component for them in addition to (or completely substituting) the discriminator’s feedback. Another appealing option is to add attention/recurrent blocks to the model. This would provide a capability to successively refine the path in complex domains, e.g. ones densely populated with obstacles of non-trivial shapes. It might also help in scaling to larger maps.

Finally, we can use GAN-finder not just as a path planner on its own, but rather as a complementary tool for conventional and well-established heuristic search algorithms, e.g. A*, providing them with a more informed heuristic.

6.0.1 Acknowledgements

This work was supported by the Russian Science Foundation (Project No. 16-11-00048)

References

  • [1] Bresenham, J.E.: Algorithm for computer control of a digital plotter. IBM Systems Journal 4(1), 25–30 (1965)
  • [2] Ebendt, R., Drechsler, R.: Weighted A* search – unifying view and application. Artificial Intelligence 173(14), 1310–1342 (2009)
  • [3] Eitel, A., Hauff, N., Burgard, W.: Learning to singulate objects using a push proposal network. In: Proc. of the International Symposium on Robotics Research (ISRR 2017). Puerto Varas, Chile (2017)
  • [4] Elfes, A.: Using occupancy grids for mobile robot perception and navigation. Computer 22(6), 46–57 (1989)
  • [5] Finn, C., Levine, S.: Deep visual foresight for planning robot motion. In: Proceedings of The 2017 IEEE International Conference on Robotics and Automation (ICRA 2017). pp. 2786–2793 (2017)
  • [6] Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial nets. In: Advances in Neural Information Processing Systems 27. pp. 2672–2680 (2014)
  • [7] Gregor, K., Danihelka, I., Graves, A., Rezende, D.J., Wierstra, D.: DRAW: A recurrent neural network for image generation. In: Proceedings of the 32nd International Conference on Machine Learning (ICML 2015) (2015)
  • [8] Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V., Courville, A.: Improved training of wasserstein gans. In: Proceedings of the 31st International Conference on Neural Information Processing Systems. pp. 5769–5779. NIPS’17, Curran Associates Inc., USA (2017)
  • [9] Gupta, S., Davidson, J., Levine, S., Sukthankar, R., Malik, J.: Cognitive mapping and planning for visual navigation. In: Proceedings of The 30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017) (2017)
  • [10] Harabor, D., Grastien, A.: Online graph pruning for pathfinding on grid maps. In: Proceedings of The 25th AAAI Conference on Artificial Intelligence (AAAI 2011). pp. 1114–1119 (2011)
  • [11] Hart, P.E., Nilsson, N.J., Raphael, B.: A formal basis for the heuristic determination of minimum cost paths. IEEE transactions on Systems Science and Cybernetics 4(2), 100–107 (1968)
  • [12] Isola, P., Zhu, J.Y., Zhou, T., Efros, A.A.: Image-to-image translation with conditional adversarial networks. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 5967–5976 (2017)
  • [13] Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. Commun. ACM 60, 84–90 (2012)
  • [14] Lee, L., Parisotto, E., Chaplot, D.S., Xing, E., Salakhutdinov, R.: Gated path planning networks. In: Proceedings of the 35th International Conference on Machine Learning (ICML 2018). pp. 2953–2961 (2018)
  • [15] Likhachev, M., Stentz, A.: R* search. In: Proceedings of the 23rd AAAI Conference on Artificial Intelligence (AAAI-2008) (2008)
  • [16] Mirowski, P., Grimes, M., Malinowski, M., Hermann, K.M., Anderson, K., Teplyashin, D., Simonyan, K., Zisserman, A., Hadsell, R., et al.: Learning to navigate in cities without a map. In: Advances in Neural Information Processing Systems. pp. 2419–2430 (2018)
  • [17] Mirza, M., Osindero, S.: Conditional generative adversarial nets. CoRR abs/1411.1784 (2014), http://arxiv.org/abs/1411.1784
  • [18] Panov, A.I., Yakovlev, K.S., Suvorov, R.: Grid path planning with deep reinforcement learning: Preliminary results. Procedia computer science 123, 347–353 (2018)
  • [19] Pathak, D., Krähenbühl, P., Donahue, J., Darrell, T., Efros, A.A.: Context encoders: Feature learning by inpainting. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 2536–2544 (2016)
  • [20] Ronneberger, O., Fischer, P., Brox, T.: U-net: Convolutional networks for biomedical image segmentation. In: Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F. (eds.) Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015. pp. 234–241. Springer International Publishing, Cham (2015)
  • [21] Schleich, D., Klamt, T., Behnke, S.: Value iteration networks on multiple levels of abstraction. In: Proceedings of Robotics: Science and Systems (RSS-2019)
  • [22] Sturtevant, N.R.: Benchmarks for grid-based pathfinding. IEEE Transactions on Computational Intelligence and AI in Games 4(2), 144–148 (2012)
  • [23] Tamar, A., Wu, Y., Thomas, G., Levine, S., Abbeel, P.: Value iteration networks. In: Advances in Neural Information Processing Systems 29 (NIPS 2016). pp. 2154–2162 (2016)
  • [24] Thrun, S.: Learning occupancy grid maps with forward sensor models. Autonomous robots 15(2), 111–127 (2003)
  • [25] Zhu, Y., Mottaghi, R., Kolve, E., Lim, J.J., Gupta, A., Fei-Fei, L., Farhadi, A.: Target-driven visual navigation in indoor scenes using deep reinforcement learning. In: 2017 IEEE international conference on robotics and automation (ICRA 2017). pp. 3357–3364 (2017)