Image Space Potential Fields: Constant Size Environment Representation for Vision-based Subsumption Control Architectures

09/26/2017 ∙ by Jeffrey Kane Johnson, et al.

This technical report presents an environment representation for use in vision-based navigation. The representation has two useful properties: 1) it has constant size, which can enable strong run-time guarantees to be made for control algorithms using it, and 2) it is structurally similar to a camera image space, which effectively allows control to operate in the sensor space rather than employing difficult, and often inaccurate, projections into a structurally different control space (e.g. Euclidean). The presented representation is intended to form the basis of a vision-based subsumption control architecture.




I Introduction

Agents often encounter large and varying numbers of entities within a scene while performing navigation, which can be problematic for conventional planning and control approaches that reason explicitly over all entities. Consider, for instance, busy roadways or crowded sidewalks or convention halls: conventional planning approaches in these scenarios can suffer significant performance degradation as entity count increases [1, 2, 3]. While more scalable approaches exist, they often have strict requirements on system dynamics [4] or observability of agent policies [5].

To help address the scalability problem, this report builds on [6] to present a constant size environment representation for vision-based navigation. The representation is modeled after a camera image space, which is chosen because cameras are a ubiquitous sensor modality and image space is typically discretized and of constant size. The proposed representation allows planning and control routines to reason almost directly in sensor space, thereby avoiding often complex and noisy transformations to and from a more conventional Euclidean space representation. This new representation can help vision-based mobile robots navigate complex multi-agent systems efficiently, and can aid in satisfying the strict resource requirements often present in real-time, safety critical, and embedded systems [7]. Further, the representation enables additional guidance information to be added in while making guarantees about the preservation of hard constraint (Definition 9) information. This property makes it ideal for use in vision-based subsumption control architectures.

Fig. 1: Top: Illustration of an object in the image plane (left), and its Image Space Potential field (right). Bottom: Information from multiple sources (left) can be combined into a single field (right). Black boxes represent ROIs.

II Background

The approach in this report is based in part on potential fields [8]. These fields represent attractive and repulsive forces as scalar fields over a robot’s environment that, at any point, define a force acting on the robot that can be interpreted as a control command. As noted in the literature, this type of approach is subject to local minima and the narrow corridor problem [9], particularly in complex, higher-dimensional spaces [10]. Randomized approaches can partially overcome these difficulties [11, 12], while extensions to globally defined navigation functions [13, 14, 15] can theoretically solve them but are often difficult to use in practice. This report uses low-dimensional potential fields, limiting the possibility of narrow corridors, and designs the fields such that additional information can be added in to help break out of minima [16]. Because the potential fields are modeled after an image space, controlling on them can be accomplished effectively through visual servoing techniques [17, 18, 19, 20].

In order to define values for the potential fields, this approach draws on a wealth of related works in optical flow and monocular collision avoidance, notably [21, 22, 23, 24, 25, 26, 27]. The intuition of these approaches is that a sequence of monocular images provides sufficient information to compute time-to-contact (Definition 8), which informs an agent about the rate of change of proximity.

The fields in this representation are intended for use in a subsumption architecture [28] where additional information about the system can be layered in while hard constraint information is guaranteed to be preserved. This closure property is proven to hold under a restricted input space with specially constructed potential transform functions.

III Definitions

Definition 1.

A potential field (also artificial potential field) is a field of artificial forces that attract toward desired locations and repel from undesirable locations.

Definition 2.

An affinely extended potential field is a potential field whose potential function ranges over the affinely extended reals $\overline{\mathbb{R}} = \mathbb{R} \cup \{-\infty, +\infty\}$. A positive (or negative) affinely extended potential field is defined over $\overline{\mathbb{R}}$ but contains only positive (or only negative) infinite values. (For technical reasons, the potential fields are intentionally not defined over the projectively extended reals $\mathbb{R} \cup \{\infty\}$.)

Definition 3.

An image space potential function is a mapping from an image pixel value to a tuple $(p, \dot{p}) \in \overline{\mathbb{R}}^2$ consisting of the potential value $p$ and its time derivative $\dot{p}$.

Definition 4.

An asymptotic region is a closed set $A$ of points of the field such that the potential function takes an infinite value at every element of $A$. (This is related to the notion of a natural boundary in complex analysis.)

Definition 5.

Encroachment is the reduction in minimum proximity between two or more objects in a workspace, as measured by a metric $d$.

Definition 6.

Guided collision avoidance describes the strategy of choosing goal-directed motions from the space of collision avoiding controls in order to navigate while satisfying collision constraints.

Definition 7.

Navigation or multi-agent navigation describes the general process of navigating a space possibly shared with multiple other agents.

Definition 8.

Time-to-contact ($\tau$) is the predicted duration of time remaining before an object observed by a camera will come into contact with the image plane of the camera. The time derivative of $\tau$ is written $\dot{\tau}$.

Definition 9.

A hard constraint is a system constraint that an agent is never allowed to violate.

Definition 10.

A soft constraint is a system constraint that an agent only prefers not to violate.

IV The Image Space Potential Field

Image Space Potential (ISP) fields are affinely extended potential fields that are modeled after image planes. As with image planes, these potential fields can be discretized, and regions of interest (ROIs) can be defined for them. In this report it is assumed that the fields are origin and axis aligned with the agent’s camera image plane, and that they have the same ROIs (Figure 1).

IV-A Representing Hard & Soft Constraints

In potential field representations, the distinction between hard and soft constraints can be made in terms of the limiting value of the field as the robot approaches a state that would violate the constraint. Over states where a hard constraint would be violated, the limiting value of the field can be infinite, such that no reward can overwhelm the cost. Over states where a soft constraint would be violated, the limiting value can be finite, such that a reward must reach some value before the robot chooses to violate the constraint.

In order for the ISP field representation to be useful it must incorporate this notion of constraints, and it must maintain that notion through summation operations. It is shown below that sums of ISP fields do maintain this information through the use of asymptotic regions, and that they behave as expected so long as the following requirements are met:

  1. All ISP fields involved in a summation must have like affine extensions, i.e., the fields must all be either positively or negatively affinely extended

  2. ISP fields may only be multiplied by scalars in $\mathbb{R}_{>0}$, i.e., finite and positive

  3. ISP fields may only be elementwise multiplied by scalar fields whose values all lie in $\mathbb{R}_{>0}$

These properties ensure that operations on ISP fields are closed and that asymptotic regions are preserved. The following lemmas prove this:
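These closure requirements can be illustrated numerically. The sketch below is illustrative only (it is not from the report): it models negative affinely extended ISP fields as NumPy arrays, with `-inf` marking an asymptotic region, and shows that summation with a finite, positive scalar weight preserves that region.

```python
import numpy as np

# Illustrative only: negative affinely extended ISP fields modeled as
# NumPy arrays, with -inf marking an asymptotic region (hard constraint).
hard = np.array([[0.0, -np.inf],
                 [-1.0, -2.0]])       # -inf encodes a hard constraint cell
guidance = np.array([[-0.5, -0.25],
                     [0.0, -0.75]])   # finite-valued soft guidance layer

# Requirement 2: scale only by finite, positive scalars.
combined = hard + 0.5 * guidance

assert np.isneginf(combined[0, 1])   # the asymptotic region persists
assert np.isfinite(combined[1, 1])   # finite cells stay finite
```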

Lemma 1.

Let $P_1$ and $P_2$ be ISP fields, and let $A$ be an asymptotic region in $P_1$. Define element $a$ as any arbitrary point of $A$, and define scalar value $c \in \mathbb{R}_{>0}$. In either field $P_1 + P_2$ or $cP_1$, $A$ will persist as an asymptotic region.


It suffices to show that the property holds for a single element of the field. Let $p$ be the value of any element of $P_1$ taken from $A$; without loss of generality, assume the fields are negative affinely extended, so $p = -\infty$. Then, by the definition of addition and multiplication on the affinely extended real number line:

$$-\infty + q = -\infty \;\; \forall q \in \overline{\mathbb{R}} \setminus \{+\infty\}, \qquad c \cdot (-\infty) = -\infty \;\; \forall c \in \mathbb{R}_{>0}$$

The restrictions that ISP fields have like affine extensions and that scalars belong to $\mathbb{R}_{>0}$ guarantee that these conditions hold. Thus, any point in $A$ with infinite potential in $P_1$ will have infinite potential in $P_1 + P_2$ or $cP_1$. ∎

Lemma 2.

Let $P_1$ and $P_2$ be ISP fields. Define elements $a_1$ and $a_2$ as any arbitrary points in $P_1$ and $P_2$, respectively, and define scalar value $c \in \mathbb{R}_{>0}$. For all $a_1$ and $a_2$, it will be that $P_1(a_1) + P_2(a_2) \in \overline{\mathbb{R}}$ and $c \cdot P_1(a_1) \in \overline{\mathbb{R}}$.


It suffices to show that the property holds for a single element of the field. When both values are finite, addition is closed. As noted in Lemma 1, when either value, or both, is infinite, addition is also closed, provided $P_1$ and $P_2$ are both either positive or negative affinely extended fields. Similar arguments apply to scalar multiplication. However, if the scalar $c$ were allowed to range to infinite values, the following would result in an indeterminate form for a field value $p$, and closure would be broken:

$$c \cdot p = \infty \cdot 0 \quad \text{when } c = \infty,\; p = 0$$

Thus, addition over strictly positive or strictly negative affinely extended fields, and element-wise multiplication by scalars in $\mathbb{R}_{>0}$, are both closed operations. ∎

Note that Lemmas 1–2 do not hold if ISP fields are allowed infinite values of mixed signs, since the sum $-\infty + \infty$ is indeterminate. This is why ISP fields are restricted to only positive or only negative affinely extended potential fields.

IV-B Subsumption through Addition

The results of Lemmas 1–2 are powerful because they imply that the information of arbitrarily many fields can be added together without losing information about hard constraints. Thus, a control architecture using ISP fields can implement subsumption through addition.
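As a hedged sketch of this idea, the hypothetical `subsume` helper below (our own naming, not from the report) layers finite guidance fields onto a hard constraint field by summation; a selector that maximizes potential can then never choose a hard-violating cell, no matter how large the soft reward.

```python
import numpy as np

# Hypothetical sketch of subsumption through addition: finite guidance
# layers are summed onto a hard constraint field, and a selector that
# maximizes potential can never choose a hard-violating (-inf) cell.
def subsume(hard_field, *soft_fields):
    total = np.array(hard_field, dtype=float)
    for layer in soft_fields:
        layer = np.asarray(layer, dtype=float)
        assert np.all(np.isfinite(layer))  # soft layers must stay finite
        total = total + layer
    return total

hard = np.array([-np.inf, -1.0, 0.0])  # cell 0 violates a hard constraint
goal_bias = np.array([5.0, 0.5, 0.2])  # arbitrarily large soft reward
field = subsume(hard, goal_bias)

best = int(np.argmax(field))           # highest-potential cell
assert best != 0                       # never the hard-violating cell
```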

V The Potential Function

The potential function, which maps image pixel values to potential values, can be defined in arbitrary ways: with geometric relations, learned relations, or even heuristic methods. In this report, hard constraint information is derived from a geometric measure over pixels in a temporal image sequence called time-to-contact, or $\tau$. Soft constraint information is represented by user-specified values meant to bias how strongly a chosen control is directed toward a particular goal. The potential function maps these measurement values to a unitless potential value space through the potential transformation defined later in this section.

V-A Obtaining Hard Constraint Values

As noted often in the literature (e.g. [27, 23, 24, 26]), $\tau$ can be computed directly from the motion flow of a scene, which is the vector field of motion in an image due to relative motion between the scene and the camera. Unfortunately, it is typically not possible to measure motion flow directly, so it is usually estimated via optical flow, which is defined as the apparent motion flow in an image plane. Historically this has been measured by performing some kind of matching of, or minimization of differences between, pixel intensity values in subsequent image frames [29, 25, 30]. More recently, deep learning techniques have also been successfully applied to the problem [31].


Assuming some reasonably accurate estimate of the optical flow vector field exists, $\tau$ can be computed directly from it under certain assumptions [23]. A significant advantage of computing $\tau$ from optical flow, in particular dense optical flow, is that the computation is independent of the number of objects in a scene. In other words, given optical flow, the scalability issues of object tracking and segmentation can be avoided. In practice, however, the computation of optical flow can be noisy and error prone, so feature- and segmentation-based approaches can also be used [22, 21]. The idea of these approaches is to compute $\tau$ from the rate of change in detection scale. For a point in time, let $s$ denote the scale (maximum extent) of an object in the image, and let $\dot{s}$ be its time derivative. When the observed face of the object is roughly parallel to the image plane, and under the assumption of constant velocity translational motion and zero yaw or pitch, it is straightforward to show that [32]:

$$\tau = \frac{s}{\dot{s}} \qquad (2)$$
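A minimal sketch of this scale-based estimate follows; the helper name and the finite-difference approximation of $\dot{s}$ are our own, not the report's.

```python
# Sketch of the scale-based time-to-contact estimate, tau = s / s_dot.
# Helper name and finite differencing are illustrative assumptions.
def time_to_contact(scale_prev, scale_curr, dt):
    """Return tau = s / s_dot, with s_dot approximated between two frames."""
    s_dot = (scale_curr - scale_prev) / dt
    if s_dot <= 0.0:
        return float("inf")  # image scale not growing: no predicted contact
    return scale_curr / s_dot

# A detection whose image scale grows from 100 px to 110 px over 0.1 s
# has s_dot = 100 px/s, giving tau = 110 / 100 = 1.1 s to contact.
tau = time_to_contact(100.0, 110.0, 0.1)
assert abs(tau - 1.1) < 1e-9
```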


As shown in [6], scale has a useful invariance property for these types of calculations that can make computations robust to certain types of noise and assumption violations. Lemma 3 demonstrates this:

Lemma 3.

The scale of an object on the image plane is invariant to transformations of the object under SE(2) on a plane parallel to the image plane.


Let $p_1 = (x_1, y_1, z)$ and $p_2 = (x_2, y_2, z)$ be end points of a line segment on a plane in the world space, with the plane parallel to the image plane and the Z axis coincident with the camera view axis. Without loss of generality, assume unit focal length. The instantaneous scale of the line segment in the image plane is given by:

$$s = \frac{\lVert p_1 - p_2 \rVert}{z}$$

Thus, any transformation of the line segment on the plane for which $\lVert p_1 - p_2 \rVert$ and $z$ are constant makes $s$, and thereby $\tau$ and $\dot{\tau}$, independent of the values of $x_1, y_1, x_2, y_2$. By definition, a rigid transformation within the plane (an element of SE(2)) satisfies this condition. ∎

In addition, the time derivative of $\tau$, when available, enables a convenient decision function for whether an agent’s current rate of deceleration is adequate to avoid head-on collision [33]; the classical result is that, under constant deceleration, contact is avoided so long as:

$$\dot{\tau} \geq -0.5$$

Equation 2 allows the $\tau$ computation for whole regions of the image plane at once given a time sequence of labeled image segmentations, while $\dot{\tau}$ enables decisions to be made about the safeness of the agent’s current state.
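Assuming the classical form of Lee's criterion, a minimal decision function might look like the following; the threshold of -0.5 is the standard constant-deceleration result, and `margin` is an illustrative conservatism parameter of our own.

```python
# Sketch of a tau_dot safety check, assuming the classical form of Lee's
# braking criterion (constant deceleration avoids contact when
# tau_dot >= -0.5); `margin` is an illustrative conservatism parameter.
def braking_adequate(tau_dot, margin=0.0):
    return tau_dot >= -0.5 + margin

assert braking_adequate(-0.4)      # closing, but decelerating hard enough
assert not braking_adequate(-0.6)  # current deceleration is insufficient
```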

V-B Obtaining Soft Constraint Values

Whereas hard constraint values are measurements of geometric properties intended to prevent undesired physical interactions in the world, soft constraint values are intended only to bias how control actions are chosen. The potential transformation described in the next section guarantees that soft constraint values can never override hard constraint values, which allows soft constraint values to be chosen arbitrarily. In this report, soft constraint values are assumed to be user-specified.

The following two sections describe how hard and soft constraint values are transformed into the potential space.

V-C The Potential Transformation

Fig. 2: This plot shows a selection of shape parameter settings in Equation 5. The green and blue lines each show one shape parameter varying over a range of values while the other is held fixed. The constraint range and translation are held constant. Best viewed in color.
Fig. 3: This plot shows a selection of shape parameter settings in Equation 7. The green and blue lines each show one shape parameter varying over a range of values while the other is held fixed. The constraint range and translation are held constant. Best viewed in color.

The task of projecting sensor measurements and user-defined bias values into the potential space is accomplished by a potential transformation, which maps pixel-wise measurements carrying some semantic meaning into the unitless potential space. Two types of potential transformation are presented: a hard constraint transform and a soft constraint transform. Without loss of generality, assume for the remainder of this report that all potential fields are negative affinely extended.

The hard constraint transform is intended to map values within a specific range to infinite potential values, and values outside the range to finite potential values. In this way, sensor measurements corresponding to hard constraint violations are mapped into asymptotic regions, which ensures that information about them is preserved during subsumption. At a given time, for a finite input pixel value, a finite constraint range, a y-axis translation value, and finite shape parameters, the hard constraint transform is given in Equation 5 and illustrated in Figure 2. Its time derivative is given in Equation 6.
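Equation 5 itself is not reproduced here, but a crude stand-in conveys the essential behavior: inputs inside the constraint range map to infinite (negative) potential, creating an asymptotic region, while inputs outside remain finite. Function and parameter names are illustrative.

```python
import numpy as np

# Crude stand-in for a hard constraint transform (NOT the report's
# Equation 5, whose shape parameters are not reproduced here): values
# inside the constraint range map to -inf; values outside map to a
# finite translation value.
def hard_transform(x, r_min, r_max, translation=0.0):
    x = np.asarray(x, dtype=float)
    return np.where((x >= r_min) & (x <= r_max), -np.inf, translation)

tau = np.array([0.2, 1.5, 4.0])      # per-pixel time-to-contact, seconds
pot = hard_transform(tau, 0.0, 1.0)  # tau <= 1 s violates the constraint
assert np.isneginf(pot[0])
assert np.isfinite(pot[1]) and np.isfinite(pot[2])
```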


Conversely, the soft constraint transform is intended to map all values into a given finite range. In this way, all user-given bias values are prevented from becoming part of asymptotic regions, which ensures that hard constraint information is not corrupted. At a given time, for a finite input value, a finite range with a midpoint, an x-axis translation value, and finite shape parameters, the soft constraint transform is a parameterized logistic function given in Equation 7 and illustrated in Figure 3. Its time derivative is given in Equation 8.
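A generic parameterized logistic, standing in for Equation 7 (whose exact parameterization is not reproduced here), shows the essential squashing behavior: every input lands strictly inside the finite range, so soft values can never reach an asymptotic region. Names are illustrative.

```python
import numpy as np

# Generic parameterized logistic as a stand-in for the soft constraint
# transform: all inputs are squashed strictly inside (r_min, r_max).
def soft_transform(x, r_min, r_max, x_translation=0.0, slope=1.0):
    x = np.asarray(x, dtype=float)
    return r_min + (r_max - r_min) / (1.0 + np.exp(-slope * (x - x_translation)))

bias = soft_transform(np.array([-20.0, 0.0, 20.0]), -1.0, 1.0)
assert np.all(bias > -1.0) and np.all(bias < 1.0)  # strictly finite range
assert abs(bias[1]) < 1e-12                        # midpoint at translation
```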


The shape parameters in Equations 5 and 7 allow the potential field shape to be manipulated in problem-specific ways. The time derivatives in Equations 6 and 8 are useful when a time derivative of the input pixel value is available. For convenience, a single parameter set is assumed to contain all parameters needed to compute a constraint transform.

VI Vision-Based Subsumption Architecture

This section outlines a subsumption-based control architecture for ISP fields. Subsumption is implemented by formulating the general navigation problem as a guided collision avoidance problem, in which a global guidance controller subsumes a local collision avoidance controller.

The navigation problem can be described as follows:

Problem 1.

Navigation: Let $\mathcal{A}$ be a set of agents navigating along a 2D manifold, and assume that collision is not inevitable in the initial system state. Assume each agent is equipped with cameras, and that each agent has knowledge of the physical dynamic properties of the environment and of other agents. Assume each agent actuates according to a unique decision process, and that each agent may assume with certainty that other agents will prefer to avoid collision and to avoid causing collision. How can an agent navigate toward a goal while remaining collision free?

Problem 1 can be decomposed into a local collision avoidance problem, and a global guidance problem. The local problem is given below:

Problem 2.

Collision Avoidance: Assume an agent $a$ navigating a workspace receives some observation input of the workspace over time. Let $\mathcal{O}$ be the set of objects and agents that does not include $a$. For a distance metric $d$, threshold $\delta$, and a sequence of observations, how can $a$ estimate the rate of change of $d$ such that it can compute controls to maintain $d \geq \delta$?

The metric between any two points is taken as the $\tau$ measure between those points; thus, Problem 2 assumes that maintaining some minimum $\tau$ suffices to ensure collision avoidance, which is often a reasonable assumption [34]. This assumption implies several others: namely, that agents maintain controllability at all times, and that agents react within the $\tau$ horizon. These assumptions can be loosened in a probabilistically rigorous way by exploiting the Safety-Constrained Interference Minimization Principle [35, 36].

Finally, the global guidance controller problem can be defined as:

Problem 3.

Guidance: For a desired goal-directed control $u^*$, a control space metric $m$, and a given feasible control set $U$, choose a control $u \in U$ such that:

$$u = \operatorname*{arg\,min}_{u' \in U} \; m(u', u^*)$$
Problem 3 can be solved by any number of optimization techniques. In particular, if the solution algorithm to Problem 2 produces convex sets, many efficient optimization routines become available.
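For a discretized feasible control set, a nearest-feasible-control selection under the Euclidean metric can be sketched as follows; function and variable names are illustrative, not the report's.

```python
import numpy as np

# Sketch of Problem 3 over a discretized feasible control set: pick the
# feasible control nearest the desired goal-directed control under the
# Euclidean control space metric (names are illustrative).
def guidance_control(u_desired, feasible_controls):
    feasible = np.asarray(feasible_controls, dtype=float)
    dists = np.linalg.norm(feasible - np.asarray(u_desired, dtype=float), axis=1)
    return feasible[int(np.argmin(dists))]

# Desired: hard left at full throttle; the feasible set excludes it, so
# the closest collision-avoiding control is returned instead.
u = guidance_control([1.0, 1.0], [[0.0, 1.0], [0.5, 0.5], [0.9, 0.2]])
assert np.allclose(u, [0.5, 0.5])
```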

VI-A Example Algorithms

Algorithms 1 and 2 address Problems 1 and 2 explicitly, with Problem 3 being straightforward to solve given the input. This solution is intended as a sketch of a control system based on ISP fields, so some details are omitted. The biasing fields in Algorithm 2 are derived from user-provided values.

While Problem 1 is formulated to resemble a ground navigation scenario, ISP fields can be applied to arbitrary environments so long as Algorithms 1 and 2 are modified accordingly.

VII Conclusion

This report presented Image Space Potential (ISP) fields, a general environment representation for vision-based navigation. ISP fields have constant space complexity with respect to the image, which is crucial for ensuring the scalability and bounded running time of algorithms that use them. ISP fields also enable planning and control to occur in a space that is structurally similar to the sensor space, which means that sensor data need not go through what are often difficult and noisy projections into a structurally different planning and control space.

ISP fields are intended to form the foundation of a vision-based subsumption control architecture, such as that described in the previous section. To enable this use, they allow arbitrary amounts of guidance information to be added into the representation while guaranteeing that hard constraint information will not be lost or corrupted in the process. This capability makes the ISP field representation particularly well suited to applying machine learning in safety critical applications, such as automated driving, where it is often impractical for machine learning alone to make such guarantees [37].

An implementation of the data structures and algorithms described in this report is being developed and maintained under an open source license [38]. The implementation is expected to change and grow over time, so for any disparity between the implementation and this document, the implementation should be assumed to be authoritative.

The implementation itself is developed under ROS [39], and ISP field data structures and operations are implemented using OpenCV [40], a highly optimized, industry standard computer vision library.

1:procedure ControlSet()
2:     Let be the list of image column indices
3:     Let map to via erosion along
4:     Let map onto sets of accelerations
5:     for  do
10:     end for
11:     return s
12:end procedure
Algorithm 1 This algorithm addresses Problem 2. Given an ISP field , compute the set of steering and acceleration commands where is a kernel for computing the steering angle to acceleration map at a single horizon line , , are proportional and derivative gains, and parameterizes a soft constraint mapping onto the set of normalized valid longitudinal controls.
1:procedure SDControl()
2:     Let be the current state of
3:     Let be the list of image column indices
4:     Let
5:     Let biasing field from
6:     Let biasing field from
7:     Let
8:     Let min reduce along
9:     Let max-valued column index of
10:      steering angle corresponding to
12:     return
13:end procedure
Algorithm 2 This algorithm addresses Problem 1. For a given ISP field and guidance control from Problem 3, compute the control that safely controls the agent . See Algorithm 1 for descriptions of the other parameters.


  • [1] S. Brechtel, T. Gindele, and R. Dillmann, “Probabilistic mdp-behavior planning for cars,” in 14th International IEEE Conference on Intelligent Transportation Systems, ITSC 2011, Washington, DC, USA, October 5-7, 2011, 2011, pp. 1537–1542.
  • [2] E. Galceran, A. G. Cunningham, R. M. Eustice, and E. Olson, “Multipolicy decision-making for autonomous driving via changepoint-based behavior prediction: Theory and experiment,” Auton. Robots, vol. 41, no. 6, pp. 1367–1382, 2017.
  • [3] F. A. Oliehoek and C. Amato, A Concise Introduction to Decentralized POMDPs, ser. Springer Briefs in Intelligent Systems. Springer, 2016.
  • [4] J. van den Berg, S. J. Guy, M. C. Lin, and D. Manocha, “Reciprocal n-body collision avoidance,” in Robotics Research - The 14th International Symposium, ISRR, August 31 - September 3, 2009, Lucerne, Switzerland, 2009, pp. 3–19.
  • [5] K. E. Bekris, D. K. Grady, M. Moll, and L. E. Kavraki, “Safe distributed motion coordination for second-order systems with different planning cycles,” I. J. Robotic Res., vol. 31, no. 2, pp. 129–150, 2012.
  • [6] A. N. Withheld, “Title withheld,” in Title Withheld, Year Withheld.
  • [7] S. Louise, V. David, J. Delcoigne, and C. Aussaguès, “OASIS project: deterministic real-time for safety critical embedded systems,” in Proceedings of the 10th ACM SIGOPS European Workshop, Saint-Emilion, France, July 1, 2002, 2002, pp. 223–226.
  • [8] O. Khatib, “Real-time obstacle avoidance for manipulators and mobile robots,” in Proceedings of the 1985 IEEE International Conference on Robotics and Automation, St. Louis, Missouri, USA, March 25-28, 1985, pp. 500–505.
  • [9] Y. Koren and J. Borenstein, “Potential field methods and their inherent limitations for mobile robot navigation,” in Proc. IEEE Int. Conf. Robotics and Automation, 1991, pp. 1398–1404.
  • [10] S. M. LaValle, Planning Algorithms. Cambridge University Press, 2006.
  • [11] J. Barraquand and J. C. Latombe, “A monte-carlo algorithm for path planning with many degrees of freedom,” in Proceedings, IEEE International Conference on Robotics and Automation, May 1990, pp. 1712–1717 vol. 3.
  • [12] J. Barraquand and J. Latombe, “Robot motion planning: A distributed representation approach,” I. J. Robotics Res., vol. 10, no. 6, pp. 628–649, 1991.
  • [13] D. Koditschek and E. Rimon, “Robot navigation functions on manifolds with boundary,” Advances in Applied Mathematics, vol. 11, no. 4, pp. 412–442, 1990.
  • [14] D. C. Conner, A. A. Rizzi, and H. Choset, “Composition of local potential functions for global robot control and navigation,” in Proceedings 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003), vol. 4, Oct 2003, pp. 3546–3551.
  • [15] D. E. Koditschek, “Exact robot navigation by means of potential functions: Some topological considerations,” in Proceedings of the 1987 IEEE International Conference on Robotics and Automation, Raleigh, North Carolina, USA, March 31 - April 3, 1987, pp. 1–6.
  • [16] A. A. Masoud, “Solving the narrow corridor problem in potential field-guided autonomous robots,” in Proceedings of the 2005 IEEE International Conference on Robotics and Automation, ICRA 2005, April 18-22, 2005, Barcelona, Spain, 2005, pp. 2909–2914.
  • [17] N. J. Cowan and D. E. Chang, “Toward geometric visual servoing,” EECS Department, University of California, Berkeley, Tech. Rep. UCB/CSD-02-1200, Sep 2002.
  • [18] L. E. Weiss, A. C. Sanderson, and C. P. Neuman, “Dynamic visual servo control of robots: An adaptive image-based approach,” in Proceedings of the 1985 IEEE International Conference on Robotics and Automation, St. Louis, Missouri, USA, March 25-28, 1985, pp. 662–668.
  • [19] R. T. Fomena and F. Chaumette, “Visual servoing from spheres using a spherical projection model,” in 2007 IEEE International Conference on Robotics and Automation, ICRA 2007, 10-14 April 2007, Roma, Italy, 2007, pp. 2080–2085.
  • [20] A. X. Lee, S. Levine, and P. Abbeel, “Learning visual servoing with deep features and fitted q-iteration,” CoRR, vol. abs/1703.11000, 2017.
  • [21] J. Byrne and C. J. Taylor, “Expansion segmentation for visual collision detection and estimation,” in IEEE International Conference on Robotics and Automation, Kobe, Japan, May 12-17, 2009, pp. 875–882.
  • [22] T. Mori and S. Scherer, “First results in detecting and avoiding frontal obstacles from a monocular camera for micro unmanned aerial vehicles,” in 2013 IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, May 6-10, 2013, pp. 1750–1757.
  • [23] T. Camus, D. Coombs, M. Herman, and T. Hong, “Real-time single-workstation obstacle avoidance using only wide-field flow divergence,” in 13th International Conference on Pattern Recognition, ICPR 1996, Vienna, Austria, 25-29 August 1996, pp. 323–330.
  • [24] D. Coombs, M. Herman, T. Hong, and M. Nashman, “Real-time obstacle avoidance using central flow divergence and peripheral flow,” in ICCV, 1995, pp. 276–283.
  • [25] G. Farnebäck, “Two-frame motion estimation based on polynomial expansion,” in Image Analysis, 13th Scandinavian Conference, SCIA 2003, Halmstad, Sweden, June 29 - July 2, 2003, Proceedings, pp. 363–370.
  • [26] R. C. Nelson and Y. Aloimonos, “Obstacle avoidance using flow field divergence,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 11, no. 10, pp. 1102–1106, 1989.
  • [27] Y. Barniv, “Expansion-based passive ranging,” NASA Ames Research Center, Moffett Field, California, Tech. Rep. 19930023159, June 1993.
  • [28] R. A. Brooks, “A robust layered control system for a mobile robot,” IEEE J. Robotics and Automation, vol. 2, no. 1, pp. 14–23, 1986.
  • [29] B. K. P. Horn and B. G. Schunck, “Determining optical flow,” Artif. Intell., vol. 17, no. 1-3, pp. 185–203, 1981.
  • [30] B. D. Lucas and T. Kanade, “An iterative image registration technique with an application to stereo vision,” in Proceedings of the 7th International Joint Conference on Artificial Intelligence, IJCAI ’81, Vancouver, BC, Canada, August 24-28, 1981, pp. 674–679.
  • [31] P. Weinzaepfel, J. Revaud, Z. Harchaoui, and C. Schmid, “Deepflow: Large displacement optical flow with deep matching,” in IEEE International Conference on Computer Vision, ICCV 2013, Sydney, Australia, December 1-8, 2013, pp. 1385–1392.
  • [32] T. Camus, “Real-time optical flow,” Ph.D. dissertation, Brown University, Department of Computer Science, 1994.
  • [33] D. N. Lee, “A theory of visual control of braking based on information about time-to-collision,” Perception, vol. 5, no. 4, pp. 437–459, 1976.
  • [34] R. Lamm, B. Psarianos, and T. Mailaender, Highway Design and Traffic Safety Engineering Handbook, ser. McGraw-Hill Handbooks. McGraw-Hill, 1999.
  • [35] J. Johnson, Y. Zhang, and K. Hauser, “Semiautonomous longitudinal collision avoidance using a probabilistic decision threshold,” in IROS 2011 International Workshop on Perception and Navigation for Autonomous Vehicles in Human Environment, 2011.
  • [36] ——, “Minimizing driver interference under a probabilistic safety constraint in emergency collision avoidance systems,” in 2012 15th International IEEE Conference on Intelligent Transportation Systems, Sept 2012, pp. 457–462.
  • [37] S. Shalev-Shwartz, S. Shammah, and A. Shashua, “Safe, multi-agent, reinforcement learning for autonomous driving,” CoRR, vol. abs/1610.03295, 2016.
  • [38] J. K. Johnson, “Maeve automation development libraries,” 2017.
  • [39] M. Quigley, K. Conley, B. P. Gerkey, J. Faust, T. Foote, J. Leibs, R. Wheeler, and A. Y. Ng, “ROS: an open-source robot operating system,” in ICRA Workshop on Open Source Software, 2009.
  • [40] G. Bradski, “The OpenCV Library,” Dr. Dobb’s Journal of Software Tools, 2000.