Qualitative vision-based navigation based on sloped funnel lane concept

08/23/2018 · Mohamad Mahdi Kassir, et al. · Isfahan University of Technology

A new visual navigation method based on the visual teach-and-repeat technique is described in this paper. In this kind of navigation, a robot is first driven along a path while recording a video. Keyframes are extracted from the video; the extracted keyframes are called the visual path, and the interval between each two consecutive keyframes is called a segment. Later, the robot uses these keyframes to follow the desired path autonomously. The funnel lane, proposed by Chen and Birchfield, is a recent method for following visual paths. It requires a single uncalibrated camera and no further computations such as the Jacobian, a homography, or the fundamental matrix: the visual path is followed through qualitative comparisons between feature coordinates. Although experimental results on ground and flying robots show the effectiveness of this method, it has some limitations. It cannot deal with all types of turning, such as rotation in place, and an ambiguity between translation and rotation can in some cases cause the robot to deviate from the desired path. In this paper, we introduce the sloped funnel lane and explain how it overcomes these limitations. Several challenging scenarios conducted on a real ground robot demonstrate this, and the accuracy and repeatability of both methods are compared on two different paths. The results show that the sloped funnel lane is superior.


1 Introduction

The process of determining and following a safe and appropriate path from a starting point to a goal point is called navigation. Various methods using different sensors can perform it. Recently, visual navigation methods have attracted researchers' attention due to the development of powerful processing modules and the expansion of their applications in mobile robots. These methods are used in both ground [4, 5, 6, 9, 10, 15, 20, 21] and flying [11, 12, 13, 14, 19] autonomous robots.

Regardless of the kind of robot, visual navigation methods can be categorized into two types: map-based and map-less visual navigation [1].

Map-based visual navigation methods [18, 20, 21] rely on a model of the environment (a map) in which the robot has to localize itself.

Map-less visual navigation methods do not need such a model to navigate in the environment [16, 17, 22]; the robot depends on the elements observed in the environment to navigate.

Some navigation methods represent the environment as a sequence of images that characterizes the desired path. They are map-less visual navigation methods based on the visual teach-and-repeat technique. Their main advantages are scalability, simple implementation, and not needing to build a global metric map: the images can be gathered easily from an environment, which makes these methods attractive especially for robots with limited memory. On the other hand, owing to the lack of scale and geometric information, following such paths is not an easy task.

In this paper, our navigation system falls into the category of visual teach-and-repeat techniques. In the teaching phase (Fig. 1), the robot is guided along a path while recording a video; keyframes are then extracted from the recorded video to build the visual path. The interval between two consecutive keyframes is called a segment.

Figure 1: Teaching phase (the keyframes are extracted from the recorded video)
Figure 2: Repeating phase

In the repeating phase (Fig. 2), the robot has to follow the visual path autonomously. Usually, one method controls the robot inside a segment of the visual path, and a criterion decides when to switch from one segment to the next, until the last keyframe is reached. Visual servoing is a well-known technique for controlling the robot inside a segment; visual servoing approaches usually require computations such as the Jacobian [3], a homography, or the fundamental matrix [10, 11, 12].

Another approach is the funnel lane, proposed by Chen and Birchfield [5]. The robot follows the path by making qualitative comparisons between the features extracted from the images in the teaching phase and in the repeating phase. The method does not require any computation relating world coordinates to image coordinates. The funnel lane assumes that the optical axis of the attached camera is parallel to the heading direction of the robot. For each feature, a region is determined based on two constraints of that feature. These regions are called funnel lanes, and their intersection forms the combined funnel lane. The robot tries to keep itself inside the combined funnel lane to reach its destination. The funnel lane has been implemented on ground robots [5, 6] and on quadrotors [2, 14, 24].

The standard funnel lane theory has its limitations. It specifies a region within which the robot can follow the visual path, and the robot is controlled by left, right, and straight movement commands. However, the funnel lane provides no information about how much the robot should turn. For this reason, the robot's radius of rotation is pre-set (the translational and rotational speeds are fixed beforehand [5]) and the robot turns with the same radius along the whole path whenever turning is required. Therefore, the robot cannot deal with all turning conditions. In particular, it can never perform a rotation in place, which is important especially in narrow places, because it would not move forward at all. This limitation must be taken into account in the teaching phase so that the visual path remains followable in the repeating phase. In other words, the robot's radius of rotation in the teaching phase should be set with regard to its value in the repeating phase, or vice versa. As a result, the robot is not free to take arbitrary paths in the teaching phase either. In addition, due to this limitation, the robot has difficulty correcting its path when it deviates from the desired one. This shortcoming decreases the robot's maneuverability and limits its movements.

Another limitation is an occasional ambiguity between translation (forward movement) and rotation (turning) inside the funnel lane. This ambiguity can cause the robot to deviate from the desired path, as we explain later. The authors themselves mentioned this ambiguity [5] and mitigated it using odometry information.

In this paper, we introduce sloped funnel lane which does not have these limitations. In sloped funnel lane, the robot is free to take any path with different turning conditions in the teaching phase. As well in the repeating phase, the robot sets the radius of rotation according to the situations it faces. The ambiguity is resolved without using any other sensors. Instead of creating a funnel lane for each feature and intersecting them to form the combined funnel lane, one funnel lane is created by looking at all features together. Also, two slopes based on the whole features are added in one step to the funnel lane. Therefore the proposed method is called sloped funnel lane. One of the slopes is used to determine the radius of rotation and help to reduce the ambiguity between translation and rotation. The other slope is used to keep the robot moving by a balance way throw the funnel lane.

In the rest of this paper, we first introduce the notations and assumptions used throughout. We then discuss how the visual path is created. Section 4 briefly reviews the funnel lane concept and its limitations. Next, we explain the proposed sloped funnel lane and show how it overcomes the limitations of the standard funnel lane. After that, experimental examples are presented in which the proposed sloped funnel lane successfully follows visual paths that the standard funnel lane fails to follow. Finally, we conclude.

2 Notations and assumptions

In visual navigation systems some assumptions must hold: enough light exists in the environment; the scene is mostly static; the environment contains enough texture to extract sufficient features; there is sufficient overlap between consecutive keyframes; and the change of conditions between the teaching and repeating phases does not strongly affect the feature matching process in the repeating phase.
Some notations are used in this paper as follows:

  • $C$ is the current image of the robot.

  • $V^i$ is the video taken from path $i$.

  • $K^i_j$ is the $j$-th keyframe in path $i$.

  • $VP^i = \{K^i_1, \dots, K^i_n\}$ is the set of all keyframes in path $i$.

  • $S^i_j$ is the $j$-th segment in path $i$, i.e., the interval between $K^i_j$ and $K^i_{j+1}$.

  • $F(A)$: features of image $A$.

  • $F_r(A)$: right features of image $A$.

  • $F_l(A)$: left features of image $A$.

  • $M(A, B)$: matched features of image $A$ with image $B$ (in image $A$).

  • $M(B, A)$: matched features of image $B$ with image $A$ (in image $B$). Note that $M(A, B)$ is different from $M(B, A)$ because the coordinates of the matched features in image $A$ are not necessarily the same as the coordinates of the matched features in image $B$.

  • $N(A, B)$ is the number of matched features of image $A$ with image $B$.

  • $\sigma(A, B)$ is the standard deviation of the $x$ coordinates of $M(A, B)$.

  • $\mathrm{StdRatio}(A, B) = \sigma(A, B) / \sigma(B, A)$ is the ratio of the standard deviation of the $x$ coordinates of $M(A, B)$ to the standard deviation of the $x$ coordinates of $M(B, A)$.

  • $\mathrm{ED}(A, B)$ is the Euclidean distance between the median of the coordinates of $M(A, B)$ and the median of the coordinates of $M(B, A)$.

Figure 3 shows a video recorded from path $i$ consisting of frames, the selected keyframes, and segment $S^i_j$, which is the interval between keyframe $K^i_j$ and keyframe $K^i_{j+1}$.

Figure 3: Keyframes are selected from the recorded frames to create a visual path; segment $S^i_j$, as shown, is the interval between keyframe $K^i_j$ and keyframe $K^i_{j+1}$

3 Visual path creation

A robot is manually driven along a path while recording a video, and keyframes are selected from the video. The selected keyframes are called the visual path. To select these keyframes, features of the first frame are detected and then tracked through the video. A keyframe is selected whenever the percentage of successfully tracked features falls below 50 percent [5]. The process is repeated until the end of the video. The successfully tracked features remaining in each segment are stored together with their coordinates, since they are used in the repeating phase.
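As a concrete illustration, the sketch below implements this keyframe-selection loop in Python. The `detect` and `track` callables stand in for the feature detector and tracker (SURF and KLT in our implementation); they, the frame list, and the 50-percent ratio argument are assumptions of the sketch, not part of the original description.

```python
def select_keyframes(frames, detect, track, min_ratio=0.5):
    """Select keyframes along a taught video: track the features of the
    latest keyframe and declare a new keyframe whenever fewer than
    `min_ratio` (here 50%) of them survive. `detect(frame)` returns a
    feature list and `track(prev, cur, feats)` returns the survivors."""
    keyframes = [0]
    feats = detect(frames[0])        # features of the current keyframe
    live = feats                     # features still tracked
    for t in range(1, len(frames)):
        live = track(frames[t - 1], frames[t], live)
        if len(live) < min_ratio * len(feats):
            keyframes.append(t)      # tracked share fell below 50 percent
            feats = detect(frames[t])
            live = feats
    return keyframes
```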

4 Standard funnel lane

The standard funnel lane concept was introduced by Chen and Birchfield [5]. The robot is controlled so that it can reach a destination image using only the images it receives from its attached camera. The camera's optical axis is parallel to the robot's heading and passes through the robot's axis of rotation. In the following, we explain the standard funnel lane; then the motion control based on it is described.

Figure 4: A robot moving on a straight line with an attached camera whose optical axis is parallel to the robot's heading.

Suppose that the robot wants to move from the current location $C$ to a location $D$, and that some fixed landmarks are seen by the robot's camera at both locations, as shown in figure 4. Suppose we have both the current image and the destination keyframe image, and that the origin of the feature coordinates is at the intersection of the optical axis and the image plane. If the robot moves forward on a straight line with the same heading direction as at $D$, the projection of a landmark moves away from the origin of the feature coordinates, from its current horizontal coordinate $u_C$ toward its destination coordinate $u_D$. When the robot reaches the destination, $u_C$ reaches $u_D$. The funnel lane is therefore defined as follows:

Definition 1: A funnel lane of a fixed landmark $L$ and a robot location $D$ is the set of locations $F_{L,D}$ such that, for each $C \in F_{L,D}$, the two funnel constraints are satisfied [5]:

$$|u_C| < |u_D| \quad \text{(constraint 1)}$$
$$\operatorname{sign}(u_C) = \operatorname{sign}(u_D) \quad \text{(constraint 2)}$$

where $u_C$ and $u_D$ are the horizontal coordinates of the image projection of $L$ at locations $C$ and $D$, respectively.

If the robot is on the path toward the destination keyframe with the same heading direction, the funnel lane is as shown in figure 5(a). Note that the region is bounded by two lines, which represent the constraints of the funnel lane; both constraints are satisfied when the robot is inside it. For a right-side feature ($u_D > 0$), the first constraint ($|u_C| < |u_D|$) is violated when the robot exits from the left side, and the second constraint ($\operatorname{sign}(u_C) = \operatorname{sign}(u_D)$) is violated when it exits from the right side. For a left-side feature ($u_D < 0$) the opposite is true.
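The two constraints are cheap to test per feature. A minimal sketch in Python (the function name and signature are ours, not from [5]):

```python
import math

def funnel_constraints_ok(u_c, u_d):
    """Evaluate Definition 1's two funnel constraints for one feature with
    horizontal coordinate u_c in the current image and u_d in the
    destination keyframe (origin on the optical axis)."""
    c1 = abs(u_c) < abs(u_d)                                  # constraint 1
    c2 = math.copysign(1.0, u_c) == math.copysign(1.0, u_d)   # constraint 2
    return c1, c2
```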

If the heading direction of the robot is not the same as that of the destination keyframe, the lines of the funnel lane are rotated by an angle that depends on the angle between the robot's heading and that of the destination keyframe, as shown in figure 5(b).

Figure 5: (a) Funnel lane created when the robot has the same heading angle as the destination; (b) funnel lane created when the robot's heading makes an angle with that of the destination

For each landmark, a funnel lane region is created. By intersecting all funnel lanes, a combined funnel lane is obtained in which the constraints of all features are satisfied. Figure 6 shows the combined funnel lane obtained for two features.

Figure 6: Combined funnel lane created by two features when the robot has the same heading angle as the destination

4.1 Motion control based on standard funnel lane

First, the features of the current image are matched with the features of the keyframe at the beginning of the segment. The matched features are then tracked, and their horizontal coordinates are compared with those of their corresponding features in the destination keyframe. If no constraint of any feature is violated, the robot keeps moving forward, because it is assumed to be inside the combined funnel lane. Whenever constraint 1 of a right-side keyframe feature ($u_D > 0$) is violated, the robot has gone outside the funnel lane from the left side, so it receives a right-turning command; whenever constraint 2 of a right-side feature is violated, the robot has gone outside the funnel lane from the right side, so it receives a left-turning command that brings it back into the funnel lane. If the keyframe feature is on the left side ($u_D < 0$), the directions are reversed. The constraints are checked for each feature, and the final command is the majority command over all features. A sketch of this voting rule follows.
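The sketch below renders the majority vote, reusing `funnel_constraints_ok` from the previous sketch; the vote bookkeeping is an assumption about one reasonable way to take the majority.

```python
def funnel_lane_command(pairs):
    """Majority vote of the standard funnel lane controller (a sketch).
    `pairs` holds (u_C, u_D) tuples for the tracked features."""
    votes = {"forward": 0, "left": 0, "right": 0}
    for u_c, u_d in pairs:
        c1, c2 = funnel_constraints_ok(u_c, u_d)
        if c1 and c2:
            votes["forward"] += 1            # feature sees us inside its lane
        elif u_d > 0:                        # right-side feature
            votes["right" if not c1 else "left"] += 1
        else:                                # left-side feature: reversed
            votes["left" if not c1 else "right"] += 1
    return max(votes, key=votes.get)
```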

4.2 Limitations

Motion control based on the standard funnel lane has the following limitations:

1- Constant radius of rotation

In the funnel lane, the robot moves forward and turns by a fixed amount to the right or to the left depending on the command it receives [5, 6]. Note that the translational and rotational speeds are set beforehand; in other words, the radius of rotation of the robot is fixed in advance. This reduces the maneuverability of the robot: it cannot take an arbitrary path in the teaching phase, and in the repeating phase this reduced maneuverability means it cannot easily correct its direction when it deviates from the desired path, especially in turnings.

2- The ambiguity of translation and rotation

An ambiguity exists between translation (going straight) and rotation (turning) inside the funnel lane itself [5]. Falling inside the funnel lane does not necessarily call for a translation command. To make this clear, consider figure 7, where features exist only on the right side and the destination features lie to the right of the current features. In the first case, a turning causes the destination features to lie on the right side of the current features (figure 7(a)). In the second case, the path is straight and the destination features likewise lie on the right side of the current features (figure 7(b)). In the standard funnel lane, the two constraints are satisfied for all features and the robot is inside the combined funnel lane, so it receives a straight-forward command in both cases. This causes the robot to deviate from the desired path in the case of figure 7(a).

Figure 7: Circles show the positions of the current features and stars show the positions of their corresponding features in the destination keyframe. (a) A left turning causes the destination features to lie on the right side of the current features; (b) a forward movement causes the destination features to lie on the right side of the current features

Destination features that exist on both sides of the image help to narrowly constrain the path of the robot, which is why having features on both sides is necessary in the standard funnel lane [5]. Unfortunately, the ambiguity remains inside the funnel lane, and it is not guaranteed that the matched destination features lie on both sides. In turning conditions, tracked features leave the frame, and the remaining common features between two consecutive keyframes are shifted to the right or to the left side of the image; in other words, the common features in the destination keyframe are shifted. To make this clear, consider figure 8, which shows two consecutive keyframes selected to create the visual path in a turning condition. As can be seen, the remaining features are shifted to the right because a turning to the left has occurred. In addition, in the repeating phase not all features are matched, due to changes of viewpoint, light, etc., and some features are lost due to tracking failure (inside the segment) or moving objects. As a result, especially in turning conditions, the matched destination features are not guaranteed to lie on both sides.

Figure 8: Two consecutive keyframes selected in a left-turning condition: (a) the first keyframe and (b) the next keyframe.

3- No control inside the funnel lane

The robot moves forward until it gets out of the funnel lane; only after getting out does it receive a command that returns it to the funnel lane.

5 Sloped funnel lane

The sloped funnel lane is a method that overcomes the shortcomings of the standard funnel lane. First, we explain the sloped funnel lane; then the motion control based on it is described. After that, we show how the sloped funnel lane overcomes the limitations of the standard funnel lane.

As explained, the standard funnel lane gives no information about the radius of rotation, and there is an ambiguity between translation and rotation. The standard funnel lane relies on the fact that the features move away from the center of the feature coordinates toward the edges of the image when the robot moves on a straight line toward the destination image.

In the standard funnel lane, a funnel lane is created for each feature and the lanes are then combined. However, more information can be extracted by looking at all features together. In straight movements, as seen from the robot's camera, the features not only move away from the center but also move away from each other as the robot moves forward. We can therefore conclude that the ratio of the standard deviation of the $x$ coordinates of all matched features in the current image to the standard deviation of the $x$ coordinates of their corresponding features in the destination image grows as the robot moves forward toward the destination.
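In code, this ratio is a one-liner; the sketch below assumes the matched features' $x$ coordinates are given as arrays:

```python
import numpy as np

def std_ratio(u_current, u_destination):
    """StdRatio(C, D): spread of the matched features' x coordinates in the
    current image over their spread in the destination keyframe; it grows
    toward 1 as the robot approaches the keyframe."""
    return np.std(u_current) / np.std(u_destination)
```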

To take this fact into account, we add slopes to the standard funnel lane. The idea is inspired by the movement of a ball on a sloped surface. If the surface slopes toward the front, the ball rolls forward. If the surface slopes toward the left or right side, the ball rolls to the left or right. Moreover, if the surface slopes toward the front and toward the left/right at the same time, the ball rolls forward while tending to the left/right. Depending on the amounts of the forward and lateral slopes, the ball rolls along different trajectories.

In our case, the ball is the robot and the surface is the sloped funnel lane. The different trajectories correspond to turnings with different radii of rotation; figure 9 shows different trajectories with different radii of rotation when the robot turns to the left. To simplify things, the radius of rotation is specified through the forward slope: a steeper forward slope means a larger radius of rotation. The right and left slopes are used only to determine the direction of the turn, or whether the robot should turn at all. In a nutshell, if there is a right or left slope, the robot turns right or left with the radius specified by the forward slope; otherwise it does not turn.

To define such a surface, we define the slope around the $x$ axis inversely proportional to the standard-deviation ratio, and the slope around the $y$ axis proportional to the difference between the current and destination feature coordinates.

Figure 9: Different trajectories with different radii of rotation when the robot turns to the left

The farther the current image is from the destination keyframe, the larger the slope of the funnel lane around the $x$ axis should be, and it decreases as the robot approaches the destination keyframe. Thus we define this slope inversely proportional to the ratio of $\sigma(C, K^i_{j+1})$ to $\sigma(K^i_{j+1}, C)$:

$$\theta_{pitch} \propto \mathrm{StdRatio}(C, K^i_{j+1})^{-1} = \frac{\sigma(K^i_{j+1}, C)}{\sigma(C, K^i_{j+1})} \qquad (1)$$

In addition, the slope around the $y$ axis depends on the distance between the current features and the destination features: the larger the difference, the larger the slope. This slope is used to control the robot inside the funnel lane. We compute two slopes, one from the right features and one from the left features. The features of the current image are considered right or left according to whether they lie on the right or left side of the destination keyframe. One representative feature is chosen for each side: the median of the right features ($u^r$) and the median of the left features ($u^l$). If only one feature exists on a side, that feature represents the side, and in the absence of right or left features the slope is created from the representative feature of the remaining side alone. The right features create a negative slope around the $y$ axis while the left features create a positive one. The final slope is the sum of the two; it is worth noting that the slopes should be normalized before summing in order to balance the left and right features. So we define the slope around the $y$ axis:

$$\theta_{roll} \propto \overline{\left(u^l_C - u^l_D\right)} + \overline{\left(u^r_C - u^r_D\right)} \qquad (2)$$

where $u^l_C$ and $u^l_D$ are the median coordinates of the left features at location $C$ and of their correspondences at location $D$, respectively; $u^r_C$ and $u^r_D$ are the median coordinates of the right features at location $C$ and of their correspondences at location $D$, respectively; and the bar denotes the per-side normalization.
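The sketch below computes both slopes under stated assumptions: the gains `k_pitch` and `k_roll` and the per-side normalizer (here the magnitude of the destination median) are not specified by equations (1)-(2) and are placeholders of ours.

```python
import numpy as np

def sfl_slopes(u_cur, u_dst, k_pitch=1.0, k_roll=1.0):
    """Pitch and roll slopes of the sloped funnel lane (a sketch).
    u_cur / u_dst: x coordinates of the matched features in the current
    image and the destination keyframe (origin on the optical axis)."""
    u_cur = np.asarray(u_cur, dtype=float)
    u_dst = np.asarray(u_dst, dtype=float)
    # Eq. (1): pitch is inversely proportional to StdRatio(C, D)
    pitch = k_pitch * np.std(u_dst) / np.std(u_cur)
    # Eq. (2): normalized median differences of the left and right features
    roll = 0.0
    for side in (u_dst < 0, u_dst >= 0):          # left side, right side
        if side.any():
            med_c, med_d = np.median(u_cur[side]), np.median(u_dst[side])
            roll += (med_c - med_d) / max(abs(med_d), 1e-6)  # per-side normalization
    return pitch, k_roll * roll       # roll < 0: turn left, roll > 0: turn right
```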

Figure 10 shows an example of summing these two slopes: the sum of the two slopes is positive in figure 10(a) and negative in figure 10(b).

Figure 10: Two examples of summing the slopes contributed by each side of the sloped funnel lane: (a) the sum is positive; (b) the sum is negative.

This slope is used to control the robot inside the funnel lane itself: instead of waiting for the robot to exit the funnel lane, the slope helps keep the robot inside it. These two slopes are added to the funnel lane and, as mentioned, in the sloped funnel lane a single funnel lane is created from all features together. The sloped funnel lane is therefore defined as follows:

Definition 2: A sloped funnel lane (SFL) of a set of fixed landmarks $L$, where some are left landmarks $L^l$ (projected on the left side of the destination keyframe) and the others are right landmarks $L^r$ (projected on the right side of the destination keyframe), at a robot location $D$ is the set of locations $SFL_{L,D}$ such that, for each $C \in SFL_{L,D}$, the following four funnel constraints, applied to the median right feature and the median left feature, are satisfied:

$$|u^r_C| < |u^r_D| \quad \text{(constraint 1)}$$
$$\operatorname{sign}(u^r_C) = \operatorname{sign}(u^r_D) \quad \text{(constraint 2)}$$
$$|u^l_C| < |u^l_D| \quad \text{(constraint 3)}$$
$$\operatorname{sign}(u^l_C) = \operatorname{sign}(u^l_D) \quad \text{(constraint 4)}$$

and the funnel lane slope around the $x$ axis (pitch) is

$$\theta_{pitch} \propto \mathrm{StdRatio}(C, D)^{-1} = \frac{\sigma(D, C)}{\sigma(C, D)}$$

and the slope around the $y$ axis (roll) is

$$\theta_{roll} \propto \overline{\left(u^l_C - u^l_D\right)} + \overline{\left(u^r_C - u^r_D\right)}$$

where $u^l_C$ and $u^l_D$ are the median coordinates of the image projections of $L^l$ at location $C$ and of their correspondences at location $D$, respectively; $u^r_C$ and $u^r_D$ are the median coordinates of the image projections of $L^r$ at locations $C$ and $D$, respectively; and $\sigma(C, D)$ and $\sigma(D, C)$ are the standard deviations of the $x$ coordinates of the matched features of the current image with the destination keyframe at locations $C$ and $D$, respectively.

Figure 11(b) shows the sloped funnel lane obtained when the robot's heading angle is the same as that of the destination keyframe, with a slope around the $x$ axis and no slope around the $y$ axis ($\theta_{pitch} > 0$ and $\theta_{roll} = 0$, meaning a forward movement should happen). Figure 11(a) shows the same conditions but with only a negative slope around the $y$ axis ($\theta_{pitch} \approx 0$ and $\theta_{roll} < 0$, meaning a left turning in place should happen). For the case where the left features are absent, figure 11(c) shows the sloped funnel lane with a negative slope around the $y$ axis (roll) and figure 11(d) shows it with a slope around the $x$ axis (pitch).

In the sloped funnel lane, similar to the standard funnel lane, if the heading direction of the robot is not the same as that of the destination keyframe, the lines of the funnel lane are rotated by an angle equal to the angle between the robot's heading and that of the destination keyframe.

Figure 11: (a) Sloped funnel lane created with a negative slope around the $y$ axis (roll); (b) sloped funnel lane created with a slope around the $x$ axis (pitch). The sloped funnel lane obtained in the absence of left features is shown in (c) with a negative slope around the $y$ axis (roll) and in (d) with a slope around the $x$ axis (pitch)

5.1 Motion control based on sloped funnel lane

The robot moves forward while it is inside a funnel lane with no slope around the $y$ axis. The robot is inside the funnel lane when the four constraints are satisfied. Whenever constraint 1 or constraint 4 is violated, the robot has gone outside the funnel lane from the left side, so it gets a right-turning command; whenever constraint 2 or constraint 3 is violated, the robot has gone outside the funnel lane from the right side, so it gets a left-turning command that keeps it in the funnel lane. While the robot is inside the funnel lane, if the funnel lane has a positive slope around the $y$ axis the robot gets a right command, and if it has a negative slope it gets a left command. Note that in all turning commands the radius of rotation is determined by the slope around the $x$ axis: the smaller this slope, the sharper the robot turns, and vice versa. As the slope around the $x$ axis approaches zero, the radius of rotation in the turning command also approaches zero and the turning becomes closer to a rotation in place.

The motion control based on the sloped funnel lane is presented in algorithm 1.

1:Compute $\theta_{pitch}$ and $\theta_{roll}$ from the matched features
2:if the four constraints are satisfied then ▷ inside SFL
3:     if $\theta_{roll} = 0$ then ▷ zero roll
4:         Move forward
5:     else if $\theta_{roll} < 0$ then ▷ roll is negative
6:         Turn left with the radius set by $\theta_{pitch}$
7:     else if $\theta_{roll} > 0$ then ▷ roll is positive
8:         Turn right with the radius set by $\theta_{pitch}$
9:     end if
10:else
11:     if constraint 1 or constraint 4 is violated then
12:         Turn right
13:     else if constraint 2 or constraint 3 is violated then
14:         Turn left
15:     end if
16:end if
Algorithm 1 Motion control based on sloped funnel lane
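A compact Python rendering of Algorithm 1 follows (a sketch; the dead band `eps` and the returned command/radius encoding are assumptions of ours):

```python
def sfl_command(constraints, pitch, roll, eps=1e-3):
    """Algorithm 1 as a Python sketch. `constraints` is a 4-tuple of
    booleans for constraints 1-4; `eps` is an assumed dead band around
    zero roll."""
    c1, c2, c3, c4 = constraints
    if c1 and c2 and c3 and c4:          # inside the sloped funnel lane
        if abs(roll) <= eps:             # zero roll: keep going straight
            return ("forward", None)
        turn = "left" if roll < 0 else "right"
        return (turn, pitch)             # turning radius set by the pitch slope
    if not c1 or not c4:                 # left the lane from the left side
        return ("right", None)
    return ("left", None)                # left the lane from the right side (2 or 3)
```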

5.2 How the sloped funnel lane overcomes the limitations of the standard funnel lane

The sloped funnel lane deals with the limitations mentioned in section 4.2. We revisit each limitation and explain how the sloped funnel lane handles it.

1- Constant radius of rotation

The radius of rotation is defined in the sloped funnel lane. As explained, the slope around the $x$ axis determines the radius of rotation, which gives the robot more maneuverability. It is free to take any path in the teaching phase, with different turning conditions including rotation in place. In the repeating phase, the robot sets its radius of rotation adaptively, depending on the situation it faces. In addition, if the robot deviates from the path, especially in turnings, it can correct its direction more easily by changing its radius of rotation. For example, in figure 12, suppose the robot starts following the desired path from A and receives a turning command at position B. In figure 12(a) the robot has trouble correcting its direction due to its constant radius of rotation, while in figure 12(b) it corrects its direction easily.

Figure 12: Correcting a deviation at point B: (a) with the standard funnel lane's constant radius of rotation; (b) with the sloped funnel lane's adaptive radius of rotation

2- The ambiguity of translation and rotation

In the sloped funnel lane, a slope around the $x$ axis is added that looks at all features together. This slope helps resolve the ambiguity between rotation and translation: a small slope means a small radius of rotation and hence little translation, and vice versa. For example, in figure 7 the standard funnel lane does not distinguish between the two cases, as shown before, but the slope around the $x$ axis in the sloped funnel lane does distinguish them.

The reason is that the slope around the $x$ axis is inversely proportional to the standard-deviation ratio, which in the first case is closer to 1 than in the second case. In both cases a left command is sent, since no features exist on the left side and the slope around the $y$ axis is negative. But in the first case the robot turns sharply, close to a rotation in place (little translation), while in the second case the turning is close to moving straight forward (little rotation).

As a result, by resolving this ambiguity the sloped funnel lane prevents the robot from deviating from and leaving the desired path.

3- No control inside the funnel lane

In the sloped funnel lane, the slope around the $y$ axis is added. This slope is used to control the robot inside the sloped funnel lane: it helps the robot move in a balanced way through the funnel lane, keeping it inside instead of waiting for it to leave.

6 Keyframe switching criterion

The funnel lane is a method to control a robot between two keyframes, i.e., inside a segment. An important issue is the criterion for switching to the next keyframe. The mean square error (MSE) between the coordinates of the current features and the features in the destination keyframe can be used as a criterion. Chen and Birchfield [6] proposed a method based on the MSE, assuming that the error becomes smaller as the robot moves toward the destination image and keeps decreasing until reaching it. In practice, in our experiments we noticed that this error does not decrease uniformly, due to lost features and imprecise steering. The criterion is tied to the robot's motion, which makes it very sensitive. Figure 13 shows a sample of this error in a real experiment: the error oscillates and many switches happen, because the criterion requires very precise steering. Steering a little more than necessary, or losing some features, prevents the MSE from decreasing.

Figure 13: A sample of MSE error in a real experiment

Another method uses the mean square error together with odometry information to define a switching probability [5]. We prefer to define a criterion based only on the features themselves, without odometry. In [2], switching is based on matching two successive keyframes: the features of the current image are matched with the features in the destination keyframe and with the features in the keyframe after it, and a switch happens whenever the number of matches with the destination keyframe becomes smaller than the number of matches with the following keyframe. Therefore two matchings are required in every cycle to know when to switch.

In our work, a simple method based on the pitch slope of the sloped funnel lane is used. When $\mathrm{StdRatio}(C, K^i_{j+1})$ becomes greater than 1 and the Euclidean distance between the medians of the coordinates, $\mathrm{ED}(C, K^i_{j+1})$, becomes less than a threshold, a switch happens.
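The criterion reduces to one boolean test; the `ed_threshold` value is tuned experimentally and is an assumption of the sketch:

```python
def should_switch(std_ratio, ed, ed_threshold):
    """Keyframe switching criterion of section 6: StdRatio(C, K_next) has
    passed 1 and the median distance ED(C, K_next) fell below a threshold."""
    return std_ratio > 1.0 and ed < ed_threshold
```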

7 Experimental results

Real experiments were conducted on a robot built on a VEX platform [23]. The robot carries an IP camera and sends the images over Wi-Fi to a laptop. Blob features are used in this paper. A well-known blob detection technique is SIFT [8], which uses the difference-of-Gaussians operator to detect features; SURF [7] is a speeded-up version of SIFT that approximates the Gaussian with a box filter, whose convolution can be computed simultaneously for different scales. In our experiments we chose SURF to speed up the navigation algorithm, with a descriptor length of 64; a longer descriptor gives more accuracy but slows down feature matching. For feature tracking, the Kanade-Lucas-Tomasi (KLT) algorithm with the default 31×31 block size is used. The algorithm is executed on the laptop and the commands are sent to the robot for path following. It is implemented in MATLAB 2016 on a VAIO laptop (Core i7, 1.73 GHz, 4 GB RAM). The robot is shown in figure 14. First, the robot is controlled manually from the laptop while recording a video of the traversed path. After that, the visual path is constructed as explained in the previous sections. Then the robot is placed at the same initial point and is controlled by the algorithm running on the laptop to follow the recorded visual path.
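For readers reproducing the pipeline outside MATLAB, the sketch below shows the corresponding detection, matching, and tracking steps in Python with OpenCV. It assumes opencv-contrib-python (SURF is patent-encumbered there; `cv2.SIFT_create()` is a drop-in alternative); the image sources are placeholders.

```python
import cv2
import numpy as np

# SURF with 64-float descriptors, matching the descriptor length above.
surf = cv2.xfeatures2d.SURF_create(extended=False)
matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)

def match_to_keyframe(cur_gray, key_gray):
    """Detect SURF features in both grayscale images and match them,
    returning the matched point coordinates in each image."""
    kp_c, des_c = surf.detectAndCompute(cur_gray, None)
    kp_k, des_k = surf.detectAndCompute(key_gray, None)
    matches = matcher.match(des_c, des_k)
    u_cur = np.float32([kp_c[m.queryIdx].pt for m in matches])
    u_key = np.float32([kp_k[m.trainIdx].pt for m in matches])
    return u_cur, u_key

def klt_track(prev_gray, cur_gray, pts):
    """Track points with pyramidal KLT using a 31x31 window, matching the
    block size used in the paper; returns survivors and a keep mask."""
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, cur_gray, pts.reshape(-1, 1, 2), None, winSize=(31, 31))
    ok = status.ravel() == 1
    return nxt.reshape(-1, 2)[ok], ok
```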

The method used for visual navigation after creating the visual path is presented in algorithm 2.

Figure 14: The robot used to evaluate the proposed navigation method
1:assumed: the visual path $i$ consists of $n$ keyframes $K^i_1, \dots, K^i_n$; the robot starts from segment 1
2:$C$ = capture the current image
3:$j = 1$
4:Detect SURF features of $C$
5:Match features of $C$ with $K^i_{j+1}$
6:$N$ = number of matched features
7:$time = 0$
8:$lost$ = false
9:while $j < n$ and $lost$ = false do
10:     if $\mathrm{StdRatio}(C, K^i_{j+1}) > 1$ and $\mathrm{ED}(C, K^i_{j+1}) < \tau$ then ▷ a switch to the next segment happens
11:         $j = j + 1$
12:         $C$ = capture the current image
13:         Detect SURF features of $C$
14:         Match features of $C$ with $K^i_{j+1}$
15:         $N$ = number of matched features
16:     else ▷ control inside a segment
17:         if $N > N_{min}$ then ▷ sufficient features remain
18:              Track the matched features with KLT
19:              Update the coordinates of the matched features
20:              Control the robot with the sloped funnel lane
21:         else
22:              $time = 0$
23:              while $N < N_{min}$ do ▷ robot deviates or features lost
24:                  $C$ = capture the current image
25:                  Detect SURF features of $C$
26:                  Match features of $C$ with $K^i_{j+1}$
27:                  $N$ = number of matched features
28:                  Stop the robot
29:                  $time = time + 1$
30:                  if $time > T_{max}$ then
31:                       $lost$ = true
32:                       return
33:                  end if
34:              end while
35:         end if
36:     end if
37:end while
38:Stop the robot
Algorithm 2 Visual navigation ($\tau$, $N_{min}$, and $T_{max}$ are experimentally tuned thresholds)
Figure 15: The matched features of the current image with the destination keyframe are shown in green and their corresponding destination features in red; $\mathrm{StdRatio}$(current image, destination keyframe) is shown at the top of the figure

In section 5.2 we showed how the sloped funnel lane outperforms the standard funnel lane: unlike the standard funnel lane, it leaves the robot free to take any path (with different radii of rotation) in the teaching phase.

The following experiments were therefore performed to show the impact of these restrictions on following paths in the repeating phase, even when the robot takes a path with a similar, constant radius of rotation in the teaching phase.

Six practical scenarios are considered to show this. In addition, two paths are chosen to compare the accuracy and the repeatability of our method with those of the standard funnel lane.

First, the visual path is created. Then the robot is placed at the initial point and tries to follow the visual path, once with the sloped funnel lane and again with the standard funnel lane. Figure 15 shows the features in the current image and their corresponding features in the destination keyframe; $\mathrm{StdRatio}$ is also shown at the top of the figure.

7.1 Six practical scenarios

The goal is to evaluate the path-following ability of both algorithms in six challenging scenarios: three indoor and three outdoor. Two of the three indoor scenarios are short and challenging, while the third is an almost straight path. The first is a 9-meter path inside a room with narrow space. The robot is first taught the path and then placed at the same initial point; in the first trial it follows the path with the standard funnel lane and in the second trial with the sloped funnel lane. Figure 16(a) shows the taught path and the paths followed by the robot with each method. The robot was not able to follow the path with the standard funnel lane and hit the chair. The reason is that in the standard funnel lane the radius of rotation is set beforehand and the robot turns with a constant radius: a small deviation from the desired path, or switching later than it should, makes it impossible to correct the direction, especially in such a scenario with narrow space.

The second scenario is another 6-meter path, with one left turning and wide space, where in the repeating phase the robot is placed two meters in front of the teaching-phase initial point. Figure 16(b) shows the paths followed with both methods. Even though the robot constantly receives left commands with the standard funnel lane, it is not able to follow the path because of the displaced starting point. The sloped funnel lane was able to correct the direction because it decreases the radius of rotation, producing a sharper turning command that brings the robot back onto the desired path.

The third indoor path is an almost straight 25-meter corridor, as shown in figure 16(c). The results were very close and both methods followed the path successfully.

Figure 16: (a) The first indoor scenario with narrow space (funnel lane failed); (b) the second scenario with a different initial point in the repeating phase (funnel lane failed); (c) the third indoor scenario with an almost straight path.

We also chose three outdoor scenarios. The first is a parking lot, where the robot is taught to park between two nearby cars, as shown in figure 17(a). Both methods perform equally well, but with the standard funnel lane the robot corrects its direction with difficulty and gets closer to the side of one car, which increases the risk of failure. The second outdoor scenario is a closed-loop path with a dynamic situation: in the teaching phase the robot is driven along a looped path, and in the repeating phase two of the parked cars have left; the ability to follow the path with both methods is then evaluated. Figure 17(b) shows the results of both methods, with the gray cars being the ones that left in the repeating phase. The robot failed to follow the path with the standard funnel lane because many features on one side were lost and the ambiguity caused the robot to deviate and leave the desired visual path. The last outdoor scenario is a path with a wide turning; as shown in figure 17(c), both methods follow it successfully.

Figure 17: (a) The first outdoor parking scenario; (b) the second outdoor scenario, a closed loop in which two cars left in the repeating phase (funnel lane failed); (c) the third outdoor scenario with a wide turning.
Figure 18: The keyframes (1-6) selected to create the visual path that the standard funnel lane fails to follow and the sloped funnel lane follows successfully.
                     standard funnel lane    sloped funnel lane
                     acc. / rep.             acc. / rep.
    sharp turn       3.45 / 0.55             1.31 / 0.51
    almost straight  1.19 / 0.62             1.0  / 0.46

Table 1: Comparison of the accuracy and the repeatability of the standard funnel lane and the sloped funnel lane

7.2 Accuracy and repeatability comparison

The six practical scenarios showed the ability of both methods to follow challenging paths. In this section we compare the accuracy and the repeatability of the two methods, using the comparison procedure proposed by the authors of the funnel lane themselves [5]. Two indoor paths are chosen and the experiment is repeated ten times with each algorithm. The first is a 10-meter path with one sharp left turning in a low-texture indoor environment; figure 18 shows the keyframes selected to create its visual path. The second is a 10-meter, almost straight indoor route. The distance between the final point reached by the robot and the desired final point is measured, and the average Euclidean distance and its standard deviation, which express the accuracy and the repeatability of the algorithms, are calculated with the following equations:

$$acc = \frac{1}{N}\sum_{i=1}^{N} d_i \qquad (3)$$
$$rep = \sqrt{\frac{1}{N}\sum_{i=1}^{N} \left(d_i - acc\right)^2} \qquad (4)$$

where $p^*$ is the desired final point, $p_i$ is the final point reached in trial $i$ of the $N = 10$ trials, and $d_i$ is:

$$d_i = \left\lVert p_i - p^* \right\rVert \qquad (5)$$
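Equations (3)-(5), as reconstructed above, amount to the mean and spread of the per-trial endpoint errors; a small sketch:

```python
import numpy as np

def accuracy_repeatability(final_points, goal):
    """Accuracy (Eq. 3) and repeatability (Eq. 4) from the endpoint
    errors d_i = ||p_i - p*|| (Eq. 5) of repeated trials."""
    d = np.linalg.norm(np.asarray(final_points, float)
                       - np.asarray(goal, float), axis=1)
    acc = d.mean()
    rep = np.sqrt(np.mean((d - acc) ** 2))
    return acc, rep
```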

The results are shown in table 1.

In fact, at the sharp turning the robot fails to follow the path with the standard funnel lane, while the sloped funnel lane was able to follow it successfully in most cases.

It is noteworthy that the sloped funnel lane performs at least as well as the standard funnel lane, and the experiments show that the deficiencies of the standard funnel lane have been resolved.

Recall that in the sloped funnel lane the robot's radius of rotation is assigned adaptively, depending on the situation it faces. The robot can therefore deal with different turning conditions, including rotation in place, and, unlike with the standard funnel lane, it is free to take any path (turnings of any radius) in the teaching phase. To be fair to the standard funnel lane, in these experiments the robot's radius of rotation was kept almost constant and similar in both phases; even so, in some cases the standard funnel lane failed. It struggles with turnings in narrow spaces and with sharp turnings, because in such cases the robot has difficulty correcting its direction due to its constant radius of rotation. This is compounded by the translation/rotation ambiguity, which causes the robot to deviate from the desired path.

Two additional experiments were conducted to demonstrate the effectiveness of the approach: a 30-meter indoor path inside the department and a 70-meter outdoor path on the IUT campus. Figures 19(a) and 19(b) show the results. The most important requirement in these experiments is to satisfy the assumptions mentioned in section 2.

Figure 19: (a) The indoor path and (b) the outdoor path; the sloped funnel lane follows both successfully.

8 Conclusion

In this paper, qualitative visual navigation based on the sloped funnel lane concept was proposed. In the teaching phase, the robot is controlled manually to follow a path; in the repeating phase, it has to follow the desired path autonomously. First, a visual path is created by selecting keyframes from the video taken in the teaching phase. Then, for the repeating phase, the sloped funnel lane concept was introduced, which overcomes some limitations of the standard funnel lane. Unlike the standard funnel lane, the proposed method can deal with different turning conditions, including rotation in place, because the radius of rotation is not fixed beforehand, a restriction that limits the robot's maneuverability. It also reduces the ambiguity between translation and rotation that exists in the standard funnel lane. As a result, a more robust and reliable method than the standard funnel lane was obtained. The limitations of the standard funnel lane were explained in detail and we demonstrated how the proposed sloped funnel lane overcomes them. Moreover, experiments conducted on a real robot showed that our proposed method outperforms the standard funnel lane.

Acknowledgment

The authors would like to thank the Artificial Intelligence Laboratory members for their support.

References

  • (1) F. Bonin-Font, A. Ortiz, and G. Oliver, “Visual Navigation for Mobile Robots: A Survey,” J. Intell. Robot. Syst., 2008.
  • (2) T. Nguyen, G. K. I. Mann, R. G. Gosine, and A. Vardy, “Appearance-Based Visual-Teach-And-Repeat Navigation Technique for Micro Aerial Vehicle,” J. Intell. Robot. Syst. Theory Appl., 2016.
  • (3) D. Burschka and G. Hager, “Vision-based Control of Mobile Robots,” in Proc. IEEE Int. Conf. Robot. Autom., 2001.
  • (4) A. Diosi, A. Remazeilles, S. Šegvić, and F. Chaumette, “Experimental evaluation of an urban visual path following framework,” in IFAC Proceedings Volumes (IFAC-PapersOnline), 2007.
  • (5) Z. Chen and S. T. Birchfield, “Qualitative vision-based path following,” IEEE Trans. Robot., 2009.
  • (6) C. Zhichao and S. T. Birchfield, “Qualitative vision-based mobile robot navigation,” in Proceedings - IEEE International Conference on Robotics and Automation, 2006.
  • (7) H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, “Speeded-Up Robust Features (SURF),” Comput. Vis. Image Underst., 2008.
  • (8) D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” Int. J. Comput. Vis., 2004.
  • (9) J. J. Guerrero, R. Martinez-Cantin, and C. Sagüés, “Visual map-less navigation based on homographies,” J. Robot. Syst., 2005.
  • (10) B. Liang and N. Pears, “Visual navigation using planar homographies,” in Proc. 2002 IEEE Int. Conf. Robot. Autom., 2002.
  • (11) A. Remazeilles and F. Chaumette, “Image-based robot navigation from an image memory,” Rob. Auton. Syst., 2007.
  • (12) S. Šegvić, A. Remazeilles, A. Diosi, and F. Chaumette, “Large scale vision-based navigation without an accurate global reconstruction,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2007.

  • (13) T. Do, L. C. Carrillo-Arce, and S. I. Roumeliotis, “Autonomous flights through image-defined paths,” in International Symposium of Robotics Research (ISRR), 2015.
  • (14) T. Nguyen, G. K. I. Mann, and R. G. Gosine, “Vision-based qualitative path-following control of quadrotor aerial vehicle,” in 2014 International Conference on Unmanned Aircraft Systems, ICUAS 2014 - Conference Proceedings, 2014.
  • (15) E. Royer, M. Lhuillier, M. Dhome, and J.-M. Lavest, “Monocular Vision for Mobile Robot Localization and Autonomous Navigation,” Int. J. Comput. Vis., 2007.
  • (16) H. Chao, Y. Gu, and J. Gross, “A comparative study of optical flow and traditional sensors in UAV navigation,” Am. Control …, 2013.
  • (17) M. V. Srinivasan, “Honeybees as a Model for the Study of Visually Guided Flight, Navigation, and Biologically Inspired Robotics,” Physiol. Rev., 2011.
  • (18) K. Kidono, J. Miura, and Y. Shirai, “Autonomous visual navigation of a mobile robot using a human-guided experience,” in Robotics and Autonomous Systems, 2002.
  • (19) E. Royer, J. Bom, M. Dhome, B. Thuilot, M. Lhuillier, and F. Marmoiton, “Outdoor autonomous navigation using monocular vision,” Proc. IEEE/RSJ Int. Conf. Intell. Robot. Syst., 2005.
  • (20) A. Remazeilles, F. Chaumette, and P. Gros, “3D Navigation Based on a Visual Memory,” in International Conference on Robotics and Automation, 2006.
  • (21) Y. Matsumoto, K. Ikeda, M. Inaba, and H. Inoue, “Visual navigation using omnidirectional view sequence,” in Proceedings 1999 IEEE/RSJ International Conference on Intelligent Robots and Systems. Human and Environment Friendly Robots with High Intelligence and Emotional Quotients (Cat. No.99CH36289), 1999.
  • (22) H. Chao, Y. Gu, and M. Napolitano, “A survey of optical flow techniques for UAV navigation applications,” in 2013 International Conference on Unmanned Aircraft Systems, ICUAS 2013 - Conference Proceedings, 2013.
  • (23) VEX Robotics, http://www.vexrobotics.com, visited in 2018.
  • (24) A. G. Toudeshki, F. Shamshirdar, and R. Vaughan, “UAV Visual Teach and Repeat Using Only Semantic Object Features,” CoRR, 2018.