I Introduction
Robot localization has been researched since the very beginning of robotics. Knowing its position and orientation is essential for an autonomous system in many circumstances, and very often a prerequisite to perform more complex tasks.
The marine domain is no exception. Marine robots are increasingly used to perform a great variety of tasks, ranging from oil & gas applications to defense, from marine biology to underwater archaeology. In most, if not all, of these real-life scenarios, the robot location is an essential piece of information.
There are several challenges in performing underwater localization. The lack of GPS signal is the most evident one. To overcome this, various acoustic-based solutions can be employed, such as Long BaseLine (LBL) acoustic positioning systems. These require deploying acoustic transponders as an aid for the vehicle, which can then compute its location by triangulation from the data received from the transponders. The drawback of this technique is, however, the need to actively deploy external transponders, which causes additional cost, time and logistic challenges. Additionally, the GPS location at the drop-in point might not correspond to the actual position on the seabed, especially in deep sea with strong currents.
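As an illustration of the triangulation step mentioned above, the computation can be sketched as a least-squares solution of the range equations to known transponder positions. This is a generic sketch, not the paper's implementation; the beacon layout and ranges below are hypothetical example values.

```python
# Illustrative least-squares trilateration for LBL acoustic positioning.
# Beacon coordinates and ranges below are hypothetical example values.
import numpy as np

def trilaterate(beacons, ranges):
    """Estimate a 3D position from ranges to known beacons by
    linearizing the sphere equations |x - P_i|^2 = r_i^2 against
    the first beacon and solving in the least-squares sense."""
    P = np.asarray(beacons, dtype=float)
    r = np.asarray(ranges, dtype=float)
    # Subtracting the first sphere equation from the others gives
    # the linear system 2 (P_0 - P_i) . x = r_i^2 - r_0^2 - |P_i|^2 + |P_0|^2
    A = 2.0 * (P[0] - P[1:])
    b = (r[1:] ** 2 - r[0] ** 2
         - np.sum(P[1:] ** 2, axis=1) + np.sum(P[0] ** 2))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Hypothetical seabed transponders (x, y, z in metres) and a true position
beacons = [(0, 0, -50), (100, 0, -55), (0, 100, -45), (100, 100, -60)]
true_pos = np.array([40.0, 60.0, -20.0])
ranges = [np.linalg.norm(true_pos - np.array(p)) for p in beacons]
print(trilaterate(beacons, ranges))  # recovers approximately [40, 60, -20]
```

With noisy ranges, the same least-squares formulation yields the best linear fit; in practice more beacons and an outlier-rejection step would be used.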
Many offshore infrastructures are located in environments which fall into this category.
For this reason, several techniques have been used in the past years to allow an underwater vehicle to determine its location based entirely on the onboard sensor suite.
Geometric approaches based on distance sensors such as sonar were developed, in parallel with the geometric approaches based on laser scanners developed in land robotics.
In recent years there has been substantial interest from the research community in exploring semantic aspects of knowledge representation and their influence on the vehicle's tasks. Generally speaking, robots still lack the high-level abstraction capability typical of humans. This is a complex problem, as it aims to shift the paradigm from sensor processing to a more organized, long-term knowledge structure in robotic systems, with the possibility of augmenting it, reasoning over it and learning from it.
This paper represents a step in this direction, with the use of semantic information in processes traditionally covered only by geometric approaches.
The paper is organized as follows:
 Section II will present the state of the art and the related work
 Section III will present the technical approach that was undertaken
 Section IV will present the results
 Section V will critically discuss limitations and improvements of the proposed approach
II Related Work
AUV localisation is a problem which has been studied from the very early stages of marine autonomy. Already in 1990, Rigaud et al. proposed a system based on sensor fusion to correct dead reckoning [1], using a discrete stochastic-accumulation method. The results were shown post-processing sonar data from a pool. Particle filters, introduced by Gordon et al. in 1993 [2], have been an effective way of representing the evolution of a probability distribution by a set of samples, and have now become one of the standard filtering techniques in robotics. Their adoption in the underwater domain was however slower than in other robotics areas, due to more limiting conditions, such as the reduced computational power available in the pressure hull(s). Kondo et al. proposed a real-time localization method for navigation based on particle filters that can fuse multi-sensor data from image- and acoustic-based profiling systems [3]. Karlsson et al. proposed a particle filter approach for AUV navigation, with a focus on mapping [4]. Silver et al. presented a particle filter merged with scan matching techniques [5]. Used with an approximation of the likelihood of sensor readings based on nearest-neighbor distances, particle filters are able to approximate the probability distribution over possible poses. Wirth et al. addressed the use of particle filters for visual tracking of underwater elongated structures such as cables or pipes [6]. Models for probabilistic tracking are obtained directly from real underwater image sequences. Previous work from some of the authors includes the application of particle filters and Kalman filters in a variety of underwater settings
[7, 8, 9]. Those approaches are mainly geometric: the goal is to determine the robot's pose (location and orientation) based on geometric measurements (or on measurements transformed into geometric ones). In recent years, an emerging area in robotics has been dealing with more cognitive aspects linked to the world representation, which extend the geometric-only aspects. Semantic representation has become an active field of research, shifting the focus from signal to symbol. Semantic information has been used in marine robotics in various scenarios, with the goal of improving situation awareness [10]. Projects focused on persistent autonomy in the marine domain have addressed semantic representation, linking it to robust long-term planning and dynamic replanning [11, 12]. In this work we aim to build on top of the results in knowledge representation and in localization to address semantic localization, an area which is still very challenging, especially in the marine domain. Lim & Sinha proposed an approach to detect semantic features to use in a Kalman filter [13]. Their 2D-to-3D matching approach for recognizing 3D points in the map does not require the construction of an explicit semantic map: semantic information can instead be associated with the 3D points in the reconstruction process and retrieved via recognition during online localization. Other approaches focused on feature detection, using the 3D point cloud and the PCL library [14], or on feature-based scene descriptors from images, using dynamic Bayesian networks
[15]. Other approaches aim to determine the semantic place the robot is in. For example, Villena et al. proposed a multi-modal HRI approach, with a finite set of places where the robot could be [16]. This formulation of the semantic localization problem, however, completely cuts off the geometric dimension, whereas the work proposed here aims to use semantic knowledge as an aid to compute the robot pose in geometric terms. Yi et al. use a particle filter approach exploiting semantic relations among objects to aid the localization process [17], based on spatial relations among objects recognized in camera images. The approach proposed in this paper is similar to the latter in the general structure of the localization approach (without the active component), though there are important differences in the sensor suite, in the way knowledge is represented and, therefore, in the computation of the likelihood function. No significant semantic approach for localization in the marine robotics domain is currently available in the related literature.

III Technical Approach
A particle filter is a Bayes filter that works by representing a probability distribution as a set of samples, as expressed in the following equation:

p(x) \approx \frac{1}{N} \sum_{i=1}^{N} \delta_{x^{(i)}}(x)   (1)

where N represents the number of samples, x^{(i)} is the state of sample i, and \delta_{x^{(i)}} is the impulse function centered in x^{(i)}. The denser the samples in a region, the higher the probability that the current state falls within that region. In principle, in order to maintain a sample (particle) representation of the system state distribution over the time t, the samples should be created from the probability distribution p(x_t \mid z_{1:t}) of the current state x_t given the observation history z_{1:t}. Such a distribution is in general not available in a form suitable for sampling. However, the importance sampling principle ensures that if:

we are able to draw samples from an arbitrarily chosen importance function \pi(x_t), which we can evaluate pointwise, such that \pi(x_t) > 0 wherever p(x_t \mid z_{1:t}) > 0, and

we are able to evaluate pointwise the target distribution p(x_t \mid z_{1:t}),

then it is possible to recover a sampled approximation of p(x_t \mid z_{1:t}) as outlined in the following equation:

p(x_t \mid z_{1:t}) \approx \sum_{i=1}^{N} w^{(i)} \delta_{x^{(i)}}(x_t), \qquad w^{(i)} \propto \frac{p(x^{(i)} \mid z_{1:t})}{\pi(x^{(i)})}   (2)

where x^{(i)} are samples drawn from \pi and w^{(i)} is the importance weight of sample i, which accounts for the mismatch between the target distribution and the importance function. One of the most common particle filtering algorithms is the Sampling Importance Resampling (SIR) filter. A SIR filter incrementally processes the observations z_t and the commands u_t (process evolution) by updating the set of samples representing the estimated distribution p(x_t \mid z_{1:t}, u_{1:t}). This is done by performing the following three steps:
Sampling: The next generation of particles \{x_t^{(i)}\} is obtained from the previous generation \{x_{t-1}^{(i)}\} by sampling from a proposal distribution \pi(x_t \mid x_{1:t-1}, z_{1:t}, u_{1:t}).

Importance Weighting: An individual importance weight is assigned to each particle, according to the following equation:

w_t^{(i)} = \frac{p(z_t \mid x_t^{(i)}) \, p(x_t^{(i)} \mid x_{t-1}^{(i)}, u_t)}{\pi(x_t^{(i)} \mid x_{1:t-1}^{(i)}, z_{1:t}, u_{1:t})}   (3)

The weights account for the fact that the proposal distribution is in general not equal to the true distribution of the successor states.

Resampling: Particles with a low importance weight are typically replaced by samples with a high weight. This step is necessary since only a finite number of particles is used to approximate a continuous distribution. Furthermore, resampling makes it possible to apply a particle filter in situations in which the true distribution differs from the proposal.
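The three steps above can be sketched as follows for a toy one-dimensional state. This is a generic illustration, not the paper's implementation: the motion model, observation model and noise levels are assumptions, and the motion model is used as the proposal distribution, so the weight reduces to the observation likelihood.

```python
# Minimal SIR particle filter sketch for a 1D state (illustrative only;
# motion/observation models and noise levels are assumed, not from the paper).
import numpy as np

rng = np.random.default_rng(0)

def sir_step(particles, u, z, motion_std=0.2, obs_std=0.5):
    """One SIR iteration: sample from the motion model, weight by the
    observation likelihood, and resample proportionally to the weights."""
    # 1. Sampling: propagate each particle through the motion model
    particles = particles + u + rng.normal(0.0, motion_std, size=particles.shape)
    # 2. Importance weighting: with the motion model as proposal,
    #    the weight reduces to the observation likelihood p(z | x)
    w = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)
    w /= w.sum()
    # 3. Resampling: draw a new generation proportional to the weights
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

# Toy run: the true state starts at 0 and moves +1 per step;
# z is a noisy direct measurement of the position
particles = rng.uniform(-10, 10, size=500)
true_x = 0.0
for _ in range(10):
    true_x += 1.0
    z = true_x + rng.normal(0.0, 0.5)
    particles = sir_step(particles, u=1.0, z=z)
print(particles.mean())  # close to the true state (about 10)
```

In a real AUV setting the state would be the full pose and the likelihood would come from sonar or, in the proposed approach, from semantic observations, but the three-step structure is unchanged.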
Moving from a purely geometric approach to a semantic-aided one, the main function that needs modification is the Importance Weighting. In a purely geometric approach, the weight is calculated by comparing arrays of distances, given that every observation from the robot can be translated into the geometry of the surrounding landscape. In the chosen semantic-aided approach, each observation z_t is formed by q objects, each with a relative pose with respect to the robot pose:

z_t = \{o_1, o_2, \ldots, o_q\}   (4)

Calculating the weight of a particle means evaluating how close two sets of observations z_t and \hat{z}_t^{(i)} are, where \hat{z}_t^{(i)} is the observation simulated from particle i. \hat{z}_t^{(i)} can be expressed in the same way as the above-mentioned z_t, assuming m objects observed:

\hat{z}_t^{(i)} = \{\hat{o}_1, \hat{o}_2, \ldots, \hat{o}_m\}   (5)
A mixture of two families of Gaussians over the objects in common between the two observations is employed to calculate a first estimate of the weight. All Gaussians are centered in 0 and evaluated at the residuals between the relative poses of the matched objects.
The calculated value is then adjusted to take into consideration the objects present in one observation but not in the other, through a penalty factor.
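A plausible reading of this weighting step can be sketched as follows. The Gaussian widths, the matching of objects by label, the combination of the position and orientation terms as a product, and the unmatched-object penalty `eps` are all assumptions for illustration, not values taken from the paper.

```python
# A hedged sketch of the semantic weighting step: the Gaussian widths,
# the matching-by-label strategy and the unmatched-object penalty eps
# are assumptions, not taken from the paper.
import math

def semantic_weight(obs, sim, sigma_pos=1.0, sigma_ang=0.3, eps=0.1):
    """Compare the real observation `obs` with the observation `sim`
    simulated from a particle. Each observation maps an object label
    to its relative pose (x, y, yaw) with respect to the robot."""
    common = obs.keys() & sim.keys()
    w = 1.0
    for label in common:
        ox, oy, oyaw = obs[label]
        sx, sy, syaw = sim[label]
        d = math.hypot(ox - sx, oy - sy)               # position residual
        da = (oyaw - syaw + math.pi) % (2 * math.pi) - math.pi  # angle residual
        # two families of zero-mean Gaussians: one for position, one for angle
        w *= math.exp(-0.5 * (d / sigma_pos) ** 2)
        w *= math.exp(-0.5 * (da / sigma_ang) ** 2)
    # penalise objects present in only one of the two observations
    unmatched = len(obs.keys() ^ sim.keys())
    return w * (eps ** unmatched)

# Hypothetical observations: object labels with relative (x, y, yaw) poses
obs = {"pipe": (2.0, 0.5, 0.1), "valve": (4.0, -1.0, 1.2)}
sim = {"pipe": (2.1, 0.4, 0.1), "valve": (3.8, -1.1, 1.1), "anchor": (6.0, 2.0, 0.0)}
print(semantic_weight(obs, sim))
```

A particle whose simulated observation agrees on the matched objects and has no spurious or missing objects receives the maximum weight of 1; each unmatched object multiplies the weight by `eps`.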
IV Simulated Setup and Results
The developed algorithm was integrated into the ROS-based AUV simulator used at Jacobs University. This has the advantage of using the same architecture that runs on the real AUV, with simulated sensors and dynamics. The environment prepared to evaluate the proposed semantic-aided localization approach is a 50 m x 50 m area, with the AUV moving close to various man-made structures, as shown in Figure 1. A screenshot of the simulator, based on MORSE [18], can be seen in Figure 2.
Tests were run with the standard geometric approach and with the proposed semantic-aided one. 100 tests were run for each approach, and the results are shown in Figure 3 and Table I. In this specific scenario, the average time to convergence, as well as the error and the particle variance, are comparable between the two approaches. The proposed semantic-aided approach shows some improvements, but both results are in the same range. The real difference is, however, in the average execution time, with the semantic-aided approach being approximately six times faster than the classical geometric one. This is particularly relevant in the underwater domain, where computational power is still an open issue and does not scale in the same way as in land robotics. A more efficient process is therefore pivotal for the actual use of the proposed approach in the field.
Avg. time (ns), 100 runs

Semantic-aided | Geometric
1171934146     | 6977878010
V Discussion and Conclusions
This paper has presented a semantic-aided approach for AUV localization. Classical techniques are based on geometric information only, whilst the proposed approach takes into consideration the transformation from signal to symbol and works with objects and their relative position to the robot. The preliminary results shown in this paper are very encouraging, showing a very efficient process, about six times faster than its geometric counterpart.

One of the most demanding operations in any particle filter technique is the simulation of the observation from each particle. Having a semantic map stored in the vehicle's knowledge base significantly reduces the complexity of this operation. Simulating distances with respect to arbitrary surfaces would require computationally demanding ray-tracing algorithms, whilst the complexity of generating a semantic observation, given the robot pose and the semantic map, is significantly lower. For the simulated setup, ray-tracing techniques were avoided altogether: analytic geometric formulas were employed instead. These, however, can only be used in specific environments, like the one simulated; for more complex environments, ray tracing is the norm. The efficiency gain of the semantic-aided approach can therefore be even greater than the factor reported, depending on the way geometric observations are simulated.

On the other hand, there is an important step which was not in the scope of this paper, but which enters into play when addressing the overall system: the object recognition process, moving from sensor signals to a symbolic representation. This can be computationally intensive, depending on the technique used and on the surrounding environment, and needs to be considered when talking about computational efficiency. There are, however, two important considerations. The first is linked to the number of times this operation needs to be invoked: once per observation.
This may well be worthwhile in order to avoid running expensive ray-tracing algorithms for thousands of particles. The second consideration is that there is an increasing effort in the autonomous systems research community to include semantic aspects: building a semantic map and connecting it to the planning or learning system. If a vehicle is already building and maintaining a semantic map, then using it for localization would not require additional effort or sensor processing beyond what is already in place.

There are several future directions that we are interested in pursuing. First, we will validate this approach with real data gathered in previous missions; due to various constraints, those results are not yet available for publication. We are also interested in working with probabilistic semantic maps, rather than deterministic ones as shown in this paper. Finally, active techniques, already developed by some of the authors in a geometric setting, would benefit from the semantic-aided approach presented in this paper.
Acknowledgment
The research leading to these results has received funding from the European Union Horizon 2020 Programme, Marie Skłodowska-Curie Action, under grant agreement No. 709136 TICAUV.
References
 [1] V. Rigaud, L. Marce, J. L. Michel, and P. Borot. Sensor fusion for AUV localization. In Symposium on Autonomous Underwater Vehicle Technology, pages 168–174, Jun 1990.
 [2] N. J. Gordon, D. J. Salmond, and A. F. M. Smith. Novel approach to nonlinear/non-Gaussian Bayesian state estimation. IEE Proceedings F - Radar and Signal Processing, 140(2):107–113, April 1993.
 [3] H. Kondo, T. Maki, T. Ura, Y. Nose, T. Sakamaki, and M. Inaishi. Relative navigation of an AUV using image- and acoustic-based profiling systems. In OCEANS '04. MTTS/IEEE TECHNO-OCEAN '04, volume 3, pages 1330–1335, Nov 2004.
 [4] R. Karlsson, F. Gustafsson, and T. Karlsson. Particle filtering and Cramer-Rao lower bound for underwater navigation. In Acoustics, Speech, and Signal Processing, 2003. Proceedings. (ICASSP '03). 2003 IEEE International Conference on, volume 6, pages VI-65–68, April 2003.
 [5] D. Silver, D. Bradley, and S. Thayer. Scan matching for flooded subterranean voids. In IEEE Conference on Robotics, Automation and Mechatronics, 2004, volume 1, pages 422–427, Dec 2004.
 [6] Stephan Wirth, Alberto Ortiz, Dietrich Paulus, and Gabriel Oliver. Using particle filters for autonomous underwater cable tracking. IFAC Proceedings Volumes, 41(1):161–166, 2008. 2nd IFAC Workshop on Navigation, Guidance and Control of Underwater Vehicles.
 [7] F. Maurelli, S. Krupinski, Y. Petillot, and J. Salvi. A particle filter approach for AUV localization. In OCEANS 2008, pages 1–7, 2008.
 [8] F. Maurelli, Y. Petillot, A. Mallios, P. Ridao, and S. Krupinski. Sonar-based AUV localization using an improved particle filter approach. In OCEANS 2009 - EUROPE, pages 1–9, May 2009.
 [9] Y. Petillot, F. Maurelli, N. Valeyrie, A. Mallios, P. Ridao, J. Aulinas, and J. Salvi. Acoustic-based techniques for AUV localisation. Journal of Engineering for Maritime Environment, 224(4):293–307, 2010.
 [10] Emilio Miguelanez, Pedro Patrón, Keith Brown, Yvan R. Petillot, and David M. Lane. Semantic knowledge-based framework to improve the situation awareness of autonomous underwater vehicles. IEEE Transactions on Knowledge and Data Engineering (In Press), PP(99), 2010.
 [11] David M. Lane, Francesco Maurelli, Tom Larkworthy, Darwin Caldwell, Joaquim Salvi, Maria Fox, and Konstantinos Kyriakopoulos. PANDORA: Persistent autonomy through learning, adaptation, observation and replanning. IFAC Proceedings Volumes, 45(5):367–372, 2012. 3rd IFAC Workshop on Navigation, Guidance and Control of Underwater Vehicles.
 [12] F. Maurelli, M. Carreras, J. Salvi, D. Lane, K. Kyriakopoulos, G. Karras, M. Fox, D. Long, P. Kormushev, and D. Caldwell. The PANDORA project: A success story in AUV autonomy. In OCEANS 2016 - Shanghai, pages 1–8, April 2016.
 [13] Hyon Lim and Sudipta N. Sinha. Towards real-time semantic localization. In ICRA Workshop on Semantic Perception, 2012.
 [14] Jesus Martínez-Gómez, Vicente Morell, Miguel Cazorla, and Ismael García-Varea. Semantic localization in the PCL library. Robotics and Autonomous Systems, 75:641–648, 2016.
 [15] Fernando Rubio, M. Julia Flores, Jesús Martínez-Gómez, and Ann Nicholson. Dynamic Bayesian networks for semantic localization in robotics. In XV Workshop of Physical Agents: Book of Proceedings, WAF 2014, June 12th and 13th, 2014, León, Spain, pages 144–155, 2014.
 [16] Álvaro Villena, Ismael García-Varea, Jesus Martínez-Gómez, Luis Rodríguez-Ruiz, and Cristina Romero-González. A study of robot semantic localization based on multimodal HRI, November 2015.
 [17] Chuho Yi, Il Hong Suh, Gi Hyun Lim, and Byung-Uk Choi. Active-semantic localization with a single consumer-grade camera. In Systems, Man and Cybernetics, 2009. SMC 2009. IEEE International Conference on, pages 2161–2166. IEEE, 2009.
 [18] G. Echeverria, N. Lassabe, A. Degroote, and S. Lemaignan. Modular open robots simulation engine: MORSE. In 2011 IEEE International Conference on Robotics and Automation, pages 46–51, May 2011.