
Multisensor robust localization in various environment with correlation checking test


Probabilistic localization based on Bayesian theory has been researched as a sensor fusion method to improve the robustness of localization. Pieces of position information, generated by sensors' observation models that account for noise, are fused according to Bayesian theory. However, when a sensor is affected by large noise that its observation model does not consider, it outputs erroneous position information; the fusion result then has a significant error even when the other sensors output correct information. In this research, we have proposed a sensor fusion system with a relative correlation checking test to realize robust localization. The proposed system detects and excludes pieces of erroneous position information, which are biased against the others and thus negatively correlated with them, by checking the correlations among all of them. The purpose of this paper is to evaluate the robustness of our fusion system by conducting recursive localization experiments in various environments.

Localization, i.e., estimating a device's position from onboard sensor data, has been researched for vehicles, smartphones, autonomous mobile robots, etc. [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20]. For autonomous mobile robots, for example, position information is used for map construction, path planning, and obstacle avoidance [21,22,23,24,25,26,27,28,29]. Although various localization methods using the Global Positioning System (GPS), cameras, Light Detection And Ranging (LiDAR), etc., have been proposed, it remains a critical issue that localization accuracy depends on the environment. For example, GPS-based methods frequently fail in downtown districts because of signals reflected off buildings. Camera- and LiDAR-based methods also fail in environments containing many similar features. These uncertainties of sensor observations can have a fatal influence on the robustness of localization.

To improve the robustness of localization, probabilistic localization based on Bayesian theory, such as the Kalman Filter and the Particle Filter, has been used as a sensor fusion method [5,6,7,8,9,10,11,12,13,14,15,16,17]. It can fuse pieces of position information from every sensor according to the uncertainty of each sensor's observation. However, probabilistic localization is likely to fail when even a single sensor's observation uncertainty is not correctly evaluated. For example, the uncertainty of a GPS observation is generally computed from a value reflecting the number and arrangement of observable satellites, known as Dilution Of Precision (DOP). Although DOP is also used for the position accuracy circle on smartphones, it cannot reflect the influence of multipath errors, so GPS position information often contains significant error even when its DOP is low. Many researchers have proposed observation models to evaluate the uncertainty of each sensor's observation; nevertheless, no general model has been established because of the strong dependency on sensors and environments.

From a different viewpoint than uncertainty-based evaluation, we proposed a sensor fusion system that evaluates localization performance using correlation. By evaluating the correlation of the pieces of position information with one another, the system aims to detect and exclude erroneous position information biased against the others. Our fusion process is as follows:

1. express the piece of position information from each sensor as a discrete probability distribution according to Particle Filter theory;
2. detect and eliminate probability distributions biased against the others by a relative correlation checking test;
3. compute a fused distribution by multiplying the probability distributions that pass the test.
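These three steps can be sketched end to end as follows. This is a minimal NumPy sketch, not the paper's code: the function name, the (M, N) weight-matrix representation, and counting the self-correlation toward the majority threshold are our assumptions.

```python
import numpy as np

def fuse_with_correlation_test(per_sensor_probs, rho_th=0.0):
    """per_sensor_probs: (M, N) array; row m is sensor m's discrete
    probability over a shared set of N particles (step 1)."""
    M = per_sensor_probs.shape[0]
    # Step 2: count for each sensor how many sensors it positively correlates
    # with (self included, so a majority threshold of M/2 works for M = 3).
    votes = np.zeros(M, dtype=int)
    for m in range(M):
        for l in range(M):
            if m == l or np.corrcoef(per_sensor_probs[m], per_sensor_probs[l])[0, 1] > rho_th:
                votes[m] += 1
    keep = votes >= M / 2
    if not keep.any():  # mutual disagreement: fall back to fusing everything
        keep[:] = True
    # Step 3: fuse the surviving distributions by elementwise multiplication.
    fused = np.prod(per_sensor_probs[keep], axis=0)
    return fused / fused.sum(), keep
```

For instance, with two sensors peaked on the same particle and a third peaked elsewhere, the third is excluded and the fused distribution follows the agreeing pair.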

To realize robust localization in various environments, we proposed a sensor fusion system with a relative correlation checking test in [12]. In [13], we then combined the proposed system with a Particle Filter, extending it to a recursive localization method, and confirmed its effectiveness in a limited environment. To demonstrate the robustness of our system in various environments, this paper reports the results of recursive localization experiments in three environments: a road between buildings, a parking space, and a square. We show that the robustness of localization improves in various kinds of environments.

Related work

Before describing our proposed sensor fusion system, we introduce some existing research, classified into three approaches.

In the first fusion approach, a user assigns localization sensors to the environments where a robot moves, based on the user's experience and knowledge [18]. While this is done easily if the robot moves within a limited area, the workload grows as the area becomes larger. Moreover, there is no guarantee that the assignment will keep working well, because the environment changes over time; hence this approach is not suitable from the viewpoint of usability.

The second fusion approach computes a single piece of position information from all sensors [1, 20, 21]. For example, in [21], Graeter et al. designed a localization system combining the advantages of a camera and LiDAR: the strong point-feature matching of a camera, such as Scale-Invariant Feature Transform (SIFT) and VISO, and the accurate depth measurement of LiDAR based on Time of Flight. Their system computes the depth of features detected in images using LiDAR points projected into the images. However, to perform this calculation accurately, it requires strict sensor calibration on the order of millimeters so that the LiDAR points are projected into the images correctly; moreover, it has to compute the depth of image feature points from the sparsely projected LiDAR points near them. Even when sensor observations do not contain large noise, this approach sometimes outputs erroneous position information.

The third fusion approach computes a piece of position information for each sensor and then fuses all of them. One of the best-known methods of this approach is probabilistic localization according to Bayesian theory. It expresses the pieces of position information as probability distributions, which encode both the position and the uncertainty of the observation. In this approach, how to evaluate the uncertainties in the sensors' observation models is a crucial problem. As mentioned in the introduction, Maier et al. designed a GPS observation model that computes its uncertainty from DOP [5]; however, it does not reflect the influence of multipath errors. Therefore, in [17], Yamazaki et al. proposed an observation model that judges, using the satellite arrangement and a three-dimensional map, whether a signal comes from a visible satellite, and removes signals from invisible satellites as reflected ones. However, it cannot account for other causes of GPS localization error, such as signal diffraction in the ionosphere and troposphere. Because these error sources change unpredictably depending on place and time, an observation model accounting for all error sources has not been realized yet. As for LiDAR observation models, Hossein et al. [26] proposed a model that computes the uncertainty by matching feature shapes in clustered LiDAR points to landmarks in a three-dimensional map. Akai et al. [10] also proposed a model that computes the uncertainty by matching LiDAR points to an environmental map constructed by the Normal Distributions Transform (NDT). However, such models would fail to evaluate the uncertainty when similar objects exist in the environment: the reported uncertainty would be low even when the robot is at a wrong place that merely resembles the true position. There is no standard observation model, whichever sensors are used, and erroneous probability distributions fatally affect a fusion result based on Bayesian theory.

A similar approach to ours was proposed by Wei et al. in [8]. They added a relative sensor evaluation step to the Bayesian fusion approach. In detail, the similarity between every pair of sensors' position information, expressed with an Information Filter, is checked one by one using the normalized innovation squared test, and position information with low similarity to the others is removed before the fusion step. Because the observation characteristics of different kinds of sensors differ, they are unlikely to suffer the same observation noise at the same time and place, so relative sensor evaluation can detect exactly the erroneous position information from noisy sensors. The Information Filter is a Bayesian filter that assumes the noise in the system follows a Gaussian distribution; in real-world localization, however, some noise is not Gaussian. The Particle Filter does not rely on the Gaussian assumption and is therefore more flexible than the Information Filter. Moreover, the experiment in Wei's paper was conducted in only one environment, so it could not demonstrate the robustness of localization, which changes depending on the environment. In our work, we proposed a fusion system using a Particle Filter and conducted experiments in various environments to demonstrate the system's robustness.

Generic probabilistic localization

In this section, we first present the theoretical basis of Bayesian state estimation. The formulation and mechanism of the Particle Filter are then explained. Finally, sensor data fusion with multisensor observations is described.

Theoretical basis of Bayesian theory

The basic theory of the proposed framework is recursive state estimation with sequential observations. The Bayesian state estimation in this work is composed of the following two steps:

Prior update

$$\begin{aligned} p\left( \mathbf {x}_{t} | \mathbf {z}_{1:t-1}\right) =\int p\left( \mathbf {x}_{t} | \mathbf {x}_{t-1}\right) p\left( \mathbf {x}_{t-1} | \mathbf {z}_{1:t-1}\right) d\mathbf {x}_{t-1}, \end{aligned}$$

Measurement update

$$\begin{aligned} p\left( \mathbf {x}_{t} | \mathbf {z}_{1:t}\right) =\frac{p\left( \mathbf {z}_{t} | \mathbf {x}_{t}\right) p\left( \mathbf {x}_{t} | \mathbf {z}_{1:t-1}\right) }{\int p\left( \mathbf {z}_{t} | \mathbf {x}_{t}\right) p\left( \mathbf {x}_{t} | \mathbf {z}_{1:t-1}\right) \mathrm {d} \mathbf {x}_{t} }, \end{aligned}$$

where \(\mathbf {x}_{t},\mathbf {z}_{1:t},p(\cdot )\) are the state of the robot at time step t, the observations up to t, and the probability density function, respectively. \(p\left( \mathbf {z}_{t} | \mathbf {x}_{t}\right)\) is calculated from sensor data through an observation model. This recursion is initialized with a known probability distribution \(p(\mathbf {x}_0|\mathbf {z}_0)\) based on an initial observation.

Particle filter

The Particle Filter, which approximates a probability distribution by a set of discrete points (particles), has been used for its non-parametric nature. It approximates \(p(\mathbf {x}_t |\mathbf {z}_{1:t-1})\) in Eq. (1) by the density of a set of particles \(\mathbf {s}^{[n]}_t\). Let N be the number of particles and \(\delta (\cdot )\) a Dirac delta function. Equation (1) is then approximated by the following function:

$$\begin{aligned} p\left( \mathbf {x}_{t} | \mathbf {z}_{1 : t-1}\right) \approx p\left( \mathbf {s}_{t}^{[n]} | \mathbf {z}_{1 : t-1}\right) =\frac{1}{N} \sum _{n=1}^{N} \delta \left( \mathbf {x}_{t}-\mathbf {s}_{t}^{[n]}\right) \end{aligned}$$

Then, \(p(\mathbf {z}_t |\mathbf {x}_t)\) in Eq. (2) is approximated by Eq. (4):

$$\begin{aligned} p\left( \mathbf {z}_{t} | \mathbf {x}_{t}\right) \approx p\left( \mathbf {z}_{t} | \mathbf {s}_{t}^{[n]}\right) = \sum _{n=1}^{N} \pi \left( \mathbf {z}_{t} | \mathbf {s}_{t}^{[n]}\right) \end{aligned}$$

where \(\pi (\cdot )\) is the observation model of the sensor. \(p(\mathbf {x}_t |\mathbf {z}_{1:t})\) is obtained by substituting Eqs. (3) and (4) into Eq. (2) as follows:

$$\begin{aligned} p\left( \mathbf {x}_{t} | \mathbf {z}_{1 : t}\right) \approx p\left( \mathbf {s}_{t}^{[n]} | \mathbf {z}_{1 : t}\right) =\alpha \sum _{n=1}^{N} \pi \left( \mathbf {z}_{t} | \mathbf {s}_{t}^{[n]}\right) \delta \left( \mathbf {x}_{t}-\mathbf {s}_{t}^{[n]}\right) \end{aligned}$$

where \(\alpha\) is a normalizing constant. The Minimum Mean Squared Error (MMSE) or Maximum A Posteriori Probability (MAP) estimate of \(p(\mathbf {x}_t |\mathbf {z}_{1:t})\) is commonly used to compute the estimated robot position. Finally, a new set of particles \(\mathbf {s}_{t+1}^{[n]}\) is generated according to \(p\left( \mathbf {x}_{t} | \mathbf {z}_{1 : t}\right)\).
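Equations (3)-(5) can be sketched in a one-dimensional toy example. All numbers and the Gaussian observation model below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Particles drawn from a prior are weighted by an observation model pi(z | s),
# giving the discrete posterior of Eq. (5); MMSE and MAP yield point estimates.
N = 1000
particles = rng.normal(loc=2.0, scale=1.0, size=N)  # s_t^[n] approximating the prior
z = 2.5                                             # a sensor observation
sigma = 0.5                                         # assumed observation noise std

w = np.exp(-(particles - z) ** 2 / (2 * sigma ** 2))  # pi(z | s^[n])
w /= w.sum()                                          # alpha in Eq. (5)

x_mmse = float(np.sum(w * particles))         # MMSE: weighted posterior mean
x_map = float(particles[np.argmax(w)])        # MAP: highest-weight particle
particles_next = rng.choice(particles, size=N, p=w)  # resampling for step t+1
```

The MMSE estimate lands between the prior mean (2.0) and the observation (2.5), pulled toward the observation because its noise is smaller.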

With multi-sensor observation

In probabilistic localization, multiplication of probability distributions is used to fuse sensor estimates. Let \(M,p_{m}(\cdot ),p_{f}(\cdot )\) denote the number of sensors, the probability distribution from the m-th sensor, and the fused distribution, respectively; \(p_{f}(\cdot )\) is computed by the following equation:

$$\begin{aligned} p_{f}\left( \mathbf {s}_t^{[n]} |\mathbf {z}_{1:t}\right) =\alpha \prod _{m=1}^{M} p_{m}\left( \mathbf {s}_t^{[n]} |\mathbf {z}_{1:t}\right) \end{aligned}$$

where \(\alpha\) is a normalization factor. The multiplication of the probability distributions corresponds to taking their joint probability.

Note that \(p(\mathbf {s}_t^{[n]} |\mathbf {z}_{1:t})\) is abbreviated to \(p(\mathbf {s}^{[n]})\) in the remainder of this paper for simplicity.
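Equation (6) can be sketched as follows, with three hypothetical per-sensor distributions over a shared particle set; the centers and sigmas are illustrative, not the paper's data. The fused distribution peaks near the consensus and is sharper than any single sensor's.

```python
import numpy as np

# Eq. (6) over a shared particle set: each sensor contributes a weight per
# particle, and fusion is the normalized elementwise product.
particles = np.linspace(0.0, 10.0, 101)   # 1-D particle positions

def gaussian_weights(center, sigma):
    w = np.exp(-(particles - center) ** 2 / (2 * sigma ** 2))
    return w / w.sum()

p_gps    = gaussian_weights(4.8, 2.0)     # broad, slightly offset
p_camera = gaussian_weights(5.1, 1.0)
p_lidar  = gaussian_weights(5.0, 0.5)     # sharpest single sensor

fused = p_gps * p_camera * p_lidar
fused /= fused.sum()                      # alpha in Eq. (6)
estimate = particles[np.argmax(fused)]
```

The product of Gaussians concentrates around the precision-weighted mean, so the fused variance is smaller than even the LiDAR's.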

Sensor fusion system with relative correlation checking test

During the localization process, the measurements might be contaminated by large disturbances. As mentioned in the introduction, an erroneous probability distribution biased against the others has a serious influence on the fusion result even though probabilistic theory is used. In our proposed sensor fusion system, shown in Fig. 1, the probability distributions whose correlations with the others are low are judged to be biased in the relative evaluator; the biased probability distributions are then removed in the selector. This section describes the correlation checking test and the selection process based on the relative checking results. Note that this system requires three or more sensors.

Fig. 1

Proposed sensor fusion framework

Correlation checking test

The correlations between probability distributions are evaluated by the Pearson correlation [30]. Denoting the covariance between \(p_m(\cdot )\) and \(p_l(\cdot )\) by \({\text {Cov}}\left[ p_{m}(\cdot ), p_{l}(\cdot )\right]\) and their standard deviations by \(\sigma _{m}\) and \(\sigma _{l}\), respectively, the Pearson correlation coefficient \(\rho _{m,l}\) is calculated by the following equation:

$$\begin{aligned} \begin{aligned} \rho _{m, l}&=\frac{{\text {Cov}}\left[ p_{m}\left( \mathbf {s}^{[n]}\right) , p_{l}\left( \mathbf {s}^{[n]}\right) \right] }{\sigma _{m} \sigma _{l}} \\&=\frac{1}{N}\sum _{n=1}^{N}\frac{\left( p_{l}\left( \mathbf {s}^{[n]}\right) -\mu _l\right) \left( p_{m}\left( \mathbf {s}^{[n]}\right) -\mu _m\right) }{\sigma _{m} \sigma _{l}} \end{aligned} \end{aligned}$$

where \(\mu _l\) and \(\mu _m\) are the means of \(p_{l}\left( \mathbf {s}^{[n]}\right)\) and \(p_{m}\left( \mathbf {s}^{[n]}\right)\), respectively.

We utilize \(\rho _{m,l}\) as a value representing the coincidence of high probability particles between \(p_{m}(\cdot )\) and \(p_{l}(\cdot )\).

For example, consider probability distributions whose high-probability particles coincide, as shown in Fig. 2a. The horizontal axis, vertical axis, and blue circles show \(\mathbf {x}\), \(p(\cdot )\), and \(\mathbf {s}^{[n]}\), respectively. Both \(p_{m}(\mathbf {s}^{[i]})\) and \(p_{l}(\mathbf {s}^{[i]})\) are high at \(\mathbf {s}^{[i]}\), and both \(p_{m}(\mathbf {s}^{[j]})\) and \(p_{l}(\mathbf {s}^{[j]})\) are low at \(\mathbf {s}^{[j]}\). A scatter plot whose vertical axis is \(p_{l}(\mathbf {s}^{[n]})\) and horizontal axis is \(p_{m}(\mathbf {s}^{[n]})\) is shown in Fig. 2b. In Fig. 2b, \(p_{l}(\mathbf {s}^{[n]})\) gradually increases as \(p_{m}(\mathbf {s}^{[n]})\) increases. In other words, \(p_{m}(\mathbf {s}^{[n]})\) and \(p_{l}(\mathbf {s}^{[n]})\) have a positive correlation, and \(\rho _{m,l}\) is nearly 1. On the other hand, as shown in Fig. 3a, when the high-probability particles of \(p_{m}(\mathbf {s}^{[n]})\) and \(p_{l}(\mathbf {s}^{[n]})\) do not coincide, \(p_{m}(\mathbf {s}^{[i]})\) is high and \(p_{l}(\mathbf {s}^{[i]})\) is low at \(\mathbf {s}^{[i]}\), while \(p_{m}(\mathbf {s}^{[j]})\) is low and \(p_{l}(\mathbf {s}^{[j]})\) is high at \(\mathbf {s}^{[j]}\). Figure 3b shows the scatter plot for the case of Fig. 3a. In Fig. 3b, \(p_{l}(\mathbf {s}^{[n]})\) gradually decreases as \(p_{m}(\mathbf {s}^{[n]})\) increases. In other words, \(p_{m}(\mathbf {s}^{[n]})\) and \(p_{l}(\mathbf {s}^{[n]})\) have a negative correlation, and \(\rho _{m,l}\) is nearly \(-1\). Therefore, \(\rho _{m,l}\) represents the coincidence of the high-probability particles of \(p_{m}(\mathbf {s}^{[n]})\) and \(p_{l}(\mathbf {s}^{[n]})\). We can evaluate the correlation by comparing \(\rho _{m,l}\) with a threshold \(\rho _{th}\).
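Equation (7) reduces to the sample Pearson correlation over the shared particle set. A minimal sketch, with the toy distributions below standing in for Figs. 2 and 3:

```python
import numpy as np

def pearson(p_m, p_l):
    """Pearson correlation of two discrete distributions over shared particles (Eq. 7)."""
    p_m, p_l = np.asarray(p_m, float), np.asarray(p_l, float)
    dm, dl = p_m - p_m.mean(), p_l - p_l.mean()
    return float(np.sum(dm * dl) / (np.sqrt(np.sum(dm ** 2)) * np.sqrt(np.sum(dl ** 2))))

# Agreeing sensors (same high-probability particle) -> rho near 1 (Fig. 2);
# a sensor peaked elsewhere -> negative rho (Fig. 3).
agree_a = [0.05, 0.10, 0.60, 0.15, 0.10]
agree_b = [0.10, 0.10, 0.55, 0.15, 0.10]
biased  = [0.60, 0.15, 0.05, 0.10, 0.10]
```

With \(\rho_{th}=0\), `agree_a` and `agree_b` pass the pairwise test while `biased` fails against both.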

Fig. 2

Positively correlated probability distributions

Fig. 3

Negatively correlated probability distributions

Relative evaluation of different sensor observations

At the selection step shown in Fig. 4, the correlations between all probability distributions are evaluated, and for each \(p_{m}(\mathbf {s}^{[n]})\), the number of cases with \(\rho _{m,l}>\rho _{th}\), denoted \(L_m\), is counted. Each \(p_{m}(\mathbf {s}^{[n]})\) whose \(L_m\) is lower than M/2 is removed, and the remaining probability distributions are fused by Eq. (6).

This selective sensor fusion system is based on the following assumption: it seldom happens that different kinds of sensors are affected by the same fatal environmental noise at the same time and place. If fewer than half of the probability distributions are erroneous, the proposed system can correctly detect and exclude them. On the other hand, if over half of the sensors are affected by environmental noise at the same time, the proposed system may fail to single out the erroneous probability distributions. However, under the above assumption, the peak positions of the erroneous distributions would differ from one another because the noise depends on the sensor. The erroneous distributions would then have negative correlations with the others, and the system falls back to using all probability distributions. Although the proposed system cannot reduce the effect of the erroneous position information in this case, its behavior is the same as that of the conventional system, which fuses all probability distributions; hence the proposed system is never inferior to the conventional one. The proposed system is suitable for fusing different kinds of sensors. Note that the relative evaluation requires three or more sensors.
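The selection rule and its fallback can be illustrated with a small sketch. The function name and the counting convention (self-correlation, always 1, is included in \(L_m\), which is one reading of the paper's M/2 threshold for three sensors) are our assumptions.

```python
import numpy as np

def select_by_majority(P, rho_th=0.0):
    """P: (M, N) per-sensor particle probabilities.
    Returns a boolean mask of the distributions to fuse."""
    M = P.shape[0]
    C = np.corrcoef(P)            # C[m, l] = rho_{m,l}; diagonal is 1
    L = (C > rho_th).sum(axis=1)  # L_m: positive correlations per sensor
    keep = L >= M / 2
    if not keep.any():            # mutual disagreement: keep all (see text)
        keep[:] = True
    return keep
```

When one sensor disagrees with an agreeing pair, it is dropped; when all three sensors peak on different particles, every pairwise correlation is negative, no majority exists, and all distributions are kept, matching the conventional system's behavior.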

Fig. 4

Relative checking framework

Experimental results

We report localization experiment results in several environments and discuss the robustness of the proposed system.

Experimental conditions

We conducted three localization experiments at the Yakusa campus of Aichi Institute of Technology. The robot (Dongbu Robot TETRA DS-IV) was equipped with a GPS receiver (Hemisphere Crescent A101), an omnidirectional camera (Vstone VSC450U-200-TK), and a LiDAR (Hokuyo UTM-30LX), as shown in Fig. 5. Each sensor acquired data simultaneously every 0.2 s. The probability distribution for each sensor was then computed, and the distributions were fused by the proposed system and by the conventional system, i.e., without relative evaluation. We set \(N=100\) and \(\rho _{th}=0\) and computed the estimated positions by MMSE. We measured the robot's true position using a laser range finder (Leica DISTO S910) placed in the environment at a point whose absolute position was obtained from Google Maps. The laser range finder accurately measures the distance to the robot; combined with the relative direction measured by its built-in gyro sensor, it yields the robot's relative position, which was transformed into the global frame using the range finder's absolute position.

The observation models of the GPS, omnidirectional camera, and LiDAR are a normal distribution model [5], similar-picture search using Bag of Features [31], and point cloud matching, respectively, as described below.

Fig. 5

Robot with GPS, omni camera, and LiDAR

Observation model of GPS

As the observation model of GPS, we used the normal distribution model expressed by the following equation:

$$\begin{aligned} \pi \left( \mathbf {z}_{t}^{GPS}, \mathbf {s}_{t}^{[n]}\right) =\exp \left( -\frac{\left( \mathbf {s}_{t}^{[n]}-\mathbf {p}_{G P S}\right) ^{2}}{2 \sigma _{G P S}^{2}}\right) \end{aligned}$$

where \(\mathbf {p}_{GPS}\) and \(\sigma _{GPS}\) are the GPS-estimated position and the standard deviation of the normal distribution, respectively. This model is commonly used as a GPS observation model. We set the standard deviation to the DRMS, computed as the product of HDOP and UERE:

$$\begin{aligned} \sigma _{GPS}=HDOP \times UERE \end{aligned}$$

HDOP (Horizontal Dilution Of Precision) quantifies the degradation of horizontal measurement accuracy caused by the satellite arrangement. UERE (User Equivalent Range Error) reflects the error sources of GPS measurement, such as ionospheric delay; it is set to 1.5 m.
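Equations (8) and (9) can be sketched as follows; the function name and array layout are ours, and UERE is fixed at 1.5 m as in the text.

```python
import numpy as np

def gps_observation_model(particles, p_gps, hdop, uere=1.5):
    """Weight each particle by a normal distribution centered on the GPS fix.

    particles: (N, 2) array of candidate positions s_t^[n] in metres;
    p_gps: GPS-estimated position; hdop: reported by the receiver.
    """
    sigma = hdop * uere                       # Eq. (9): sigma_GPS = HDOP * UERE
    d2 = np.sum((particles - p_gps) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))   # Eq. (8), unnormalized weights
```

A larger HDOP widens the Gaussian, so distant particles retain more weight when the satellite geometry is poor.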

Observation model of camera

As the observation model of the omnidirectional camera, similar-picture search with Bag of Features (BoF) is used:

$$\begin{aligned} \pi \left( \mathbf {z}_{t}^{Camera}, \mathbf {s}_{t}^{[n]}\right) =\prod _{l=1}^{L} \exp \left( -\frac{\left( \mathbf {s}_{t}^{[n]}-\mathbf {p}_{Camera}^{(l)}\right) ^{2}}{2 \sigma _{Camera}^{(l) 2}}\right) \end{aligned}$$

where L is the number of similar images and \(\mathbf {p}_{Camera}^{(l)}\) is the position where picture l was taken. The standard deviation of the normal distribution is computed from the similarity \(sim^{(l)}\) of similar image l as follows:

$$\begin{aligned} \sigma _{Camera}^{(l)}=10\exp \left( -20.25\,sim^{(l)}\right) \end{aligned}$$

Equation (11) is obtained by fitting a non-linear function to the relationship between image similarity and the distance between the picture position and the robot. The BoF vocabulary is created by clustering Scale-Invariant Feature Transform (SIFT) features. We set a threshold on the similarity \(sim^{(l)}\) to remove the effect of non-similar images. Therefore, if there is no similar image in the BoF, the probability distribution becomes a uniform distribution.
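A sketch of Eqs. (10) and (11); the function name and the list-of-matches interface are our assumptions, and the uniform fallback mirrors the no-similar-image case described above.

```python
import numpy as np

def camera_observation_model(particles, similar_images):
    """particles: (N, 2) candidate positions; similar_images: list of
    (position, similarity) pairs returned by BoF retrieval."""
    N = particles.shape[0]
    if not similar_images:
        return np.full(N, 1.0 / N)               # no match: uniform distribution
    w = np.ones(N)
    for pos, sim in similar_images:              # product over L images, Eq. (10)
        sigma = 10.0 * np.exp(-20.25 * sim)      # Eq. (11): sharper for closer matches
        d2 = np.sum((particles - pos) ** 2, axis=1)
        w *= np.exp(-d2 / (2.0 * sigma ** 2))
    return w
```

With one retrieved image, particles near its capture position are weighted up; with none, every particle receives the same weight and the sensor contributes no position information.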

Observation model of LiDAR

We implemented a point-matching algorithm as the LiDAR observation model:

$$\begin{aligned} \pi \left( \mathbf {z}_{t}^{LiDAR}, \mathbf {s}_{t}^{[n]}\right) =\exp \left( -\frac{e}{2 \sigma _{LiDAR}^{2}}\right) \end{aligned}$$

where e is the sum of the distances between the input points and their matched reference points, and \(\sigma _{LiDAR}\) is the standard deviation of the LiDAR observation, set to \(1/10^{-5}\).
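A minimal sketch of Eq. (12) with brute-force nearest-neighbour matching. The sigma default below is illustrative rather than the paper's value, and a practical implementation would use a k-d tree instead of the all-pairs distance matrix.

```python
import numpy as np

def lidar_observation_model(scan, map_points, sigma=0.1):
    """Point-matching weight for one particle pose.

    scan: (K, 2) LiDAR points already transformed into the map frame by the
    particle's pose hypothesis; map_points: (M, 2) reference point cloud.
    """
    # Nearest-neighbour distance from each scan point to the map (brute force).
    d = np.linalg.norm(scan[:, None, :] - map_points[None, :, :], axis=2)
    e = float(d.min(axis=1).sum())           # matching error e for this particle
    return np.exp(-e / (2.0 * sigma ** 2))   # Eq. (12)
```

A perfectly aligned pose hypothesis yields e = 0 and weight 1; misaligned hypotheses accumulate matching error and are weighted down.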

Exp. I: The Road between Buildings

Fig. 6

Experimental environment (road between buildings)

Fig. 7

Localization errors in Fig. 6

The first experiment was conducted on the road between buildings shown in Fig. 6. The robot ran along the yellow arrow in Fig. 6. The localization errors in this experiment are shown in Fig. 7, where the horizontal axis shows time and the vertical axis shows the localization error: the distance between the estimated and the true position. As shown in Fig. 7, the maximum localization error of the conventional system during the period marked by the yellow rectangle exceeded 6 m, whereas the localization errors of the proposed system stayed below 3 m during that period.

The probability distributions from sensors at 330 s obtained by the proposed system are shown in Fig. 8, and Table 1 shows the correlation table at 330 s.

Fig. 8

Probability distributions at 330 s in Fig. 6

Table 1 Correlation table at 330 s in Fig. 6

In the distribution images in Fig. 8, gray and white areas represent occupied and unoccupied areas respectively, the depth of red represents the probability of the robot's presence, and green circles show the true positions. Figure 8 shows that the LiDAR's and camera's probabilities for particles within 1 m of the true position were higher than those of the outside particles. On the other hand, the GPS's probabilities for particles within the same area were lower than those of the outside particles. This implies the GPS measurement was contaminated by a large disturbance and should be removed before the fusion step. Table 1 shows that the correlation coefficient between the probability distributions from the LiDAR and camera exceeded \(\rho _{th}=0\); thus the proposed system fused the probability distributions except the one from GPS. The proposed fusion method succeeded in decreasing the localization errors.

Exp. II: The Square

In the second experiment, conducted at the square shown in Fig. 9, the robot ran along the yellow arrow.

The localization errors in Fig. 10 show that the errors of the proposed system during the period marked by the yellow rectangle were lower than those of the conventional system. The probability distributions at 152 s in the proposed system are shown in Fig. 11, and Table 2 shows the correlation table at 152 s.

Fig. 9

Experimental environment (square)

Fig. 10

Localization errors in Fig. 9

Fig. 11

Probability distributions at 152 s in Fig. 9

Table 2 Correlation table at 152 s in Fig. 9

Figure 11 shows that the camera's and GPS's particles near the true position have higher probabilities than the others, whereas the LiDAR's particles near the true position have lower probabilities than the others. This implies the LiDAR measurement was contaminated by a large disturbance and should be removed. Table 2 shows that the correlation coefficient between the probability distributions from the camera and GPS exceeded the threshold \(\rho _{th}=0\), i.e., those distributions are positively correlated. Thus, the proposed system fused the probability distributions except the one from the LiDAR and succeeded in decreasing the localization errors.

Exp. III: The Parking Space

In the third experiment, conducted at the parking space shown in Fig. 12, the robot ran along the yellow arrow.

Fig. 12

Experimental environment (parking)

Fig. 13

Localization errors in Fig. 12

The localization errors in Fig. 13 show that those of both the proposed and the conventional system were lower than 1 m at all times. The probability distributions at 254 s in the proposed system are shown in Fig. 14, and the correlation table at 254 s is shown in Table 3.

Fig. 14

Probability distributions at 254 s in Fig. 12

Table 3 Correlation table at 254 s in Fig. 12

Figure 14 shows that the GPS's and LiDAR's particles within 1 m of the true position had high probabilities. In contrast, every particle of the camera had the same probability value, i.e., a uniform distribution, which means the sensor could not identify a specific position. In the parking lot where experiment III was conducted, the arrangement of the cars kept changing. When we built the environmental map, there were many cars; however, on the day of the localization experiment, there were few cars because it was a day off at the university. Since the images captured during localization did not match any image captured for the map, the camera could not identify a specific position, so its probability distribution was set to a uniform distribution. A uniform distribution does not affect the fusion result; however, localization from a uniform distribution alone would fail because it carries no position information. Therefore, the proposed system must not select only the uniform distribution observed by the camera. Table 3 shows that the proposed system selected the probability distributions of the LiDAR and GPS, so it achieved the same localization quality as the conventional system, which fuses all probability distributions.


Discussion

In the environments where one sensor output erroneous position information, the localization errors of the proposed system were smaller than those of the conventional system; the proposed system reduced the maximum localization error from over 6 m to under 3 m. Moreover, in the environment where no sensor output erroneous position information, the localization errors of the proposed system were nearly the same as those of the conventional system. These results show that the proposed method can estimate the position with the same or higher accuracy than the conventional method even when the pieces of position information from the sensors include erroneous ones. As a result, we confirmed the robustness of the proposed system.


Conclusion

In this paper, we have evaluated the robustness of our proposed sensor fusion system through experiments in various environments. The proposed system detects and excludes erroneous position information by checking the correlations among all pieces of position information, and fuses the rest. In all experiments, the localization errors of the proposed system were lower than or equal to those of the conventional system, confirming its effectiveness. As future work, we plan to fuse other types of sensors and to confirm whether the robustness improves as the number of sensors increases.


  1. 1.

    Wang K, Liu YH, Li L (2014) A simple and parallel algorithm for real-time robot localization by fusing monocular Vision and odometry/AHRS sensors. IEEE/ASME Trans Mech 19(4):1447–1457

    Article  Google Scholar 

  2. 2.

    Kim J, Chung W (2016) Localization of a mobile robot using a laser range finder in a glass-walled environment. IEEE Trans Ind Electr 63(6):3616–3627

    Article  Google Scholar 

  3. 3.

    Hornung A, Wurm KM, Bennewitz M (2010) Humanoid robot localization in complex indoor environments. In: Proceedings - IEEE/RSJ 2010 International Conference on Intelligent Robots and Systems, pp. 1690–1695

  4. 4.

    Yamada D, Ohya A (2016) Statistical laser-scanner measurement model for mobile robot localization. Trans Soc Instrument Control Engineers 52(12):661–670

    Article  Google Scholar 

  5. 5.

    Maier D, Kleiner A (2010) Improved GPS sensor model for mobile robots in urban terrain. In: Proceedings - IEEE International Conference on Robotics and Automation, pp. 4385–4390

  6. 6.

    Caron F, Davy M, Duflos E, Vanheeghe P (2007) Particle filtering for multisensor data fusion with switching observation models: Application to land vehicle positioning. IEEE Trans Signal Processing 55(6):2703–2719

    MathSciNet  Article  Google Scholar 

  7. 7.

    Giremus A, Tourneret JY, Calmettes V (2007) A particle filtering approach for joint detection/estimation of multipath effects on GPS measurements. IEEE Trans Signal Processing 55(4):1275–1285

    MathSciNet  Article  Google Scholar 

  8. 8.

    Wei L, Cappelle C, Ruichek Y (2013) Camera/laser/GPS fusion method for vehicle positioning under extended NIS-based sensor validation. IEEE Trans Instrum Measur 62(11):3110–3122

    Article  Google Scholar 

  9. 9.

    Takahashi S, Ohta M, Sasaki J, Yamashita K (2016) A Study for WiFi based positioning system using bayesian estimation method. IEEJ Trans Electr Inf Syst 136(11):1553–1554

    Google Scholar 

  10.

    Akai N, Takeuchi E, Yamaguchi T, Luis YM, Yoshihara Y, Okuda H, Suzuki T, Ninomiya Y (2018) High-accurate localization using multilayer LiDAR and INS for autonomous cars. Trans Soc Automotive Eng Japan 49(3):675–681

  11.

    Chung HY, Hou CC, Chen YS (2015) Indoor intelligent mobile robot localization using fuzzy compensation and Kalman filter to fuse the data of gyroscope and magnetometer. IEEE Trans Ind Electr 62(10):6436–6447

  12.

    Suyama K, Funabora Y, Doki S, Doki K (2015) Robust localization for mobile robot by K-L divergence-based sensor data fusion. In: IECON 2015 - 41st Annual Conference of the IEEE Industrial Electronics Society, pp. 2638–2643

  13.

    Ohashi N, Funabora Y, Doki S, Doki K (2018) Majority rule sensor fusion system with particle filter for robust robot localization. In: Proceedings - IEEE/ASME International Conference on Advanced Intelligent Mechatronics, pp. 1106–1111

  14.

    Brooks A, Makarenko A, Upcroft B (2008) Gaussian process models for indoor and outdoor sensor-centric robot localization. IEEE Trans Robotics 24(6):1341–1351

  15.

    Alvarado BP, Matia F, Galan R (2018) Improving indoor robots localisation by fusing different sensors. In: Proceedings - IEEE International Conference on Intelligent Robots and Systems, pp. 2616–2623

  16.

    Liang Q, Wang L, Li Y, Liu M (2018) Indoor mapping and localization for pedestrians using opportunistic sensing with smartphones. In: Proceedings - IEEE International Conference on Intelligent Robots and Systems, pp. 1649–1656

  17.

    Yamazaki M, Takeuchi E, Ohno K, Tadokoro S (2012) GPS based particle filter localization method with multipath model using 3D-map. J Robot Soc Japan 29(8):702–709

  18.

    Shinohara M, Sam AR, Inoue K, Ozaki K (2013) Development of localization method using magnetic sensor and LIDAR. Trans Soc Instrument Control Engineers 49(8):795–801

  19.

    Kim C, Yang H, Kang D, Lee D (2015) 2-D cooperative localization with omni-directional mobile robots. In: Proceedings - 12th International Conference on Ubiquitous Robots and Ambient Intelligence, pp. 425–426

  20.

    Oleynikova H, Burri M, Lynen S, Siegwart R (2015) Real-time visual-inertial localization for aerial and ground robots. In: IEEE International Conference on Intelligent Robots and Systems, pp. 3079–3085

  21.

    Graeter J, Wilczynski A, Lauer M (2018) LIMO: Lidar-monocular visual odometry. In: Proceedings - IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 7872–7879

  22.

    Mur-Artal R, Montiel JMM, Tardos JD (2015) ORB-SLAM: a versatile and accurate monocular SLAM system. IEEE Trans Robotics 31(5):1147–1163

  23.

    Khan S, Wollherr D, Buss M (2016) Modeling laser intensities for simultaneous localization and mapping. IEEE Robotics Autom Lett 1(2):692–699

  24.

    Biswas J, Veloso M (2012) Depth camera based indoor mobile robot localization and navigation. In: Proceedings - IEEE International Conference on Robotics and Automation, pp. 1697–1702

  25.

    Castellanos JA, Neira J, Tardos JD (2001) Multisensor fusion for simultaneous localization and map building. IEEE Trans Robotics Autom 17(6):908–914

  26.

    Hossein TN, Mita S, Han L, Do QH (2012) Multi-sensor data fusion for autonomous vehicle navigation and localization through precise map. Int J Autom Eng 3(1):19–25

  27.

    Shan T, Englot B (2018) LeGO-LOAM: lightweight and ground-optimized lidar odometry and mapping on variable terrain. In: Proceedings - IEEE International Conference on Intelligent Robots and Systems, pp. 4758–4765

  28.

    Yang B, Yang R (2018) Interactive particle filter with occlusion handling for multi-target tracking. In: Proceedings - 12th International Conference on Fuzzy Systems and Knowledge Discovery 2015, pp. 1945–1949

  29.

    Bajcsy A, Herbert SL, Fridovich-Keil D, Fisac JF, Deglurkar S, Dragan AD, Tomlin CJ (2019) A scalable framework for real-time multi-robot, multi-human collision avoidance. In: Proceedings - IEEE International Conference on Robotics and Automation, pp. 936–943

  30.

    Fisher RA (1925) Statistical Methods for Research Workers, 13th edn

  31.

    Iimura T, Yamamoto K (2012) Development of single-passenger mobility-support robot “ROPITS”: recovery from position loss using similar images. In: Proceedings - JSME Conference on Robotics and Mechatronics, 2015, 1P2-G05 (in Japanese)


Author information




NO initiated this research, and designed and performed the experiments. NO carried out the data analysis and the interpretation of the experimental results, and wrote the manuscript with help and review from YF, SD, and KD. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Nozomu Ohashi.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit


About this article


Cite this article

Ohashi, N., Funabora, Y., Doki, S. et al. Multisensor robust localization in various environment with correlation checking test. ROBOMECH J 8, 3 (2021).



Keywords

  • Localization
  • Sensor fusion
  • Particle filter
  • Outlier rejection