Study of a robotic system to detect water leakage and fuel debris: system proposal and feasibility study of visual odometry providing an intuitive bird's eye view

To obtain the necessary information on fuel debris and water leakages in the decommissioning of the Fukushima Daiichi nuclear power plants, an ultrasonic-based investigation method has been proposed for the future internal investigation of the Primary Containment Vessel (PCV). In this article, we study the assisting mechanisms and methods for that investigation method, namely a rotatable winch mechanism and a visual localization method. The rotatable winch mechanism adjusts the height and orientation of the ultrasonic sensor, and the robot is localized with a camera so that the sensor position is known, providing the assisting information needed to combine the measured data. We studied the feasibility of applying a conventional visual odometry method to this situation and performed localization accuracy evaluation experiments with a mobile robotic platform prototype. The results showed that the visual odometry method could generate intuitive maps from a bird's eye view and provided an average error rate of 35 mm/1500 mm, which met the error rate required for movement on the grating. The adjustable parameter ranges that provide the required accuracy were also determined.


Background
The decommissioning of the Fukushima Daiichi Nuclear Power Plants (NPPs) is an urgent national problem in Japan. Approximately 200 tons of nuclear fuel debris, which contains melted nuclear fuel and structural materials, is estimated to remain in each nuclear reactor. In order to remove the fuel debris safely and efficiently from the primary containment vessels (PCVs), it is essential to understand the distribution of the fuel debris inside the PCV. Additionally, we need to find and stop the water leakage inside the PCV to minimize the amount of radioactive water leaking outside.

This paper proposes a new robotic system to detect fuel debris and water leakage outside the pedestal in the PCV using ultrasonic sensors. The proposed system consists of a mobile robot, a winch mechanism, a camera, and the ultrasonic sensors shown in Fig. 1. The mobile robot moves on the grating floor, which is located above the area outside the pedestal. The winch mechanism on the mobile robot deploys the ultrasonic sensor through the grating lattice, and the height and orientation of the sensor are controlled by the winch mechanism. The ultrasonic sensors detect the shape and characteristics of the fuel debris, as well as the flow of the retained water under the grating. Since the maximum measurement range of the ultrasonic sensor is limited, the ultrasonic measurement must be repeated at various locations on the grating. Therefore, it is important to accurately localize the position and orientation of the mobile robot so that the partial data can be combined into global data.

In order to achieve accurate localization, we focus on the texture of the grating lattice. The lattice size is standardized at 25 mm × 100 mm, and the lattice texture is orthogonal. Thus, we can regard the grating texture as a coordinate system: if we can count the number of grating lattices passed while traveling, we can localize the robot position with the resolution of the lattice size.
The contributions of this paper are twofold. One is the proposal of a new robotic system to detect fuel debris and water leakages using ultrasonic sensors; the other is a study of the feasibility and accuracy of localizing the robot on the grating using a mono-camera. In particular, we investigate the possibility of generating a global map in bird's eye view, which is intuitive and easy to understand. The global map in bird's eye view will greatly help the operator move the robot during the mission and minimize the operation time needed to reach the specified target position, resulting in efficient investigation within the limited working duration.

This paper is organized as follows. Section 2 introduces related work and clarifies the current problems to be solved. Section 3 proposes a new robotic system and clarifies the research objective of this paper. Section 4 addresses the issue of localization by mono-camera using a visual odometry method in bird's eye view, providing an intuitive global map for the operators. Section 5 describes hardware experiments using a prototype model, and Sect. 6 quantitatively evaluates the localization accuracy. Finally, we conclude this paper and discuss the remaining future work in Sect. 7.

Related Work
In this research, our target plant is Unit 1, where the grating floor remains above the retained water, as illustrated in Fig. 1. For the Unit 1 reactor, fuel debris is likely to have spread outside the pedestal through the access port for workers at the bottom of the PCV. In April 2015, a configuration-changeable robot system called PMORPH1, developed by Hitachi-GE Nuclear Energy, successfully entered the PCV of the Unit 1 reactor through the X-100B penetration, and the dose rates and internal temperatures at different points on the grating were measured [1]. In March 2017, the improved PMORPH2 moved on the grating and used a winch mechanism to sink a sensor unit, consisting of an underwater camera, a dosimeter, and lights, into the retained water through the grating lattice at the target position [2]. Two cameras installed on the front side of the robot formed a stereo camera, which was used to localize the position and orientation of the robot; the landmarks were assigned in advance based on the PCV internal structure obtained from the design drawings [3]. The operator(s) remotely controlled the robot based on the two frontal camera images, which were close to the grating. The acquired survey results revealed that the radiation level near the basement of the PCV was around 10 Gy/h, and that there were fallen objects and sediments on the basement.
Although visual images are essential for remotely operated investigation using robots, the radiation resistance of currently available cameras is not high enough. The maximum allowable accumulated irradiation dose of the equipped camera was 1000 Gy, limiting the maximum working duration to 100 hours. Moreover, the visual images taken by the camera showed that the retained water was quite turbid, and the resulting poor visibility restricted the localization of the fallen objects on the basement. As for the localization of the robot, it was necessary to set landmarks based on the PCV structure in its normal state. However, setting landmarks in advance is not guaranteed to localize the robot with sufficient accuracy because the PCV structure is heavily damaged. Additionally, it was hard to intuitively understand the location of the robot because of the similar, repetitive images of the grating captured at shallow angles between the two frontal cameras' axes and the grating floor.
To solve the problems mentioned above, we propose a new robotic system to detect fuel debris and water leakages using ultrasonic sensors in the following section. We also propose applying a visual odometry method using a mono-camera to generate an intuitive global map.

Proposal Of A Robotic System
We propose a robotic system consisting of a mobile robot, a winch mechanism, a camera, and the ultrasonic sensors shown in Fig. 1. Similar to the PMORPH2, the mobile robot moves on the grating floor and carries the sensor unit, whose height is controlled by the winch mechanism. The robot stops at multiple measurement points and measures the surrounding environment in the retained water under the grating floor. The main differences between the PMORPH2 and our proposal are summarized as follows.

- Utilization of the ultrasonic sensors
- Winch mechanism with two degrees of freedom
- Localization of the robot using a visual odometry method

The following subsections describe each distinct item in detail.

Ultrasonic sensors
An ultrasonic sensor can remotely measure the shape of an object in both air and water and can achieve high radiation resistance. After the earlier nuclear accident at Three Mile Island, ultrasonic measurement was practically applied to inspect the reactor vessel [4]. In fact, we performed a preliminary experiment exposing an ultrasonic sensor to gamma rays from 60Co at a rate of 650 Gy/h, and confirmed that the degradation of the sensor signal was less than 3% at an accumulated radiation dose of approximately 10,000 Gy [5], suggesting ten times higher radiation resistance than that of the camera installed on the PMORPH2.
Additionally, the ultrasonic sensor can measure the shape of an object even in turbid water. Moreover, if we apply a phased-array ultrasonic sensor using the Ultrasonic Velocity Profiler (UVP) method [6], we can acquire two-dimensional velocity profiles of the flow of the turbid water, which makes it possible to detect water leakage. The flow mapping of the water can be utilized to stop the water leakage, contributing to reducing the amount of radioactive water and keeping the radioactive objects submerged. Figure 2 illustrates the measurement of the flow velocity field and the debris measurement. Because the sensor height in the vertical direction is controlled by the winch mechanism, the arrayed sensor is arranged to measure in the horizontal direction to efficiently obtain the 3-dimensional flow velocity field.
We arranged 16-channel arrayed elements in both the horizontal and vertical directions to measure two planar velocity fields (Fig. 3). The developed sensor can pass through the grating lattice by adjusting the orientation of the sensor. We confirmed that the developed sensor could measure two planar velocity fields where water containing tracer particles flowed out of a water tank [7]. Our research group also developed a method to measure the shape of simulated fuel debris [5].
Winch mechanism with two degrees of freedom

Because of the limited measurement range of the ultrasonic sensor, we propose to install a swivel degree of freedom (DoF) around the sensor cable axis in the winch mechanism. This rotation also permits the rectangular ultrasonic sensor to pass through the grating lattice by adjusting its orientation around the sensor cable.
The sensor height in the vertical direction is adjusted by the rotation of the spool of the winch mechanism. Since the spool needs to rotate several turns for the sensor to reach the bottom of the PCV, we installed a slip ring inside the reel, allowing infinite rotation of the reel while keeping the electrical connection. Figure 4 shows a prototype model of the proposed two-DoF winch mechanism. The purpose of developing this prototype model is to confirm the basic functions of our proposal; hence, we did not consider the size limitation of the access port. We experimentally confirmed that inserting the slip ring between the ultrasonic sensor and the pulse receiver did not affect the quality of the measurement.

Localization of the robot using a visual odometry method
In order to obtain the velocity field of the water flow and the distribution of the fuel debris in the PCV, we need to measure at multiple measurement points and integrate the partial data into global data. Therefore, the localization of the robot plays an essential role in the measurement.
In the previous investigation by PMORPH2, the robot position and orientation were obtained based on predetermined landmarks, and it was not intuitive for the operator(s) to understand the robot position and orientation from the images of the two frontal cameras.
To solve these problems, we propose to apply a visual odometry method that can generate a global map in bird's eye view. We focus on the texture of the grating lattices because the lattice size is standardized at 25 mm × 100 mm, and the lattice texture is orthogonal and repetitive. Thus, we can regard the grating texture as a kind of coordinate system and easily localize the robot position and orientation based on the grating lattice coordinate system.

Research objective
In order to prove the usefulness of the proposed robotic system, various aspects must be investigated, such as the measurement accuracy of the ultrasonic sensor, the hardware feasibility considering the access route, and the radiation resistance of each component.
In particular, the rest of this paper focuses on the robot localization problem using a visual odometry method. We apply a conventional visual odometry method and validate the feasibility of localization on the grating floor. We utilize the open-source computer vision library OpenCV, reducing the software development time. After applying several modifications to increase the localization accuracy, we quantitatively evaluate the performance of the algorithm using a prototype model to judge whether it has sufficient accuracy for the actual mission. We also investigate the relationship between the camera setting parameters and the localization accuracy. These fundamental results contribute to developing a practical robotic system and its localization algorithm for the further decommissioning tasks at the Fukushima Daiichi NPPs.

Visual Odometry Method In Bird's Eye View
In this section, we briefly present the outline of the conventional visual odometry method. We then describe the main modifications made to obtain the intended bird's eye view, and the corresponding strategies to improve its performance.

Outline
The general working procedure of conventional visual odometry is as follows.

1. Apply the initial camera calibration with a checkerboard [8] to acquire the necessary camera parameters.
2. Denote two adjacent frames as a single segmentation. Find the feature points and match them within the segmentation. Here we used Oriented FAST and Rotated BRIEF (ORB) features [9].
3. Estimate the relative camera motion within the segmentation based on the positional relationship between the matched points.
4. Integrate the estimated camera motions to localize the camera in the global coordinate system.
5. Perform the coordinate transformation between the camera pose and the robot pose.
6. Generate the map according to the corresponding camera positions.
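Once the camera looks straight down at the grating plane, step 3 reduces to fitting a planar rigid transform (rotation plus translation) to the matched feature points. The following sketch shows one common way to do this, a least-squares (Kabsch-style) fit; this is an illustrative example under that assumption, not the authors' OpenCV-based implementation, and the function name is hypothetical.

```python
import numpy as np

def estimate_planar_motion(pts_prev, pts_next):
    """Estimate the rotation angle (rad) and translation that best map
    pts_prev onto pts_next in the least-squares sense (2D Kabsch)."""
    p = np.asarray(pts_prev, float)
    q = np.asarray(pts_next, float)
    cp, cq = p.mean(axis=0), q.mean(axis=0)
    H = (p - cp).T @ (q - cq)            # 2x2 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, d]) @ U.T   # force a proper rotation
    t = cq - R @ cp
    theta = np.arctan2(R[1, 0], R[0, 0])
    return theta, t
```

In practice the matched ORB keypoints would contain outliers, so such a fit would typically be wrapped in a RANSAC loop; the sketch omits that for brevity.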
In the specific situation where the robot moves on the grating floor, we would like to utilize the regular texture formed by the rectangular grating lattice of 25 mm × 100 mm. Concretely speaking, we perform a camera orientation transformation to acquire the bird's eye view. This transformation is accomplished with a perspective transformation applied right after the camera calibration.

Perspective transformation with the inclined camera
Since the semi-wide-angle camera was mounted at a given height and inclined at an angle to the horizontal plane, the captured grating floor also appeared inclined, as the left part of Fig. 6 shows. To acquire the bird's eye view from the vertical direction, we placed a checkerboard on the grating floor to specify the grating plane. A perspective transformation based on the resulting homography matrix then adjusted the camera view to the vertical direction, as the right part of Fig. 6 shows.
After the perspective transformation, all the remaining steps were conducted under the bird's eye view. The map generation was therefore simplified to a 2-dimensional linear combination of the images.
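The homography at the heart of this transformation can be solved from four or more point correspondences with the direct linear transform (DLT). OpenCV provides ready-made routines for this (`cv2.findHomography`, `cv2.warpPerspective`); the hand-rolled numpy sketch below only illustrates the underlying math and is not the authors' implementation.

```python
import numpy as np

def homography_from_points(src, dst):
    """Solve for the 3x3 homography H mapping src -> dst (>= 4
    correspondences) as the null vector of the DLT linear system."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)      # smallest singular vector
    return H / H[2, 2]            # fix the arbitrary scale

def warp_point(H, pt):
    """Apply homography H to a single 2D point."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])
```

With the checkerboard corners as `src` and their known positions on the grating plane as `dst`, warping every pixel through H yields the bird's eye view.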

Improvement strategies
Considering the specific case of applying visual odometry with a bird's eye view and regular features, we list the improvement strategies below.
- By utilizing the general smooth camera motion model of MonoSLAM [10], replace mistakenly estimated camera motions based on the adjacent motion data.
- Explicitly remove the z-axis component of the estimated motion matrix.
- Count the passed grating lattices to assist the planar localization.
Since the visual odometry method we utilized is highly dependent on features, situations where the available features are scarce will cause unexpected localization errors. We propose a compensation method to assist the calculation of the estimated camera motion for frames without abundant feature points.
Based on the general smooth camera motion model of MonoSLAM, we assumed that the velocity difference between two consecutive segmentations is small. Thus, we classified the estimated velocities into several continuous motion intervals according to the characteristics of the velocity vectors. Within the same interval, we set velocity limits to find unreliable motion estimations, together with the number of matched points. We then averaged the other, reliable motion vectors in the same interval to generate an estimate of the camera motion in the unreliable frame and substituted it as a compensation. This method helped smooth the map generation and increase the correctness of the localization.
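The substitution idea can be sketched in a simplified one-dimensional form: flag any velocity estimate that deviates strongly from its neighbors and replace it with their mean. This is our illustrative reduction of the scheme (the paper works with velocity vectors and also checks matched-point counts); the function name and threshold are assumptions.

```python
import statistics

def smooth_motions(velocities, max_dev=3.0):
    """Replace velocity estimates that deviate from the mean of the
    other samples by more than max_dev standard deviations with that
    mean, under a constant-velocity (smooth motion) assumption."""
    out = list(velocities)
    for i, v in enumerate(out):
        others = out[:i] + out[i + 1:]
        mean = statistics.fmean(others)
        sd = statistics.pstdev(others) or 1e-9  # avoid division issues
        if abs(v - mean) > max_dev * sd:
            out[i] = mean  # unreliable frame: substitute the interval mean
    return out
```

A frame whose estimate jumps far from its interval (e.g. a near-featureless view) is thus smoothed over instead of corrupting the integrated trajectory.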
When estimating the camera motion from the continuous images, we express the result as a motion matrix. Eq. (1) shows the ideal generated motion matrix C_n. We denote the horizontal forward direction as the y-direction, the horizontal transverse direction as the x-direction, and thus the vertical upward direction as the z-direction. The element θ_c,n indicates the instantaneous rotation angle of the camera in the n-th frame, and x_c,n and y_c,n indicate the translations in the x- and y-directions. As Eq. (2) shows, C_n can also be written with a rotation matrix A and a translation vector b. Since the robot mainly moves on the grating floor, we explicitly normalize the matrix to remove the misestimated velocity components in the z-direction from the generated motion matrix. Thus, misestimation due to vibration and noise can be reduced.
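The normalization can be sketched as projecting a full 3D motion estimate onto the grating plane: keep only the yaw angle and the x/y translation, discarding everything in z. The 4×4 input matrix and function name below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def planarize_motion(T):
    """Project an estimated 4x4 homogeneous motion onto the grating
    plane: keep only the yaw rotation theta and the x/y translation,
    dropping z components caused by vibration and noise."""
    yaw = np.arctan2(T[1, 0], T[0, 0])   # rotation about the z-axis
    C = np.eye(3)
    C[:2, :2] = [[np.cos(yaw), -np.sin(yaw)],
                 [np.sin(yaw),  np.cos(yaw)]]
    C[:2, 2] = T[:2, 3]                  # x, y translation only
    return C
```

The returned 3×3 matrix is the planar motion matrix described in the text: a 2D rotation block A and a translation vector b in homogeneous form.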
Due to the accumulation of the estimated camera motions, the calculation error in each segmentation also accumulates. In the simultaneous localization and mapping (SLAM) field, an excellent solution to this problem is loop-closure detection, which requires the robot to judge the similarity between images so that it can adjust the trajectory according to positions it has reached before [11]. However, in our situation, the highly repetitive lattices of the grating floor might prevent loop-closure detection from working as expected. For the specific situation of using grating lattices, we considered that counting the number of passed lattices would help solve this problem. Thanks to the bird's eye view, the grating lattice plane is displayed perpendicular to the view direction. Thus, we used the known size and distinctive shape to identify the passed grating lattices and counted them to estimate the displacement of the robot. This method did help reduce the possible errors as long as the generated map was sufficiently smooth and continuous.
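Because the lattice pitch is known (25 mm and 100 mm along the two axes), the drift-correction idea amounts to snapping an accumulated displacement to the nearest whole number of lattices along each axis. A minimal one-axis sketch, with an assumed function name:

```python
def snap_to_lattice(displacement_mm, pitch_mm):
    """Snap an accumulated visual-odometry displacement to the nearest
    integer number of grating lattices of known pitch, returning the
    lattice count and the drift-corrected displacement."""
    count = round(displacement_mm / pitch_mm)
    return count, count * pitch_mm
```

For example, an odometry estimate of 514 mm along the 100 mm axis is corrected to 5 lattices, i.e. 500 mm, removing the accumulated drift as long as it stays below half a lattice.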

Prototype Model
To evaluate the localization accuracy of the modified visual odometry method, we built a prototype robot for the preliminary experiments.
The prototype, RhinoUS, is driven by four independently controlled wheels. The wheels are waterproof and dustproof thanks to rubber seals, allowing the robot to move smoothly on the grating. Four motors control the velocity of the wheels independently, and we currently use a joystick to control the movement of the robot. Since the prototype robot was not designed for the final on-site investigations, we will not discuss the size limitations or radiation resistance in this section. Still, we will evaluate the possible influence of the camera pose on the localization accuracy in Sect. 6, as a preliminary study of the feasibility of adapting the method to other robot platforms for the on-site investigations.

As the reference measurement, we used a 3D scanner that performs reconstruction with a High Dynamic Range (HDR) camera, realizing a reconstruction accuracy within 2 mm. We considered that this 3D scanner could provide a reliable and accurate measurement of the current robot pose and serve as the reference in the comparison. Concretely speaking, for a specific motion, we used the modified visual odometry method to perform localization and recorded the estimated robot displacement and orientation. We installed four custom-made spheres on the corners of the robot so that they could be recognized in the reconstructed results. We then used the positions of the spheres to calculate the position and orientation of the robot, and compared them with the estimated results from visual odometry.
As the evaluation criterion, we adopted the localization error rate applied by PMORPH2 in the previous investigations: an error within 100 mm over a 1500 mm displacement. Here, 100 mm is the width of a single grating lattice, and 1500 mm is the typical distance between investigation points.
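The criterion scales the measured error to a standard 1500 mm interval, which can be written out explicitly as follows (the function names are ours, for illustration):

```python
def error_rate_per_1500(measured_mm, estimated_mm, interval_mm=1500.0):
    """Scale the localization error to the PMORPH2 criterion interval:
    absolute error expressed in mm per 1500 mm of displacement."""
    err = abs(estimated_mm - measured_mm)
    return err * interval_mm / measured_mm

def meets_requirement(measured_mm, estimated_mm, limit_mm=100.0):
    """True if the scaled error rate stays within the 100 mm limit."""
    return error_rate_per_1500(measured_mm, estimated_mm) <= limit_mm
```

For instance, a 35 mm error over a 1500 mm run gives exactly the 35 mm/1500 mm figure quoted later, while a 60 mm error over only 750 mm scales to 120 mm/1500 mm and fails the criterion.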

Basic experiments in simplified environments
In the first phase, we used a simplified environment, splicing several grating blocks together to form the grating floor. We let the robot perform various types of motion, including translation, rotation, and combinations of the two. Figure 8 shows one of the generated maps with the camera trajectory drawn in purple lines. Table 2 compares the estimated displacement with the measured displacement. The comparison showed that the y-direction error rate was 9.7 mm/1500 mm and the x-direction error rate was 20 mm/1500 mm; we considered that the accuracy met the requirement under this simplified condition. In addition, the grating lattices in the reconstructed map appear quite regular, which makes it possible to count the lattices to estimate the robot position.

Simulated experiments at the JAEA Naraha Center for Remote Control Technology Development
To further investigate the performance of the visual odometry method, we utilized the water tank test facility at the Naraha Center for Remote Control Technology Development, provided by the Japan Atomic Energy Agency (JAEA). Figure 9 shows the structure of the three-floored facility. At its center, a water tank with a diameter of 4690 mm served as the simulated basement body, and the top floor served as the supporting platform for the investigating robot. Across the water tank, we prepared a wooden bridge with a small grating section whose lattice size was the same as that of the Unit 1 reactor.
Concretely speaking, we simulated the investigation route shown as the red line in the right part of Fig. 9. Since there is only a small grating section on the wooden bridge, we took images of grating lattices and printed them to cover the floor, simulating the grating floor.
We repeated the simulated routes three times, and the comparison results are shown in Table 3. Figure 10 shows one of the maps generated by the visual odometry method. From these datasets, we calculated an average error rate of 35 mm/1500 mm and a maximum error rate of 54 mm/1500 mm, which satisfied the required error rate. In addition, we compared reconstructed objects, such as the cross-shaped metal supports under the small grating section of the wooden bridge, with the real ones, and confirmed the correctness of the reconstruction.

Camera parameter ranges according to the required accuracy
Through the simulated experiments, we confirmed that, under the specific experimental conditions, the visual odometry method could provide the required localization accuracy. However, in the on-site investigations, there may be further restrictions from the environmental conditions, such as the size limitations for passing through the access penetration and the limited camera performance in the extreme environment. In order to provide useful information for the further development of the practical model, the following sections focus on the camera pose and program setting parameters, using an experimental method to investigate the ranges over which the error rate stays within 100 mm/1500 mm. Table 4 shows the factors of concern, their default values in the previous experiments, and the adjustment ranges. Since we mounted the camera system on a metal frame, the adjustable ranges of the camera pose were limited. In the predetermined rotation mode, the robot rotated 45 degrees clockwise around its center. We defined the error rate ER_r as the proportion of the orientation difference over the desired orientation, and the displacement of the robot center as the shift error SE_r; Eqs. (5) and (6) show these relationships. In the predetermined rotation motion, (x_d, y_d, θ_d) should be set to (0, 0, 45).
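Based on the verbal definitions above, Eqs. (5) and (6) can be reconstructed as follows; this is our reconstruction from the prose, with illustrative function names, since the displayed equations are not reproduced here.

```python
import math

def rotation_error_rate(theta_est, theta_des=45.0):
    """ER_r: the orientation difference as a proportion of the desired
    rotation angle (reconstruction of Eq. (5))."""
    return abs(theta_est - theta_des) / theta_des

def shift_error(x_est, y_est, x_des=0.0, y_des=0.0):
    """SE_r: displacement of the robot center from its desired position
    during the pure rotation (reconstruction of Eq. (6))."""
    return math.hypot(x_est - x_des, y_est - y_des)
```

With the desired pose (x_d, y_d, θ_d) = (0, 0, 45), an estimated final angle of 48 degrees gives ER_r = 3/45, matching the 3 degree/45 degree threshold used below.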
Thus, we used an error rate within 100 mm/1500 mm for translation motion, and the same proportion, 3 degrees/45 degrees, for rotation motion as the requirement. Keeping the error rate within this range, we adjusted each parameter to acquire its adjustable range. As the results, Fig. 11 and Fig. 12 present the changing tendencies of both the error rates and the shift errors as the parameters change. The vertical axis indicates the error rate in the line chart, and the horizontal axis shows the changing parameter. We also included the shift error values in the figures as a bottom bar chart, as an alternative index of the accuracy. Reflecting on the overall tendency of both the error rates and the shift errors, we concluded that the shift error also decreased when the error rate was low. The resulting parameter ranges are listed in Table 5.
Generally, we regarded that the camera height had little influence on the localization accuracy in translation mode. A similar situation occurred in rotation mode; however, when the height was below 0.3 m, errors arose in both the angle and position estimation, as the error rate and shift error became quite significant. We considered that this was due to the limited view range and illumination at low heights. For the camera inclination angle, in translation mode we noticed relatively significant errors when the angle was below 40 degrees, and the tendency in rotation mode indicated an adjustable range of 40 to 60 degrees. We considered that the view transformation process limits the angle range, so the horizontal camera configuration of PMORPH might not be directly applicable. As for the program settings, the results were similar in that both parameters had a minimum limit for maintaining normal working performance: the frame rate should be more than 24 frames/s, and the minimum number of feature points should be larger than 150.
Conclusion

In this paper, we proposed a new robotic system to detect fuel debris and water leakages using ultrasonic sensors. In addition, we studied the feasibility and localization accuracy of a modified visual odometry method for the autonomous localization of a robot moving on the grating floor in future investigations.
We performed localization accuracy evaluation experiments with a prototype mobile robot in a simulated environment, and measured an average localization error rate of 35 mm/1500 mm and a maximum error rate of 54 mm/1500 mm. These results met the accumulated error rate requirement of 100 mm/1500 mm from PMORPH2.
Compared with the localization method of PMORPH, the visual odometry method could generate more intuitive maps in bird's eye view by using a perspective transformation. The correctness of the reconstructed maps could be confirmed by comparing the reconstructed features with the actual environment. We also studied the ranges of the vital parameters that met the required error rate.
In this article, we did not investigate the influence of size limitations or illumination conditions, which are the main limitations of the current research. As future work, we will perform supplementary research on how to shrink the mechanisms while maintaining the core functions, beyond the current wheeled robots. In addition, to study the possible influence of dark illumination and make the corresponding adjustments to the algorithms, we would like to make use of video datasets of a mockup grating captured under dark illumination conditions, provided by JAEA.

Declarations
Availability of data and materials The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Figure 11
The error rate and shift error with changing camera height and angle.