Development of mobile sensor terminals “Portable Go” for navigation in informationally structured and unstructured environments

Abstract

This paper proposes a navigation system for a personal mobility robot in informationally structured and unstructured environments. First, we introduce a group of robots named Portable Go, which are equipped with laser range finders, gyros, and omni-directional wheels. The Portable Go robots expand the informationally structured environment by deploying into the informationally unstructured environment in advance. Then, a personal mobility robot based on an electric wheelchair is guided by eleven Portable Go robots in the newly created informationally structured environment. Through navigation experiments, we verify that the proposed system smoothly navigates the personal mobility robot from the informationally structured environment into the informationally unstructured environment by using the Portable Go robots.

Introduction

Due to the aging of our population, expectations for service robots, which perform various service tasks and support our daily lives, are steadily increasing. Various types of service robots, such as PR2 (Willow Garage) and HSR (TOYOTA), have been developed to perform daily tasks. These robots are essentially self-contained: each is equipped with a processing unit and a number of sensors, such as a laser range finder, a stereo camera, or tactile sensors. However, service robots are expected to perform complicated tasks covering a wide range of situations, users' demands vary with the situation, and the daily life environment itself is complicated and changes dynamically. A self-contained robot is therefore limited in its ability to provide proper services at all times by its sensing and processing capabilities. Instead of relying on a self-contained robot, another approach using an informationally structured environment (ISE) has been proposed to support robots in providing services. In an ISE, a variety of sensors are embedded beforehand in the surroundings of the service robot, and service tasks are planned and executed using the rich sensory information obtained not only from the robot's on-board sensors but also from the sensors embedded in the environment.

We have been developing a software platform named the ROS-Town Management System (ROS-TMS) [1] for the ISE [2, 3]. ROS-TMS consists of several hierarchical layers that provide functions for controlling robots and sensors: understanding of sensory information, task planning and execution, human interfaces, and a database. All of these functions are implemented as execution nodes based on the ROS architecture. The information gathered by the embedded sensors is registered in the database of ROS-TMS and shared among the service robots performing tasks in the environment. The robots can therefore provide various services efficiently by drawing on this common, rich information.

We have also been developing a hardware platform for the ISE named Big Sensor Box (B-sen), shown in Fig. 1, and have been conducting service experiments with various robots [3]. In B-sen, for example, an optical tracking system consisting of eighteen infrared cameras (Bonita, Vicon), some of which are shown in Fig. 2, is installed on the ceiling, and the positions of objects, robots, and humans are measured with sub-millimeter accuracy. RFID tags are attached to all objects, and RFID tag readers installed in cabinets and refrigerators detect the objects placed in them. B-sen is located on the second floor of an academic building in Japan (Center for Co-Evolutional Social Systems, Kyushu University, Ito Campus). The setup is shown in Fig. 3.

Fig. 1 Big Sensor Box (B-sen) [3]

Fig. 2 Optical tracking system in B-sen

However, when service robots are introduced into our daily life environments in the near future, we cannot expect many sensors to have been installed everywhere beforehand. Instead, most environments where a service robot will provide services will have very few or no embedded sensors. Even at B-sen, once the robot leaves the room, no sensors are embedded in the corridor (Fig. 3), and it is quite costly to install many sensors in such environments afterward. Moreover, in some areas the service robot will seldom operate, so dense embedded sensors are not required.

As an example of a service robot, we investigate a personal mobility robot that moves through various environments automatically. If many sensors are installed beforehand (an ISE), the personal mobility robot can move automatically using only the information obtained by the embedded sensors. If, on the other hand, no sensors are installed in the environment, which we call an informationally unstructured environment (N-ISE), the personal mobility robot has to move using its own on-board sensors [4].

Fig. 3 Informationally structured and unstructured environments (ISE and N-ISE)

In this paper, we propose a group of mobile robots named “Portable Go”, which expands the ISE into the N-ISE by spreading out through an environment and monitoring it with on-board laser range finders. “Portable Go” consists of 11 small mobile robots named Portable Go robots, which are equipped with laser range finders and can move through the N-ISE by themselves using Adaptive Monte Carlo Localization (AMCL) [5]. Thus, “Portable Go” can spread out and manage a new ISE locally and temporarily within a N-ISE. As a case study, we conducted experiments on the autonomous driving of a personal mobility robot, a service robot that a human can ride on (i.e., a wheelchair), through an ISE such as B-sen and a N-ISE such as a corridor.

Note that one may think it is enough for a service robot to be equipped with on-board sensors, so that neither embedded sensors nor an ISE are required. However, we believe the ISE will become a standard, fundamental facility for service robots in the future, and that the N-ISE, where a service robot must perform tasks using only its on-board sensors and processing units, should be informationally structured as much as possible. Once a task space is informationally structured, a variety of robots can be useful in our daily lives even if the sensing performance of each individual robot is inadequate. Moreover, if a N-ISE can be converted to an ISE at low cost, we do not need to install expensive sensors such as laser scanners on individual robots, and even a simple, low-cost robot with few or no sensors can perform intelligent service tasks in the ISE. Otherwise, every service robot must be equipped with expensive sensors, and the total cost will exceed the initial cost of constructing the ISE, especially as the number of robots increases. At the same time, it is quite costly to install many sensors everywhere in an environment afterward, and in some areas the service robot will seldom operate, so dense embedded sensors are not required. It is therefore meaningful to structure the environment informationally in an adaptive and temporary manner, which is exactly the approach proposed here.

The purpose of this research is to develop a group of small sensor robots that acquire the position information of moving objects such as robots and humans in order to realize the autonomous driving of a robot. The positions and velocities of moving objects are fundamental information and can be used in various applications beyond autonomous driving, such as pedestrian flow analysis and people counting. However, to realize higher-level intelligence, for example natural human interaction or decision making in a complex environment, providing position information alone is not enough; more advanced sensing capabilities such as speech recognition, object detection, or behavior estimation will be required.

Related work

Several studies have reported collaboration among heterogeneous multiple robots, not only to perform complex tasks that cannot be completed by a single robot but also to increase efficiency by sharing roles [6,7,8,9].

Dorigo et al. proposed a heterogeneous multi-robot system named Swarmanoid [10]. In this system, mobile robots (foot-bots), arm robots (hand-bots), and flying robots (eye-bots) work collaboratively and cooperatively; for example, several robots working together can perform a complex task such as taking a book from a bookshelf. In particular, the flying robot can attach itself to a ceiling and watch the situation in a room from above, so the other robots can know the situation around them even in an unknown environment. The fundamental assumption is that these robots can localize and navigate by themselves using on-board sensors or by observing the relative positions of other robots. No studies have sufficiently addressed robots without a localization function, or navigation across environments that include both an ISE and a N-ISE.

On the other hand, collaboration between Unmanned Ground Vehicles (UGVs) and Unmanned Aerial Vehicles (UAVs) has been proposed in many studies [11,12,13,14,15,16,17,18,19,20]. Sukhatme et al. [11] proposed a surveillance and navigation system for UGVs guided by a UAV; they used two mobile robots and a helicopter, which carried the mobile robots, landed on the ground, and issued instructions for the mobile robots to chase an intruder. Li et al. [14] proposed a takeoff, navigation, and landing system in which a UAV follows LEDs attached to a UGV using its on-board camera. A system consisting of a wall-climbing robot and UGVs, instead of a UAV, has also been proposed [21]. However, the sharing of roles in multi-robot systems according to the performance of each robot, and navigation across environments including an ISE and a N-ISE, have not been discussed in detail in previous studies.

The multi-robot navigation systems proposed by Parker et al. [22, 23] form the basis of our study. Their system consists of parent and child robots. The parent robot is equipped with a laser sensor or a camera and has relatively high measurement performance, whereas the child robot is equipped with a microphone and has lower performance. Several child robots are guided by the parent robot and deploy a sensor network in an indoor environment. The system proposed in this paper takes the opposite approach: mobile robots with high measurement performance are deployed first and build a sensor network automatically in advance, and a personal mobility robot with lower measurement performance is then guided by that network. Thus, the number of measurement robots building the sensor network is not limited, and the structure of the sensor network can be adapted dynamically to various situations.

Portable Go and personal mobility robot

Portable Go

We designed and built a small omni-directional mobile sensor terminal (Fig. 4) equipped with a laser range finder (UST-20LX, Hokuyo, Table 1), a gyro (myAHRS+, Odroid, Table 2), a board PC (Odroid-XU4, Odroid, Table 3), a lithium polymer battery, a DC–DC converter, a wireless communication system, LEDs, and three omni-directional rollers. We named this mobile sensor terminal the “Portable Go robot”. The lower body of the Portable Go robot consists of three omni-directional rollers (Fig. 5), a base controller (Arduino 328), geared motors, and encoders (Fig. 6). The upper body can be detached and used as a stand-alone sensor terminal (Fig. 7); it can also serve as a controller for various types of mobile robots (Fig. 8) [24], such as a standing-ride personal mobility robot (Fig. 9). Using the laser range finder and the gyro, the Portable Go robot can identify its own position by a scan matching technique, detect obstacles, and measure the positions of pedestrians and other robots such as a personal mobility robot.

In total we built 11 Portable Go robots and named this group of robots “Portable Go” (Fig. 10).

Fig. 4 A mobile sensor terminal (Portable Go robot). The upper body can be detached from the base and used as a stand-alone sensor

Table 1 Specifications (UST-20LX, Hokuyo)
Table 2 Specifications (myAHRS+, Odroid)
Table 3 Specifications (Odroid-XU4, Odroid)
Fig. 5 Base robot with three omni-directional rollers and an Arduino 328 base controller

Fig. 6 Geared motors and encoders

Fig. 7 A stand-alone sensor terminal (Portable) detached from a Portable Go robot. Walking persons in a room are tracked by the laser range finders

Fig. 8 Portable used as a controller for various types of personal mobility robots

Fig. 9 Navigation of a personal mobility robot by Portable

Fig. 10 “Portable Go” consisting of eleven Portable Go robots

Personal mobility robot

We developed a personal mobility robot, shown in Fig. 11. The robot is based on an electric wheelchair that supports the movement of a disabled person.

In B-sen (an ISE), the position of the personal mobility robot is measured by the optical tracking system mentioned above, and both manual control by a joystick and automatic control by ROS-TMS are possible. In addition, by attaching a retroreflector board to the side of the wheels, we can detect and extract the wheels from the range data measured by the laser range finders on the Portable Go robots using the intensity of the reflected laser light.

Note that an omni-directional laser scanner (HDL-32e, Velodyne) and a GPS receiver are installed on top of the personal mobility robot, so position identification in outdoor environments is possible by matching the measured range data against 3D point clouds on the map [25]. However, as discussed in the “Introduction” section, we believe a personal mobility robot should be as low cost as possible, and expensive sensors such as this laser scanner should not be installed on every robot. Instead, such sensors should be provided in the environment, so that the environment itself is informationally structured.

Fig. 11 Personal mobility robot (wheelchair) and wheel detection using a retroreflector board by the laser range finders on Portable Go robots

Navigation system

The structure of the control software is shown in Fig. 12. In an ISE such as B-sen, the position information measured by embedded sensors such as the optical tracking system (Fig. 2) is sent to the personal mobility robot. This optical tracking information is fused with the wheel odometry by a Kalman filter to estimate the position (2 DoF) and orientation (1 DoF). Next, the task planner in ROS-TMS (TMS_RP, Robot Planning module) [3] plans a trajectory along the Voronoi boundaries to reach the desired destination while keeping sufficient clearance from obstacles. Finally, the personal mobility robot controls its motion to follow the desired trajectory.
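The paper describes this fusion only at the block-diagram level, so the following is a minimal sketch, not the authors' implementation, of a linear Kalman filter over the planar pose [x, y, theta], with wheel odometry as the process input and the optical tracker as a direct pose observation; all noise covariances are illustrative assumptions.

```python
# Minimal sketch of odometry/optical-tracker fusion (assumed parameters).
import numpy as np

class PoseKalmanFilter:
    """Fuse wheel odometry with an external full-pose measurement."""

    def __init__(self):
        self.x = np.zeros(3)                    # state: [x, y, theta]
        self.P = np.eye(3)                      # state covariance
        self.Q = np.diag([0.02, 0.02, 0.01])    # odometry (process) noise
        self.R = np.diag([1e-6, 1e-6, 1e-4])    # tracker noise (sub-mm position)

    def predict(self, dx, dy, dtheta):
        # Propagate the pose by an odometry increment (world frame).
        self.x += np.array([dx, dy, dtheta])
        self.P += self.Q

    def update(self, z):
        # Correct with a tracker measurement z = [x, y, theta]; H = I.
        y = z - self.x                                # innovation
        y[2] = (y[2] + np.pi) % (2 * np.pi) - np.pi   # wrap angle residual
        S = self.P + self.R
        K = self.P @ np.linalg.inv(S)
        self.x += K @ y
        self.P = (np.eye(3) - K) @ self.P
```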

In the corridor of the COI building, no sensors are installed beforehand. Therefore, the Portable Go robots first deploy into the N-ISE and build the sensor network. After the deployment, the personal mobility robot starts to move from the ISE (B-sen) into the newly developed ISE. The Portable Go robots find the personal mobility robot and measure its position and orientation with their on-board laser range finders. The position of the personal mobility robot is then calculated by combining the position measured by the Portable Go robots with the position estimated from wheel odometry using a particle filter, as shown in Fig. 12. Each particle contains position (2 DoF) and orientation (1 DoF) information. In the following experiments, the number of particles is 100 and the update frequency is 20 Hz.
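As a rough illustration of this step, the sketch below implements a planar-pose particle filter with the stated 100 particles; the motion and measurement noise values and the Gaussian likelihood model are our assumptions rather than the authors' parameters.

```python
# Illustrative particle filter over [x, y, theta]; 100 particles as in
# the paper, but all noise levels here are assumed values.
import numpy as np

N = 100
particles = np.zeros((N, 3))              # [x, y, theta] per particle
weights = np.full(N, 1.0 / N)

def predict(dx, dy, dtheta, sigma=(0.05, 0.05, 0.02)):
    # Propagate every particle by the odometry increment plus noise.
    particles[:] += np.array([dx, dy, dtheta]) + np.random.randn(N, 3) * sigma

def update(z, sigma=(0.05, 0.05, 0.05)):
    # Reweight by a Gaussian likelihood of the pose measured by the
    # Portable Go robots, then resample systematically.
    global weights
    err = particles - np.asarray(z)
    err[:, 2] = (err[:, 2] + np.pi) % (2 * np.pi) - np.pi
    logw = -0.5 * np.sum((err / np.asarray(sigma)) ** 2, axis=1)
    weights = np.exp(logw - logw.max())
    weights /= weights.sum()
    u = (np.arange(N) + np.random.rand()) / N       # systematic resampling
    particles[:] = particles[np.searchsorted(np.cumsum(weights), u)]
    weights[:] = 1.0 / N

def estimate():
    # Mean pose over the resampled, equally weighted particles
    # (published at 20 Hz in the paper's configuration).
    return particles.mean(axis=0)
```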

At the same time, pedestrians and obstacles are detected in the task space, and a proper trajectory is planned by the ROS Navigation Stack [26]. The Navigation Stack enables the personal mobility robot to avoid collisions with pedestrians and obstacles, even those located in the personal mobility robot's blind spots.

Fig. 12 Software configuration

Navigation experiment

Expansion of ISE in N-ISE using Portable Go

The ISE can be expanded locally and temporarily into the N-ISE by using the Portable Go robots. Figure 13 shows the strategy. First, the Portable Go robots are placed in the ISE (Fig. 13a), where the positions of all robots are identified by the optical tracking system (Fig. 2). Next, the Portable Go robots move to their assigned positions, which are planned off-line, while identifying their positions with the on-board laser range finder and gyro using Adaptive Monte Carlo Localization (AMCL) [5] (Fig. 13b). In addition, we utilize the local planner based on the DWA (dynamic window approach) in the ROS navigation stack to avoid collisions with obstacles such as pedestrians. When the robots reach their target positions, they rotate in place to identify their positions precisely and then stop. Each Portable Go robot then starts to monitor its assigned area with the on-board laser range finder and sends the measured information to ROS-TMS (Fig. 13c). Finally, the personal mobility robot starts to move from the ISE into the new ISE just created by the Portable Go robots (Fig. 13d).
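Since this deployment relies on the standard ROS navigation stack (AMCL and the DWA local planner run underneath move_base), one robot's trip to its assigned position can be sketched with the usual actionlib client, as below; the namespace, frame name, and target pose are hypothetical, not taken from the paper.

```python
# Sketch: send one Portable Go robot to a pre-planned pose via move_base.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal
from tf.transformations import quaternion_from_euler

def deploy(robot_ns, x, y, yaw):
    # move_base runs AMCL and the DWA local planner underneath.
    client = actionlib.SimpleActionClient(robot_ns + '/move_base',
                                          MoveBaseAction)
    client.wait_for_server()
    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = 'map'
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    qx, qy, qz, qw = quaternion_from_euler(0.0, 0.0, yaw)
    goal.target_pose.pose.orientation.x = qx
    goal.target_pose.pose.orientation.y = qy
    goal.target_pose.pose.orientation.z = qz
    goal.target_pose.pose.orientation.w = qw
    client.send_goal(goal)
    client.wait_for_result()   # on arrival the robot rotates in place to
                               # refine its AMCL pose estimate, then stops

if __name__ == '__main__':
    rospy.init_node('portable_go_deployment')
    deploy('/portable_go_01', 5.0, 2.0, 1.57)   # hypothetical target pose
```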

Fig. 13 The strategy for expanding the informationally structured environment (white) into the informationally unstructured environment (gray) using Portable Go robots. a All robots are in the informationally structured environment. b Portable Go robots start moving into the informationally unstructured environment. c Portable Go robots stop and start to monitor their assigned areas; a new informationally structured environment is thus developed. d The personal mobility robot moves in the area monitored by the Portable Go robots

Figure 14 shows the trajectories of the 11 Portable Go robots autonomously driving from B-sen into the corridor, a N-ISE. The deployment of Portable Go took 320 s in this experiment. The time required for deployment is a disadvantage of this system; however, once the Portable Go robots are deployed, the deployment does not need to be repeated until the situation changes.

Figure 15 shows the tracking results of pedestrians using the laser range finders on the Portable Go robots. The position information of pedestrians is sent to the database in ROS-TMS and used for collision avoidance by other mobile robots.

Fig. 14 Deployment of Portable Go robots in an informationally unstructured environment (N-ISE)

Fig. 15 Pedestrian detection and tracking

The problem of finding the optimum target positions of the Portable Go robots is closely related to the “art gallery problem” (AGP) [27,28,29,30,31], which asks for the minimal number of guards, and their locations, required to monitor every point in a gallery. However, our problem differs slightly from the AGP: the number of Portable Go robots is fixed in advance; it is not necessary to observe the whole area, since the personal mobility robot can roughly estimate its current position from odometry; and the Portable Go robots should be placed close to a wall or a pillar so as not to disturb the personal mobility robot and pedestrians. More formally, let \({\mathcal {A}}\) be the set of possible positions at which the Portable Go robots can be placed, and let \({\mathcal {W}}({\mathcal {P}}) \subset {\mathbb {R}}^2\) be the space monitored by at least one Portable Go robot when the robots are placed at positions \({\mathcal {P}} \subset {\mathcal {A}}\). The problem is to find the placement \({\mathcal {P}}^{*}\) that maximizes the monitored area:

$$\begin{aligned} {\mathcal {P}}^{*} = \mathop {\mathrm{argmax}}\limits _{{\mathcal {P}} \subset {\mathcal {A}}} \left| {\mathcal {W}}({\mathcal {P}}) \right| \end{aligned}$$

This problem can be solved approximately with a greedy algorithm [32]. In the following experiments, however, we defined the target positions of the Portable Go robots manually, taking the accuracy of the odometry of the personal mobility robot into account.
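For reference, such a greedy placement can be sketched as follows: repeatedly choose the candidate position that adds the most newly monitored area. The candidate set and the visibility function are hypothetical stand-ins for laser-range coverage; this illustrates the cited strategy, not the authors' code.

```python
# Greedy coverage maximization over a finite candidate set (sketch).
def greedy_placement(candidates, visible_cells, n_robots):
    """candidates: list of candidate positions (e.g. points near walls).
    visible_cells(p): set of map cells monitored from position p.
    Returns up to n_robots positions chosen to maximize total coverage."""
    covered, chosen = set(), []
    remaining = list(candidates)
    for _ in range(n_robots):
        best, best_gain = None, 0
        for p in remaining:
            gain = len(visible_cells(p) - covered)  # newly covered cells
            if gain > best_gain:
                best, best_gain = p, gain
        if best is None:           # no candidate adds any new coverage
            break
        chosen.append(best)
        covered |= visible_cells(best)
        remaining.remove(best)
    return chosen
```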

Navigation of personal mobility robot

We carried out navigation experiments in which the personal mobility robot moved from B-sen into the corridor where the Portable Go robots had been deployed.

First, the user transfers from a bed in B-sen onto the personal mobility robot, which then starts to move toward the door of B-sen. In B-sen (Fig. 13a), the optical tracking system (Fig. 2) measures the position of a retroreflective marker with sub-millimeter accuracy. By attaching several markers to the personal mobility robot (Fig. 11), its position and orientation can be measured at more than 10 Hz. The measured position is fused with the wheel odometry by the Kalman filter, as explained in the “Navigation system” section.

When the personal mobility robot passes through the door of B-sen, the Portable Go robots take over its navigation. The positions of the personal mobility robot and of objects such as pedestrians are measured by the Portable Go robots, and the personal mobility robot is navigated according to the measured data. As mentioned above, the personal mobility robot can be detected in the range data thanks to the retroreflector board attached to the side of the wheels (Fig. 11), as follows. The range points belonging to the personal mobility robot are extracted using the intensity of the reflected laser light. By applying a collinear approximation to these points, the position and orientation of a wheel, and hence of the body, are determined. The position of the personal mobility robot is then calculated by combining the position measured by the Portable Go robots with the wheel odometry using the particle filter, as explained in the “Navigation system” section.
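The sketch below illustrates this detection step under our own assumptions (the intensity threshold and minimum point count are invented): high-intensity returns are isolated, converted to Cartesian points, and fitted with a line whose direction gives the wheel orientation.

```python
# Sketch of retroreflector-based wheel detection from one laser scan.
import numpy as np

def detect_wheel(ranges, angles, intensities, intensity_min=8000.0):
    """ranges, angles, intensities: one scan as parallel NumPy arrays.
    Returns (x, y, heading) of the retroreflector line, or None."""
    mask = intensities > intensity_min          # keep reflective returns only
    if np.count_nonzero(mask) < 5:
        return None
    xs = ranges[mask] * np.cos(angles[mask])
    ys = ranges[mask] * np.sin(angles[mask])
    cx, cy = xs.mean(), ys.mean()               # board (wheel) position
    # Collinear approximation: the principal axis of the point set
    # gives the board's, and hence the wheel's, orientation.
    pts = np.column_stack([xs - cx, ys - cy])
    _, _, vt = np.linalg.svd(pts, full_matrices=False)
    heading = np.arctan2(vt[0, 1], vt[0, 0])
    return cx, cy, heading
```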

Figure 16 shows the trajectory of the personal mobility robot in this experiment. In B-sen, the personal mobility robot is guided by the optical tracking system (Fig. 17\((1)-(3)\)). In the corridors on the first and second floors where the Portable Go robots are deployed, the personal mobility robot moves from B-sen to the front of the elevator (Fig. 17\((4)-(9)\)). Getting on and off the elevator is done manually with the joystick attached to the armrest of the personal mobility robot. Notably, thanks to the guidance of the Portable Go robots, the personal mobility robot can avoid colliding with a walking person without on-board sensors such as laser range finders (Fig. 17\((5)-(7)\)). On the first floor, the personal mobility robot is again guided by Portable Go robots deployed beforehand, in the same way as on the second floor, and moves to the gate of the building (Fig. 17\((10)-(12)\)). To prepare the ISE on the first floor, we carried the Portable Go robots from the second floor by hand, and they deployed starting from the front of the first-floor elevator. Finally, the personal mobility robot exits the gate and goes outdoors (Fig. 17\((13)-(16)\)). In this area, the personal mobility robot moves by itself using the on-board omni-directional laser scanner, but this is outside the scope of this paper. In the experiments, we set several target points along the entire route, such as the door from B-sen to the corridor, the position in front of the elevator, the gate of the building, and the final outdoor goal. When the personal mobility robot reaches each of these points, it automatically switches the navigation system shown in Fig. 12.
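A minimal sketch of such waypoint-triggered switching follows; the waypoint coordinates, source names, and arrival radius are all hypothetical, chosen only to mirror the route described above.

```python
# Sketch: switch localization source when each route waypoint is reached.
import math

# (waypoint, localization source to activate once the waypoint is reached)
ROUTE = [
    ((4.0, 3.0),   'portable_go'),       # door from B-sen to the corridor
    ((20.0, 3.0),  'manual_joystick'),   # front of the elevator (2nd floor)
    ((20.0, -8.0), 'portable_go'),       # front of the elevator (1st floor)
    ((30.0, -8.0), 'onboard_scanner'),   # gate of the building -> outdoors
]

class ModeSwitcher:
    def __init__(self, route, reach_radius=0.5):
        self.route = list(route)
        self.radius = reach_radius
        self.source = 'optical_tracking'  # start inside B-sen

    def update(self, x, y):
        # Advance to the next localization source when the next waypoint
        # on the route is reached; return the currently active source.
        if self.route:
            (wx, wy), nxt = self.route[0]
            if math.hypot(x - wx, y - wy) < self.radius:
                self.source = nxt
                self.route.pop(0)
        return self.source
```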

Fig. 16 Planned routes of the personal mobility robot from the bed in B-sen to the outdoors

Fig. 17 Navigation of the personal mobility robot from B-sen to the outdoor environment

Through these experiments, we verified that, guided by the Portable Go robots, the personal mobility robot leaves B-sen for the corridor, moves to the first floor by elevator, and reaches the gate of the building without colliding with obstacles or stopping unexpectedly.

Next, we conducted an experiment on global path planning using Portable Go. After the Portable Go robots were deployed on the second floor, we placed obstacles (a person and a box) in the corridor where they could not be seen from the initial position of the personal mobility robot. Using the obstacle information obtained by the Portable Go robots, ROS-TMS planned a detour route to the destination that avoided the obstacles and guided the personal mobility robot safely, as shown in Fig. 18.

Fig. 18 Global path planning using Portable Go robots

Conclusions

In this paper, we proposed a multi-robot system named Portable Go, which expands the ISE into the N-ISE. In navigation experiments with a personal mobility robot, we confirmed that the Portable Go robots convert an informationally unstructured environment into an informationally structured one, and that the personal mobility robot can be navigated safely and stably from the inside of a room to an outdoor environment.

Future work includes the optimal placement of the Portable Go robots so as to cover a wide area efficiently and completely. In addition, we will combine the proposed system with conventional surveillance systems already installed in streets and stations, and conduct navigation experiments in crowded everyday situations.

References

  1. Sakamoto J, Kiyoyama K, Matsumoto K, Pyo Y, Kawamura A, Kurazume R (2018) Development of ROS-TMS 5.0 for informationally structured environment. ROBOMECH J 5:24. https://doi.org/10.1186/s40648-018-0123-9

  2. Pyo Y, Nakashima K, Kuwahata S, Kurazume R, Tsuji T, Morooka K, Hasegawa T (2015) Service robot system with an informationally structured environment. Robot Auton Syst 74(Part A):148–165

  3. Kurazume R, Pyo Y, Nakashima K, Tsuji T, Kawamura A (2017) Feasibility study of IoRT platform “Big Sensor Box”. In: Proc. IEEE international conference on robotics and automation (ICRA2017), pp 3664–3671

  4. Thrun S, Burgard W, Fox D (2005) Probabilistic robotics (intelligent robotics and autonomous agents). The MIT Press, Cambridge

  5. ROS: AMCL package. http://wiki.ros.org/amcl. Accessed 1 June 2019

  6. Parker LE, Rus D, Sukhatm G (2016) Chapter 53. Multiple mobile robot systems. Springer, Berlin, pp 1335–1380

  7. Singh K, Fujimura K (1993) Map making by cooperating mobile robots. In: Proc. IEEE international conference on robotics and automation 1993, pp 254–259. https://doi.org/10.1109/ROBOT.1993.292155

  8. Simmons R, Apfelbaum D, Fox D, Goldman RP, Haigh KZ, Musliner DJ, Pelican M, Thrun S (2000) Coordinated deployment of multiple, heterogeneous robots. In: Proc. IEEE/RSJ international conference on intelligent robots and systems 2000, vol 3, pp 2254–2260

  9. Howard A, Parker LE, Sukhatme GS (2006) Experiments with a large heterogeneous mobile robot team: exploration, mapping, deployment and detection. Int J Robot Res 25(5–6):431–447

  10. Dorigo M, Floreano D, Gambardella LM, Mondada F, Nolfi S, Baaboura T, Birattari M, Bonani M, Brambilla M, Brutschy A et al (2013) Swarmanoid: a novel concept for the study of heterogeneous robotic swarms. IEEE Robot Autom Mag 20(4):60–71

  11. Sukhatme GS, Montgomery JF, Vaughan RT (2001) Experiments with cooperative aerial-ground robots. In: Robot teams: from diversity to polymorphism, pp 345–368

  12. Chaimowicz L, Grocholsky B, Keller JF, Kumar V, Taylor CJ (2004) Experiments in multirobot air-ground coordination. In: Proc. IEEE international conference on robotics and automation 2004, vol 4, pp 4053–4058

  13. Grocholsky B, Keller J, Kumar V, Pappas G (2006) Cooperative air and ground surveillance. IEEE Robot Autom Mag 13(3):16–25. https://doi.org/10.1109/MRA.2006.1678135

  14. Li W, Zhang T, Kühnlenz K (2011) A vision-guided autonomous quadrotor in an air-ground multi-robot system. In: Proc. IEEE international conference on robotics and automation 2011, pp 2980–2985

  15. Garzon M, Valente J, Zapata D, Barrientos A (2013) An aerial-ground robotic system for navigation and obstacle mapping in large outdoor areas. Sensors 13(1):1247–1267. https://doi.org/10.3390/s130101247

  16. Pinciroli C, O’Grady R, Christensen AL, Dorigo M (2009) Self-organised recruitment in a heterogeneous swarm. In: Proc. 2009 international conference on advanced robotics, pp 1–8

  17. Stegagno P, Cognetti M, Rosa L, Peliti P, Oriolo G (2013) Relative localization and identification in a heterogeneous multi-robot system. In: Proc. IEEE international conference on robotics and automation 2013, pp 1857–1864. https://doi.org/10.1109/ICRA.2013.6630822

  18. Morbidi F, Ray C, Mariottini GL (2011) Cooperative active target tracking for heterogeneous robots with application to gait monitoring. In: Proc. IEEE/RSJ international conference on intelligent robots and systems 2011, pp 3608–3613 . https://doi.org/10.1109/IROS.2011.6094579

  19. Cognetti M, Oriolo G, Peliti P, Rosa L, Stegagno P (2014) Cooperative control of a heterogeneous multi-robot system based on relative localization. In: 2014 IEEE/RSJ international conference on intelligent robots and systems, pp 350–356. https://doi.org/10.1109/IROS.2014.6942583

  20. Stegagno P, Cognetti M, Oriolo G, Bulthoff HH, Franchi A (2016) Ground and aerial mutual localization using anonymous relative-bearing measurements. IEEE Trans Robot 32(5):1133–1151. https://doi.org/10.1109/TRO.2016.2593454

  21. Feng Y, Zhu Z, Xiao J (2007) Self-localization of a heterogeneous multi-robot team in constrained 3D space. In: Proc. IEEE/RSJ international conference on intelligent robots and systems 2007, pp 1343–1350

  22. Parker LE, Kannan B, Tang F, Bailey M (2004) Tightly-coupled navigation assistance in heterogeneous multi-robot teams. In: Proc. IEEE/RSJ international conference on intelligent robots and systems 2004, vol 1, pp 1016–1022. https://doi.org/10.1109/IROS.2004.1389486

  23. Parker LE, Kannan B, Fu X, Tan Y (2003) Heterogeneous mobile sensor net deployment using robot herding and line-of-sight formations. In: Proc. IEEE/RSJ international conference on intelligent robots and systems 2003, vol 3, pp 2488–2493. https://doi.org/10.1109/IROS.2003.1249243

  24. Yamada H, Hiramatsu T, Masato I, Kawamura A, Kurazume R (2019) Sensor terminal “portable” for intelligent navigation of personal mobility robots in informationally structured environment. In: Proc. 2019 IEEE/SICE international symposium on system integrations (SII)

  25. Oishi S, Jeong Y, Kurazume R, Iwashita Y, Hasegawa T (2013) ND voxel localization using large-scale 3D environmental map and RGB-D camera. In: 2013 IEEE international conference on robotics and biomimetics (ROBIO), pp 538–545

  26. ROS: Navigation package. http://wiki.ros.org/navigation. Accessed 1 June 2019

  27. Aggarwal A (1984) The art gallery theorem: its variations, applications, and algorithmic aspects. PhD thesis, Johns Hopkins University

  28. O’Rourke J (1987) Art gallery theorems and algorithms. Oxford University Press, Oxford

  29. Krause A, Singh A, Guestrin C (2008) Near-optimal sensor placements in Gaussian processes: theory, efficient algorithms and empirical studies. J Mach Learn Res 9(Feb):235–284

  30. González-Banos H (2001) A randomized art-gallery algorithm for sensor placement. In: Proceedings of the seventeenth annual symposium on computational geometry. SCG ’01. ACM, New York, pp 232–240. https://doi.org/10.1145/378583.378674

  31. Erickson L, LaValle S (2012) An art gallery approach to ensuring that landmarks are distinguishable, vol 7. MIT Press Journals, Cambridge, pp 81–88

  32. Kurazume R, Oshima S, Nagakura S, Jeong Y, Iwashita Y (2017) Automatic large-scale three dimensional modeling using cooperative multiple robots. Comput Vis Image Underst 157:25–42. https://doi.org/10.1016/j.cviu.2016.05.008

Acknowledgements

This research was supported by JSPS KAKENHI Grant Number JP26249029 and by the Japan Science and Technology Agency (JST) through its “Center of Innovation Program (COI Program) JPMJCE1318”.

Funding

JSPS KAKENHI Grant Number JP26249029 and the Japan Science and Technology Agency (JST) “Center of Innovation Program (COI Program) JPMJCE1318”.

Author information

Contributions

YW, AS, and KM developed the system and carried out the experiments. AK managed the study. RK constructed the study concept and drafted the manuscript. All members verified the content of their contributions. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Ryo Kurazume.

Ethics declarations

Availability of data and materials

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Watanabe, Y., Shigekane, A., Matsumoto, K. et al. Development of mobile sensor terminals “Portable Go” for navigation in informationally structured and unstructured environments. Robomech J 6, 6 (2019). https://doi.org/10.1186/s40648-019-0134-1
