
  • Research Article
  • Open Access

Development of mobile sensor terminals “Portable Go” for navigation in informationally structured and unstructured environments

ROBOMECH Journal 2019 6:6

https://doi.org/10.1186/s40648-019-0134-1

  • Received: 8 November 2018
  • Accepted: 28 May 2019

Abstract

This paper proposes a navigation system for a personal mobility robot in informationally structured and unstructured environments. First, we introduce a group of robots named Portable Go, which are equipped with laser range finders, gyros, and omni-directional wheels. The Portable Go robots expand the informationally structured environment by deploying into the informationally unstructured environment in advance. A personal mobility robot based on an electric wheelchair is then guided by eleven Portable Go robots in the informationally structured environment newly created by them. Through navigation experiments, we verify that the proposed system smoothly navigates the personal mobility robot from the informationally structured environment into the informationally unstructured environment by using the Portable Go robots.

Keywords

  • Informationally structured environment
  • Service robot
  • Intelligent space
  • Personal mobility robot
  • Multiple robots

Introduction

As our population ages, expectations for service robots, which perform various service tasks and support our daily lives, keep increasing. Various types of service robots, such as PR2 (Willow Garage) and HSR (TOYOTA), have been developed to perform daily tasks. Basically, these robots are self-contained: they are equipped with a processing unit and a number of sensors, such as a laser range finder, a stereo camera, or a tactile sensor. However, service robots are expected to perform service tasks that are complicated and cover a wide range, users have various demands depending on the situation, and the daily life environment itself is complicated and changes dynamically. A self-contained robot is therefore limited by its sensing and processing capabilities in how well it can provide proper services at all times. Instead of relying on a self-contained robot, another approach, the informationally structured environment (ISE), has been proposed to support a robot providing services. In an ISE, a variety of sensors are embedded beforehand in the surroundings of the service robot, and service tasks are planned and executed using the rich sensory information obtained not only from the robot's on-board sensors but also from the sensors embedded in the environment.

We have been developing a software platform named the ROS-Town Management System (ROS-TMS) [1] for the ISE [2, 3]. ROS-TMS consists of several hierarchical layers that provide the functions of the control system for robots and sensors: understanding of sensory information, task planning and execution, human interfaces, and a database. All these functions are implemented as execution nodes based on the ROS architecture. The information gathered by the embedded sensors is registered in the ROS-TMS database and shared among the service robots performing tasks in the environment. The robots can therefore provide various services quite efficiently by using the common, rich information in the database.
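The database-centered design can be illustrated with a minimal sketch. The class and method names below (`EnvironmentDatabase`, `register`, `query`) are our own illustration, not the ROS-TMS API: embedded sensor nodes write the latest object states into a shared store, and any robot reads from it before planning a task.

```python
import time

class EnvironmentDatabase:
    """Minimal stand-in for the shared database in an ISE:
    embedded sensors write object states, robots read them."""

    def __init__(self):
        self._objects = {}

    def register(self, object_id, position, source):
        # Each entry keeps the latest position, its source sensor,
        # and a timestamp so stale data can be detected later.
        self._objects[object_id] = {
            "position": position,
            "source": source,
            "stamp": time.time(),
        }

    def query(self, object_id):
        return self._objects.get(object_id)

# An embedded sensor registers a cup's position; a service robot
# with no sensing of its own can still locate the cup.
db = EnvironmentDatabase()
db.register("cup_01", (1.20, 0.45, 0.80), source="optical_tracker")
entry = db.query("cup_01")
print(entry["position"])  # (1.2, 0.45, 0.8)
```

In ROS-TMS the same idea is realized as ROS nodes around a persistent database, so every robot sees one consistent world model.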

We also have been developing a hardware platform for the ISE named Big Sensor Box (B-sen), shown in Fig. 1, and have been conducting service experiments with various robots [3]. In B-sen, for example, an optical tracking system consisting of eighteen infrared cameras (Bonita, Vicon), some of which are shown in Fig. 2, is installed on the ceiling, and the positions of objects, robots, and humans are measured with an accuracy better than 1 mm. RFID tags are attached to all the objects, and RFID tag readers are installed in cabinets and refrigerators to detect the objects placed in them. B-sen is located on the second floor of an academic building in Japan (Center for Co-Evolutional Social Systems, Kyushu University, Ito Campus). The setup is shown in Fig. 3.
Fig. 1

Big Sensor Box (B-Sen) [3]

Fig. 2

Optical tracking system in the B-sen

However, in the near future, when service robots are introduced into our daily life environments, we cannot expect many sensors to be installed everywhere beforehand. Instead, most environments in which a service robot will provide services will have very few or no embedded sensors. Even in B-sen, once the robot leaves the room, no sensors are embedded in the corridor (Fig. 3), and it is quite costly to install many sensors in such environments afterward. In addition, in some areas the service robot will seldom operate, so dense embedded sensors are not required.

As an example of a service robot, we investigate a personal mobility robot that moves through various environments automatically. If many sensors are installed beforehand (an ISE), the personal mobility robot moves automatically by using only the information obtained by the embedded sensors. On the other hand, if no sensors are installed in the environment, which we call an informationally unstructured environment (N-ISE), the personal mobility robot has to move by using its own on-board sensors [4].
Fig. 3

Informationally structured and unstructured environments (ISE and N-ISE)

In this paper, we propose a group of mobile robots named “Portable Go”, which expands the ISE into the N-ISE by spreading out and monitoring the environment with on-board laser range finders. “Portable Go” consists of 11 small mobile robots, named Portable Go robots, which are equipped with laser range finders and are able to move through the N-ISE by themselves using Adaptive Monte Carlo Localization (AMCL) [5]. Thus, “Portable Go” can spread out and manage a new ISE locally and temporarily within a N-ISE. As a case study, we conducted autonomous driving experiments with a personal mobility robot, a service robot that a human can ride on (i.e., a wheelchair), through an ISE (B-sen) and a N-ISE (a corridor).

One may think it is enough for a service robot to be equipped with on-board sensors, so that neither embedded sensors nor an ISE are required. However, we believe the ISE will become a standard and fundamental facility for service robots, and that the N-ISE, where a service robot must perform tasks with only its on-board sensors and processing units, should be informationally structured as much as possible. Once the task space is informationally structured, a variety of robots can be useful in our daily lives even if the sensing performance of each individual robot is inadequate. In addition, if a N-ISE can be converted into an ISE at low cost, there is no need to install expensive sensors such as laser scanners on individual robots, and even a simple, low-cost robot with no or very limited sensing can perform intelligent service tasks in the ISE. Otherwise, every service robot must be equipped with expensive sensors, and the total cost will exceed the initial cost of constructing the ISE, especially as the number of robots increases. On the other hand, it is quite costly to install many sensors everywhere in an environment afterward, and in some areas the service robot will seldom operate, so dense embedded sensors are not required. It is therefore meaningful to structure the environment informationally in an adaptive and temporary manner, which is the approach proposed in this paper.

The purpose of this research is to develop a group of small sensor robots that acquire the position information of moving objects, such as robots and humans, in order to realize the autonomous driving of a robot. The position and velocity of moving objects are fundamental information and can be used in various applications, not only autonomous driving but also people-flow and people-counting analysis. However, to realize higher-level intelligence, for example natural human interaction or decision making in a complex environment, position information alone is not enough, and more advanced sensing such as speech recognition, object detection, or behavior estimation will be required.

Related work

Several studies have reported collaboration by heterogeneous multiple robots, not only to perform complex tasks that cannot be completed by a single robot but also to increase efficiency by sharing roles [6–9].

Dorigo et al. proposed a heterogeneous multi-robot system named Swarmanoid [10]. In this system, mobile robots (foot-bots), arm robots (hand-bots), and flying robots (eye-bots) work collaboratively and cooperatively; for example, multiple robots together can perform complex tasks such as taking a book from a bookshelf. In particular, the flying robot is able to attach to a ceiling and watch the situation in a room from above, so the other robots can know the situation around them even in an unknown environment. The fundamental assumption is that these robots can localize and navigate by themselves using on-board sensors or by observing the relative positions of other robots. No studies have sufficiently addressed the case of robots without a localization function, or navigation across heterogeneous environments including an ISE and a N-ISE.

On the other hand, collaboration of Unmanned Ground Vehicles (UGVs) and Unmanned Aerial Vehicles (UAVs) has been proposed in many studies [11–20]. Sukhatme et al. [11] proposed a surveillance and navigation system for UGVs using a UAV: a helicopter carried two mobile robots, landed on the ground, and issued instructions for the mobile robots to chase an intruder. Li et al. [14] proposed a takeoff, navigation, and landing system for a UAV that follows LEDs attached to a UGV using the UAV's on-board camera. Instead of a UAV, a system consisting of a wall-climbing robot and UGVs has also been proposed [21]. However, the sharing of roles in a multi-robot system according to the performance of each robot, and navigation in heterogeneous environments including an ISE and a N-ISE, have not been discussed in detail in these studies.

The multi-robot navigation assistance system proposed by Parker et al. [22, 23] is the closest to our study. Their system consists of parent and child robots. The parent robot is equipped with a laser sensor or a camera and has relatively high measurement performance, whereas the child robots are equipped only with microphones and have lower performance. Several child robots are guided by the parent robot and deploy a sensor network in an indoor environment. The system proposed in this paper adopts a different approach: mobile robots with high measurement performance are deployed first and develop a sensor network automatically, and a personal mobility robot with lower measurement performance is then guided by that network. As a result, the number of measurement robots forming the sensor network is not limited, and the structure of the network can be adapted dynamically to various situations.

Portable Go and personal mobility robot

Portable Go

We designed and built a small omni-directional mobile sensor terminal (Fig. 4) equipped with a laser range finder (UST-20LX, Hokuyo, Table 1), a gyro (myAHRS+, Odroid, Table 2), a board PC (Odroid-XU4, Odroid, Table 3), a lithium polymer battery, a DC–DC converter, a wireless communication system, LEDs, and three omni-directional rollers. We named this mobile sensor terminal the “Portable Go robot”. The lower body of the Portable Go robot consists of three omni-directional rollers (Fig. 5), a base controller (Arduino 328), geared motors, and encoders (Fig. 6). The upper body of the Portable Go robot can be detached and used as a stand-alone sensor terminal (Fig. 7). It can also be used as a controller of various types of mobile robots (Fig. 8) [24], such as a standing-ride-type personal mobility robot (Fig. 9). Using the laser range finder and the gyro, the Portable Go robot can identify its own position by a scan matching technique, detect obstacles, and measure the positions of pedestrians and other robots such as a personal mobility robot.
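The scan-matching idea behind the self-localization can be sketched as follows. This is only an illustration of the principle, not the implementation used on the Portable Go robots (real matchers, including the one inside AMCL, also estimate rotation and work probabilistically): we search over candidate sensor translations and keep the one that best aligns the observed scan with the map points.

```python
import numpy as np

def match_translation(map_pts, scan_pts, search=1.0, step=0.1):
    """Brute-force 2D scan matching over translation only:
    return the (dx, dy) that best aligns scan_pts with map_pts."""
    best, best_cost = (0.0, 0.0), float("inf")
    for dx in np.arange(-search, search + 1e-9, step):
        for dy in np.arange(-search, search + 1e-9, step):
            shifted = scan_pts + np.array([dx, dy])
            # Cost: for each shifted scan point, the distance to the
            # nearest map point, summed over the whole scan.
            d = np.linalg.norm(shifted[:, None, :] - map_pts[None, :, :], axis=2)
            cost = d.min(axis=1).sum()
            if cost < best_cost:
                best, best_cost = (float(dx), float(dy)), cost
    return best

# Map: an L-shaped wall corner. The scan observes the same corner
# from a sensor displaced by an (unknown) offset of (0.3, -0.2).
wall = np.array([[x, 0.0] for x in np.arange(0.0, 2.0, 0.2)]
                + [[0.0, y] for y in np.arange(0.2, 2.0, 0.2)])
sensor_offset = np.array([0.3, -0.2])
scan = wall - sensor_offset          # scan in the sensor frame
dx, dy = match_translation(wall, scan)
print(dx, dy)  # approximately 0.3 -0.2: the sensor offset is recovered
```

Grid search scales poorly, which is why practical systems use iterative or sampling-based matchers; the sketch only shows what "matching" optimizes.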

In total we built 11 Portable Go robots and named this group of robots “Portable Go” (Fig. 10).
Fig. 4

A mobile sensor terminal (Portable Go robot). The upper body can be detached from the base and used as a stand-alone sensor

Table 1

Specifications (UST-20LX, Hokuyo)

Voltage

12/24 VDC

Current

< 150 mA

Light source

905 nm class 1

Accuracy

± 40 mm

Scan angle

270°

Scan speed

25 ms

Angular resolution

0.25°

Table 2

Specifications (myAHRS+, Odroid)

Triple axis 16-bit gyroscope

± 2000 dps

Triple axis 16-bit accelerometer

± 16 g

Triple axis 13-bit magnetometer

± 1200 μT

Size

21 × 27 mm

Table 3

Specifications (Odroid-XU4, Odroid)

Processor

2.0 GHz ARM Cortex-A15 and Cortex-A7, 8 cores

Memory

2 GB

Size

83 × 58 × 22 mm

Fig. 5

Base robot with three omni-directional rollers and Arduino 328 base controller

Fig. 6

Geared motors and encoders

Fig. 7

A stand-alone sensor terminal (Portable) detached from a Portable Go robot. Walking persons in a room are tracked by the laser range finders

Fig. 8

Portable can be used as a controller for various types of personal mobility robots

Fig. 9

Navigation of a personal mobility robot by Portable

Fig. 10

“Portable Go” consisting of eleven Portable Go robots

Personal mobility robot

We developed a personal mobility robot, shown in Fig. 11. The robot is based on an electric wheelchair that supports the movement of a disabled person.

In B-sen (an ISE), the position of the personal mobility robot is measured by the optical tracking system mentioned above, and both manual control by a joystick and automatic control by ROS-TMS are possible. In addition, by attaching a retroreflector board to the side of the wheels, we can detect and extract the wheels from the range data measured by the laser range finders on the Portable Go robots, using the intensity of the reflected laser power.
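The intensity-based wheel detection can be sketched in a few lines. The function name, threshold, and synthetic scan below are illustrative assumptions, not values from the paper: the retroreflector board returns far more laser power than ordinary surfaces, so thresholding the intensity channel isolates the wheel, and the surviving range/angle points are converted to Cartesian coordinates and averaged.

```python
import numpy as np

def detect_wheel(angles, ranges, intensities, threshold=0.8):
    """Extract the wheel of the mobility robot from one laser scan
    by keeping only high-intensity (retroreflector) returns and
    averaging the surviving points. Threshold is illustrative."""
    mask = intensities > threshold
    if not mask.any():
        return None               # no retroreflector visible
    x = ranges[mask] * np.cos(angles[mask])
    y = ranges[mask] * np.sin(angles[mask])
    return float(x.mean()), float(y.mean())

# Synthetic scan: mostly dim wall returns, plus three bright
# retroreflector returns around 30 degrees at a range of 2 m.
angles = np.deg2rad(np.array([0.0, 10.0, 20.0, 29.0, 30.0, 31.0, 40.0]))
ranges = np.array([3.0, 3.1, 3.2, 2.0, 2.0, 2.0, 3.0])
intens = np.array([0.1, 0.1, 0.2, 0.9, 0.95, 0.9, 0.15])
wheel = detect_wheel(angles, ranges, intens)
print(wheel)  # approximately (1.73, 1.00), i.e. 2 m away at 30 degrees
```

A real detector would additionally cluster the bright returns per wheel and reject isolated outliers, but the principle is the same.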

Note that an omni-directional laser scanner (HDL-32e, Velodyne) and a GPS receiver are installed on top of the personal mobility robot, and position identification in an outdoor environment is possible by comparing 3D point clouds on the map with the measured range data [25]. However, as mentioned in the “Introduction” section, we think a personal mobility robot should be as low cost as possible, and expensive sensors such as this laser scanner should not be installed on each robot. Instead, such sensors should be provided in the environment so that the environment is informationally structured.
Fig. 11

Personal mobility robot (wheelchair) and wheel detection using retroreflector board by laser range finder on Portable Go robots

Navigation system

The structure of the control software is shown in Fig. 12. In an ISE such as B-sen, the position information measured by embedded sensors such as the optical tracking system (Fig. 2) is sent to the personal mobility robot. The optical tracking information is fused with the wheel odometry by a Kalman filter, and the position (2 DoF) and orientation (1 DoF) are estimated. Next, the task planner in ROS-TMS (TMS_RP, the Robot Planning module) [3] plans a trajectory along the Voronoi boundaries to reach the desired destination while keeping enough distance from obstacles. Finally, the personal mobility robot controls its motion to follow the desired trajectory.
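The fusion step can be illustrated with a minimal scalar Kalman filter for one axis. The noise values and class name are our own illustrative assumptions, not parameters from the paper: wheel odometry drives the prediction, and the optical tracker provides the correction.

```python
import numpy as np

class PositionKF:
    """Scalar Kalman filter sketch for one position axis:
    predict from odometry, correct with the optical tracker.
    Process noise q and measurement noise r are illustrative."""
    def __init__(self, x0, p0=1.0, q=0.01, r=0.001):
        self.x, self.p, self.q, self.r = x0, p0, q, r

    def predict(self, odom_delta):
        self.x += odom_delta      # motion model: integrate odometry
        self.p += self.q          # uncertainty grows while moving

    def update(self, z):
        k = self.p / (self.p + self.r)   # Kalman gain
        self.x += k * (z - self.x)       # blend in tracker measurement
        self.p *= (1.0 - k)

kf = PositionKF(x0=0.0)
rng = np.random.default_rng(0)
true_x = 0.0
for _ in range(50):
    true_x += 0.05                            # robot moves 5 cm per step
    kf.predict(0.05 + rng.normal(0, 0.01))    # noisy wheel odometry
    kf.update(true_x + rng.normal(0, 0.005))  # accurate optical tracker
print(round(kf.x, 2), round(true_x, 2))       # estimate tracks the motion
```

The actual filter estimates the 2-DoF position and 1-DoF orientation jointly; the 1-D version only shows how the two information sources are weighted by their uncertainties.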

In the corridor of the COI building, no sensors are installed beforehand. Therefore, the Portable Go robots first deploy in the N-ISE and develop the sensor network. After the deployment, the personal mobility robot starts to move from the ISE (B-sen) and enters the newly developed ISE. The Portable Go robots find the personal mobility robot and measure its position and orientation with their on-board laser range finders. The position of the personal mobility robot is then calculated by combining the position measured by the Portable Go robots with the position measured by wheel odometry, using a particle filter as shown in Fig. 12. Each particle contains position (2 DoF) and orientation (1 DoF) information. In the following experiments, the number of particles is 100 and the update frequency is 20 Hz.
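One predict/update/resample cycle of such a filter can be sketched as follows. The noise magnitudes are our own illustrative assumptions (the orientation component is carried in each particle but, for brevity, not updated here): odometry moves the particles, the position measured by the Portable Go laser range finders weights them, and resampling concentrates the particle set.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100  # same particle count as in the experiments

# Each particle holds (x, y, theta); start spread around the origin.
particles = rng.normal([0.0, 0.0, 0.0], [0.1, 0.1, 0.05], size=(N, 3))
weights = np.full(N, 1.0 / N)

def step(particles, weights, odom_dxy, z_xy, meas_std=0.05):
    """One filter cycle (the paper's filter runs such cycles at 20 Hz).
    odom_dxy: odometry displacement; z_xy: position measured by the
    Portable Go robots' laser range finders."""
    # Predict: apply odometry plus motion noise.
    particles[:, :2] += odom_dxy + rng.normal(0, 0.02, size=(len(particles), 2))
    # Update: Gaussian likelihood of the external measurement.
    d2 = ((particles[:, :2] - z_xy) ** 2).sum(axis=1)
    weights = weights * np.exp(-0.5 * d2 / meas_std ** 2)
    weights /= weights.sum()
    # Resample in proportion to the weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

true_xy = np.array([0.0, 0.0])
for _ in range(20):
    true_xy = true_xy + [0.05, 0.0]           # robot drives forward
    particles, weights = step(particles, weights, [0.05, 0.0],
                              true_xy + rng.normal(0, 0.03, 2))
estimate = particles[:, :2].mean(axis=0)
print(np.round(estimate, 2))                  # close to the true (1.0, 0.0)
```

Combining both sources this way keeps the estimate consistent even when the mobility robot is briefly occluded from a Portable Go robot, since odometry alone carries the prediction.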

At the same time, pedestrians and obstacles are detected in the task space, and a proper trajectory is planned by the ROS Navigation Stack [26]. The Navigation Stack enables the personal mobility robot to avoid collisions with pedestrians and obstacles, even those located in its blind spots.
Fig. 12

Software configuration

Navigation experiment

Conclusions

In this paper, we proposed a multi-robot system named Portable Go, which expands the ISE into the N-ISE. Navigation experiments with the personal mobility robot were carried out, and we confirmed that, by using the Portable Go robots, an informationally unstructured environment can be converted into an informationally structured environment, and the personal mobility robot can be navigated safely and stably from the inside of a room to an outdoor environment.

Future work includes optimal placement of the Portable Go robots to cover a wide area efficiently and completely. In addition, we will combine the proposed system with conventional surveillance systems currently installed in streets and stations, and then conduct navigation experiments in crowded daily environments.

Declarations

Acknowledgements

This research was supported by JSPS KAKENHI Grant Number JP26249029 and by the Japan Science and Technology Agency (JST) through its Center of Innovation Program (COI Program), JPMJCE1318.

Funding

JSPS KAKENHI Grant Number JP26249029; the Japan Science and Technology Agency (JST) Center of Innovation Program (COI Program) JPMJCE1318.

Authors' contributions

YW, AS, and KM developed the system and carried out the experiments. AK managed the study. RK constructed the study concept and drafted the manuscript. All members verified the content of their contributions. All authors read and approved the final manuscript.

Availability of data and materials

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
Kyushu University, 744 Motooka, Nishi-ku, Fukuoka-shi 819-0395, Fukuoka, Japan

References

  1. Sakamoto J, Kiyoyama K, Matsumoto K, Pyo Y, Kawamura A, Kurazume R (2018) Development of ROS-TMS 5.0 for informationally structured environment. ROBOMECH J 5:24. https://doi.org/10.1186/s40648-018-0123-9
  2. Pyo Y, Nakashima K, Kuwahata S, Kurazume R, Tsuji T, Morooka K, Hasegawa T (2015) Service robot system with an informationally structured environment. Robot Auton Syst 74(Part A):148–165
  3. Kurazume R, Pyo Y, Nakashima K, Tsuji T, Kawamura A (2017) Feasibility study of IoRT platform “Big Sensor Box”. In: Proc. IEEE international conference on robotics and automation (ICRA2017), pp 3664–3671
  4. Thrun S, Burgard W, Fox D (2005) Probabilistic robotics (intelligent robotics and autonomous agents). The MIT Press, Cambridge
  5. ROS: AMCL package. http://wiki.ros.org/amcl. Accessed 1 June 2019
  6. Parker LE, Rus D, Sukhatme GS (2016) Chapter 53. Multiple mobile robot systems. Springer, Berlin, pp 1335–1380
  7. Singh K, Fujimura K (1993) Map making by cooperating mobile robots. In: Proc. IEEE international conference on robotics and automation 1993, pp 254–259. https://doi.org/10.1109/ROBOT.1993.292155
  8. Simmons R, Apfelbaum D, Fox D, Goldman RP, Haigh KZ, Musliner DJ, Pelican M, Thrun S (2000) Coordinated deployment of multiple, heterogeneous robots. In: Proc. IEEE/RSJ international conference on intelligent robots and systems 2000, vol 3, pp 2254–2260
  9. Howard A, Parker LE, Sukhatme GS (2006) Experiments with a large heterogeneous mobile robot team: exploration, mapping, deployment and detection. Int J Robot Res 25(5–6):431–447
  10. Dorigo M, Floreano D, Gambardella LM, Mondada F, Nolfi S, Baaboura T, Birattari M, Bonani M, Brambilla M, Brutschy A et al (2013) Swarmanoid: a novel concept for the study of heterogeneous robotic swarms. IEEE Robot Autom Mag 20(4):60–71
  11. Sukhatme GS, Montgomery JF, Vaughan RT (2001) Experiments with cooperative aerial-ground robots. Robot Teams Divers Polymorph, pp 345–368
  12. Chaimowicz L, Grocholsky B, Keller JF, Kumar V, Taylor CJ (2004) Experiments in multirobot air-ground coordination. In: Proc. IEEE international conference on robotics and automation 2004, vol 4, pp 4053–4058
  13. Grocholsky B, Keller J, Kumar V, Pappas G (2006) Cooperative air and ground surveillance. IEEE Robot Autom Mag 13(3):16–25. https://doi.org/10.1109/MRA.2006.1678135
  14. Li W, Zhang T, Kühnlenz K (2011) A vision-guided autonomous quadrotor in an air-ground multi-robot system. In: Proc. IEEE international conference on robotics and automation 2011, pp 2980–2985
  15. Garzon M, Valente J, Zapata D, Barrientos A (2013) An aerial-ground robotic system for navigation and obstacle mapping in large outdoor areas. Sensors 13(1):1247–1267. https://doi.org/10.3390/s130101247
  16. Pinciroli C, O'Grady R, Christensen AL, Dorigo M (2009) Self-organised recruitment in a heterogeneous swarm. In: Proc. 2009 international conference on advanced robotics, pp 1–8
  17. Stegagno P, Cognetti M, Rosa L, Peliti P, Oriolo G (2013) Relative localization and identification in a heterogeneous multi-robot system. In: Proc. IEEE international conference on robotics and automation 2013, pp 1857–1864. https://doi.org/10.1109/ICRA.2013.6630822
  18. Morbidi F, Ray C, Mariottini GL (2011) Cooperative active target tracking for heterogeneous robots with application to gait monitoring. In: Proc. IEEE/RSJ international conference on intelligent robots and systems 2011, pp 3608–3613. https://doi.org/10.1109/IROS.2011.6094579
  19. Cognetti M, Oriolo G, Peliti P, Rosa L, Stegagno P (2014) Cooperative control of a heterogeneous multi-robot system based on relative localization. In: 2014 IEEE/RSJ international conference on intelligent robots and systems, pp 350–356. https://doi.org/10.1109/IROS.2014.6942583
  20. Stegagno P, Cognetti M, Oriolo G, Bülthoff HH, Franchi A (2016) Ground and aerial mutual localization using anonymous relative-bearing measurements. IEEE Trans Robot 32(5):1133–1151. https://doi.org/10.1109/TRO.2016.2593454
  21. Feng Y, Zhu Z, Xiao J (2007) Self-localization of a heterogeneous multi-robot team in constrained 3D space. In: Proc. IEEE/RSJ international conference on intelligent robots and systems 2007, pp 1343–1350
  22. Parker LE, Kannan B, Tang F, Bailey M (2004) Tightly-coupled navigation assistance in heterogeneous multi-robot teams. In: Proc. IEEE/RSJ international conference on intelligent robots and systems 2004, vol 1, pp 1016–1022. https://doi.org/10.1109/IROS.2004.1389486
  23. Parker LE, Kannan B, Fu X, Tan Y (2003) Heterogeneous mobile sensor net deployment using robot herding and line-of-sight formations. In: Proc. IEEE/RSJ international conference on intelligent robots and systems 2003, vol 3, pp 2488–2493. https://doi.org/10.1109/IROS.2003.1249243
  24. Yamada H, Hiramatsu T, Masato I, Kawamura A, Kurazume R (2019) Sensor terminal “Portable” for intelligent navigation of personal mobility robots in informationally structured environment. In: Proc. 2019 IEEE/SICE international symposium on system integration (SII)
  25. Oishi S, Jeong Y, Kurazume R, Iwashita Y, Hasegawa T (2013) ND voxel localization using large-scale 3D environmental map and RGB-D camera. In: 2013 IEEE international conference on robotics and biomimetics (ROBIO), pp 538–545
  26. ROS: Navigation package. http://wiki.ros.org/navigation. Accessed 1 June 2019
  27. Aggarwal A (1984) The art gallery theorem: its variations, applications, and algorithmic aspects. PhD thesis, Johns Hopkins University
  28. O'Rourke J (1987) Art gallery theorems and algorithms. Oxford Univ. Press, Oxford
  29. Krause A, Singh A, Guestrin C (2008) Near-optimal sensor placements in Gaussian processes: theory, efficient algorithms and empirical studies. J Mach Learn Res 9:235–284
  30. González-Banos H (2001) A randomized art-gallery algorithm for sensor placement. In: Proceedings of the seventeenth annual symposium on computational geometry, SCG '01. ACM, New York, pp 232–240. https://doi.org/10.1145/378583.378674
  31. Erickson L, LaValle S (2012) An art gallery approach to ensuring that landmarks are distinguishable, vol 7. MIT Press Journals, Cambridge, pp 81–88
  32. Kurazume R, Oshima S, Nagakura S, Jeong Y, Iwashita Y (2017) Automatic large-scale three dimensional modeling using cooperative multiple robots. Comput Vis Image Underst 157:25–42. https://doi.org/10.1016/j.cviu.2016.05.008

Copyright

© The Author(s) 2019
