
A vision system with wide field of view and collision alarms for teleoperation of mobile robots

Abstract

Vision systems are an important component to teleoperate mobile robots. We previously proposed a vision system with a wide field of view by combining the camera images of two cameras. This paper describes the features of our vision system that are useful for teleoperation, and we propose attachment devices for our system. The proposed devices are used to visualize the risk of collisions to the operator of robots. We performed two experiments to evaluate our vision system and the proposed devices.

Background

Teleoperated mobile robots enable humans to explore, inspect, and monitor hazardous and remote environments. Various applications have been proposed and the working environments of robots have varied such as hazardous environments [1]-[3], uncomfortable environments for humans [4],[5], and remote areas [6]. Information on the environment and surroundings of robots is necessary for operators of robots to safely complete tasks.

Vision systems are well-known methods of acquiring environmental information. Operators can intuitively find target objects, landmarks, free spaces, and obstacles from images acquired by vision systems. However, a major drawback of vision systems is the narrow field of view, limited by the angle of view of the camera used in the system. Since the observable areas in the environment are restricted by the narrow field of view, the following typical scenarios can be predicted when an operator tries to avoid an obstacle in front of the robot.

  1. When an operator changes the direction of the robot, the detected obstacle disappears from the image.

  2. The detected obstacle is not observed in the image while it is passed by the robot.

  3. The operator needs to stop and rotate the robot, or rotate the camera on the robot, to check the existence of the obstacle passed by the robot.

  4. The operator may lose the proper route of the robot while he/she is frequently rotating the robot or the camera.

A similar situation is predicted when robots enter doorways or pass through narrow paths.

Well-known approaches to enlarging the field of view are using multiple cameras or a single camera with special optics. Small systems with homogeneous cameras have been proposed [7],[8] as multi-camera approaches. Multi-camera systems can achieve a wide field of view and acquire precise images, but the amount of image data increases. Cameras with fisheye lenses [9],[10], forward-hemispherical vision [11], and omni-directional vision [12],[13] have been proposed as other approaches. A single image with a field of view of approximately 180 degrees or wider can be acquired with a small amount of data, but the acquired image is less precise than that of the multi-camera approach. Wide-angle lenses with fovea have been proposed to simultaneously enlarge the field of view and achieve a high-resolution attention zone in the center of the image [14]-[17]. This approach is effective for precise observation of objects in front. However, the field of view is not large enough to observe the left and right sides compared with the other special-optics approaches.

Although both approaches, the multi-camera approach and the single camera with special optics, enlarge the field of view, the operator does not have a bird’s eye view, which would be helpful to recognize the direction of the robot in the environment. Shiroma et al. [18] attached a fisheye camera at a high position above a robot to observe the robot itself with its surroundings. Since a long extension was necessary to place the camera at a proper height, it moderately increased the height of the robot.

Another drawback of vision systems is that the image displayed to the operator does not provide sufficient information on the distance to observed objects. When the robot avoids an obstacle or passes an object, short-range distance information is necessary for the operator. A typical solution is the fusion of images with additional sensing devices such as laser range finders [19],[20]. This approach can provide a precise map with the exact distance of objects when careful operation is required. However, the process of map generation consumes much time in matching and calculation, and may take too long if the operator wishes to quickly avoid or pass obstacles.

We previously proposed a vision system for teleoperation of mobile robots that consists of two cameras [21]; it acquires a single image with a wide field of view and observes the robot itself and its surroundings at a low camera height. In this paper, we propose collision alarms, a method of visualization that warns about the risk of collisions by attaching devices to our vision system while keeping the enlarged field of view. The alarms provide visual signs in the acquired image so that the operator can quickly and intuitively recognize the risk of collisions. We evaluated the effect of our vision system and the proposed collision alarms through experiments.

The rest of this paper is organized as follows. The details of our vision system and the proposed collision alarm are described first. Then a prototype of the proposed system and acquired sample images are presented. Finally, experiments we conducted to evaluate the system and conclusions are described.

Methods

Vision system with collision alarms

We previously proposed a vision system that consisted of two cameras and an image processing unit, as shown in Figure 1, to enlarge the field of view [21]. The proposed system, called a Synthesized Extra-Wide Vision System (abbreviated to zeta-vision), combined the images of the two cameras to generate a single image with a wide field of view in the horizontal and vertical directions. The configuration of the two cameras and the process of image combination are detailed below.

Figure 1

Configuration of vision system. Proposed vision system consists of wide-angle camera, omni-directional camera, and image processing unit. Two cameras are placed by aligning optical axes and facing them in opposite directions.

Configuration of camera unit

The camera unit was configured by using two heterogeneous cameras, i.e., a camera with a wide-angle lens (a wide-angle camera) and an omni-directional camera with a hyperboloid mirror. The omni-directional camera was placed behind the wide-angle camera and the optical axes of these cameras were aligned, as seen in Figure 1. The cameras faced opposite directions and mutually complemented the unobservable areas of each other. The omni-directional camera acquired an image around its optical axis and covered the outside of the field of view of the wide-angle camera, i.e., the left, right, upward, and downward sides. On the other hand, the omni-directional camera had an unobservable area in the center of its acquired image, and this unobservable area was complemented by the wide-angle camera. The angle of view of the configured camera unit was approximately 180 degrees or more, depending on the angle of view of the configured omni-directional camera.

Process of image combination

The processing unit generated a single image with a wide field of view by combining two images from the camera unit. The five-step combination process is outlined in Figure 2 and is detailed below.

  1. The image of the omni-directional camera was reflected since the camera captured a mirrored image. The reflected image was then used as the background image.

  2. An overlay region was determined in the center of the background image obtained in process 1. This region was used to combine the image from the wide-angle camera.

  3. The image from the wide-angle camera was scaled down.

  4. An overlay image was extracted by clipping the scaled-down wide-angle camera image.

  5. The overlay region determined in process 2 in the background image was replaced by the overlay image extracted in process 4.

Figure 2

Image combination process. Combination process (1) reflects image of the omni-directional camera, (2) determines overlay region in reflected image, (3) scales down image of wide-angle camera, (4) extracts overlay image by clipping scaled down image, and (5) replaces the overlay region with overlay image.

The numbers listed above correspond to the numbers in Figure 2. The projection model of the combination process is detailed by Suzuki [21].
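For illustration, the combination can be sketched in a few lines of Python with OpenCV (the prototype described later performs it on an FPGA). The default scale factor and overlay size below are placeholders, and the centered placement and clamped clipping follow the description above rather than the authors' exact implementation.

```python
import cv2

def combine(omni_img, wide_img, scale=0.5, overlay_size=(200, 140)):
    """Sketch of the five-step image combination (illustrative parameters)."""
    # 1. Reflect the omni-directional image (the mirror yields a flipped view)
    #    and use the reflected image as the background.
    background = cv2.flip(omni_img, 1)
    h, w = background.shape[:2]

    # 3. Scale down the wide-angle camera image.
    small = cv2.resize(wide_img, None, fx=scale, fy=scale,
                       interpolation=cv2.INTER_AREA)
    sh, sw = small.shape[:2]

    # 2./4. Determine the overlay region in the center of the background and
    #       extract the overlay image by clipping the scaled-down image around
    #       its center (clamped so the clip never exceeds the scaled image).
    ow, oh = min(overlay_size[0], sw), min(overlay_size[1], sh)
    x0, y0 = (w - ow) // 2, (h - oh) // 2
    cx, cy = sw // 2, sh // 2
    overlay = small[cy - oh // 2: cy - oh // 2 + oh,
                    cx - ow // 2: cx - ow // 2 + ow]

    # 5. Replace the overlay region with the overlay image.
    out = background.copy()
    out[y0:y0 + oh, x0:x0 + ow] = overlay
    return out
```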

Features of proposed vision system

The horizontal and vertical angles of view of our vision system are approximately 180 degrees or more. When the system is mounted on a mobile robot as shown in Figure 3, the left and right sides of the robot can be observed in addition to the front. The operator can check objects in the image when the robot passes an obstacle to avoid, or passes a wall or a doorway. The upward direction of the robot can be observed as well, which helps the operator when the robot is traveling under an object.

Figure 3

Mounting on robot. When vision system is mounted on robot, it can observe its front, left, right, and upward directions. Downward direction is occluded by robot’s body.

The angle of view of our system is equivalent to those of a fisheye camera and the forward-hemispherical vision. The main difference of our system from these systems is the acquisition of a precise image in the center of the field of view. Since the image acquired by our system is combined from the images of two cameras, the image from the wide-angle camera can be used as a precise center image. Our system has characteristics similar to those of the human eye, which consists of a central region for fine observation and a peripheral region for wide and coarse observation. Wide-angle lenses with fovea are more convenient for acquiring a precise center image and can achieve these characteristics with simpler configurations than ours. However, their field of view is smaller than that of our system.

Another feature of our system is that it shares characteristics with bird’s eye views. When the camera unit is fixed to the robot, as shown in Figure 3, part of the robot’s body occludes the downward side of the camera unit. This means that part of the image is always occluded by the robot. However, this occluded area occupies the same region in the image as long as the camera unit is fixed to the robot. The robot and its surroundings are simultaneously observed in the image and the operator can estimate the direction of the robot in the environment. The camera unit is fixed at a low height, but the acquired image can be used as a bird’s eye view. We can say it is a follower’s eye view rather than a bird’s eye view.

The main disadvantage of our system is a blind area in the field of view, which does not exist in conventional systems. The blind area is the overlap of the unobservable areas of both cameras. It is illustrated as an acute triangle beside the two cameras in Figures 1 and 3, and its geometry is determined by the length of the camera unit and the parameters of image combination. Although a discontinuous image is acquired when an object passes this area, the discontinuity can provide a visual sign to the operator that the object is close to the camera unit. In addition, an object larger than a sphere with a diameter equal to the length of the camera unit can be observed even if part of it is hidden in the blind area, since the length of the camera unit limits the size of the blind area. Of course, small objects hidden completely inside the blind area cannot be observed in the image, but this property can be used to hide small attachments that support the vision system without interfering with the acquired image. For example, a ring of LEDs could be attached as lighting for the camera unit.

Collision alarms to visualize risk of collisions

We propose two types of collision alarms for our vision system: the first is a contact type and the second is a non-contact type. The collision alarms are designed to visualize the risk of collisions in the image. They provide visual signs to warn about collisions instead of displaying the exact distance of objects. Since safely passing objects is important when avoiding obstacles in front, our collision alarms are adjusted to warn about collisions on the left and right sides of the robot.

The contact type of collision alarm is a whisker made of an elastic stick material. It is similar to the artificial whiskers [22], antennas [23], or tails [24] proposed as tactile sensors. The main difference from these devices is that ours has no sensing device and no actuator at the base. Our whiskers are attached so that their tips are visible in the image. The tips of the whiskers do not move in the image when they do not touch any objects. Once an object touches and pushes a whisker, the whisker bends and the position of its tip in the image moves. The whisker resumes its shape after it moves away from the object it touched. The change in the observed position of the tip of a whisker works as a visual sign to warn of collisions. Figure 4 outlines the function of the whiskers. The alarm distance is adjusted by the length of the whiskers, which must be kept short because of the size of the robot and their weight.

Figure 4

Contact type of collision alarm. Whisker made of elastic stick material is used to warn about collisions. Its shape is bent when object pushes it and its observed position in image is moved.

The non-contact type of collision alarm is based on triangulation by light. Emitters of visible collimated light beams are attached to the blind area of our vision system. Beams are emitted to the left and right to check objects on the left and right sides of the robot. Reflections of the emitted light are observed in the image when there is an object at a certain distance, as shown in Figure 5. The positions of the reflections depend on the distance to the object: they move toward the center of the image when an object is approaching and move out of the image when an object is leaving. No reflections are observed when an object is far away or no object exists. The positions of the reflections work as visual signs to warn about the approach of objects and the risk of collisions. The alarm distance can be adjusted by the direction of the emitted light.

Figure 5

Non-contact type of collision alarm. Emitters for collimated light beams are used to warn about the risk of collisions. Reflections of emitted light are observed when object is close to robot. No reflections are observed when objects are far from robot or no object exists in view.

The two collision alarms complement each other when both are attached, as seen in Figure 6. The attachment angle γ of the whiskers and the attachment angle θ of the emitters of collimated light beams determine the alarm distances. The reflections of the light beams can be observed at a longer distance than the length of the whiskers. However, the reflections are weak or not observed if objects are transparent, black, or mirror-like. In these cases, the whiskers complement the light reflections. The whiskers are placed in front of the emitters to prevent the two devices from interfering with each other.

Figure 6

Mounting of collision alarms. Layout for whiskers and emitters. Whiskers are placed ahead of emitters so that their observations do not interfere with one another. Attached angle γ determines alarm distance of whiskers and θ determines alarm distance of emitters.
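To make the triangulation idea concrete, the following 2-D sketch (Python) computes the bearing at which the reflected spot would appear, assuming the emitter sits in the blind area slightly ahead of the effective omni-directional viewpoint and the beam is tilted θ toward the rear; the offset value and mounting geometry are illustrative assumptions, not the authors' calibration. The spot drifts toward the image center as a wall closes in, which is the visual cue the operator reads.

```python
import math

def reflection_bearing(wall_dist, theta_deg, emitter_offset=0.06):
    """Bearing (deg) of the beam's reflection seen from the camera unit:
    0 deg = straight ahead, 90 deg = directly sideways, >90 deg = rearward.
    2-D model: x is forward, y is lateral; the emitter sits emitter_offset
    meters ahead of the viewpoint and fires sideways, tilted theta_deg toward
    the rear; the wall runs parallel to the robot's heading at lateral
    distance wall_dist (m).  All numerical values are assumptions."""
    hit_x = emitter_offset - wall_dist * math.tan(math.radians(theta_deg))
    hit_y = wall_dist
    return math.degrees(math.atan2(hit_y, hit_x))

# The reflection moves toward the image center (smaller bearing) as the wall
# approaches, and back out of view as the wall recedes.
for d in (1.0, 0.6, 0.4, 0.2):
    print(f"wall at {d:.1f} m -> bearing {reflection_bearing(d, 30.0):.1f} deg")
```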

Results and discussion

Implementation of prototype

We developed a prototype vision system with two NTSC cameras: a Vstone VS-C14N omni-directional camera with a 65-degree angle of view and a Watec W-01CDB3 board camera with a Nittoh Kogaku SY110M wide-angle lens with a 110-degree angle of view. The configured camera unit is shown in Figure 7(a). Its dimensions were 50 mm in width, 50 mm in height, and 150 mm in length. The image combination processor was implemented with a field-programmable gate array (FPGA) and installed in a box 150 mm in width, 50 mm in height, and 120 mm in length, as seen in Figure 7(b). The box had two NTSC video signal inputs and one NTSC signal output. The combined image was 640×480 pixels and included the wide-angle camera image scaled down by a factor of 0.25. The overlay region was 200×140 pixels and its shape could be changed to a rectangle or an oval.

Figure 7

Implemented vision system. (a) The prototype camera unit and (b) image processing unit.

A White Box Robotics 914 PC-BOT was used as the platform of the mobile robot. The robot was 340 mm wide, 430 mm long, and 534 mm high. We installed our vision system and collision alarms on the robot, as shown in Figure 8, and the height of the robot with the vision system reached 560 mm. A metal spring 300 mm in length was used as the whisker. However, we found that the observed image of the tip was too small to check whether it made contact with other objects. We needed to enlarge the tip without increasing its weight so that its visibility in the image improved while the flexibility of the spring was maintained. We therefore attached a 70 mm cubic box folded from a sheet of paper to the tip (a light spherical object such as a foam ball could be used instead). IMAC LBF-LX30R emitters were selected for the collimated light beams; each emits a visible collimated light beam with a diameter of 50 mm. The whiskers were attached at an angle of 45 degrees forward and the emitters at an angle of 30 degrees backward. The alarm distance of the reflected light was adjusted to 400 mm from the robot’s body and the alarm distance of the whiskers was adjusted to approximately 200 mm from the robot’s body.
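As a rough plausibility check (assuming the whisker base sits near the edge of the robot’s body and the 45-degree tilt is measured against one of the body axes), a 300 mm whisker tilted at 45 degrees reaches about 300/√2 ≈ 212 mm sideways, which is consistent with the reported alarm distance of approximately 200 mm. The 400 mm alarm distance of the emitters additionally depends on where the reflected spot enters the camera’s observable region, as in the geometric sketch above.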

Figure 8

Implemented vision system on robot. (a) Vision system mounted on robot and (b) collision alarms on it.

We introduced SANRITZ AUTOMATION’s TPIP2, a controller board with various input/output interfaces including two serial links, three NTSC video signal inputs, and a slot for a Wi-Fi communication card, to teleoperate the robot via a Wi-Fi link. The system for teleoperation is outlined in Figure 9. The image acquired by the vision system was transmitted to the PC for operation via the Wi-Fi link and displayed to the operator. The operator used a keyboard to send motion commands to the robot. The motion commands were transmitted from the PC via the Wi-Fi link to the TPIP2 and transferred to the robot via a serial link. The robot accepted and executed the commands move forward, move backward, stop, rotate left, and rotate right. The distance and the angle of each motion were predefined.
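The command path can be illustrated with a small sender sketch in Python. The TPIP2's actual protocol and the robot's command format are not described here, so the wire format, address, key mapping, and motion step sizes below are purely hypothetical placeholders; only the five commands and the use of predefined distances and angles come from the text.

```python
import socket

ROBOT_ADDR = ("192.168.0.10", 5000)   # hypothetical address of the controller
STEP_MM = 100                          # assumed predefined travel per command
STEP_DEG = 15                          # assumed predefined rotation per command

KEY_TO_COMMAND = {                     # keyboard mapping is also an assumption
    "w": f"FWD {STEP_MM}",
    "s": f"BACK {STEP_MM}",
    "a": f"ROTL {STEP_DEG}",
    "d": f"ROTR {STEP_DEG}",
    " ": "STOP",
}

def send_key(key, sock):
    """Translate a key press into one of the five motion commands and send it."""
    cmd = KEY_TO_COMMAND.get(key)
    if cmd is not None:
        sock.sendto(cmd.encode("ascii"), ROBOT_ADDR)

if __name__ == "__main__":
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        for key in ("w", "w", "a", "w", " "):   # scripted stand-in for key presses
            send_key(key, sock)
```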

Figure 9

Configured system for teleoperation. TPIP2 is installed to connect robot and PC for operation via Wi-Fi link. TPIP2 inputs an NTSC video signal and sends it to the PC, and is also used to receive and transfer motion commands to robot. Operator sends motion commands by keyboard while watching displayed image from vision system on robot.

Acquired images of prototype

The sample images that were acquired by our vision system are explained in this subsection.

Wide field of view

The proposed vision system can acquire images of the left and right of the robot in addition to the front. Figure 10(b) is a sample image acquired in the hall shown in Figure 10(a). Seven poles 500 mm in height were aligned in front of the robot at a distance of 2.0 m. The hall was 12.0 m wide and the gaps between the poles were 1.0 m. All poles in the hall could be observed in the image.

Figure 10

Acquired image in hall environment. (a) Hall environment with seven white poles and (b) image acquired by robot in hall. All poles are observed in image.

Figure 11 shows images taken during obstacle avoidance executed in the corridor in Figure 12. Figures 11(a), (b), (c), and (d) are images acquired at the positions indicated by (a), (b), (c), and (d) in Figure 12, respectively. The obstacles were three cones of different colors, and they were captured in the images while the robot was avoiding them.

Figure 11

Images acquired during obstacle avoidance. Detected obstacles are indicated by red rectangles. (a) Robot finds obstacles in front, (b) changes its direction to avoid them, (c) passes by obstacles, and (d) reaches end point of obstacles. Obstacles are always observed during avoidance.

Figure 12

Environment for obstacle avoidance. Three cones with different colors are placed as obstacles in center of corridor environment. Green pentagons indicate trajectory of robot to avoid obstacles.

Collision alarms

When the robot moves by following a wall, it passes close to the wall, as shown in Figure 13. In this case a large area of the acquired image is occupied by the wall, and the operator finds it difficult to determine a safe distance from the wall based on the image. Figure 14 shows images acquired with the non-contact collision alarms. Reflections of the emitted light can be observed when the robot approaches the wall, and they disappear from the image when the robot moves away from the wall.

Figure 13

Environment to follow wall. Green pentagons indicate trajectory for following wall. Robot must keep safe distance to avoid collisions.

Figure 14

Images acquired when following wall. Observed reflections are indicated by red circles. (a) No reflections are observed when robot maintains safe distance. (b) Reflection appears when robot is close to wall, and (c) it moves toward center of image when robot approaches wall. (d) Reflection moves away from the center when robot is away from wall.

Experiments for evaluation

We carried out two experiments for evaluation: the first was to evaluate the wide field of view (Experiment 1) and the second was to evaluate the collision alarms (Experiment 2).

The prototype robot in Figure 8 was used as the platform and was equipped with one of three types of vision systems. The provided vision systems were a wide-angle camera, the proposed vision system without collision alarms, and the proposed vision system with collision alarms. They determined the type of operated robot and correspond to W, Z, and D in Table 1. The image from the wide-angle camera before combination in our system was used as the image of the type W vision system.

Twelve participants in their twenties took part in the experiments, none of whom had experience with teleoperation of robots. All of them were asked to teleoperate a robot twice with different types of vision systems.

Environment

Two rooms separated by a corridor were provided for the experiments, as shown in Figure 15. One room was used as the operation area where participants were asked to stay and operate the robot with a PC. The other room was used as the work area for the robot. The operators could not directly see or hear the robot during the experiment; instead, they monitored the image transmitted from the robot via the Wi-Fi link, as shown in Figure 16. One of the authors stayed in the operation area as an observer to watch the participants and to capture their operations on video. Another video camera was installed in the work area to capture the moving robot.

Figure 15

Environment for experiments. Two rooms separated by a corridor are used for experiments. One is for operators and the other is for robot. Video cameras are installed in rooms to take videos during the experiment.

Figure 16

Operation of robot. (a) Operator in the operation area and (b) display of PC for operation.

The layout of the work area is shown in Figure 17. It is a 4.6×3.65 m rectangular area surrounded by partition boards 1.8 m in height so that the operator could not estimate the direction of the robot from the walls. Two lower partitions 1.2 m in height were added to form a narrow path in the center of the work area. Cones with a maximum diameter of 300 mm and a height of 700 mm were placed inside the work area. Ten white cones and two yellow cones were placed as obstacles, and the cones were separated by gaps longer than 800 mm so that the robot could pass through the area. A blue cone was placed to indicate the starting point and a red cone was placed to indicate the goal point of the robot’s task. The red cone was hidden behind the lower partition so that the goal point could not be seen from the starting point.

Figure 17

Configuration of work area and robot. Work area is surrounded by walls and cones are placed in it as objects. Starting point and goal point are different in two configurations in (a) and (b). Initial position and direction of robot are indicated by green pentagons. Different configurations for environment and robot are used in two trials in one experiment.

Procedure

The task given to the operators was to navigate the robot from the starting point to the goal point. Two trials were performed in each experiment with different types of robot. The trials in Experiment 1 are identified as 1(W) and 1(Z), which correspond to operation of the robot with the wide-angle camera and with the proposed vision system without collision alarms, respectively. The robot with the proposed vision system was used in both trials of Experiment 2, and the difference between the trials was the presence of collision alarms. The two trials in Experiment 2 are identified as 2(Z) and 2(D), which correspond to the robots without and with collision alarms.

Participants were required to follow an eight-step procedure in the experiments.

  1. A few minutes of practice time were given to check the operation of the PC and the motion of the robot in the operation area after the task had been explained to the participants.

  2. The robot for the first trial was placed in the work area. The starting point, the goal point, and the initial status of the robot were configured as shown in Figure 17(a).

  3. Participants were asked to operate the robot to reach the goal point in the first trial. The trial was finished when the robot reached the goal or the participants abandoned operation.

  4. Participants were asked to answer the questions shown in Table 2 after the first trial.

  5. The vision system on the robot was replaced. The starting point, the goal point, and the initial status of the robot were configured as shown in Figure 17(b).

  6. Participants were asked to operate the robot with the different vision system in the second trial to reach the goal point in the environment shown in Figure 17(b). The trial was finished when the robot reached the goal or the participants abandoned operation.

  7. Participants were asked to answer the same questions as in the first trial, listed in Table 2, after the second trial.

  8. Participants were interviewed by observers.

The robot in the work area is shown in Figure 18 and the acquired images are shown in Figure 19.

Figure 18

Robot operated in work area. (a) Robot passes through cones without colliding and (b) left whisker touches cone.

Figure 19

Images acquired during experiments. (a) Walls and obstacles are observed but starting and goal point are not. (b) Goal point is observed in front and reflection from wall on right is also observed.

Six of the twelve participants were involved in Experiment 1 and the other six were involved in Experiment 2. The six participants in each experiment were divided into two groups to operate the robots in different orders, as summarized in Table 3. Groups are distinguished by A and B, and participants are identified by ‘a’ to ‘l’ with the group identifier. For example, the participant identified as e(B) was involved in Experiment 1 and belonged to group B: he was asked to operate type Z in the first trial and type W in the second trial.

Table 3 Order of operation in experiments

Participants were asked to answer the questions listed in Table 2 at the end of each trial on a Likert scale ranging from one (strongly disagree) to five (strongly agree). They were also asked to write a short comment or a reason for each answer. We expected that Q3 and Q5 would reveal the effect of the wide field of view and Q4 would reveal the effect of the collision alarms. Q1 and Q2 were added to separate these factors from others such as the design of the user interface, delay in communication links, and unstable motion control of the robot.

We extracted information from the video taken in the work area after all trials had been completed. We measured the criteria in Table 4: the travelling time from the starting point to the goal point, the number of pauses of the robot, the number of rotations of the robot, and the number of collisions with objects. Pauses and rotations of the robot resulted from executing commands sent by the operator. We assumed that these numbers would decrease if a vision system with a wide field of view enabled the operator to obtain sufficient information to find the goal point and obstacles with fewer rotations of the camera. Collisions were counted when part of the robot’s body touched a cone or a wall. We assumed that the number of collisions would decrease if the wide field of view and collision alarms worked effectively to find and avoid objects.

Our three main hypotheses in this research were:

• A wider field of view provided advantages in finding the goal.

• A wider field of view provided advantages in finding and avoiding obstacles.

• Collision alarms were effective in avoiding collisions.

We evaluated whether these hypotheses were acceptable or not from the experimental results.

Result of Experiment 1

Figure 20 shows the scores for the questionnaire administered to the participants in Experiment 1, classified by the type of operated robot. Figure 21 shows the results measured from the video, classified by the criteria in Table 4.

Figure 20

Results from questionnaire on Experiment 1. (a) Results from operation of robot with wide-angle camera, and (b) results from robot with the proposed vision system. Blue bars are participants in group A and brown bars are participants in group B.

Figure 21

Results from measurements in Experiment 1. Individual results from measurements are indicated in (a) travelling time, (b) number of pauses, (c) number of rotations, and (d) number of collisions. Blue bars are trials of robot with wide-angle camera and red bars are trials of robot with proposed vision system.

The mean and the standard deviation of the scores are summarized in Table 5(a) and Figure 22. The differences of the scores between the two trials are listed in Table 5(b), classified by the group of participants. The row ‘W’ gives the number of participants whose score for type W was higher than that for type Z; the rows ‘Z’ and ‘even’ correspond to the cases where the score for type Z was higher and where both scores were the same, respectively. The mean and standard deviation of the measurements are listed in Table 6. As the trends in scoring and measurements were similar for both groups A and B, we considered that the order of operation did not affect the results.

Figure 22

Mean of scores for questionnaire in Experiment 1. Mean of scores with standard deviation for Experiment 1 are indicated. Blue bars are for trial 1(W) and red bars are for trial 1(Z).

Table 5 Mean and difference of scores of questionnaire in Experiment 1
Table 6 Mean of measurements in Experiment 1

Result of Experiment 2

Figure 23 presents the scores for the questionnaire administered to the participants in Experiment 2, classified by the type of operated robot. Figure 24 shows the results measured from the video, classified by the criteria in Table 4.

Figure 23

Results for questionnaire on Experiment 2 . Proposed vision system is used in two trials. (a) Results for operation of robot without collision alarms, and (b) results for that with collision alarms. Blue bars represent participants in group A and brown bars represent those in group B.

Figure 24

Results from measurements in Experiment 2 . Individual results from measurements are (a) travelling time, (b) number of pauses, (c) number of rotations, and (d) number of collisions. Proposed vision system is used in two trials. Orange bars represent trials without collision alarms and green bars represent trials with collision alarms.

The mean and the standard deviation of the scores are summarized in Table 7(a) and Figure 25. The differences of the scores between the two trials are listed in Table 7(b), classified by the group of participants. The row ‘Z’ gives the number of participants whose score for type Z was higher than that for type D; the rows ‘D’ and ‘even’ correspond to the cases where the score for type D was higher and where both scores were the same, respectively.

Figure 25

Mean of scores for questionnaire in Experiment 2 . Mean scores with standard deviation in Experiment 2. Orange bars represent trial 2(Z) and green bars represent trial 2(D).

Table 7 Mean and difference of scores of questionnaire in Experiment 2

Participant l(B) took a long time to operate the robot in trial 2(Z), as seen in the criterion ‘time’ in Figure 24(a). We evaluated whether the measurements of participant l(B) differed from the mean of the five other participants by using a t-test. Significant differences (P<0.05) were found in the criteria ‘time’ and ‘coll.’ in trial 2(Z). No significant differences were found in the other criteria in trial 2(Z) or in any criteria in trial 2(D). We therefore decided to analyze the measurements of the five participants excluding l(B).
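One plausible reading of this check is a one-sample t-test of the five other participants' values against l(B)'s value; a minimal SciPy sketch, with hypothetical travelling times rather than the measured data, is shown below.

```python
import numpy as np
from scipy import stats

# Hypothetical travelling times (s) in trial 2(Z): five other participants vs. l(B).
others = np.array([182.0, 205.0, 168.0, 197.0, 174.0])
l_b = 410.0

t_stat, p_value = stats.ttest_1samp(others, popmean=l_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")   # p < 0.05 -> treat l(B) as differing
```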

The mean and the standard deviation of measurements without participant l(B) are listed in Table 8. As the trends in scoring and measurements except for participant l(B) were similar in both groups A and B, we considered that the order of operation did not affect the results.

Table 8 Mean of measurements in Experiment 2

Discussions

Effect of wide field of view

We compared the means of the two trials, 1(W) and 1(Z), after an F-test for homogeneity of variance in all measurement criteria. We used Welch’s test for the difference between means in the criterion ‘rot.’, for which homogeneity of variance was not assumed, and a t-test for the other criteria, for which homogeneity of variance was assumed. The results are summarized in Table 6. A significant difference (P<0.05) was found in the criterion ‘pause’ and a marginally significant difference (0.05≤P<0.1) was found in the criterion ‘rot.’.
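A minimal sketch of this two-stage procedure with SciPy is shown below (two-sided throughout); the sample values are hypothetical, not the measured data.

```python
import numpy as np
from scipy import stats

def compare_trials(x, y, alpha=0.05):
    """F-test for homogeneity of variance, then Student's t-test if equal
    variances can be assumed and Welch's test otherwise."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    f = np.var(x, ddof=1) / np.var(y, ddof=1)
    dfx, dfy = len(x) - 1, len(y) - 1
    p_f = 2 * min(stats.f.cdf(f, dfx, dfy), stats.f.sf(f, dfx, dfy))
    equal_var = p_f >= alpha                      # fail to reject equal variances
    t, p_t = stats.ttest_ind(x, y, equal_var=equal_var)
    return {"F": f, "p_F": p_f, "equal_var": equal_var, "t": t, "p_t": p_t}

# Hypothetical numbers of pauses in trials 1(W) and 1(Z):
print(compare_trials([14, 11, 16, 13, 12, 15], [8, 9, 7, 10, 6, 9]))
```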

As we assumed that the operator would need fewer direction changes of the camera with the wider field of view, it was predictable that the number of pauses and rotations would decrease if the operator easily found the goal. The differences in the criteria ‘pause’ and ‘rot.’ indicate that the number of pauses and rotations of the robot decreased, and that the robot with the proposed camera could reach the goal with fewer direction changes of the camera. All participants gave the highest score to question Q5 in trial 1(Z), which suggests that participants thought the goal point could be found more easily with the proposed camera than with the wide-angle camera. In other words, the wide-angle camera was not sufficient to obtain information on the surroundings of the robot although it had a 110-degree angle of view, approximately double that of a standard camera. As a result, the number of pauses and rotations of the robot increased in trial 1(W). Therefore, we considered the hypothesis ‘a wider field of view provided advantages in finding the goal’ to be acceptable.

We also predicted that the number of collisions with objects would decrease if the wide field of view worked effectively to find and avoid obstacles. Nevertheless, no significant difference (P>0.1) was found in the criterion ‘coll.’, and the trends in scoring for Q3 and Q4 were similar in the two trials. Consequently, we considered the hypothesis ‘a wider field of view provided advantages in finding and avoiding obstacles’ to be unacceptable. A possible reason is that the difficulty of detecting and avoiding obstacles was similar in both trials. The wide-angle camera was considered sufficient to identify objects as long as the operator observed the frontal direction to reach the goal point in the experimental environment. This point could be demonstrated more clearly by a comparison with a standard camera.

Effect of collision alarms

We compared the means of the two trials, 2(Z) and 2(D), after an F-test for homogeneity of variance in all measurement criteria. We used Welch’s test for the difference between means in the criterion ‘coll.’, for which homogeneity of variance was not assumed, and a t-test for the other criteria, for which homogeneity of variance was assumed. The results are summarized in Table 8. A significant difference (P<0.05) was found in the criterion ‘coll.’.

The trends in scoring for Q4 were similar in the two trials, which suggests that participants did not notice differences between the trials or the effect of the collision alarms. However, the difference in the criterion ‘coll.’ indicates that the robot with the collision alarms could reach the goal with fewer collisions, i.e., that robot motion became safer. This can be interpreted as the collision alarms working as intended. Therefore, we considered the hypothesis ‘collision alarms were effective in avoiding collisions’ to be acceptable.

In addition, we compared the means of trials 1(Z) and 2(Z) by using an F-test and a t-test. No significant differences were found in any measurement criterion. This suggests that the mean number of collisions could have been reduced if the participants in Experiment 1 had operated the robot with collision alarms, as was shown by the comparison of 2(Z) and 2(D).

Effect of experience in operation

The travelling time and the number of collisions of participant l(B) in trial 2(Z) were much larger than those of the other five participants, while no significant differences were found in the other criteria in trial 2(Z) or in any criteria in trial 2(D), as described above. This suggests either that the order of trials affected the results or that the collision alarms helped novice operators, because participant l(B) operated the robot with collision alarms (2(D)) in the first trial, and losing the collision alarms could have increased the number of collisions in the second trial. Additional trials would be helpful to clarify this point.

Conclusions

We described our vision system for teleoperation of mobile robots, which is configured with two cameras to generate a single image with a wide field of view. The experimental results revealed that our system was useful for obtaining information on the surroundings of robots, especially for finding target objects in the environment. Collision alarms were proposed as extensions to our vision system to help operators see the risk of collisions. The other experimental results demonstrated that the proposed devices worked to reduce the number of collisions.

In future work, we need to evaluate the influence of the discontinuous images caused by the blind area during teleoperation. Although we did not experience serious problems in our experiments, research based on the geometry of the blind area is necessary to increase the reliability of our system. We also need to consider enlarging the detection range of the collision alarms and efficiently using the two types of images, i.e., the wide-angle images and the combined images with the wide field of view. The effect of training in teleoperation should also be discussed and evaluated.

References

  1. Matsuno F, Tadokoro S (2004) Rescue robots and systems in Japan. In: IEEE international conference on robotics and biomimetics, 12–20. doi:10.1109/ROBIO.2004.1521744.

  2. Murphy RR, Kravitz J, Stover S, Shoureshi R (2009) Mobile robots in mine rescue and recovery. IEEE Robot Autom Mag 16(2):91–103. doi:10.1109/MRA.2009.932521

  3. Nagatani K, Kiribayashi S, Okada Y, Tadokoro S, Nishimura T, Yoshida T, Koyanagi E, Hada Y (2011) Redesign of rescue mobile robot Quince. In: IEEE international symposium on Safety, Security, and Rescue Robotics (SSRR), 13–18. doi:10.1109/SSRR.2011.6106794.

  4. Yang S, Jin S, Kwon S (2008) Remote control system of industrial field robot. In: 6th IEEE international conference on industrial informatics, 442–447. doi:10.1109/INDIN.2008.4618140.

  5. Ohya A, Yuta S, Yoshida T, Koyanagi E, Imai T, Kitamura S, Takeuchi A, Minamikawa T (2009) Development of inspection robot for under floor of house. In: IEEE international conference on robotics and automation, 1429–1434. doi:10.1109/ROBOT.2009.5152340.

  6. Maeyama S, Yuta S, Harada A (2000) Experiments on a remote appreciation robot in an art museum. In: IEEE/RSJ international conference on intelligent robots and systems, 1008–1013. doi:10.1109/IROS.2000.893151.

  7. Midorikawa N, Ohno K, Saga S, Tadokoro S (2008) Development of on-line simulation system for multi camera based wide field of view display. In: IEEE/RSJ international conference on intelligent robots and systems, 2097–2102. doi:10.1109/IROS.2008.4651010.

  8. Yuan H, Wang B, Zhang J, Hui H (2010) A novel method for geometric correction of multi-cameras in panoramic video system. In: Measuring technology and mechatronics automation (ICMTMA), 248–251. doi:10.1109/ICMTMA.2010.677.

  9. Courbon J, Mezouar Y, Eck L, Martinet P (2007) A generic fisheye camera model for robotic applications. In: IEEE/RSJ international conference on Intelligent Robots and Systems (IROS 2007), 1683–1688. doi:10.1109/IROS.2007.4399233.

  10. Sun J, Zhu J (2008) Calibration and correction for omnidirectional image with a fisheye lens. In: Fourth International Conference on Natural Computation (ICNC ‘08), 133–137. doi:10.1109/ICNC.2008.771.

  11. Eino J, Araki M, Takiguchi J, Hashizume T (2004) Development of a forward-hemispherical vision sensor for acquisition of a panoramic integration map. In: IEEE international conference on robotics and biomimetics, 76–81. doi:10.1109/ROBIO.2004.1521755.

  12. Yamazawa K, Yagi Y, Yachida M (1995) Obstacle detection with omnidirectional image sensor Hyperomni vision. In: IEEE international conference on robotics and automation, 1062–1067. doi:10.1109/ROBOT.1995.525422.

  13. Yoshida K, Nagahara H, Yachida M (2006) An omnidirectional vision sensor with single viewpoint and constant resolution. In: IEEE/RSJ international conference on Intelligent Robots and Systems (IROS 2006), 4792–4797. doi:10.1109/IROS.2006.282351.

  14. Suematu Y, Yamada H, Ueda T (1993) A wide angle vision sensor with fovea - design of distortion lens and the simulated images -. In: International conference on Industrial Electronics, Control, and Instrumentation (IECON), 1770–1773. doi:10.1109/IECON.1993.339342.

  15. Kuniyoshi Y, Kita N, Suehiro T, Rougeaux S (1996) Active stereo vision system with foveated wide angle lenses. In: Recent development in computer vision, Lecture Notes in Computer Science, vol 1035. Springer, Heidelberg, 191–200. doi:10.1007/3-540-60793-5_74

  16. Shimizu S, Kato T, Ocmula Y, Suematu R (2001) Wide angle vision sensor with fovea - navigation of mobile robot based on cooperation between central vision and peripheral vision -. In: 2001 IEEE/RSJ international conference on intelligent robots and systems, 764–771. doi:10.1109/IROS.2001.976261.

  17. Shimizu S, Kiyohara M, Hashizume T (2012) Development of micro wide angle fovea lens. In: International conference on Industrial Electronics, Control, and Instrumentation (IECON), 3796–3801. doi:10.1109/IECON.2012.6389286.

  18. Shiroma N, Sato N, Yu-huan C, Matsuno F (2004) Study on effective camera images for mobile robot teleoperation. In: 13th IEEE international workshop on robot and human interactive communication, 107–112. doi:10.1109/ROMAN.2004.1374738.

  19. Tsubouchi T, Tanaka A, Ishioka A, Tomono M, Yuta S (2004) A SLAM based teleoperation and interface system for indoor environment reconnaissance in rescue activities. In: 2004 IEEE/RSJ international conference on intelligent robots and systems, 1096–1102. doi:10.1109/IROS.2004.1389543.

  20. Solea R, Veliche G, Cernega D, Teaca M (2013) Indoor 3d object model obtained using data fusion from laser sensor and digital camera on a mobile robot. In: 17th International Conference System Theory, Control and Computing (ICSTCC), 479–484. doi:10.1109/ICSTCC.2013.6689007.

  21. Suzuki S (2011) A vision system for remote control of mobile robot to enlarge field of view in horizontal and vertical. In: 2011 IEEE international conference on robotics and biomimetics, 8–13. doi:10.1109/ROBIO.2011.6181254.

  22. Russell RA (1992) Using tactile whiskers to measure surface contours. In: 1992 IEEE international conference on robotics and automation, 1295–1299. doi:10.1109/ROBOT.1992.220070.

  23. Kaneko M (1994) Active antenna. In: 1994 IEEE international conference on robotics and automation, 2665–2671. doi:10.1109/ROBOT.1994.351112.

  24. Guarnieri M, Debenest P, Inoh T, Takita K, Masuda H, Kurazume R, Fukushima E, Hirose S (2009) HELIOS carrier: Tail-like mechanism and control algorithm for stable motion in unknown environment. In: 2009 IEEE/RSJ international conference on intelligent robots and systems, 1851–1856. doi:10.1109/ROBOT.2009.5152513.

Author information

Corresponding author

Correspondence to Sho’ji Suzuki.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

SS designed and implemented the vision system, designed the collision alarms, and analyzed experimental results. RS designed and implemented the collision alarms, examined the questionnaires and the measurement criteria, and organized and carried out the experiments. Both authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cite this article

Suzuki, S., Suda, R. A vision system with wide field of view and collision alarms for teleoperation of mobile robots. Robomech J 1, 8 (2014). https://doi.org/10.1186/s40648-014-0008-5
