
A framework of physically interactive parameter estimation based on active environmental groping for safe disaster response work

Abstract

Disaster response robots are expected to perform complicated tasks such as traveling over unstable terrain, climbing slippery steps, and removing heavy debris. To complete such tasks safely, the robots must obtain not only visual-perceptual information (VPI), such as surface shape, but also haptic-perceptual information (HPI), such as the surface friction of objects in the environment. VPI can be obtained from laser sensors and cameras. In contrast, HPI can basically be obtained only from the results of physical interaction with the environment, e.g., reaction force and deformation. However, current robots do not have a function to estimate HPI. In this study, we propose a framework to estimate such physically interactive parameters (PIPs), including hardness, friction, and weight, which are vital parameters for safe robot-environment interaction. For effective estimation, we define a ground groping mode (GGM) and an object groping mode (OGM). The endpoint of the robot arm, which has a force sensor, actively touches, pushes, rubs, and lifts objects in the environment under hybrid position/force control, and the three kinds of PIPs are estimated from the measured reaction force and the displacement of the arm endpoint. The robot finally judges the accident risk based on the estimated PIPs, i.e., safe, attentional, or dangerous. We prepared environments that had the same surface shape but different hardness, friction, and weight. The experimental results indicated that the proposed framework could estimate PIPs adequately and was useful for judging the risk and safely planning tasks.

Introduction

For engaging in disaster response work in emergencies caused by earthquakes, tsunamis, and volcanic eruptions, disaster response robots are required to have a high ability to quickly and safely perform debris disposal and life-saving work on complicated terrains and in narrow places [1]. In response to these needs, various kinds of robots have been developed with special hardware suited to disaster response work [2,3,4,5], like the electrically-driven OCTOPUS (e-OCTOPUS) we have developed [5] (Fig. 1). As disaster sites are often unknown and unstructured, many of these robots have been operated by fully manual teleoperation with video support [3, 6] or a semiautomatic remote-control system [7,8,9,10]. Ultimately, it would be desirable for the robots to have a fully automated system, including prominent sensing, inference, and planning capabilities, to control them safely and efficiently. To establish such automated systems, environmental recognition technologies must become more sophisticated because environmental information is an essential input for automated systems. Improving such technologies would also increase the performance of current teleoperation and semiautomated systems [9]. At present, laser range finders (LRFs) and RGB-D (red, green, blue, and depth) cameras are widely used for object recognition [11, 12] and simultaneous localization and mapping (SLAM) [13,14,15]. Such visual information can greatly help to make a control plan for movement and manipulation [14]. For instance, visual information can tell robots the surface shape of the environment, such as whether it is a slope, rough terrain, a step, or an object (Fig. 2a).

Fig. 1 Electrically-driven OCTOPUS (e-OCTOPUS)

Fig. 2 Importance of estimating physically interactive parameters (PIPs)

However, from visual information alone, the robot cannot know physical attributes such as the material constituting the surface and what exists under the surface, which means that disaster response robots relying only on visual information will be less safe, less effective, and less adaptable. As an example of a dangerous case, consider a situation where the robot tries to pass along a rough road (Fig. 2b-ii). From visual information (i.e., a surface survey), the robot recognizes that the rough road has sufficiently low roughness to traverse and does not have any large holes into which the robot would fall. However, suppose the road is merely an apparent road incidentally formed by wooden or steel debris, with a large unfilled space under the surface. If the robot moves along the road, the debris will collapse, and the robot will fall into the pit made by the collapse. In this case, if the robot can estimate the hardness of the road in advance, it can take an alternative, safer route. Consequently, to ensure the safety and efficiency of disaster response work, it is quite important to obtain physical attribute parameters.

Here, we analyze the physical parameters to be estimated while referring to the typical types of accidents at disaster sites. Ground and objects to be collected (hereinafter, target objects) with low hardness would be deformed by physical interaction with the robot. This could make the robot fall into a pit or crush a target object, so the robot must know the hardness (Fig. 2b-ii, iii, iv). Moreover, the ground and target objects might be slippery due to water or oil. This could make the robot roll over by slipping on slopes or drop a grasped object, so the robot must estimate the friction (Fig. 2b-i, iii, iv). The robot must manipulate many objects, such as target objects (to be safely collected) and obstacles (to be removed to make a path). If the robot cannot generate enough force to transport a target object, it may drop and damage it, so the robot must estimate the weight (Fig. 2b-iv). From this analysis, we found that the hardness, friction, and weight can greatly affect the safety of robot tasks, but these parameters basically cannot be obtained from visual-perceptual sensors since they are revealed only by the results of physical interaction with the environment, e.g., reaction force and deformation [16] (Fig. 2b). We call them the physically interactive parameters (PIPs).

Some studies estimate the hardness, friction, or weight to predict traversability or to enable effective manipulation by robot hands (details are in Sect. “Related and required work”). However, to the best of our knowledge, there are no studies that estimate all of these parameters before executing tasks, within a single system, by using the physical touch of a disaster response robot. In this study, we thus propose a fundamental framework to estimate PIPs, including the hardness, friction, and weight, by active environmental touch. For the experiments, we used e-OCTOPUS, which has four arms, four flippers, and two crawlers [5] (Fig. 1c), as a disaster response robot equipped with one or more arms. We developed an estimation system that makes full use of the benefits of the e-OCTOPUS hardware, but we also designed the system so that it can be applied to any kind of robot with one or more arms and crawlers, like Quince [17] and PackBot [18].

Related and required work

This study falls into the category of environmental recognition, which can be roughly divided into non-touch (visual-perception) and touch (haptic-perception) methods [19]. Here, we investigate the related work on estimating PIPs and derive the required work.

Related works on environmental recognition

We investigate the related work on estimating PIPs and analyze its advantages and limitations.

Visual-perceptual methods

As stated in Sect. “Introduction”, a limited number of studies have tried to estimate PIPs by visual-perceptual methods. In [20], real-time road friction estimation was achieved by using convolutional neural networks based on camera images from vehicles. In [21], visually-classified terrain types based on a Gaussian process were proposed for slip prediction in planetary rovers. In [22], a friction prediction method was proposed based on image features, material class, and text mining. In [23], the weight of pigs was estimated by using significant features, such as color, texture, and centroid, based on statistics from the original database. These methods have large constraints: they need many datasets, their target environments are simpler than disaster sites, and they focus on estimating only one of the three parameters. PIP estimation based on visual-perceptual methods is expected to be implemented on robots since it can detect PIPs remotely without physical contact, but this is still a challenging issue.

Haptic-perceptual methods

Since physical touch can directly elicit information about PIPs, there are many studies on haptic-perceptual methods. For hardness-related estimation, the hardness, elasticity, and stiffness were estimated by knocking on a surface with an accelerometer-equipped device [24], and a shape-independent estimation method was proposed based on deep learning and a gel-based tactile sensor [25]. For friction-related estimation, slippage was predicted based on the force distribution for legged-robot locomotion [26]. For weight-related estimation, the mass, center of mass, and friction coefficient of objects were estimated based on force and position information for object manipulation [27]. For applications using PIP estimation, a method for setting the grasping force without knowing the object’s weight, static friction coefficient, or stiffness was proposed based on the moment of deformation and the deflection of a mechanically passive element [28]. Most conventional studies focused on estimating only one of the three parameters [29, 30]. Some of the above studies estimated multiple parameters [27, 28], but the robot and/or the environment were less complex than those of disaster response robots and disaster sites.

Required works for disaster response robots

From the analysis in the previous section, we derived the requirements for PIP estimation methods and an overall environmental recognition system for disaster response robots.

PIP estimation for disaster response robots

As mentioned above, disaster sites are typically complicated, so PIP estimation methods should be robust to disturbances and simple enough for easy implementation. To achieve this, it is desirable for the system to use common parameters in estimating the three PIPs, without special sensors or special ways of active environmental touch. For hardness estimation, there are two types of tests: the indentation test, where a probe is pushed into an object, and the rebound test, where the rebound of an impacting probe is measured. Some studies adopted the rebound test [24], but it needs an accelerative motion, which is unsuitable for unstable environments. Thus, we adopt the indentation test due to its high robustness and implementability. For friction estimation, we adopt a rubbing method, as humans do. We estimate the static friction coefficient to judge whether an object is liftable and the dynamic friction coefficient to judge whether a ground or slope is traversable. For weight estimation, we adopt a lifting-up method as the simplest way of measuring weight. These three methods can estimate the three PIPs, commonly from the reactive normal and shear forces applied to the contact point as well as the displacement of the contact point. Note that each measurement principle is not novel; the contribution is that we integrate the hardness, friction, and weight estimation methods into one framework in a simpler and more robust way.

Environmental recognition system

For safer and more efficient work, the robot must obtain not only the shape information of the environment but also PIPs such as hardness, friction, and weight. The PIPs are estimated through active environmental touch by the robot, but performing this manually would require teleoperators to exert a huge amount of physical and mental effort, so we introduce a semiautomatic control system. Active environmental touch applied to shape recognition is called ‘groping’ [31,32,33]. By analogy, we call active environmental touch for estimating PIPs ‘groping’ too. The purpose of PIP estimation is to estimate the degree of accident risk, so the system must output the accident risk in several classes. Considering these requirements, we developed an environmental recognition system consisting of the following four functions (Fig. 3).

  • Surface recognition. The system obtains point clouds around the robot from a visual sensor and precisely measures the ground coordinates by using haptic information (Sect. “Surface recognition”).

  • Groping control. The system semi-automatically performs active environmental touches with the robot arms, considering preciseness and time efficiency (Sect. “Groping control”).

  • PIP estimation. The system then estimates three PIPs, based on data obtained from groping control, i.e., the normal and shear force and displacement of the arm endpoint (Sect. “Estimation of PIPs”).

  • Accident risk judgement. Based on the estimated PIPs, the system finally outputs the accident risk, i.e., safe, attentional, or dangerous, for safe disaster response tasks (Sect. “Accident risk judgement”).

Fig. 3 System diagram of physically interactive parameter and risk estimation

Method

In this section, we explain the method of PIP estimation performed by active environmental touch with the obtained point cloud information (the lower left of Fig. 3).

Surface recognition

The vertical coordinate of the surface position in the robot coordinate system \({P}_{g}\) can be obtained from visual sensors, but floating gas or dust would generate noise in the signal. Thus, the robot corrects it based on haptic information. Here, we explain the two-step procedure.

Surface shape recognition by visual perception

To estimate an approximate surface shape, SLAM is performed; the robot first measures the travel distance (x, y, z) and rotation angle (yaw, pitch, roll) per unit time from point clouds obtained from three-dimensional (3D) light detection and ranging (LiDAR) based on the iterative closest point (ICP) algorithm, and the point group for surface shape mapping is selected. Here, to reduce the error in the measured self-position due to the error in the calculated rotation angle, which is a weakness of the ICP algorithm, we correct the pitch and roll angles by using data from the inertial measurement unit (IMU) installed on the robot. Moreover, LiDAR is specialized for obtaining distance information over a wide range, so the number of points is sometimes insufficient to estimate an object shape. We thus use a depth sensor to obtain more point clouds at closer range.
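
The paper does not give this correction in equation form; the following is a minimal sketch, assuming SciPy's rotation utilities, of how the pitch and roll of an ICP pose estimate could be overridden by IMU readings while keeping the ICP yaw and translation.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def fuse_icp_with_imu(T_icp, imu_pitch_rad, imu_roll_rad):
    """Replace the pitch/roll of an ICP pose estimate with IMU values.

    T_icp is a 4x4 homogeneous transform from ICP registration. Yaw and
    translation are kept from ICP; pitch and roll come from the IMU,
    which does not suffer the rotation error that ICP is prone to.
    """
    yaw, _, _ = R.from_matrix(T_icp[:3, :3]).as_euler("zyx")
    R_fused = R.from_euler("zyx", [yaw, imu_pitch_rad, imu_roll_rad])
    T = np.eye(4)
    T[:3, :3] = R_fused.as_matrix()
    T[:3, 3] = T_icp[:3, 3]
    return T
```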

Surface coordination estimation by haptic perception

To accurately measure the surface position, the robot sets the visual perception-based surface position as an initial value and corrects it by touch with its arm. From the analysis in Sect. “Required works for disaster response robots”, combining position and force data is essential to estimate PIPs. Thus, we adopted the hybrid position/force control for obtaining the surface position [34]. To measure the contact force, we implemented a 3-axis force sensor inside the end-effector of the arm (Fig. 1b) (details are in Sect. “Design of end-effector”). Regarding the vertical coordinate in the robot coordinate system, as shown in Fig. 1a, \(P\) is the position of the arm endpoint, \({P}_{d}\) is the target endpoint position, \(F\) is the measured force, and \({F}_{d}\) is the target force. \({M}_{p}\) is the inertia matrix of the arm, \({D}_{d}\) is the virtual viscosity coefficient, \({K}_{d}\) is the virtual spring coefficient, \({K}_{f}\) is the force feedback gain, and \(\alpha\) is the control mode setting parameter (0 \(<\alpha \le\) 1). The control equation is thus given by

$${M}_{p}\ddot{P}+{D}_{d}\left(\dot{P}-\dot{{P}_{d}}\right)+{K}_{d}\left(P-{P}_{d}\right)=-F+\alpha {F}_{d}-\alpha {K}_{f}\left(F-{F}_{d}\right).$$
(1)

\(\alpha\) = 1 (\(\alpha\) = 0) means complete force (near-complete position) control. We dynamically change \(\alpha\) according to the target PIPs to provide suitable control. The system knows the surface position obtained from the point clouds \({P}_{s}\), so we set \(\alpha\) to 0.2, which is close to position control, and the target position \({P}_{d}\) to \({P}_{s}\). We set \({K}_{d}\) = 200 and \({K}_{f}\) = 250 from exploratory experiments. The robot arm reaches \({P}_{d}\) and continues to push the surface from above. The endpoint displacement when \(F\) becomes \({F}_{d}\) (= 5 N, a value chosen to surely confirm the existence of a surface) is \({P}_{g}\).

$${P}_{g}=P \quad \left[\text{when } F={F}_{d}\ \left(\alpha =0.2\right)\right].$$
(2)

The hybrid position/force control is also used for groping control, as explained in Sect. “Groping control”.
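
To make the controller concrete, here is a minimal numerical sketch (in Python, also used for the later sketches) of control law (1) driving the probing motion and of the surface-detection rule (2). Only \({K}_{d}\) = 200, \({K}_{f}\) = 250, \(\alpha\) = 0.2, and \({F}_{d}\) = 5 N follow the text; the inertia, damping, time step, and the linear-spring ground model are illustrative assumptions.

```python
def hybrid_step(P, Pdot, Pd, F, Fd, alpha,
                Mp=5.0, Dd=50.0, Kd=200.0, Kf=250.0, dt=1e-4):
    """One semi-implicit Euler step of control law (1), solved for the
    endpoint acceleration (the target Pd is static here, so its velocity
    is zero). Kd and Kf follow the paper; Mp, Dd, and dt are assumed."""
    Pddot = (-F + alpha * Fd - alpha * Kf * (F - Fd)
             - Dd * Pdot - Kd * (P - Pd)) / Mp
    Pdot = Pdot + Pddot * dt
    return P + Pdot * dt, Pdot

# Probe downward (positive = down) until F reaches Fd = 5 N; the endpoint
# position at that moment is the corrected surface coordinate Pg of (2).
# The ground is modeled as a linear spring below Ps -- an assumption.
Ps, k_ground = 0.30, 2000.0   # visual surface estimate [m], stiffness [N/m]
P, Pdot, Fd, alpha, Pg = 0.0, 0.0, 5.0, 0.2, None
for _ in range(100000):
    F = k_ground * (P - Ps) if P > Ps else 0.0   # reaction force [N]
    if F >= Fd:
        Pg = P                                   # Eq. (2)
        break
    P, Pdot = hybrid_step(P, Pdot, Pd=Ps, F=F, Fd=Fd, alpha=alpha)
```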

Groping control

Disaster response work can be divided into movement tasks, e.g., traversing and climbing up, and manipulation tasks, e.g., removing and transporting objects, so we classify the groping into the ground groping mode (GGM) and the object groping mode (OGM).

Design of end-effector

For groping, the arm touches, pushes, rubs, and lifts objects with various types of surfaces. Thus, the end-effector should be common to all groping controls. As a preliminary design, we developed an end-effector with a 3-axis force sensor (USL08-H6-1kN, Tec Gihan). The end-effector was made hemispherical so that the force could be uniformly detected whether it was received from the normal or the shear direction (Fig. 1b). Currently, the optimum material, which greatly affects the performance of the friction estimation, is not known, so we adopted acrylonitrile butadiene styrene (ABS) resin because it is a 3D-printable, lightweight, and disposable material. Moreover, to make the robot lift an object successfully (by increasing the payload and stability), a rectangular pad with a rubber surface was added to the side of the end-effector (Fig. 1b). When the two arms hold an object, the robot can obtain both the normal and shear forces from the force sensor.

Ground groping mode (GGM)

It is impractical to grope the whole environment with the arms, so the robot gropes only the area needed to confirm safe movement. As shown in Fig. 4a, the region made by extending the turning radius of the robot (1100 mm at the tip of the crawler), which is the largest movable region of the robot, is set as the GGM search area. To execute the GGM efficiently, we divide the GGM search area into the internal search area \({A}_{I}\) and the external search area \({A}_{E}\).

Fig. 4 Design of ground groping mode (GGM). The groping order of the right and left arms is the same

\({A}_{I}\), in front of the robot, is inspected with the two front arms to check the safety of the crawler tracks. From the specifications of e-OCTOPUS, \({A}_{I}\) is 500 mm in width in total (250 mm for each arm). \({A}_{E}\) (outside the robot’s crawlers) is inspected with the two rear arms to expand the safety margin. \({A}_{E}\) is 600 mm in width in total (300 mm for each arm). The four arms grope simultaneously within their respective search areas. To precisely measure the reaction force, the endpoint touches the surface from above. From the arm configuration and the robot’s rollover stability, we set the anteroposterior distance of the groping areas for \({A}_{I}\) and \({A}_{E}\) to 500 mm. In this preliminary study, we investigate the PIPs thoroughly in the search area, so the depth of the groping area was set to 50 mm since the diameter of the end-effector is 48 mm. Thus, we defined the groping areas for \({A}_{I}\) and \({A}_{E}\) (the green area of Fig. 4a).

Here, we explain the groping control for each PIP. For hardness estimation, each arm pushes down in the groping area. The minimum inspectable area is 50 \(\times\) 50 mm, as stated above, so we can define five groping points in \({A}_{I}\) (the right side of Fig. 4a). \({A}_{E}\) could have six groping points, but the groping in \({A}_{I}\) and \({A}_{E}\) must finish simultaneously, so we also set five groping points in \({A}_{E}\). As shown in the left side of Fig. 4b, the robot pushes each groping point from left to right based on the hybrid control presented in (1) (details are in Sect. “Hardness”). For friction estimation (the right part of Fig. 4b), the robot rubs the surface in a round trip from left \(\to\) right \(\to\) left while applying a force to the surface by using (1) (details are in Sect. “Friction”). After pushing at all five points or rubbing one round trip with the four arms, the robot moves forward 50 mm and gropes the next groping area (Fig. 4c), as in the layout sketch below. The groping of one groping area (50 \(\times\) 500 mm) is defined as one set.
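
As a concrete illustration of this layout, the following sketch enumerates the groping points of one GGM set and the 50 mm advance between sets. The five-point pitch and per-set advance follow the text; the lateral coordinates of the four areas are our assumption from the stated widths (250 mm per front arm, 300 mm per rear arm).

```python
PITCH = 50.0  # mm: end-effector footprint, point pitch, and per-set advance

def groping_points(left_edge_mm, n_points=5, pitch=PITCH):
    """Lateral centers of one arm's groping points, swept left to right."""
    return [left_edge_mm + pitch * (i + 0.5) for i in range(n_points)]

# One GGM set. A_I spans -250..250 mm (250 mm per front arm); A_E spans
# 300 mm per rear arm outside the crawlers, of which five 50 mm cells are
# groped so that all four arms finish together. Coordinates are assumed.
arms = {
    "front_left":  groping_points(-250.0),
    "front_right": groping_points(0.0),
    "rear_left":   groping_points(-550.0),
    "rear_right":  groping_points(250.0),
}
y = 0.0                    # forward coordinate of the current groping area
for _set in range(3):      # each set covers one 50 x 500 mm strip
    for arm, xs in arms.items():
        for x in xs:
            pass           # push (hardness) or rub (friction) at (x, y)
    y += PITCH             # move forward 50 mm to the next groping area
```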

Object groping mode (OGM)

Image segmentation to identify semantic objects in an image has been implemented in automobiles [35] and surveillance systems, but disaster sites involve more complex situations, so implementing these technologies is not easy. Actual operations require teleoperators, and humans are good at object segmentation, so we assume that the human finds where objects exist and judges the object type. Target objects and obstacles have different properties, so we adopted different groping methods for them. Also, actual objects have complex shapes, but for simplicity in evaluating the PIP estimation system, we assume that the objects are simple rectangular solids, that the center of gravity is at the geometric center, and that the friction coefficient of each side is the same. After the designation of a target object or obstacle by the human, the robot recognizes its shape and each side of the object (Fig. 5a) and measures the surface position in the same way as in Sect. “Surface recognition”.

Fig. 5 Design of object groping mode (OGM)

Figure 5b shows the groping process for a target object. Target objects must be transported by lifting them with the arms. To avoid crushing and dropping them, the robot carefully estimates the hardness, friction, and weight before transporting them. The robot first pushes the centers of the left and right sides of the object with two arms and estimates the hardness once by using control law (1) (Fig. 5b-i). If the object is not hard enough, the robot selects another way to collect it (e.g., using a special tool). If the object is hard enough, the robot lifts it a little. From the normal and shear forces measured by the force sensor, the robot estimates the weight and friction (Fig. 5b-ii). If the object slips from the robot’s grasp or cannot be lifted, the robot likewise selects another way to collect it. If the object is successfully lifted, the robot starts to transport it (Fig. 5b-iii). This groping process is defined as one set.

Figure 5c shows the groping process for an obstacle. Obstacles should be removed from the moving path of the robot. It is reasonable to remove an obstacle to the side, so the robot first tries to push it from either the left or the right side (Fig. 5c-i). If it is moved to a position outside the path, the groping process is completed. If it is not moved sufficiently, the robot then tries to drag it in the longitudinal direction or to get over it. These actions must be selected depending on the surrounding environment, so the human selects a suitable action. If dragging is selected, the robot pushes (Fig. 5c-ii) or pulls the front side of the obstacle (Fig. 5c-iii). If getting over it is selected, the robot estimates the hardness and friction of the top side in the same way as in the GGM (Fig. 5c-iv): it pushes once at the center of the top side (hardness estimation) and rubs one round trip in the lateral direction at the center of the top side (friction estimation). If the hardness and friction are sufficient, the robot gets over the obstacle. If the obstacle is not successfully dragged and its hardness and friction are insufficient, the robot selects another way to remove it or takes another route. This groping process is defined as one set.

Estimation of PIPs

We here explain the method for estimating three PIPs on the basis of the GGM and OGM.

Hardness

As stated above, the arm pushes at all the groping points on the ground in the GGM, and pushes once at the center of the target side of the target object or obstacle in the OGM, by using (1). After confirming the ground position by the method explained in Sect. “Surface recognition”, the robot sets \(\alpha\) in (1) to 0.5, which is closer to force control. For hardness in the vertical direction (the left part of Fig. 6a), \({F}_{d}\) was set to 80 N (the maximum force in the vertical direction), and for hardness in the lateral direction (the right part of Fig. 6a), \({F}_{d}\) was set to 10 N (the maximum force in the lateral direction). We denote the position where the target force is reached by pushing with the arm as \(P\) and the ground surface coordinate as \({P}_{g}\) (Fig. 6a). The difference \({d}_{g}\) between them is defined as the hardness at a groping point and is given by

Fig. 6 Method of estimating PIPs

$${d}_{g}=P-{P}_{g}.$$
(3)

The range of \({d}_{g}\) is 0–200 mm (due to the link length). A smaller (larger) \({d}_{g}\) means a harder (softer) surface. Environments with a large \({d}_{g}\) are easily deformed when an external force is applied. In this study, one groping area has five groping points, so we use their mean value as the hardness of the groping area.
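
A minimal sketch of the per-area hardness computation: \({d}_{g}\) of (3) at each of the five groping points, averaged over the area. The clipping to the 0–200 mm link range and the sample values are our assumptions.

```python
def hardness_of_area(P_pushed_mm, Pg_mm):
    """Hardness of one groping area: d_g of Eq. (3) at each of the five
    groping points, averaged. Positions are in mm, downward positive; the
    clipping to the 0-200 mm link range is an assumed safeguard."""
    d = [min(max(p - pg, 0.0), 200.0) for p, pg in zip(P_pushed_mm, Pg_mm)]
    return sum(d) / len(d)

# Example: endpoint positions at F = 80 N over five points of a soft area,
# with the haptically measured surface at 0 mm (made-up values).
d_g = hardness_of_area([75.0, 80.2, 78.9, 90.5, 84.1], [0.0] * 5)  # ~81.7 mm
```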

Friction

To estimate the dynamic friction coefficient \({\mu }_{d}\), the robot makes the arms rub one round trip in the lateral direction on the ground surface in the GGM or on the obstacle surface in the OGM by using (1) (\(\alpha\) = 0.5). To precisely estimate \({\mu }_{d}\), the influence of static friction and stick–slip phenomena should be removed. According to [36], the sliding speed \(v\) and normal force \(N\) (= \({F}_{d}\)) should be adequately selected to avoid stick–slip. Here, \(v\) should be as fast as possible in terms of time efficiency, so we set \(v\) to 7 mm/s, which is the maximum stable speed of the endpoint. We then explored a suitable \(N\) on various surfaces and finally set it to 15 N. As shown in the left part of Fig. 6b, we measured the shear force when rubbing the groping area, calculated \({F}_{S}\) as the mean value over the round trip, and derived \({\mu }_{d}\) from \({F}_{S}={\mu }_{d}N\). Here, the materials differ between the end-effector (resin) and the crawler (rubber). In this study, we assume that the friction coefficient of the crawler shoe \({\mu }_{rubber}\) (0.5) and that of the end-effector \({\mu }_{resin}\) (0.38) are known in advance. By using the scaling factor \({\mu }_{rubber}/{\mu }_{resin}\), the approximately converted dynamic friction coefficient for the crawler \({\mu }_{d}^{^{\prime}}\) is obtained by

$${\mu }_{d}^{\mathrm{^{\prime}}}=\frac{{\mu }_{rubber}}{{\mu }_{resin}}{\mu }_{d}.$$
(4)

To estimate the static friction coefficient \({\mu }_{s}\), the robot makes the arm lift a target object while applying a holding force to its lateral sides for OGM. As shown in the right part of Fig. 6b, on the basis of the force applied in the vertical (shear) direction of the left arm \({F}_{gL}\), right arm \({F}_{gR}\), and the holding force in the normal direction \({F}_{H}\), the static friction coefficient is given by

$${\mu }_{s}=\frac{{F}_{gL}+ {F}_{gR}}{2{F}_{H}} .$$
(5)

Here, the holding force \({F}_{H}\) is set to 10 N, which is the same as in the hardness estimation.
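
The two friction estimates reduce to a few lines; the sketch below implements (4) and (5) with the parameter values from the text (\(N\) = 15 N, \({F}_{H}\) = 10 N, \({\mu }_{rubber}\) = 0.5, \({\mu }_{resin}\) = 0.38). Averaging the shear force over discrete samples is our assumption about how \({F}_{S}\) is computed.

```python
def dynamic_friction(shear_forces_N, N=15.0, mu_rubber=0.5, mu_resin=0.38):
    """mu_d' of Eq. (4): mean shear force Fs over one rubbing round trip,
    mu_d = Fs / N, then scaled from the resin end-effector to the rubber
    crawler shoe. N, mu_rubber, and mu_resin follow the paper."""
    Fs = sum(shear_forces_N) / len(shear_forces_N)
    return (mu_rubber / mu_resin) * (Fs / N)

def static_friction(F_gL, F_gR, F_H=10.0):
    """mu_s of Eq. (5): vertical (shear) forces on the two arms during a
    lift, divided by twice the holding force."""
    return (F_gL + F_gR) / (2.0 * F_H)

# Example: shear samples from one round trip (made-up values), and a lift
# where each arm supports about 5 N.
mu_d_prime = dynamic_friction([5.8, 6.1, 5.9, 6.0])   # ~0.52
mu_s = static_friction(4.9, 5.2)                      # ~0.51
```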

Weight

In the manipulation task for target objects, the robot needs information to judge whether it can lift the object. First, as shown in the right part of Fig. 6a, the hardness is estimated by applying \({F}_{H}\) = 10 N. If the hardness is sufficient, the robot lifts the object 50 mm in the vertical direction while still applying 10 N (Fig. 6c). The system knows from (5) whether the surface has a static friction coefficient sufficient for stable grasping by the two arms. If the object is successfully lifted, the weight \(M\) is simply given by

$$M={F}_{gL}+{F}_{gR}.$$
(6)

If \(M\) is heavier than the payload of the robot, the measurement must be unstable because the object slides from the end-effector, which means that \({F}_{g}\) frequently reaches zero. We use the mean value over 10 s as the weight of the object.
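
A sketch of the weight readout of (6) with the 10 s averaging described above. The sampling period and the near-zero slip check are our assumptions; the paper states only that \({F}_{g}\) repeatedly reaching zero indicates slipping.

```python
def estimate_weight(F_gL_series, F_gR_series, dt=0.01, window_s=10.0):
    """Weight readout of Eq. (6), averaged over a 10 s window. dt (the
    sampling period) and the slip check threshold are assumed values."""
    n = min(int(window_s / dt), len(F_gL_series), len(F_gR_series))
    samples = [F_gL_series[i] + F_gR_series[i] for i in range(n)]
    M = sum(samples) / n                      # mean supported force [N]
    slipped = any(s < 0.1 for s in samples)   # Fg dropped to ~zero
    return M, slipped

# Example: a stable 1 kg lift reads about 9.8 N on average.
M, slipped = estimate_weight([4.9] * 1000, [4.9] * 1000)
```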

Experimental settings

We define the accident risk based on the estimated PIPs and perform three kinds of experiments (Figs. 7, 8). The control system was built on the Robot Operating System (ROS).

Fig. 7 Experimental setup

Fig. 8 Groping actions to estimate PIPs including hardness, friction, and weight

Environmental conditions

We prepared three conditions of ground surface and object, corresponding to safe, attentional, and dangerous. The relationship between the value of each PIP and the state of the robot (absolutely executable, marginally executable, or non-executable) was obtained beforehand from our preparatory experiments. Thus, we here evaluate whether the estimation system outputs the correct category of accident risk for each environmental condition.

  • Hardness: The ground situation varies with location. We thus prepared flat ground composed of a wood board (safe), polyurethane on a wood board (attentional), or polyurethane only (dangerous) (Fig. 7a).

  • Friction: When the robot moves up a slope, the friction is quite important for preventing a slip-fall. We thus prepared a slope of 15\(^\circ\) with a surface of a wood board (safe), an iron plate coated with lubricant (attentional), or a steel plate coated with Teflon (dangerous) (Fig. 7b).

  • Weight: When transporting and removing objects, the robot must know their weights. The arm payload of e-OCTOPUS is 3 kg, so we prepared 1 kg (safe), 2.5 kg (attentional), and 5 kg (dangerous) weights in the same cardboard box (Fig. 7c).

For hardness and friction estimation, as shown in Fig. 7a, b, the robot gropes two groping areas in front of it by using the left and right arms, respectively. To evaluate the fundamental measurement performance, i.e., accuracy and precision, the robot gropes the same groping area for ten sets. For weight estimation, as shown in Fig. 7c, the robot gropes an object in front of it by cooperatively using the left and right arms and, likewise, gropes the same object for ten sets.

Accident risk judgement

One of the purposes of estimating PIPs is to judge whether the robot can safely perform the task. Thus, the risk (safe, attentional, or dangerous) was defined by using \({Th}_{S-A}\) (the boundary between safe and attentional) and \({Th}_{A-D}\) (the boundary between attentional and dangerous).

For the hardness, \({Th}_{A-D}\) was derived from the maximum safe roll angle of the robot \({\theta }_{m\_Roll}\) (= 7\(^\circ\)) and the lateral width of the flipper \({L}_{Flipper}\) (= 500 mm), as shown in Fig. 7a. The risk depends on the environment and task, so it is difficult to define theoretically. In this study, for simplification, \({Th}_{S-A}\) was set to half of \({Th}_{A-D}\), and we obtained \({Th}_{A-D}\) = 60.93 mm and \({Th}_{S-A}\) = 30.47 mm. The categories of accident risk are thus given by:

$$\begin{array}{cc}\begin{array}{l}\mathrm{Dangerous }:\\ \mathrm{Attentional }:\\ \mathrm{Safe }:\end{array}& \begin{array}{l}{d}_{g}\ge {L}_{Flipper}\times \mathrm{sin}({\theta }_{m\_Roll})\\ {d}_{g}\ge {L}_{Flipper}\times \mathrm{sin}\left({\theta }_{m\_Roll}\right)\times 0.5\\ {d}_{g}<{L}_{Flipper}\times \mathrm{sin}({\theta }_{m\_Roll})\times 0.5.\end{array}\end{array}$$
(7)

For the friction, the thresholds are derived from the condition for preventing a slip-fall on the slope concerned, \({\theta }_{Slope}\) (= 15\(^\circ\) in our setup): to keep a safety margin, \({Th}_{S-A}\) was set to \(\mathrm{tan}({\theta }_{Slope})\), the dynamic friction coefficient \({\mu }_{d}^{^{\prime}}\) at which the robot is on the verge of slipping. Like the hardness, the risk depends on the environment and task, so it is difficult to define theoretically. For simplification, \({Th}_{A-D}\) was set to half of \({Th}_{S-A}\), and we obtained \({Th}_{A-D}\) = 0.1339 and \({Th}_{S-A}\) = 0.2679. The risk is thus given by:

$$\begin{array}{cc}\begin{array}{l}\mathrm{Dangerous }:\\ \mathrm{Attentional }:\\ \mathrm{Safe }:\end{array}& \begin{array}{l}{\mu }_{d}^{\mathrm{^{\prime}}}\le \mathrm{tan}\left({\theta }_{Slope}\right)\times 0.5\\ {\mu }_{d}^{\mathrm{^{\prime}}}\le \mathrm{tan}\left({\theta }_{Slope}\right)\\ {\mu }_{d}^{\mathrm{^{\prime}}}>\mathrm{tan}\left({\theta }_{Slope}\right).\end{array}\end{array}$$
(8)

For the weight, \({Th}_{A-D}\) was set based on the maximum holding force of the arm \({F}_{m\_Hold}\) (= 20 N (= 10 N \(\times\) 2)) and the friction coefficient \({\mu }_{hand}\) (= \({\mu }_{s}\) = 0.8 (rubber)) of the end-effector to prevent dropping the object. Like the hardness and friction, the risk depends on the environment and task, so \({Th}_{S-A}\) was simply set to half of \({Th}_{A-D}\), and we obtained \({Th}_{A-D}\) = 16 N and \({Th}_{S-A}\) = 8 N. The risk is thus given by:

$$\begin{array}{cc}\begin{array}{l}\mathrm{Dangerous }:\\ \mathrm{Attentional }:\\ \mathrm{Safe }:\end{array}& \begin{array}{l}M\ge {\mu }_{hand}\times {F}_{m\_Hold}\\ M\ge {\mu }_{hand}\times {F}_{m\_Hold}\times 0.5\\ M<{\mu }_{hand}\times {F}_{m\_Hold}\times 0.5.\end{array}\end{array}$$
(9)
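
The three classifiers (7)–(9) share one structure: two thresholds and a direction (for friction, a larger value is safer; for hardness and weight, a smaller value is safer). A compact sketch with the paper's parameter values:

```python
import math

def classify(value, th_sa, th_ad, higher_is_safer):
    """Three-level accident risk from one PIP and two thresholds,
    matching the top-down case order of Eqs. (7)-(9)."""
    if higher_is_safer:                      # friction: large mu_d' is safe
        if value <= th_ad: return "dangerous"
        if value <= th_sa: return "attentional"
    else:                                    # hardness d_g and weight M
        if value >= th_ad: return "dangerous"
        if value >= th_sa: return "attentional"
    return "safe"

# Th_{A-D} from Eqs. (7)-(9); Th_{S-A} is half (hardness, weight) or
# double (friction) of it, per the text.
th_ad_hard = 500.0 * math.sin(math.radians(7.0))   # 60.93 mm
th_ad_fric = math.tan(math.radians(15.0)) * 0.5    # 0.1339
th_ad_wgt = 0.8 * 20.0                             # 16 N

print(classify(20.0, th_ad_hard * 0.5, th_ad_hard, False))  # safe
print(classify(0.10, th_ad_fric * 2.0, th_ad_fric, True))   # dangerous
print(classify(9.0, th_ad_wgt * 0.5, th_ad_wgt, False))     # attentional
```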

Results and discussion

Figures 9, 10, 11 show the experimental results. We discuss them in terms of correct categorization of the accident risk and the time spent groping.

Fig. 9 Estimated hardness and accident risk

Fig. 10 Estimated friction and accident risk

Fig. 11 Estimated weight and accident risk

Hardness

Figure 9 shows the estimated hardness \({d}_{g}\) for the left and right arms in the three situations; from the figure, the hardness could be estimated stably, and the difference among the materials could be clearly seen. The colored solid lines show the means for each condition. The dotted lines show \({Th}_{S-A}\) and \({Th}_{A-D}\), which were derived from (7). On the basis of the judged accident risk, the robot on the wooden ground can pass safely, the robot on the polyurethane on the wooden ground must be careful, and the robot on the polyurethane ground is at high risk of rollover. In the last case, the robot should change its route to be safe. We confirmed that the system could estimate the hardness adequately. Moreover, we found that one set of groping took about 90 s (safe), 150 s (attentional), and 250 s (dangerous) because the time taken was directly related to the vertical distance traveled by the endpoints. We also confirmed that the hardness was determined by just one set of groping, so the robot can obtain the hardness in 250 s at the longest.

Friction

Figure 10a shows the estimated dynamic friction coefficient \({\mu }_{d}^{^{\prime}}\) for the left and right arms in the three situations. The colored solid lines show the means for each condition. The dotted lines show \({Th}_{S-A}\) and \({Th}_{A-D}\), which were derived from (8). We found that each value was not stable, although the range of variation could be seen. This seems to be due to slippage, stick–slip, and minute vibration of the arm, although we had considered ways to deal with them. Thus, the accident risk could not be clearly identified. For surfaces with higher friction, the friction force tends to change dynamically, so the estimated \({\mu }_{d}^{^{\prime}}\) varies. Therefore, for a stable output, we calculated the mean of the friction \({\mu }_{d}^{^{\prime}}(n)\), which is given by

$${\mu }_{d}^{^{\prime}}\left(n\right)=\left({\sum }_{i=1}^{n}{\mu }_{d\_i}^{^{\prime}}\right)/n ,$$
(10)

where \(n\) is the number of groping sets, and \({\mu }_{d\_i}^{^{\prime}}\) is \({\mu }_{d}^{^{\prime}}\) at the \(i\)-th groping. Figure 10b shows the mean, and we found that, in our case, \({\mu }_{d}^{^{\prime}}\) (\(n\) = 2), the mean at the 2nd groping, could be used as the dynamic friction coefficient considering the stability of the outputs. From the accident risk judgement, the robot can climb the wooden slope safely, requires attention when climbing the iron plate with the lubricant, and cannot climb the steel plate with the Teflon due to insufficient friction. One set of groping took about 40 s independently of the conditions because the arm could rub the surfaces at a constant speed. We confirmed that the friction could be determined by two sets of groping, so the robot can obtain the friction in about 80 s.
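
The running mean of (10) and a stopping rule are sketched below. The tolerance-based stopping criterion is our illustrative assumption; the paper simply reports that the mean at \(n\) = 2 was already stable.

```python
def running_mean(mu_series):
    """Cumulative mean of Eq. (10): mu_d'(n) after each groping set."""
    means, total = [], 0.0
    for i, mu in enumerate(mu_series, start=1):
        total += mu
        means.append(total / i)
    return means

# Stop once consecutive means agree within a tolerance (assumed rule;
# in the experiments n = 2 was already sufficient).
m = running_mean([0.60, 0.59, 0.60, 0.59])
n_stop = next((i + 1 for i in range(1, len(m))
               if abs(m[i] - m[i - 1]) < 0.02), len(m))   # n_stop = 2
```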

Weight

Figure 11a shows the estimation results for the weight \(M\) in the three situations. The dotted lines show \({Th}_{S-A}\) and \({Th}_{A-D}\), which were derived from (9). For the 1 and 2.5 kg objects, the system could output the weight stably because the static friction force was larger than the weight of the object. However, for the 5 kg object, the result was unstable because slip occurred between the end-effector and the object even when the maximum holding force was applied. Specifically, in this situation of slipping in the vertical direction, the robot cannot estimate whether the weight of the object is over the payload of the robot arm because the weight is then calculated from the dynamic friction force [37]. We then calculated the mean of the estimation results in the same way as for the friction estimation (Fig. 11b). For the 1 and 2.5 kg objects, the outputs were stable, but for the 5 kg object, the output was still unstable. To evaluate the stability of the measured value, we calculated the standard deviation \(\sigma\) of the raw data (Fig. 11c). For the 1 and 2.5 kg objects, \(\sigma\) was extremely small, which means that the physical interaction was reproducible and high-reliability estimation was possible. On the other hand, for the 5 kg object, \(\sigma\) was larger than the others. In our case, we found that the system could estimate whether the object could be grasped (for example, by using \(\sigma\) < \({T}_{g}\) = 2 N for graspable and \(\sigma\) > \({T}_{ng}\) = 5.5 N for not graspable) by groping twice. One set of groping took about 90 s independently of the conditions because the arm just held and lifted the object. We confirmed that the weight can be determined by at least two sets of groping, so the robot can obtain the weight in about 180 s.

Discussion: contribution and limitation

The proposed framework is simple in terms of each technique but highly integrated as a framework consisting of sensing, groping control, and PIP estimation, while considering practical implementation. The experimental results showed that the proposed system could estimate PIPs and distinguish grounds and objects with different physical properties. The framework of active environmental touch is an important strategy for understanding the close surroundings, particularly in unknown environments and where external sensors cannot work properly (due to smoke, etc.). We believe that this simplicity could give robot controls the robustness required in disaster response work. Moreover, at disaster sites, these pieces of information are useful not only for the robot itself but also for other robots and even human rescue teams, whose interacting parts have different weights and friction coefficients. However, the proposed framework has some limitations that need to be addressed in the future, as follows.

  • Accuracy. For accident risk estimation, the friction and weight required processing of the raw data (using the mean and standard deviation). Moreover, the accuracy of the weight estimation was not high. This was mainly caused by unmodeled friction occurring between the robot end-effector and the ground/object. It can be addressed by modifying the shape and material of the end-effector for groping, as well as by modifying the control strategy to optimize the stability of the contact state between them. We also need to consider an adaptive groping system that adjusts the groping parameters, such as the target force and groping speed, according to the result of the initial groping.

  • Time efficiency. The groping took 80–250 s for each groping area. In terms of ensuring safety in disaster response work, where failures such as rollover are not allowed, this completion time might be acceptable, but faster estimation is of course desirable. It could be achieved by further integrating the force information obtained from groping with surface information from visual sensors to optimize the groping strategy, including the number of groping points (groping resolution) and the groping areas. As the accuracy of groping increases, the time efficiency naturally increases because the number of groping sets decreases.

  • Environmental complexity and further automation. In this study, we assumed that the ground was a flat surface and the objects were rectangular. However, actual disaster sites have uneven, jagged, and ragged surfaces and are made up of a complex mixture of different materials. Moreover, we assumed that teleoperators designate the type of environment, such as ground, target object, or obstacle. In the future, we will thus incorporate high-level object shape recognition, automatic object segmentation, and the adaptive groping system stated above to adapt to more complex environments.

Conclusion and future works

We proposed a fundamental framework to estimate the hardness, friction, and weight by active environmental touch (groping motion), as physically interactive parameters (PIPs) between a robot and an environment that cannot be obtained by visual-perceptual methods alone (surface survey). The robot actively touched, pushed, rubbed, and lifted objects in the environment under hybrid position/force control and estimated the PIPs from the measured force and the position of the end-effector of the arm. We designed the ground and object groping modes for effective PIP estimation. The robot judged the accident risk as safe, attentional, or dangerous based on the estimated PIPs. In the experiments, we prepared environments that had the same surface shape but different hardness, friction, and weight. The results indicated that the proposed framework could estimate PIPs and was useful for judging the accident risk. Moreover, we could derive effective information processing for improving the PIP estimation accuracy and robustness.

In the future, we will further increase the estimation accuracy by improving the material and shape of the end-effector. We will also optimize the number of groping points and the groping speed to increase the time efficiency. Using both visual and haptic information would increase the capability of environmental recognition, so we will further consider an integration method. We will also investigate the relationship between each PIP and the accident risk (e.g., traversability, graspability) in more detail by defining individual criteria to achieve safer and more efficient disaster response work.

Availability of data and materials

Not applicable.

References

  1. Murphy RR, Kravitz J, Stover SL, Shoureshi R (2009) Mobile robots in mine rescue and recovery. IEEE Robot Autom Mag 16(2):91–103

  2. Kamegawa T, Yamasaki T, Igarashi H, Matsuno F (2004) Development of the snake-like rescue robot “KOHGA”. In: 2004 IEEE Int Conf Robotics and Automation. pp 5081–5086

  3. Hiramatsu Y, Aono T, Nishio M (2002) Disaster restoration work for the eruption of Mt Usuzan using an unmanned construction system. Adv Robot 16(6):505–508

  4. Kamezaki M, Ishii H, Ishida T et al (2016) Design of four-arm four-crawler disaster response robot OCTOPUS. In: 2016 IEEE Int Conf Robotics and Automation. pp 2840–2845

  5. Kamezaki M, Chen K, Azuma K et al (2017) Development of a prototype electrically-driven four-arm four-flipper disaster response robot OCTOPUS. In: 2017 IEEE Conf Control Technology and Applications. pp 1019–1024

  6. Kamezaki M, Yang J, Iwata H, Sugano S (2016) Visibility enhancement using autonomous multicamera controls with situational role assignment for teleoperated work machines. J Field Robotics 33(6):802–824

  7. Chen K, Kamezaki M, Katano T et al (2017) A semi-autonomous compound motion pattern using multi-flipper and multi-arm for unstructured terrain traversal. In: 2017 IEEE/RSJ Int Conf Intell Robots and Systems. pp 2704–2709

  8. Kamezaki M, Iwata H, Sugano S (2014) A Practical operator support scheme and its application to safety-ensured object break using dual-arm machinery. Adv Robot 28(23):1599–1615

  9. Rohmer E, Ohno K, Yoshida T, Nagatani K, Koyanagi E, Tadokoro S (2010) Integration of a sub-crawler’s autonomous control in Quince highly mobile rescue robot. In: 2010 IEEE/SICE Int Symp System Integration. pp 78–83

  10. Kamezaki M, Katano T, Chen K, Ishida T, Sugano S (2020) Preliminary study of a separative shared control scheme focusing on control-authority and attention allocation for multi-limb disaster response robots. Adv Robot 34(9):575–591

  11. Sanguino TJM, Gómez FP (2015) Improving 3D object detection and classification based on Kinect sensor and hough transform. In: 2015 IEEE Int Symp Innovations in Intell Syst & Applications. pp 1–8

  12. Manap MSA, Sahak R, Zabidi A, Yassin I (2015) Object detection using depth information from Kinect sensor. In: 2015 Int Colloquium on Signal Processing and its application. pp 160–163

  13. Nagai M, Tianen C, Shibasaki R, Kumagai H, Ahmed A (2009) UAV-borne 3-D mapping system by multisensor integration. IEEE Trans Geosci Remote Sens 47(3):701–708

  14. Dube R, Gawel A, Cadena C, Siegwart R (2016) 3D localization, mapping and path planning for search and rescue operations. In: 2016 IEEE Int Symp Safety, Security and Rescue Robotics. pp 272–273

  15. Khalife J, Ragothaman S, Kassas ZM (2017) Pose estimation with LiDAR odometry and cellular pseudoranges. In: 2017 IEEE Intell Vehicles Symp. pp 1722–1727

  16. Klatzky RL, Lederman SJ (1992) Stages of manual exploration in haptic object identification. Percept Psychophys 52(6):61–70

  17. Nagatani K, Kiribayashi S, Okada Y, Tadokoro S, Nishimura T, Yoshida T, Koyanagi E, Hada Y (2011) Redesign of rescue mobile robot Quince. In: 2011 IEEE Int Symp Safety, Security, and Rescue Robotics. pp 13–18

  18. Yamauchi BM (2004) PackBot: a versatile platform for military robotics. Proc SPIE, Unmanned Ground Vehicle Technol VI 5422:228–237

  19. Okamoto S, Nagano H, Yamada Y (2013) Psychophysical dimensions of tactile perception of textures. IEEE Trans Haptics 6(1):81–93

  20. Roychowdhury S, Zhao M, Wallin A, Ohlsson N, Jonasson M (2018) Machine learning models for road surface and friction estimation using front-camera images. In: 2018 Int Joint Conf Neural Networks. pp 1–8

  21. Cunningham C, Ono M, Nesnas I, Yen J, Whittaker WL (2017) Locally-adaptive slip prediction for planetary rovers using Gaussian processes. In: 2017 IEEE Int Conf Robotics & Autom. pp 5487–5494

  22. Brandao M, Hashimoto K, Takanishi A (2016) Friction from vision: a study of algorithmic and human performance with consequences for robot perception and teleoperation. In: 2016 IEEE-RAS Int Conf Humanoid Robots. pp 1–8

  23. Suwannakhun S, Daungmala P (2018) Estimating pig weight with digital image processing using deep learning. In: 2018 Int Conf Signal-Image Technology & Internet-Based Systems. pp 320–326

  24. Windau J, Shen WM (2010) An inertia-based surface identification system. In: Proc IEEE Int Conf Robot Autom. pp 2330–2335

  25. Yuan W, Zhu C, Owens A, Srinivasan MA, Adelson EH (2017) Shape-independent hardness estimation using deep learning and a GelSight tactile sensor. In: 2017 IEEE Int Conf Robot Autom. pp 951–958

  26. Ambe Y, Matsuno F (2012) Leg-grope-walk–walking strategy on weak and irregular slopes for a quadruped robot by force distribution. In: 2012 IEEE/RSJ Int Conf Intell Robots and Systems. pp 1840–1845

  27. Murooka M, Nozawa S, Kakiuchi Y, Okada K, Inaba M (2017) Feasibility evaluation of object manipulation by a humanoid robot based on recursive estimation of the object’s physical properties. In: 2017 IEEE Int Conf Robotics and Automation. pp 4082–4089

  28. Sugaiwa T, Fujii G, Iwata H, Sugano S (2010) A Methodology for setting grasping force for picking up an object with unknown weight, friction, and stiffness. In: 2010 IEEE-RAS Int Conf Humanoid Robots. pp 288–293

  29. Maeno T, Kawamura T, Cheng S-H (2004) Friction estimation by pressing an elastic finger-shaped sensor against a surface. IEEE Trans Robot Automat 20(2):222–228

  30. Kamezaki M, Iwata H, Sugano S (2017) Condition-based less-error data selection for robust and accurate mass measurement in large-scale hydraulic manipulators. IEEE Trans Instrumentation and Measurement 66(7):1820–1830

  31. Chen K, Kamezaki M, Katano T, et al (2017) A preliminary study on a groping framework without external sensors to recognize near-environmental situation for risk-tolerant disaster response robots. In: 2017 IEEE Int Symp Safety, Security, and Rescue Robotics. pp 181–186

  32. Murooka M, Ueda R, Nozawa S, Kakiuchi Y, Okada K, Inaba M (2016) Planning and execution of groping behavior for contact sensor based manipulation in an unknown environment. In: 2016 Proc IEEE Int Conf Robotics and Automation. pp 3955–3962

  33. Bae J-H, Park S-W, Kim D, Baeg M-H, Oh S-R (2012) A grasp strategy with the geometric centroid of a groped object shape derived from contact spots. In: 2012 IEEE Int Conf Robot Autom. pp 3798–3804

  34. Ferretti G, Magnani G, Rocco P (1997) Toward the implementation of hybrid position/force control in industrial robots. IEEE Trans Robot Automat 13(6):838–845

  35. Hoang D-C, Lilienthal AJ, Stoyanov T (2020) Panoptic 3D mapping and object pose estimation using adaptively weighted semantic information. IEEE Robot Automat Lett 5(2):1962–1968

  36. Lee DW, Banquy X, Israelachvili JN (2013) Stick–slip friction and wear of articular joints. PNAS 110(7):567–574

  37. Chen W, Khamis H, Birznieks I, Lepora NF, Redmond SJ (2018) Tactile sensors for friction estimation and incipient slip detection—toward dexterous robotic manipulation: a review. IEEE Sensors J 18(22):9049–9064


Acknowledgements

This research was supported in part by JSPS KAKENHI Grant Number 18KT0063, in part by the Industrial Cluster Promotion Project in Fukushima Pref., in part by the Institute for Disaster Response Robotics, Future Robotics Organization, Waseda University, and in part by the Research Institute for Science and Engineering, Waseda University.

Funding

Not applicable.

Author information


Contributions

MK took the lead in proposing a system, experimentation, programming, and wrote this paper as the corresponding author. YU and KA helped to build electric OCTOPUS, implement experiments and analyze data. SS provided technical advice. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Mitsuhiro Kamezaki.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

All authors agree.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article

Cite this article

Kamezaki, M., Uehara, Y., Azuma, K. et al. A framework of physically interactive parameter estimation based on active environmental groping for safe disaster response work. Robomech J 8, 22 (2021). https://doi.org/10.1186/s40648-021-00209-1
