
Human-based framework for the assembly of elastic objects by a dual-arm robot

Abstract

This paper proposes a new framework for planning assembly tasks involving elastic parts. As an example of this kind of assembly task, we deal with the insertion of ring-shaped objects into a cylinder by a dual-arm robot. The proposed framework combines human movements, which determine the overall assembly strategy, with an optimization-based motion planner that generates the robot trajectories. The motion of the human’s hands, more specifically of the fingers gripping the object, is captured by a Leap Motion Controller. Then, key points in the recorded trajectory of the position and orientation of the human’s fingers are extracted. These points are used as partial goals in the optimization-based motion planner, which generates robot arm trajectories that minimize the object’s deformation. Experimental results verified the validity of the key points extracted from the human’s movements, which enabled the robot to successfully assemble ring-shaped elastic objects. We compared these results with an assembly performed by purely repeating all of the human’s hand movements.

Introduction

As stated by Napier [1]: “the hand of man is the most perfect and complete mechanical organ that nature has yet produced”, and it also comprises a very fine tactile (somatic) sense. Indeed, most of the tasks that only humans are able to accomplish are possible thanks to the skillfulness of our hands. Furthermore, if we add the fact that humans perform grasp and task planning [2, 3] (although most of us are not aware of this), then we can realize that wanting robots to be able to do human-like tasks is quite a challenge. For these reasons (among others), much more attention has recently been given to the development of human tele-operated systems, haptic interfaces, human knowledge transfer to robots, etc., where the complex planning strategies are carried out by humans. Using these techniques, the burden of developing complex motion planners, control systems and/or sensory systems can be reduced considerably (in some cases they become completely unnecessary). In the particular case of assembly tasks, knowledge of all the parts to be assembled is indispensable. Since most assembly tasks have a very small margin of error (on the order of millimeters), vision sensors and force sensors are frequently needed for a robot to successfully complete the assembly. In our previous work [4], we developed an assembly planner able to insert an elastic o-ring into a cylinder. The planner first computes a key position (or middle point) based on the object’s position; this key position is then used as a partial goal in an optimization-based motion planner, which computes a collision-free trajectory that at the same time minimizes the ring-shaped object’s deformation through an elastic-energy-based objective function. The algorithm for computing these key positions is purely heuristic. It was demonstrated that the developed planner was able to successfully insert ring-shaped objects into a cylinder.

Fig. 1 Outline of the proposed framework for planning the assembly of an elastic object into a cylinder

In this paper, we discuss a novel strategy for generating the key positions used as partial goals in the motion planning algorithm, which generates a collision-free trajectory for the robot while minimizing the deformation of the object. Figure 1 illustrates the framework proposed in this work for the assembly of elastic parts. This framework represents a novel, cheap and easy solution for planning complicated assembly tasks. The complex and tedious planning for inserting the elastic object into a cylinder is done by a human. A Leap Motion Controller [5] is used to record the trajectory of the human’s hands when inserting the ring-shaped object into the cylinder. The recorded data is processed and the hands’ trajectories of the assembly task are identified. From these trajectories, key positions (points) are extracted (as many points as needed to guarantee that the assembly task is successfully accomplished) and used as partial goals for the motion planner that minimizes the ring-shaped object’s deformation. Experimental results confirmed the validity of the proposed framework with ring-shaped objects of two different materials. Finally, we make a quantitative analysis of the proposed assembly planner in comparison with an assembly based on the human’s movements only.

This paper is organized as follows: in the "Related work" section, we briefly review related work. In the "Human-based assembly strategy" section, we show how to record the human movements while assembling the ring-shaped object and how to extract the key points needed by the motion planning algorithm. In the "Experimental results" section, we demonstrate the validity of the proposed assembly planner, using a Baxter Research robot. Finally, in the "Conclusion" section, we summarize the main contributions of this work.

Related work

The Leap Motion Controller’s [5] accuracy and capability to track the human’s hands have attracted many researchers. In recent years, the Leap Motion Controller has been used for the remote control of robots [6], as a human–robot interface [7, 8], for teleoperation [9], etc. In most of these works the Leap Motion Controller is placed on a flat surface and the human operator carries out the task in the air, which is compensated using different kinds of controllers to estimate the desired position and orientation. On the contrary, in this work the human actually uses the same object used by the robot to carry out the task, and the human’s movements are recorded by the Leap Motion Controller placed on a tripod to improve its capture range, thus increasing its accuracy and reliability. It will be demonstrated in the "Experimental results" section that the position and orientation obtained from the Leap Motion Controller can be used to reproduce the human’s movements without any prediction or estimation algorithm, simply by translating from Leap coordinates into world coordinates.

Regarding assembly tasks, previous work has discussed the insertion of a flexible beam [10] and a flexible wire [11] into a hole. The insertion of a vibrating linear deformable object into a hole using a force/torque sensor mounted on the robot’s wrist has been studied by Yue and Henrich [12]. The assembly of a rubber belt and fixed pulleys, where the belt is first placed around a small pulley and then stretched so as to be placed around a bigger pulley, has been discussed by Miura and Ikeuchi [13, 14]. An assembly strategy for complex-shaped parts has been discussed by Song et al. [15], where a force control based on visual geometric information of the parts was developed. Cho et al. [16] developed a sensor-less force control for industrial robots, which was applied to a peg-in-hole assembly task by using a disturbance observer that estimates disturbance torques at each of the robot’s joints.

Although there are specialized grippers for assembling o-rings [17], as far as we know, there is no work discussing the assembly of deformable ring-shaped objects by a dual-arm robot that focuses on minimizing the deformation of the object. Using a dual-arm robot does not limit the type of tasks that can be achieved, thus representing a cheaper solution than specialized machines that can only insert o-rings. In addition, focusing on the object’s deformation is very important, since the main role of o-rings is to seal pipes against liquids and gases; this is why the o-ring diameter is smaller than that of the pipe/tube it is inserted in (in this way the o-ring exerts pressure on the pipe/tube, leaving no gap between them). Too much deformation can permanently change the size of the object, leading to undesired gaps between the object and the pipe/tube, or can deteriorate the object, reducing its life span. For these reasons, in this work we propose a combination of human demonstration (to relieve the burden of planning complex tasks) and optimization-based planning (to minimize the object’s deformation) for inserting ring-shaped objects.

Human-based assembly strategy

In this section we explain the human-based strategy for assembling an elastic ring-shaped object into a cylinder. First, we briefly describe the methodology used to capture the human’s finger movements when carrying out the assembly task and the necessary conversion to the robot’s position and orientation. Then, we show how to extract the key points used as partial goals in the optimization-based motion planner to achieve the assembly task.

Human’s movements acquisition

To capture the human’s movements when assembling an elastic ring-shaped object into a cylinder, we employed a Leap Motion Controller [5]. This is a relatively new device that can track human hand movements as well as finger-like tools such as pens, probes, etc. It has three infrared LEDs and two infrared cameras, and its tracking accuracy has been reported to be under 0.7 mm [18]. The Leap Motion Controller is connected through a USB cable to the computer, where the tracking data can be recorded through the API (Application Programming Interface) available from the manufacturer [5]; we employed software release version 2.2.3. The tracking data is available in Cartesian coordinates with respect to the Leap Motion Controller’s reference frame located at its center.
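For illustration, the data acquisition described above could be sketched as follows with the SDK v2 Python bindings (a minimal sketch under the assumption of that API; the sampling loop and variable names are ours, not the actual implementation):

```python
# Minimal sketch of recording tracking data with the Leap SDK v2 Python
# bindings; attribute names follow that SDK, but this is an illustrative
# reconstruction, not the authors' code.
import time
import Leap

controller = Leap.Controller()
samples = []
t_end = time.time() + 10.0           # record for 10 s (arbitrary choice)

while time.time() < t_end:
    frame = controller.frame()       # most recent tracking frame
    for hand in frame.hands:
        index = hand.fingers.finger_type(Leap.Finger.TYPE_INDEX)[0]
        thumb = hand.fingers.finger_type(Leap.Finger.TYPE_THUMB)[0]
        samples.append({
            "t": frame.timestamp,                    # microseconds
            "side": "left" if hand.is_left else "right",
            "confidence": hand.confidence,           # hand-model fit, 0-1
            "index_tip": index.tip_position.to_float_array(),
            "thumb_tip": thumb.tip_position.to_float_array(),
            "palm_normal": hand.palm_normal.to_float_array(),
            "direction": hand.direction.to_float_array(),
        })
```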

Fig. 2 Experimental setup for tracking the assembly of ring-shaped objects into a cylinder. Front view (a) and side view (b) of the Leap Motion Controller and the cylinder

Figure 2 shows the experimental setup for tracking the assembly of a ring-shaped object into a cylinder (fixed on a table). Typically [6,7,8], the Leap Motion Controller is placed on a flat surface facing upwards. However, to improve the Leap Motion Controller’s field of view, we placed it on a tripod above the cylinder’s surface to reduce the occlusion of the hands by the cylinder, as shown in Fig. 2. It should be pointed out that when some part(s) of the hand(s) are occluded, the Leap Motion Controller fits the observed data to a hand model and estimates the position and orientation of the occluded part(s) (the confidence of the fitted data is available at all times from the controller’s API). As the Leap Motion Controller’s reference frame (hereafter called \(\Sigma _{\text {leap}}\)) has a different orientation from the robot’s reference frame \(\Sigma _\mathrm {w}\), we define a reference frame \(\Sigma _\mathrm {c}(x_\mathrm {c}, y_\mathrm {c}, z_\mathrm {c})\) located at the cylinder’s surface, as shown in Fig. 2b, which has the same z axis and \(x-y\) plane orientation as \(\Sigma _\mathrm {w}\). To translate the tracked data coordinates from \(\Sigma _{\text {leap}}\) into \(\Sigma _{\mathrm {w}}\), we first determine the orientation of \(\Sigma _{\text {leap}}\) with respect to the orientation of \(\Sigma _{\mathrm {c}}\). For this purpose, we carried out the following experiments:

  1. Place the index finger along the height of the cylinder (pointing upwards, i.e., in the z direction of \(\Sigma _{\mathrm {c}}\), and without moving it) and track its tip position and direction vector (the unit vector pointing in the direction of the finger tip, available from the Leap Motion Controller API) for some seconds.

  2. Place the index finger on the center of the cylinder’s surface, along the x direction of \(\Sigma _{\mathrm {c}}\), without moving it, and track its tip position and direction vector for some seconds (positive x direction for the right hand and negative x direction for the left hand).

  3. Move the tip of the index finger on the cylinder’s surface along the positive y direction of \(\Sigma _{\mathrm {c}}\) (with the finger pointing in the x direction,Footnote 1 positive for the left hand and negative for the right hand) from one edge of the cylinder to the opposite one and track its tip position.

Each of these experiments was repeated 12 times (6 times with each hand). Furthermore, the hand confidence level, which as mentioned before is a measure of how well the observed data fits the controller’s hand model (its value ranges between 0 and 1.0), was also recorded. Using the hand confidence level, we selected only the experiments with the highest confidence (top six) for each of the three types of experiments.

Position

For the x and z directions (experiments 2 and 1, respectively), we compute the average direction vector (over the recorded time) and call the resulting unit vectors \(\hat{\varvec{X}}\) and \(\hat{\varvec{Z}}\), respectively. In the case of the y direction (experiment 3), as the finger is not pointing in the direction of y, we compute the average direction vector \(\varvec{Y}\) based on the position of the tip of the finger; by normalizing \(\varvec{Y}\) we obtain the unit vector \(\hat{\varvec{Y}}\), which together with \(\hat{\varvec{X}}\) and \(\hat{\varvec{Z}}\) can be written as:

$$\begin{aligned} {\tilde{\varvec{R}}}_{\mathrm {c}} = \begin{bmatrix} \hat{\varvec{X}}&\hat{\varvec{Y}}&\hat{\varvec{Z}} \end{bmatrix} \end{aligned}$$
(1)

where \(\tilde{\varvec{R}}_{\mathrm {c}}\) represents the estimated rotation matrix from the cylinder’s reference frame \(\Sigma _{\mathrm {c}}\) to the Leap Motion Controller reference frame \(\Sigma _{\text {leap}}\). As the experimental data does not yield a perfect rotation matrix (an orthonormal matrix), we first compute the roll, pitch and yaw (RPY) angles associated with \(\tilde{\varvec{R}}_{\mathrm {c}}\) and then construct a rotation matrix \(\varvec{R}_{\mathrm {c}}\) from the computed RPY angles. Therefore, we can rotate the tracked data from \(\Sigma _{\text {leap}}\) to \(\Sigma _{\mathrm {c}}\) by using \(\varvec{R}_{\mathrm {c}}^T\).

Next, to determine the position of \(\Sigma _{\mathrm {c}}\) with respect to the rotated frame \(\Sigma _{\text {leap}'}\) (same orientation as \(\Sigma _{\mathrm {c}}\), but located at the center of the Leap Motion Controller), we use the tracked tip positions of each of the previous experiments. After rotating the tip positions by \(\varvec{R}_{\mathrm {c}}^T\), from experiment 1 we obtain the \(x_{\mathrm {c}}\) and \(y_{\mathrm {c}}\) coordinates of the position of \(\Sigma _{\mathrm {c}}\), from experiment 2 we obtain \(z_{\mathrm {c}}\) and the coordinate \(x_{\mathrm {cc}}\) corresponding to the center of the cylinder, and from experiment 3 we obtain \(x_{\mathrm {c}}\), \(y_{\mathrm {c}}\) and \(z_{\mathrm {c}}\) again. Using the average of the three experiments and the cylinder’s radius, we determine the cylinder’s surface center \(^{\text {leap}'}\!\! \varvec{p}_{\mathrm {c}}\) relative to \(\Sigma _{\text {leap}'}\). Since we know the position of the cylinder in the robot’s reference frame, \(^{\mathrm {w}}\varvec{p}_{\mathrm {c}}\), the position of the finger’s tip \(^{\mathrm {w}}\varvec{p}_{\text {tip}}\) relative to the robot’s reference frame \(\Sigma _{\mathrm {w}}\) can be obtained as:

$$\begin{aligned} ^{\mathrm {w}}\varvec{p}_{\text {tip}} = \varvec{R}_{\mathrm {w}}\big ( \varvec{R}_{\mathrm {c}}^T (^{\text {leap}}\varvec{p}_{\text {tip}}) - \, ^{\text {leap}'}\varvec{p}_{\mathrm {c}} \big ) + \, ^{\mathrm {w}}\varvec{p}_{\mathrm {c}} \end{aligned}$$
(2)

where \(\varvec{R}_{\mathrm {w}}\) is the rotation matrix from \(\Sigma _{\text {leap}'}\) to \(\Sigma _{\mathrm {w}}\).
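The calibration procedure and the transformation of Eq. (2) can be summarized as in the following sketch (a minimal sketch assuming NumPy/SciPy; the function names and the use of SciPy’s Rotation class are illustrative assumptions, not the authors’ implementation):

```python
# Sketch of the frame calibration and the transform of Eq. (2). SciPy's
# Rotation class is used for the RPY round-trip that orthonormalizes the
# measured axes; all names are illustrative assumptions.
import numpy as np
from scipy.spatial.transform import Rotation


def estimate_R_c(z_dirs, x_dirs, y_tips):
    """Rotation from the cylinder frame to the Leap frame, built from the
    direction samples of experiments 1 and 2 and the tip positions of
    experiment 3 (each an (N, 3) array)."""
    z_hat = np.mean(z_dirs, axis=0)
    z_hat /= np.linalg.norm(z_hat)
    x_hat = np.mean(x_dirs, axis=0)
    x_hat /= np.linalg.norm(x_hat)
    y = y_tips[-1] - y_tips[0]          # net tip displacement along y
    y_hat = y / np.linalg.norm(y)
    R_tilde = np.column_stack([x_hat, y_hat, z_hat])
    # Measured axes are not exactly orthonormal: pass through RPY angles
    # and rebuild a proper rotation matrix, as described in the text.
    rpy = Rotation.from_matrix(R_tilde).as_euler("xyz")
    return Rotation.from_euler("xyz", rpy).as_matrix()


def tip_in_world(p_tip_leap, R_c, p_c_leap_rot, R_w, p_c_world):
    """Eq. (2): map a tracked tip position from Leap to world coordinates."""
    return R_w @ (R_c.T @ p_tip_leap - p_c_leap_rot) + p_c_world
```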

Orientation

The Leap Motion Controller provides the roll, pitch and yaw angles of the hand orientation, which represent rotations around the z, x and y axes of \(\Sigma _{\text {leap}}\): the roll is computed from the projection of the palm normal onto the \(x-y\) plane, while the pitch and yaw angles are computed from the projections of the palm’s direction vector onto the \(y-z\) and \(x-z\) planes, respectively (Fig. 3). As the reference frame \(\Sigma _{\text {leap}}\) is rotated with respect to \(\Sigma _{\mathrm {c}}\), using the RPY angles computed for constructing \(\varvec{R}_{\mathrm {c}}\), the orientation of the hand in \(\Sigma _{\text {leap}}\) is transformed to an orientation relative to \(\Sigma _{\mathrm {c}}\). Finally, the orientation of the hand is converted to a finger orientation, as the fingers are the equivalent of the robot’s grippers.
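For reference, the projections described above amount to the following computation (a sketch of the angle definitions in Leap coordinates; the Leap API exposes these angles directly, so this only illustrates their meaning):

```python
# Sketch of the hand-orientation angles in Leap coordinates (y up, z toward
# the user), mirroring the projections described in the text.
import numpy as np


def hand_rpy(palm_normal, palm_direction):
    roll = np.arctan2(palm_normal[0], -palm_normal[1])         # x-y plane
    pitch = np.arctan2(palm_direction[1], -palm_direction[2])  # y-z plane
    yaw = np.arctan2(palm_direction[0], -palm_direction[2])    # x-z plane
    return roll, pitch, yaw
```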

With this experimental setup of the Leap Motion Controller we not only get better tracking results of the hands, but also obtain a calibration between the Leap Motion Controller coordinate frame and the world frame, which helps overcome possible tracking errors, as pointed out by Kim et al. [9].

Key points extraction

First, we identify the starting and ending points of the recorded movements of the human’s hands, using a criterion similar to that of Nakaoka et al. [19]. Using the magnitude of the average velocity \(|\overline{\varvec{v}}(t)|\) between the tip of the index finger and the tip of the thumb of each hand (only these fingers grasp the object, as can be seen in Fig. 3), we define the following trajectory segments:

Fig. 3 Skeletal representation of the tracked hand by the Leap Motion Controller. The hand orientation is defined by the normal vector to the palm and the direction vector from the palm to the fingers

$$\begin{aligned} |\overline{\varvec{v}_{\mathrm {L}}}(t)|&\ge v_{\text {th}} \;\;\; \forall \;\;\; t_\mathrm {a} \le t \le t_\mathrm {b}, \nonumber \\ |\overline{\varvec{v}_{\mathrm {R}}}(t)|&\ge v_{\text {th}} \;\;\; \forall \;\;\; t_\mathrm {c} \le t \le t_\mathrm {d}, \end{aligned}$$
(3)

where the subscripts \(_\mathrm {L}\) and \(_\mathrm {R}\) indicate the left and right hand, respectively. The velocity magnitude threshold \(v_{\mathrm {th}}\) is set to \(|{\varvec{v}(t_\mathrm {h})}| \approx 0.72\) cm/s with \(t_\mathrm {h}=0.05\) s, based on previous assembly experiments [4]. The starting and ending points are defined as:

$$\begin{aligned} \varvec{X}_{\mathrm {i}}&= \varvec{X}(t_\mathrm {A} - t_\mathrm {h}) \;\;\; \mathrm {for} \;\;\; t_\mathrm {A} = {\text {min}}\, \{t_\mathrm {a}, \; t_\mathrm {c}\}, \nonumber \\ \varvec{X}_\mathrm {f}&= \varvec{X}(t_\mathrm {B} + t_\mathrm {h}) \;\;\; \mathrm {for} \;\;\; t_\mathrm {B} = {\text {max}}\,\{t_\mathrm {b}, \; t_\mathrm {d}\}, \end{aligned}$$
(4)

respectively, where \(\varvec{X}(t) = [\varvec{X}_\mathrm {p}^T(t) \; \varvec{X}_\mathrm {o}^T(t)]^T\), \(\varvec{X}_\mathrm {p}(t)\) denotes the average position vector of the thumb and index fingertips and \(\varvec{X}_\mathrm {o}(t)\) is the orientation vector (RPY angles) of the fingers at time t. Figure 4 shows an example of the average velocity magnitude \(|\overline{\varvec{v}}(t)|\) between the index and thumb fingertips of both hands throughout the recorded assembly. The horizontal solid line represents \(|v_{\mathrm {th}}|\), and the vertical solid lines represent the starting and ending points obtained through the process described above. The starting and ending times are the same for both hands’ trajectories.

Fig. 4 Velocity magnitude average between the index and thumb fingertips throughout the recorded assembly. The horizontal solid line represents \(|v_{\mathrm {th}}|\) and the vertical solid lines represent the starting and ending points obtained by Eq. (3)
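A minimal sketch of this start/end detection, under the assumption that the averaged fingertip positions of each hand are available as arrays, is given below (the names and the finite-difference velocity are ours):

```python
# Sketch of the start/end detection of Eqs. (3)-(4). `tips_L`/`tips_R` are
# (T, 3) arrays of the averaged index/thumb tip positions of each hand (in
# cm) sampled at times `t` (in s); names are illustrative assumptions.
import numpy as np

V_TH = 0.72   # velocity magnitude threshold [cm/s], from the text
T_H = 0.05    # time margin t_h [s], from the text


def speed(tips, t):
    """Finite-difference magnitude of the average tip velocity."""
    return np.linalg.norm(np.gradient(tips, t, axis=0), axis=1)


def assembly_bounds(tips_L, tips_R, t):
    """Return (t_A - t_h, t_B + t_h) delimiting the assembly motion."""
    moving_L = np.flatnonzero(speed(tips_L, t) >= V_TH)
    moving_R = np.flatnonzero(speed(tips_R, t) >= V_TH)
    t_A = min(t[moving_L[0]], t[moving_R[0]])    # first hand starts moving
    t_B = max(t[moving_L[-1]], t[moving_R[-1]])  # last hand stops moving
    return t_A - T_H, t_B + T_H
```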

After identifying the beginning and ending points of the assembly trajectory, we extract from each hand trajectory the points necessary to achieve the assembly task. These points will be used as partial goals (which we call key points) for the optimization-based motion planner developed in our previous work [4]. It should be pointed out that we must be careful about where to pick the key points from the trajectory of the human’s hands, so that the assembly task is successfully carried out by the robot.

In this work, we need to determine where to break the trajectory of each hand so as to guarantee that the object will be inserted into the cylinder. For this reason, the synchronization between the movements of both hands is crucial. Furthermore, when the object is about to make contact and while it is in contact with the cylinder, it is vital for the arms’ trajectory planner that the position of the opposite arm is known (during planning), to guarantee a successful assembly and at the same time minimize the object’s deformation. To cope with these critical points, the two robot arms are never moved at the same time.

At first, we propose to select key points such that the distance traveled between two consecutive key points does not exceed a threshold percentage \(p_{th}\) of the total distance between the starting and ending points defined by Eq. (4). For this particular task, the threshold percentage \(p_{th}\) was set to \(30\%\) based on our previous assembly planner results [4], where three steps per robot arm were needed to achieve the assembly task successfully. This threshold percentage can be set based on the geometric features of the assembly task (number of corners, inflection points, etc.). Notice that using a threshold under 30% means an increase in the number of key points, which extends the execution time of the assembly task and, as a consequence, also increases the object’s deformation. To compute the traveled distance between two points, two Euclidean distances are used: one for position vectors and another for orientation vectors (roll, pitch and yaw angles).Footnote 2 For each hand, the n-th key point \(\varvec{P}_n = [\varvec{P}_{\mathrm {p}n}^T \; \varvec{P}_{\mathrm {o}n}^T]^T\) is selected at time \(t_{n}\) as \(\varvec{X}(t_n)\) if:

$$\begin{aligned} |\varvec{X}_\mathrm {p}(t_n) - \varvec{P}_{\mathrm {p}(n-1)}|&> p_{th}\,|\varvec{X}_\mathrm {p}(t_\mathrm {f}) - \varvec{X}_\mathrm {p}(t_\mathrm {i})| \nonumber \\ \mathrm {and/or}&\nonumber \\ |\varvec{X}_\mathrm {o}(t_n) - \varvec{P}_{\mathrm {o}(n-1)}|&> p_{th}\,|\varvec{X}_\mathrm {o}(t_\mathrm {f}) - \varvec{X}_\mathrm {o}(t_\mathrm {i})| , \end{aligned}$$
(5)

where \([\varvec{X}_\mathrm {p}^T(t_i) \;\; \varvec{X}_\mathrm {o}^T(t_i)]^T= \varvec{X}_\mathrm {i}\) and \([\varvec{X}_\mathrm {p}^T(t_f) \;\; \varvec{X}_\mathrm {o}^T(t_f)]^T= \varvec{X}_\mathrm {f}\) are given by Eq. (4), and \(t_\mathrm {f}\) and \(t_{{i}}\) denote the final and initial times, respectively. Before the first key point has been selected, \(\varvec{P}_{\mathrm {p}(n-1)}\) and \(\varvec{P}_{\mathrm {o}(n-1)}\) are replaced by \(\varvec{X}_\mathrm {p}(t_{{i}})\) and \(\varvec{X}_\mathrm {o}(t_{{i}})\), respectively.
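The selection rule of Eq. (5) can be sketched as follows (an illustrative implementation; the array names and sampling convention are assumptions):

```python
# Sketch of the key-point selection rule of Eq. (5). `traj_p` and `traj_o`
# are (T, 3) arrays of position and RPY-orientation samples of one hand
# between the start and end points; names are illustrative assumptions.
import numpy as np

P_TH = 0.30   # threshold percentage p_th used in the paper


def select_key_points(traj_p, traj_o):
    """Indices of samples whose travel from the previous key point exceeds
    P_TH of the total start-to-end distance (position and/or orientation)."""
    total_p = np.linalg.norm(traj_p[-1] - traj_p[0])
    total_o = np.linalg.norm(traj_o[-1] - traj_o[0])
    keys, last = [], 0   # before the first key point, compare against X(t_i)
    for n in range(1, len(traj_p)):
        far_p = np.linalg.norm(traj_p[n] - traj_p[last]) > P_TH * total_p
        far_o = np.linalg.norm(traj_o[n] - traj_o[last]) > P_TH * total_o
        if far_p or far_o:
            keys.append(n)
            last = n
    return keys
```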

After evaluating condition (5) over the whole recorded trajectory and extracting the corresponding key points for each hand, we verify that, for each segment between consecutive key points, the largest normalized difference between the hand trajectory and its corresponding cubic spline interpolation does not exceed a given threshold (as proposed by Nakaoka et al. [19] for position vectors only). An extra key point is added at \(t_\mathrm {pmax}\) or \(t_\mathrm {omax}\) in the segment between consecutive key points (\(\varvec{P}_{n-1}\) and \(\varvec{P}_n\)) when the maximum distance:

$$\begin{aligned} d_\mathrm {p}(t_\mathrm {pmax})&= \max _{t_{n-1} \le t \le t_n} |\varvec{s}_{\mathrm {p}n}(t)-\varvec{X}_\mathrm {p}(t)| , \\ d_\mathrm {o}(t_\mathrm {omax})&= \max _{t_{n-1} \le t \le t_n} |\varvec{s}_{\mathrm {o}n}(t)-\varvec{X}_\mathrm {o}(t)| , \end{aligned}$$

satisfies the following condition:

$$\begin{aligned} \frac{d_\mathrm {p}(t_\mathrm {pmax})}{d_{\mathrm {pth}}}> 1.0 \;\;\; \mathrm {and/or} \;\;\; \frac{d_\mathrm {o}(t_\mathrm {omax})}{d_{\mathrm {oth}}} > 1.0 , \end{aligned}$$
(6)

where \(\varvec{s}_n(t) = [\varvec{s}_{\mathrm {p}n}^T(t) \; \varvec{s}_{\mathrm {o}n}^T(t)]^T\) represents the point at time t on the cubic spline interpolation of the trajectory segment between key points \(\varvec{P}_{n-1}\) and \(\varvec{P}_n\), and \(d_{\mathrm {pth}}\) and \(d_{\mathrm {oth}}\) are the distance thresholds for the position and orientation trajectories of the hands, respectively. If both conditions in Eq. (6) are satisfied on the same segment, the extra point is added only at the time when the first maximum occurs, i.e. at \(t = {\text {min}}\, \{t_\mathrm {pmax}, \, t_\mathrm {omax} \}\). Thus, according to condition (6), it might be necessary to add extra points to each hand trajectory. We must point out that decreasing the threshold \(p_{th}\) increases the number of key points, but this does not guarantee that no extra point will be needed, i.e. that condition (6) will not be satisfied by any point. Therefore, the choice of \(p_{th}\) is very important for a successful assembly and, at the same time, for reducing the object’s deformation by reducing the execution time of the assembly task, as will be discussed in the next section.
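The deviation check of condition (6) can be sketched as below for one of the two trajectories (position or orientation); combining the two results according to the first-maximum rule above is straightforward. The use of SciPy’s CubicSpline is an assumption:

```python
# Sketch of the spline-deviation check of condition (6) for one trajectory
# (position or orientation). `t` holds all sample times, `traj` the (T, 3)
# samples, `key_idx` a sorted integer array of the key points' sample
# indices (including start and end), and `d_th` the matching threshold
# (0.43 cm or 4.33 deg in the experiments).
import numpy as np
from scipy.interpolate import CubicSpline


def extra_key_points(t, traj, key_idx, d_th):
    """Sample indices where an extra key point is needed: for each segment
    between consecutive key points, the index of the maximum deviation from
    the cubic spline through the key points, if it exceeds d_th."""
    spline = CubicSpline(t[key_idx], traj[key_idx], axis=0)
    extras = []
    for a, b in zip(key_idx[:-1], key_idx[1:]):
        dev = np.linalg.norm(spline(t[a:b + 1]) - traj[a:b + 1], axis=1)
        if dev.max() / d_th > 1.0:
            extras.append(a + int(dev.argmax()))
    return extras
```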

Experimental results

Hand tracking

Fig. 5 Tracking of the human hands with the Leap Motion Controller. a Before starting the assembly task and b during the assembly task

Fig. 6 Snapshot of the Diagnostic Visualizer software of the Leap Motion Controller

Figure 5 shows two snapshots of the assembly performed by a human using an elastic band and recorded by the Leap Motion Controller. Before starting the assembly task, we placed the cylinder at the desired position with respect to the reference frame of the robot, and carried out the experiments to determine the position and orientation of the human’s hands with respect to the world frame, as described in the "Human’s movements acquisition" section. Then, we carried out several assembly experiments, for which we first verified that both hands holding the elastic band were correctly detected by the Leap Motion Controller through the Diagnostic Visualizer software provided by the manufacturer. As shown in Fig. 6, the Diagnostic Visualizer displays a skeletal representation of the whole hand, allowing us to verify in real time whether the Leap Motion Controller has detected the hands and how they are being detected. After verifying that both hands have been detected, we proceed with the assembly task. Then, we check the hand confidence level to ensure the validity of the recorded data and verify that there are no empty frames of data.

Using the procedure described in the "Key points extraction" section with a threshold \(p_{th}\) of 30%, we extracted four key points for each hand trajectory (the average position of the tips of the index and thumb fingers, and the hand orientation converted to a valid gripper orientation) based on the overall traveled distance and the distance between consecutive sample points (condition given by Eq. (5)). Then, we verified that the hand trajectory can be represented by a cubic spline interpolation without considerably altering its path, using the condition given by Eq. (6) with \(d_{\mathrm {pth}} = 0.43\) cm and \(d_{\mathrm {oth}} = 4.33^\circ\) (equivalent to 0.25 cm per axis and \(2.5^\circ\) per orientation angle). After this procedure, one extra key point was extracted for the right hand trajectory and two extra key points for the left hand trajectory. Therefore, we have five key points for the right hand trajectory and six key points for the left hand trajectory.

Assembly task

The extracted key points of the hands’ trajectories (the average position of the thumb and index fingertips) are used to carry out the assembly motion plan for the Baxter Research robot arms. Moreover, as we are using the tip position of the fingers, there is no constraint on the type of robotic hand that can be used, as long as the transformation between the tip and the wrist is known.

As mentioned in the "Key points extraction" section, only one arm is moved at a time.Footnote 3 However, this does not mean that the human moved only one arm at a time or had to be careful when doing the demonstration. In fact, during the demonstration the human moved both hands at the same time, since he/she has visual and tactile feedback to achieve the assembly task successfully. For the robot to achieve the same task (without any feedback), synchronization between both arms is indispensable: if one of the arms moves ahead of its corresponding time, it could cause the assembly task to fail. For this reason, the order in which the arms move is determined based on the time stamp of each of the key points (from the Leap Motion Controller data), which means that the arms will not necessarily alternate with each other. Each key point is in turn sent to the optimization-based motion planner (developed and detailed in our previous work [4]), which computes a collision-free trajectory that minimizes the object’s deformation through an objective function based on the object’s elastic energy. After the robot executes the trajectory computed by the motion planner, the arm that should move next is determined and its corresponding key point sent to the motion planner. This process is repeated until all of the key points have been sent to the motion planner and executed by the robot; the assembly task is then regarded as completed regardless of the state of the object, i.e. it relies completely on the human demonstration data. The computation time needed by the optimization-based motion planner was 34.6 s for the complete assembly task (on average 2.66 s per segment).
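The execution logic described in this paragraph can be sketched as follows (the planner and robot interfaces are placeholders standing in for the planner of [4] and the Baxter controller; all names are ours):

```python
# Sketch of the execution loop: key points of both hands are ordered by
# their Leap Motion time stamps and dispatched one arm at a time.
def plan_min_deformation(arm, key_point):
    """Placeholder: collision-free trajectory minimizing elastic energy."""
    raise NotImplementedError


def execute(arm, trajectory):
    """Placeholder: send the trajectory to the given robot arm."""
    raise NotImplementedError


def run_assembly(key_points_left, key_points_right):
    """key_points_*: lists of (timestamp, key_point) tuples per arm."""
    moves = [(t, "left", kp) for t, kp in key_points_left]
    moves += [(t, "right", kp) for t, kp in key_points_right]
    for _, arm, kp in sorted(moves, key=lambda m: m[0]):
        trajectory = plan_min_deformation(arm, kp)  # other arm held still
        execute(arm, trajectory)
```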

Fig. 7 Ring-shaped objects used in the experiments: a an elastic band (natural rubber) and b an o-ring (silicone rubber)

The assembly task experiment was carried out using two ring-shaped objects (Fig. 7): an elastic band made of natural rubber and an o-ring made of silicone rubber. The elastic band has an undeformed inner diameter of 47.0 mm and a thickness of 1.0 mm, while the o-ring has an undeformed inner diameter of 49.7 mm and a thickness of 3.5 mm. The rigid cylinder into which the ring-shaped objects are inserted has a 50.0 mm diameter and is fixed on a table.

Fig. 8 Snapshots of the assembly experiment using an elastic band (natural rubber): a initial state, b at right arm key point 2, c between right arm key point 2 and left arm key point 4, d between left arm key point 4 and right arm key point 3, e at right arm key point 4, f between right arm key point 5 and left arm key point 6, g end of the assembly task and h after releasing the elastic band from the left gripper

Fig. 9 Snapshots of the assembly experiment using an o-ring (silicone rubber): a initial state, b at right arm key point 2, c between right arm key point 2 and left arm key point 4, d between left arm key point 4 and right arm key point 3, e at right arm key point 4, f between right arm key point 5 and left arm key point 6, g end of the assembly task and h after releasing the o-ring from the right gripper

Figures 8 and 9 show snapshots of the assembly task using the elastic band and the o-ring, respectively. It can be verified that the robot successfully inserted both the elastic band and the o-ring. It must be emphasized that although the human demonstration was made using only the elastic band (which is more flexible than the silicone o-ring), the same extracted key points were useful for inserting both objects. However, as the o-ring’s stiffness is higher than that of the elastic band, the ending position/orientation of the robot slightly differs between the two objects (Figs. 8g, 9g). Nonetheless, the o-ring was inserted onto the cylinder, as can be observed in Fig. 9h.

Quantitative analysis

In the previous section we showed that, by carefully extracting points from the hand trajectories recorded from a human and using these points as partial goals for our optimization-based motion planner, a dual-arm robot was able to achieve the assembly of ring-shaped objects. In this section we compare the experimental results of the assembly task achieved through the combination of the extracted key points (from the human demonstration) and our motion planner (hereafter called the “proposed” framework) with the experimental results obtained when the assembly task is done by simply following the recorded trajectory of the human’s hands (hereafter called the “human direct” method). The role of the optimization-based motion planner is to minimize the object’s potential energy, which translates into minimizing the object’s deformation. We approximate the object’s deformation based on the position and orientation of the grippers’ tips and their geometric relation to the cylinder’s shape and position. The object’s deformation \(x_{\mathrm {d}}(t)\) is defined as the difference between its length x(t) at time t and its original length \(x_0\).

Fig. 10 Average of the approximated deformation of the rubber band over six trials per method. The time is normalized by the assembly duration

Fig. 11 Average of the approximated deformation of the silicone o-ring over six trials per method. The time is normalized by the assembly duration

We carried out four different experiments (two methods and two objects) and repeated each experiment six times; the success rate was 92.3% (24 out of 26 trials). Figures 10 and 11 show the average of the approximated deformation of the rubber band and the silicone o-ring, respectively, with respect to the normalized assembly time. It can be observed that at the beginning of the assembly, when the object is not in contact with the cylinder, there is no significant difference between the proposed framework and the human direct method. However, as the assembly process continues and the object is deformed by the cylinder, a gap between the deformations produced by the two methods can be observed. Note that the intersections between the plots happen when the grippers go through the key points (the grippers’ positions are the same in both methods at those instants). This means that if we would like to further reduce the deformation of the object, we ought to reduce the number of key points. It can be seen that when using the proposed framework the deformation of the objects was reduced by about 10.5% of the largest deformation, in the particular case of the silicone o-ring.

Table 1 Average execution time of the experiments [s]

Table 1 shows the average time (in seconds) taken in the experiments to execute each trajectory segment (between key points) for each method. As the pairs of segments 3–4 and 7–8 are executed by the same arm, in the human direct method each pair is executed as one segment, and therefore this method is actually executed in only 11 segments. It can be seen that the total assembly time of the human direct method is approximately 4 s longer than that of the proposed framework. This difference arises because the position controller of the Baxter robot changes the trajectory time if the requested trajectory infringes any joint velocity limit. This could happen if, between consecutive key points, there is redundancy in the human movements and/or very quick changes in the orientation of the hands, which are eliminated by using our optimization-based motion planner to connect the robot’s position/orientation between those key points. Nevertheless, the assembly was successfully achieved with the proposed combination of key points extracted from human movements and the optimization-based motion planner, thus validating the proposed framework.

It should be noted that even though the robot only moved one arm at a time while the human moved both, the assembly task was successful. Moving only one arm at a time has two important advantages: (1) it avoids synchronization problems, which can lead to the failure of the task; (2) it simplifies the computation of the o-ring’s deformation in the optimization-based planner. On the other hand, moving only one arm implies that the assembly task takes longer than when moving both arms at the same time. In this work, we opted for a fast computation time and a higher success rate of the assembly task (failing the assembly task would mean deforming the o-ring at least one more time).

Finally, we would like to emphasize that the assembly task was successfully reproduced by the robot when using the position/orientation directly obtained from the Leap Motion Controller (human direct method), which validates the methodology explained in the "Human’s movements acquisition" section without the use of any sophisticated filter. Moreover, once the calibration of the Leap Motion Controller has been done, the system can be easily and quickly (in less than 1 h) reused to teach other complex assembly tasks. Depending on the nature of the task, it might be necessary to add an extra Leap Motion Controller to obtain a wider range of motion for capturing the human demonstration.

Conclusion

This paper discussed the assembly planning of ring-shaped elastic objects into a cylinder based on a human strategy. The main results in this paper are summarized as follows:

  1. We proposed a data acquisition method that allowed us to place the Leap Motion Controller in a better position to capture the trajectories of the human’s hands during an assembly task. The validity of this methodology was verified through experiments by reproducing the exact same trajectory with the robot and successfully assembling the object into the cylinder.

  2. We introduced a criterion based on the distance traveled by the hands to extract key points from the hands’ trajectories. We used these points to generate an assembly plan for inserting an elastic object into a cylinder.

  3. Through experimental results with the Baxter Research robot, we verified the validity of the proposed framework using an elastic band and an o-ring. It was proven that the extracted key points were enough to achieve the desired assembly task.

  4. We compared the proposed framework for assembling elastic objects into a cylinder with an assembly done by directly reproducing the trajectories of the human’s hands. It was found that when using the optimization-based motion planner, the object’s deformation is smaller than when directly reproducing the human’s trajectories.

In the future we would like to analyze different assembly patterns by recording the assembly task performed by different people, and by using elastic objects with different shapes and materials.

Notes

  1. Because of the cylinder, it was difficult to obtain good tracking results with the finger lying on the cylinder’s surface and pointing in the y direction (both positive and negative).

  2. Note that the Euclidean distance between orientation vectors does not represent a physical distance, as it does for Cartesian points. However, as the change in orientation of the hand is small (less than \(43^\circ\) over the whole trajectory), it can be used to quantify how different two orientation vectors are, which is its purpose in this work.

  3. Note that moving one arm at a time does not restrict most assembly tasks; it might even be desirable to have one arm fixed and only one arm in motion at a time, as is the case in the assembly task discussed in this work.

References

  1. Napier JR (1956) The prehensile movements of the human hand. J Bone Joint Surg 38(4):902–913

  2. Cohen RG, Rosenbaum DA (2004) Where grasps are made reveals how grasps are planned: generation and recall of motor plans. Exp Brain Res 157(4):486–495

  3. Weigelt M, Kunde W, Prinz W (2006) End-state comfort in bimanual object manipulation. Exp Psychol 53(2):143–148

  4. Ramirez-Alpizar IG, Harada K, Yoshida E (2014) Motion planning for dual-arm assembly of ring-shaped elastic objects. In: IEEE/RAS international conference on humanoid robots (Humanoids), Madrid, Spain, pp 594–600

  5. Leap Motion, Inc. https://leapmotion.com

  6. Liu YK, Zhang YM (2015) Toward welding robot with human knowledge: a remotely-controlled approach. IEEE Trans Autom Sci Eng 12(2):769–774

  7. Du G, Zhang P (2015) A markerless human–robot interface using particle filter and Kalman filter for dual robots. IEEE Trans Ind Electron 62(4):2257–2264

  8. Rossol N, Cheng I, Shen R, Basu A (2014) Touchfree medical interfaces. In: IEEE international conference of the engineering in medicine and biology society (EMBC), Chicago, IL, USA, pp 6597–6600

  9. Kim Y, Kim PCW, Selle R, Shademan A, Krieger A (2014) Experimental evaluation of contact-less hand tracking systems for tele-operation of surgical tasks. In: IEEE international conference on robotics and automation (ICRA), Hong Kong, China, pp 3502–3509

  10. Zheng YF, Pei R, Chen C (1991) Strategies for automatic assembly of deformable objects. In: IEEE international conference on robotics and automation (ICRA), vol 3, Sacramento, CA, USA, pp 2598–2603

  11. Nakagaki H, Kitagaki K, Ogasawara T, Tsukune H (1997) Study of deformation and insertion tasks of a flexible wire. In: IEEE international conference on robotics and automation (ICRA), vol 3, Albuquerque, NM, USA, pp 2397–2402

  12. Yue S, Henrich D (2002) Manipulating deformable linear objects: sensor-based fast manipulation during vibration. In: IEEE international conference on robotics and automation (ICRA), Washington, DC, USA, pp 2467–2472

  13. Miura J, Ikeuchi K (1995) Assembly of flexible objects without analytical models. In: IEEE/RSJ international conference on intelligent robots and systems (IROS), vol 2, Pittsburgh, PA, USA, pp 77–83

  14. Miura J, Ikeuchi K (1998) Task planning of assembly of flexible objects and vision-based verification. Robotica 16(3):297–307

  15. Song H-C, Kim Y-L, Song J-B (2014) Automated guidance of peg-in-hole assembly tasks for complex-shaped parts. In: IEEE/RSJ international conference on intelligent robots and systems (IROS), Chicago, IL, USA, pp 4517–4522

  16. Cho H, Kim M, Lim H, Kim D (2014) Cartesian sensor-less force control for industrial robots. In: IEEE/RSJ international conference on intelligent robots and systems (IROS), Chicago, IL, USA, pp 4497–4502

  17. SCHUNK GmbH & Co. KG. https://it.schunk.com/it_en/gripping-systems/category/gripping-systems/schunk-grippers/o-ring-grippers/

  18. Weichert F, Bachmann D, Rudak B, Fisseler D (2013) Analysis of the accuracy and robustness of the Leap Motion Controller. Sensors 13(5):6380–6393

  19. Nakaoka S, Nakazawa A, Kanehiro F, Kaneko K, Morisawa M, Hirukawa H, Ikeuchi K (2007) Learning from observation paradigm: leg task models for enabling a biped humanoid robot to imitate human dances. Int J Robot Res 26(8):829–844


Authors' contributions

IGRA designed and conducted the experiments, analyzed the data and wrote the manuscript. KH and EY contributed concepts, and edited and revised the manuscript. All authors read and approved the final manuscript.

Acknowledgements

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Availability of data and materials

Not applicable.

Consent for publication

Not applicable.

Ethics approval and consent to participate

The only human data used in this work was from one of the authors, and their participation was voluntary.

Funding

Not applicable.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Author information

Correspondence to Ixchel G. Ramirez-Alpizar.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.



Cite this article

Ramirez-Alpizar, I.G., Harada, K. & Yoshida, E. Human-based framework for the assembly of elastic objects by a dual-arm robot. Robomech J 4, 20 (2017). https://doi.org/10.1186/s40648-017-0088-0

