
Dynamic manipulation of unknown string by robot arm: realizing momentary string shapes

Abstract

We propose a method for realizing dynamic manipulation of a string with unknown characteristics using a high-speed robot arm. We use a mass-spring-damper model for the string and repeat three steps: motion generation, real manipulation, and parameter estimation. The robot motion is given by joint angular velocities expressed as Bezier curves, whose control points are positioned randomly to generate various robot motions for dynamic string manipulation. The generated motion is performed by a wire-driven robot arm, and the real string movement is captured by a camera. These time-series images are used to estimate the string parameters. The best parameter set is determined by comparing the real and simulated string movement while varying the parameters randomly on a logarithmic scale. This parameter set is not unique, but it simulates the actual string movement well. Because the estimated string parameters are fed back into the motion generation as the three steps are repeated, the generated motion reflects the string properties, and the motion objective can be achieved without special tests in advance. This is an advantage of our method because it is difficult to know all of the string properties, with their highly complicated nonlinearity, beforehand. In this paper, we focus on realizing momentary string shapes in two dimensions. We confirmed the effectiveness of the proposed method by realizing five momentary shapes with three kinds of strings with different properties. We also discuss the reproducibility and compatibility of the estimated parameters and motion generation.

Introduction

Previously, robots were primarily used for assembly tasks in factories. However, their use in homes, offices, and shared spaces has recently increased. These environments contain a variety of flexible objects, such as clothing, blankets, curtains, paper, and cables. Therefore, manipulation of flexible objects is currently critical in the field of robotics. Flexible objects differ from machine parts, which can be handled by robots as rigid bodies: they deform easily under small forces, and the deformation takes different forms depending on the nature of the flexible object and the conditions involved. For instance, string motion can involve three types of deformation occurring simultaneously: stretching, bending, and twisting. String knitting and fibers vary by type, which means that strings can have significantly different properties. Thus, predicting and recognizing deformation for motion planning is challenging.

Studies have been conducted on tying a knot, which is a task involving flexible objects. For robot knot-tying tasks, Takamatsu et al. described the intersection topology of the string in the form of reversible P-data based on the P-data changes and identified it depending on which of the four motion primitives could be executed. However, the information was not based on the position and shape of the string [1]. Wakamatsu et al. similarly expressed the process of knotting/unknotting a string as crossing state transitions based on four basic operations. They further proposed a method for automatically generating attainable grasping points and direction of motion of a string [2]. Katano et al. proposed breaking up a knotting sequence into steps, which combine stable states that can handle ambiguous string states of the same category, with subsequent string operations. They obtained five types of knotting using a dual-arm robot [3]. Nair et al. proposed manipulating a rope using a method that combines a human providing a robot with a sequence of images recorded from an initial to a target configuration with a learned inverse dynamics model in a CNN to execute actions and follow the demonstrated trajectory [4].

Furthermore, there are examples of insertion tasks involving linear objects similar to a string. Rambow et al. used a variable admittance controller adjusted by tactile information during training to realize insertion tasks for soft linear objects by correcting the training trajectory using tactile information during the task [5]. Similarly, many studies have been conducted on the static manipulation of strings or linear objects. String states can be treated topologically, the basic actions are limited, and the string parameters have little effect on the manipulation. Therefore, such tasks can be realized through a combination of basic actions.

However, some tasks are greatly affected by the string parameters. For instance, both the knotting of the string and the shape of the knot need to be considered. Even for identical knotting actions, the ease of operation depends on the string properties, i.e., whether the string is easy or difficult to knot. Other operations include tasks such as assembling during substantial deformation or dynamic operations involving swinging of the string. For these types of manipulations, an action plan requires modeling the string, and the string deformation and dynamics calculated from the model must be considered. The mass-spring model is a useful technique for this case and has been frequently used; it can also be applied to fabrics. Desbrun et al. realized movements requiring a dynamic approach, including collisions between fabrics and objects, in computer graphics [6]. Studies applying the technique of adding bending properties that vary depending on the string elongation to a simple mass-spring model include that by Sawada et al. on the casting of a rubber string [7]. Lloyd et al. proposed formulas for the dynamic parameters of a mass-spring model, which relate to the physical constitutive laws from continuum mechanics, and then identified the damping coefficients analytically [8]. Methods other than the mass-spring model have also been used. Yamakawa et al. formulated an equation of motion and demonstrated that when one end of a string is grasped and moved at a high and constant speed, the string motion follows the trajectory of the robot arm. They used this to achieve dynamic string operation and cloth folding operations [9,10,11]. This method has the advantage that string motion can be realized with only a few parameters; however, the applicable types of strings and motions are considerably limited. Yoshida et al. performed a task where a flexible ring was fitted onto a cylindrical object. They used a finite element model, which can express large deformations of the ring, to optimize the operation trajectory and minimize ring-object sliding and collision as well as the deformation energy of the ring [12]. Jangir et al. used reinforcement learning to realize dynamic cloth manipulation, demonstrated the importance of speed and trajectory in dynamic manipulations, and investigated the effectiveness of different textile state representations [13]. Yan et al. demonstrated a method of moving a rope to match a desired target state specified by an image, combining model predictive path integral control with a dynamics model [14].

Designing a motion plan through these types of string models mainly requires identification of the model parameters. Specifically, it is important to know the properties of the string used in order to realize its manipulation. The aforementioned study by Sawada et al. determined the actual stretching properties through tensile testing [7]. However, such a special identification task conducted in advance to establish the string properties is inefficient. To estimate the model parameters, Yabuuchi et al. used a string mass-spring model and proposed a method that uses a camera to continuously observe the static shapes of the string as they change under a quasi-static operation; the method progressively estimates the model parameters capable of reproducing these shapes using a real-coded GA [15]. Caldwell et al. proposed a method for optimally identifying the model parameters of a flexible object manipulated by a robotic arm and applied it to identifying the stiffness characteristics of a flexible loop [16]. Alvarez et al. used the static shapes of a flexible linear object before and after manipulation to estimate the model parameters corresponding to those shapes [17].

From the foregoing, many studies have considered the static deformation of a string to estimate string properties, but only a few have included estimation of dynamic deformation. Additionally, although some studies have achieved dynamic manipulation, they often identify or assign the string parameters beforehand. In essence, no significant attempts have been made to achieve manipulation that satisfies a given motion objective for a string with unknown parameters by estimating the string properties during the manipulation itself.

Therefore, the present study proposes a method of achieving dynamic manipulation for strings with unknown properties. We used motion simulation of a string to achieve the desired string motion by reiterative motion generation, manipulation, and string parameter estimation. Various motion objectives can be considered; however, for this study, the objective is to dynamically manipulate a string to achieve specific shapes at a certain instant.

Realizing dynamic manipulation of an unknown string and string modeling

Proposal for realizing dynamic manipulation of an unknown string

We propose a method for achieving the dynamic manipulation of an unknown string as outlined in Fig. 1. First, the motion objective is provided as an image of the shape that the string should form momentarily during the operation. Simultaneously, the initial parameters of the string model are set randomly. In motion generation, the movement of the robot arm is generated randomly, and the movement of the string is simulated from the finger movement. The robot arm movement is given by the joint angular velocities, and the initial arm position is determined randomly within the movable range of each joint. The simulated string movement and the desired string shape are compared to determine the achievement in simulation. If a generated arm motion obtains a high achievement, the motion generation is terminated. Next, the generated motion is performed by the actual robot arm, and the manipulation is filmed with a camera. Image processing is used to extract the motion of the string alone, which is then saved. Subsequently, the actual string motion and the desired string shape are compared; this evaluation of the actual manipulation is the achievement in real manipulation, and it is high when the manipulation attains the desired shape. In the first manipulation, the string model parameters and the actual string properties do not match; therefore, we estimate the parameters. By drawing string parameters randomly, we simulate the string motion based on the actual arm movement and examine the matching rate between the simulated and actual string movements. The parameter combination with the highest matching rate is retained. From the second manipulation onward, motion generation and actual manipulation are performed using the estimated string parameters. By repeating this procedure, the actual and simulated string movements gradually approach each other, and the generated manipulation comes to reflect the string properties and realizes the desired momentary shape. If the achievement does not increase after repeating this procedure several times, the manipulation is regarded as failed.
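For reference, the loop described above can be summarized as a short sketch. The function names and arguments below (generate_motion, execute_and_record, achievement_real, estimate_parameters, the repeat limit, and the threshold) are hypothetical placeholders that only illustrate the flow; they are not the authors' implementation.

```python
def manipulate_unknown_string(desired_image, generate_motion, execute_and_record,
                              achievement_real, estimate_parameters,
                              initial_params, max_repeats=10, threshold=0.7):
    """Illustrative outline of the proposed loop (all callables are placeholders)."""
    params = initial_params                      # initial string parameters, set randomly
    for _ in range(max_repeats):
        # 1. Motion generation: random arm motions evaluated in simulation
        motion = generate_motion(params, desired_image)
        # 2. Real manipulation: execute the motion and record the string with a camera
        images, joint_log = execute_and_record(motion)
        if achievement_real(images, desired_image) >= threshold:
            return motion, params                # desired momentary shape achieved
        # 3. Parameter estimation: fit the model to the observed string movement
        params = estimate_parameters(images, joint_log, params)
    return None, params                          # regarded as failed after repeated trials
```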

Fig. 1 Concept of dynamic manipulation for unknown strings

String model

The string model is used in motion generation and parameter estimation. The mass-spring model was selected because of its low computational load. Our proposed method requires repeating the string movement simulation for motion generation and parameter estimation. Moreover, our method does not aim to completely express the various string movements, but only to generally express a specific string movement. We assume that the string is homogeneous; furthermore, twisting is not considered because if the string movement and observation plane (see below) are limited to two dimensions, the effect of twisting is also contained in two dimensions. To represent the properties of elongation and bending, the string model is composed of mass points, springs, dampers, hinge springs, and hinge dampers. The mass point numbers are set as \(i=1,..,n\) starting from the grasp point. The equation of motion for mass point i is expressed as follows:

$$\begin{aligned} m{\varvec{\ddot{r}}}_{i}&={\varvec{F}}_{si}-{\varvec{F}}_{s(i-1)}+{\varvec{F}}_{di}-{\varvec{F}}_{d(i-1)}\nonumber \\&\quad +{\varvec{F}}^{r}_{b(i-1)}-{\varvec{F}}^{r}_{bi}-{\varvec{F}}^{l}_{bi}+{\varvec{F}}^{l}_{b(i+1)}\nonumber \\&\quad +{\varvec{F}}^{r}_{h(i-1)}-{\varvec{F}}^{r}_{hi}-{\varvec{F}}^{l}_{hi}+{\varvec{F}}^{l}_{h(i+1)}\nonumber \\&\quad +{\varvec{F}}_{ph}+{\varvec{F}}_{phc}+{\varvec{F}}_{g}+{\varvec{F}}_{ci} \end{aligned}$$
(1)

The elastic force \({\varvec{F_{s}}}\) is proportional to the elongation between the mass points. The damping force \({\varvec{F_{d}}}\) of the damper is proportional to the relative velocity between the mass points. These work in the direction connecting the mass points. The forces \({\varvec{F_{b}}}\) and \({\varvec{F_{h}}}\) generated by the hinge spring and hinge damper, respectively, represent the bending properties of the string, as illustrated in the expanded image in Fig. 2. That is, they are proportional to the relative angle and relative velocity between the three mass points. In the figure, only the bending force generated around mass point i is presented. For the external forces generated on the mass points, we introduce a term proportional to the velocity of the mass point as air resistance \({\varvec{F_{c}}}\) and a term proportional to the square of the speed. In addition, we consider the gravitational force \({\varvec{F_{g}}}\).

Fig. 2 String model

Furthermore, we introduce separate hinge springs and hinge dampers between the robot hand and the string for the grasping part; the forces generated there are \({\varvec{F_{ph}}}\) and \({\varvec{F_{phc}}}\). Both sides of the equation of motion expressed in Eq. (1) are divided by the mass m of the mass points, and a unit-mass conversion (i.e., assigning the value \(k_{s}/m\) to the spring constant) is performed for each parameter. Thus, we do not need to consider the mass itself, and there are eight string parameters (\(k_{h}\), \(c_{h}\), \(k_{s}\), \(c_{s}\), \(C_{c1}\), \(C_{c2}\), \(k_{ph}\), and \(c_{ph}\)). By iteratively integrating the equation of motion using Euler's method, the time series of the location vector \({\varvec{r_{i}}}\) of each mass point is obtained. When the string is manipulated by the robot arm, time-series data on the position and orientation of the robot finger are given, and the numerical calculation of the equation of motion is performed with the first mass point following this trajectory. This constitutes the simulation of the string movement.
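To make this simulation step concrete, the following sketch integrates a simplified two-dimensional version of the model with Euler's method. The parameter values, the initial configuration, and the bending-force formulation (a penalty pulling each inner mass point toward the midpoint of its neighbours) are illustrative assumptions rather than the exact hinge terms of Eq. (1), and the grasp-spring forces \(F_{ph}\), \(F_{phc}\) and the collision force \(F_{ci}\) are omitted for brevity.

```python
import numpy as np

def simulate_string(grasp_traj, dt, n=10, length=0.3,
                    ks=2.0e3, cs=10.0, kh=5.0, ch=0.1,
                    cc1=0.5, cc2=0.1, g=9.81):
    """Euler integration of a simplified 2D unit-mass mass-spring-damper string.

    grasp_traj : (T, 2) array with the position of the first mass point
                 (the robot finger) at every time step of size dt.
    Returns an array of shape (T, n, 2) with all mass-point positions.
    Parameter values are per unit mass and purely illustrative.
    """
    l0 = length / (n - 1)                            # natural length of each segment
    r = np.zeros((n, 2))
    r[:, 0] = grasp_traj[0, 0]
    r[:, 1] = grasp_traj[0, 1] - l0 * np.arange(n)   # start hanging straight down
    v = np.zeros((n, 2))
    out = np.empty((len(grasp_traj), n, 2))

    for t, p in enumerate(grasp_traj):
        # the first mass point follows the given finger trajectory
        nxt = grasp_traj[min(t + 1, len(grasp_traj) - 1)]
        r[0], v[0] = p, (nxt - p) / dt
        f = np.zeros((n, 2))
        # axial spring and damper between neighbouring mass points
        for i in range(n - 1):
            d = r[i + 1] - r[i]
            dist = np.linalg.norm(d) + 1e-9
            u = d / dist
            fs = ks * (dist - l0) * u + cs * np.dot(v[i + 1] - v[i], u) * u
            f[i] += fs
            f[i + 1] -= fs
        # simplified bending force: pull each inner point toward the midpoint
        # of its neighbours, damped by its relative velocity
        for i in range(1, n - 1):
            mid = 0.5 * (r[i - 1] + r[i + 1])
            f[i] += kh * (mid - r[i]) - ch * (v[i] - 0.5 * (v[i - 1] + v[i + 1]))
        # gravity plus air resistance (linear and quadratic in speed)
        speed = np.linalg.norm(v, axis=1, keepdims=True)
        f += np.array([0.0, -g]) - cc1 * v - cc2 * speed * v
        # Euler update of the free mass points (index 0 is prescribed)
        v[1:] += f[1:] * dt
        r[1:] += v[1:] * dt
        out[t] = r
    return out
```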

Robot arm motion generation

This section describes the method of robot arm motion generation. Random joint velocity curves are used for motion generation; however, the range of random variation differs between the first generation and subsequent generations. Moreover, from the second generation onward, the previous generation results are reused so that the motion is refined progressively.

Generating motion the first time

The initial angles of the robot arm joints are randomly selected from within the movable range and specified as the initial position. Subsequently, a joint velocity curve is generated using a Bezier curve, as illustrated in Fig. 3. The time T is determined randomly within a certain range (e.g., 0.2–1.5 s), and the time from 0 to T is divided into five equal parts (\(t_{0}\sim t_{5}\), \(t_{0}=0, t_{5}=T\)). The acceleration \(\alpha _{k}\) at time \(t_{k}\) is randomly determined within the acceleration limits of the robot arm. Assuming uniformly accelerated motion in each time interval \(\Delta t\), the joint velocity at each control point is determined by Eq. (2).

$$\begin{aligned} V_{k}=V_{k-1}+\alpha _{k}\Delta t,\quad \Delta t=t_{k}-t_{k-1},\quad k=1...4 \end{aligned}$$
(2)

Note that \(V_{0}=V_{5}=0\). Using the control points \(V_{0} \sim V_{5}\), a Bezier curve is generated and taken as the joint velocity curve. This is performed for all joints, and we confirm that the resulting arm movement does not exceed the limits of the robot's movable range or speed. When there are no problems, the string motion is simulated from the robot arm motion. If the achievement in the simulation exceeds a certain threshold, that arm motion is regarded as having achieved the desired motion and the motion generation is terminated. Otherwise, a new initial position and joint velocity curve are generated. This is repeated until the desired motion is achieved.
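A minimal sketch of this first-time velocity-curve generation for a single joint, assuming a symmetric acceleration limit \(\pm a_{max}\) and uniform sampling of the curve (both of which are our assumptions), is:

```python
import numpy as np
from math import comb

def random_joint_velocity_curve(a_max, t_min=0.2, t_max=1.5, n_samples=200, rng=None):
    """Sketch of the first-time velocity-curve generation for a single joint.

    A motion time T is drawn, [0, T] is split into five equal intervals,
    accelerations are drawn within +/- a_max, and the resulting velocities
    (with V0 = V5 = 0) become the control points of a degree-5 Bezier curve.
    Returns (times, joint velocities); a_max and n_samples are placeholders.
    """
    rng = rng or np.random.default_rng()
    T = rng.uniform(t_min, t_max)
    dt = T / 5.0
    V = np.zeros(6)
    for k in range(1, 5):                        # Eq. (2): V_k = V_{k-1} + alpha_k * dt
        V[k] = V[k - 1] + rng.uniform(-a_max, a_max) * dt
    # V[5] stays 0, so the joint comes to rest at the end of the motion.
    s = np.linspace(0.0, 1.0, n_samples)
    bern = np.array([comb(5, k) * s**k * (1 - s)**(5 - k) for k in range(6)])
    velocity = V @ bern                          # Bezier curve via the Bernstein basis
    return s * T, velocity
```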

Fig. 3 How to generate joint velocity curve

Generating motion the second time and onward

For the second and subsequent motion generations, the initial position generated the first time is used unchanged, and the previously generated joint velocity curve is reused with slight changes. Specifically, the motion finish time T is randomly changed around its previous value, within half of the first-time range. The joint velocities at the control points \(V_{1} \sim V_{4}\) are likewise randomly changed around their previous values, within a fraction of the first-time range. Here, too, it is confirmed that the finger motion does not exceed the limits of the robot's movable range and speed. The achievement in the subsequent simulations is judged in the same manner as the first one.

Judging the end of motion generation based on achievement in simulation

We explain how to calculate the achievement in the simulation, which is used to judge when to terminate the motion generation. When images of the desired string shapes are provided, the achievement is obtained by comparing the mass point positions of the string model in the simulation with the desired shape image, as shown in Fig. 4. The closer the mass point positions to the desired shape image, the higher the simulation achievement value.

Fig. 4 Evaluation of achievement in simulation

For this evaluation, dilation processing is applied to the desired shape image multiple times, and weighted scores (\(p_{max},\ldots ,2,1,0\)) are assigned according to the number of dilations. Each mass point of the string model receives a score \(p_{i}\) according to the dilated region in which it falls. The achievement in simulation \(A_{s}\) is then calculated as follows.

$$\begin{aligned} A_{s}=\dfrac{\sum ^{n}_{i=1} p_{i}}{p_{max} \cdot n} \end{aligned}$$
(3)

\(A_{s}\) is calculated in relation to the time series of the string model shape obtained during the string simulation. The desired motion is deemed to have been achieved during the moment when the threshold is exceeded.
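The dilation-based scoring can be implemented, for instance, with OpenCV as sketched below; the number of score levels, the kernel, and the pixel-coordinate convention are illustrative choices, not values taken from the paper.

```python
import numpy as np
import cv2

def score_map(desired_img, p_max=5, kernel=np.ones((3, 3), np.uint8)):
    """Weighted score image: p_max on the desired shape, decreasing by one
    with each dilation step, 0 elsewhere (the level count is illustrative)."""
    scores = np.zeros(desired_img.shape, dtype=np.float32)
    mask = (desired_img > 0).astype(np.uint8)
    for p in range(p_max, 0, -1):
        scores[(mask > 0) & (scores == 0)] = p   # assign the current weight
        mask = cv2.dilate(mask, kernel)          # expand the region by one step
    return scores

def achievement_sim(mass_points_px, scores, p_max=5):
    """Eq. (3): average score of the simulated mass points (pixel coordinates)."""
    h, w = scores.shape
    total = sum(scores[v, u] for u, v in mass_points_px if 0 <= u < w and 0 <= v < h)
    return total / (p_max * len(mass_points_px))
```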

Estimation of string parameters

This section describes the parameter estimation following the string manipulation. First, the string region is extracted from the images with reference to the grasp point of the robot arm. The angular velocity obtained from the encoder data of each joint is used to determine the actual arm motion. The value of each string model parameter is selected randomly, and a motion simulation of the string is performed using the actual robot arm motion. The matching rate E is calculated by comparing the mass point positions of the string model obtained from the simulation with the image series of the actual string motion. This is repeated while changing the parameters. After a fixed number of repetitions, the eight-parameter set with the highest matching rate is output as the estimated parameter set. This method does not estimate parameters that express arbitrary movements, but only the specific movement observed in the manipulation; therefore, the estimated parameters depend on that string movement.

Parameter selection method

When randomly selecting each parameter of the string model, its value is determined in exponential form, which allows the parameter range to be searched over several orders of magnitude. However, if the search range remains extremely wide, parameter convergence requires much time; therefore, we narrow the estimation range in a stepwise manner. Let M be the number of manipulations, m the number of parameter changes, and \(P_{a}\) a given parameter. The parameters are then determined randomly using the following equations:

$$\begin{aligned} P_{a}&=P_{min}\left( \dfrac{P_{max}}{P_{min}} \right) ^{\chi _{m}} \end{aligned}$$
(4)
$$\begin{aligned} \chi _{m}&= \chi _{\mathrm {best}} + \chi _{\mathrm {w}} \cdot RAND(-1,1) \end{aligned}$$
(5)
$$\begin{aligned} \chi _{\mathrm {w}}&=\dfrac{\chi _{\mathrm {w}0}\cdot \beta ^{m}}{M} \end{aligned}$$
(6)

The maximum and minimum parameter values \(P_{max}\) and \(P_{min}\), respectively, are determined in advance. \(\chi _{m}\) varies between \(-1\) and 1. The initial value \(\chi _{w0}\) of the search range \(\chi _{w}\) used when determining \(\chi _{m}\) is chosen in advance. RAND(\(-1,1\)) denotes a uniform random number between \(-1\) and 1. \(\beta\) is a value slightly below 1 and narrows the search range every time the parameters are updated. \(\chi _{\mathrm {best}}\) is the exponent of the final estimated parameter value in the previous manipulation. In this way, the parameters can be estimated while narrowing down the search range around the parameters selected in the previous manipulation.
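Equations (4)-(6) amount to a log-scale random draw centred on the previous best exponent with a shrinking search width. A minimal sketch follows; clipping the exponent to the stated range is our own reading.

```python
import numpy as np

def sample_parameter(p_min, p_max, chi_best, m, M, chi_w0=0.6, beta=0.995, rng=None):
    """Eqs. (4)-(6): log-scale random draw around the previous best exponent
    with a search width that shrinks as the parameters are updated."""
    rng = rng or np.random.default_rng()
    chi_w = chi_w0 * beta**m / M                  # Eq. (6): shrinking search width
    chi = chi_best + chi_w * rng.uniform(-1, 1)   # Eq. (5)
    chi = np.clip(chi, -1.0, 1.0)                 # the paper states chi_m stays within [-1, 1]
    return p_min * (p_max / p_min) ** chi         # Eq. (4)
```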

Calculating the matching rate

The method of calculating the matching rate E between the actual string motion (image series) and the string model motion in the simulation is presented in Fig. 5. The matching rate is obtained for each individual image in the same way that the achievement is calculated for a single image. The ratio \(E_{\mathrm {f}}\) of string-model mass points that fall into the string area of the binarized images is evaluated as follows. Dilation processing of each time-series image is performed multiple times, and weighted scores are assigned according to the number of dilations; thus, the nearer an area is to the center, the higher its score (\(p_{max}\),…, 2, 1, 0). Each mass point of the string model receives a score \(p_{i}\) according to the dilated region in which it falls. Here, unlike the achievement calculation, the scores are weighted by the mass point number (i = 1,…, n). If all mass points were evaluated equally, the mass points near the grasping area, which move only slightly, would almost always overlap the string in both the simulation and the images and would therefore contribute little to the parameter estimation. The movement increases toward the end of the string, where the string properties are more likely to manifest, so the weighting \(w_{i}\) is increased toward the end of the string. It should be noted that if the weighting increment is too large, the evaluation value becomes high even when the mass points near the grasp point do not match the actual string shape, which adversely affects the parameter estimation. After weighting, the per-image values \(E_{\mathrm f}\) are summed over all images (1,…, \(f_{max}\)). That is, the matching rate E is obtained using the following equations, where the weighting increment is \(\Delta w\):

$$\begin{aligned} E&=\dfrac{1}{f_{max}}\cdot {\displaystyle \sum _{f=1}^{f_{max}}}E_{\mathrm {f}},\quad E_{\mathrm {f}}=\dfrac{{\displaystyle \sum ^{n}_{i=1}}p_{i}\cdot w_{i}}{p_{max}\cdot {\displaystyle \sum ^{n}_{i=1}} w_{i}}, \end{aligned}$$
(7)
$$\begin{aligned} w_{i}&=1+(i-1)\Delta w \end{aligned}$$
(8)
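Reusing dilation-based score maps built from the real string images (as in the earlier sketch), Eqs. (7) and (8) might be computed as follows; the frame and pixel-coordinate conventions are assumptions.

```python
import numpy as np

def matching_rate(sim_points_seq, real_score_maps, p_max=5, dw=0.25):
    """Eqs. (7)-(8): compare simulated mass points with dilated real-string
    images, weighting the mass points more heavily toward the string tip.

    sim_points_seq  : one list of n (u, v) pixel coordinates per frame
    real_score_maps : one dilation-based score image per frame (see score_map above)
    """
    n = len(sim_points_seq[0])
    w = 1.0 + dw * np.arange(n)                   # Eq. (8): w_i = 1 + (i - 1) * dw
    E = 0.0
    for pts, scores in zip(sim_points_seq, real_score_maps):
        h, width = scores.shape
        p = np.array([scores[v, u] if 0 <= u < width and 0 <= v < h else 0.0
                      for u, v in pts])
        E += np.sum(p * w) / (p_max * np.sum(w))  # per-frame value E_f in Eq. (7)
    return E / len(sim_points_seq)                # average over all frames
```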
Fig. 5 How to estimate parameters of string

Manipulation of a string using a robot and its evaluation

Wire-driven robot arm

Dynamic manipulation of a string requires a robot arm moving at high speed. Therefore, we designed and built a 4-degree-of-freedom wire-driven robot arm for this study. The robot arm is presented in Fig. 6, and Table 1 lists its specifications. The robot arm has no motors at its joints; only rotary encoders that detect the joint angles are provided. To keep the robot arm lightweight, all motors are placed beneath the pivot shaft. The overall height of the robot arm is 585 mm, and the maximum composite speed of the arm finger is 21.8 m/s. Speed control (proportional-integral-derivative control) of the motors is performed for each joint of the arm. When manipulating the string, the arm is moved by providing commands every 5 ms, and the joint angles are sampled at the same interval. Figure 7 compares the target and actual finger trajectories when the robot arm performs a circular motion of 5 cm radius from various positions in 1 s and 0.6 s. Although some overshoot occurred, a virtually circular movement was obtained, and we concluded that there were no problems for string manipulation.

Fig. 6 Wire-driven robot arm

Table 1 Robot arm specification
Fig. 7 Circle trajectory at some positions

Obtaining string movement with a camera and evaluation of achievement in real manipulation

To determine whether the desired motion has been achieved in actual manipulation, we performed an evaluation by comparing the desired shape images with the actual string shapes obtained from the camera images. The camera used for the experiment was an IDS UI-3580CP-C-HQ (\(512\times 480\) pixels), which recorded the string movement at 50 fps. To capture the string movement with respect to the grasp point of the robot arm, a round, yellow marker was attached to the robot arm finger; its location was detected in the images and used as a reference. An area of \(240\times 250\) pixels around the marker was cut from each image. After binarization and noise removal, the string area was slightly enlarged by dilation so that the string was clearly depicted in the resulting images. The method for calculating the manipulation achievement \(A_{\mathrm{r}}\) is shown in Fig. 8. This method is similar to the calculation of the achievement in simulation, except that instead of the mass points of the model, the pixels of the actual string image are compared with the desired shape images. In other words,

$$\begin{aligned} A_{\mathrm {r}}=\dfrac{\sum _{S} p_{i}}{S_{0}\cdot p_{max}} \end{aligned}$$
(9)

For each pixel of the actual string image, a weighted score \(p_{i}\) (\(0\sim p_{max}\)) with respect to the desired shape image is attached, and the total over the string region S is calculated. \(S_{0}\) is the number of pixels in the desired shape image. In the actual string images, the string position is likely to shift by a few pixels because of marker detection errors and camera lens distortion. Therefore, we compared the obtained string images with the desired shape images while shifting them a few pixels horizontally and vertically. The evaluation of the manipulation achievement was performed for all time-series images. When the achievement \(A_{r}\) of some image in the time series exceeded the threshold, the desired shape was deemed to have been achieved. We call this the threshold of successful manipulation.
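A possible implementation of Eq. (9) together with the pixel-shift search is sketched below; reading \(S_{0}\) as the pixel count of the desired shape image and the shift range of three pixels are our assumptions.

```python
import numpy as np

def achievement_real(string_img, desired_img, scores, p_max=5, max_shift=3):
    """Eq. (9) with a small shift search: compare the binarized string image
    against the dilated desired-shape score map, keeping the best alignment."""
    s0 = np.count_nonzero(desired_img)               # S0: pixels of the desired shape
    best = 0.0
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(scores, dy, axis=0), dx, axis=1)
            a = np.sum(shifted[string_img > 0]) / (s0 * p_max)
            best = max(best, a)
    return best
```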

Fig. 8 Evaluation of achievement in real manipulation

Dynamic manipulation aimed at momentary string shape

We examined whether dynamic manipulation to achieve a momentary string shape can be realized with the proposed method by repeating the sequence of motion generation, actual manipulation, and parameter estimation. The desired shape images are the five shapes shown in Fig. 9 (C, J, d, o, and s). The string model was given ten mass points (\(n=10\)). A smaller number of mass points cannot express desired shapes such as s and d; however, if the number of mass points is too large, the computational load for motion generation and parameter estimation becomes heavy. Therefore, we determined the required number of mass points based on the complexity of the desired images. The range for selecting the robot movement time is \(T=0.2\sim 1.5\) s. Table 2 lists the ranges of all parameters used in the parameter estimation; the initial values used for the first motion generation are the minimum values given in this table. Furthermore, the convergence factor is \(\beta =0.995\), the initial search range is \(\chi _{w0}=0.6\), and the weighting increment used for the matching rate is \(\Delta w=0.25\), determined via trial and error. The actual manipulation image series used for parameter estimation covers the movement time T extended by 0.2 s before and after it. However, when the string came into contact with the robot during manipulation, the parameters were estimated from the image series just prior to the contact. The length of the manipulated string was 300 mm. The threshold for successful manipulation was defined as \(A_{r}=0.7\).

Fig. 9 Desired image of string shapes

Table 2 Range for parameter estimation

Dynamic manipulation to achieve a d shape of the string

In the first example, we attempted to manipulate the string to form a d shape. The string used was a braided acrylic string (diameter of 5 mm). We illustrate the changes in the estimated parameters over the manipulation series for achieving the desired shape, as well as the associated changes in motion generation. The first manipulation generated the motion shown in Fig. 10a. In the simulation, the desired shape was achieved 0.33 s after the start of motion; however, as can be seen from the actual string movement, the string is not curving near the 0.33 s mark. This indicates that the actual string is more resistant to bending than the initial parameters suggest. Indeed, in the results of the first string parameter estimation, the hinge spring constant \(k_{h}\), which indicates the ease of string bending, was estimated to be approximately eight times the initial value (Table 3). The results of the second motion generation and manipulation using the values obtained through this parameter estimation are shown in Fig. 10b. The desired shape was achieved in the third motion generation and manipulation (Fig. 10c).

Fig. 10 Change of generated motion and actual string motion (desired image of string shape: d)

Table 3 Estimated parameter values (desired image of string shape: d)

Figure 11 shows the changes of the achievements in simulation \(A_{s}\) and real manipulation \(A_{r}\). The achievement is low at the beginning of the motion because the simulated string shape and the desired string shape do not match there; however, the desired string shape is almost achieved at one point, which appears as the achievement peak. It was confirmed that, by repeating the parameter estimation and motion generation, the profile of the manipulation achievement \(A_{r}\) comes to generally match the simulation achievement \(A_{s}\). This indicates that the string parameters are reflected in the motion generation through the parameter estimation results and that the simulated and actual string movements correspond more closely to each other.

Fig. 11 Experimental result of achievement \(A_{r}\), \(A_{s}\) (desired image of string shape: d)

Figure 12 depicts the actual robot finger trajectories for each of the manipulations. Comparing the first and third manipulations, the robot motions take generally the same amount of time, and the forms of the trajectories are also similar. However, because the amount of travel of the robot finger is smaller in the third manipulation, a high momentary acceleration is applied to the string. Therefore, although the tendency of the change in string shape is approximately the same, a higher degree of bending is obtained in the third manipulation, and the desired motion is achieved. This further demonstrates that the motion generation is adjusted in accordance with the estimated parameters. Figure 13a, b display the comparison of the actual and simulated string movements for each parameter estimation. These figures show that both string movements generally match. The linear mass-spring model basically cannot express large deformations; however, if the number of mass points is sufficient, the deformation between adjacent mass points remains small, and the mass-spring model can therefore express the string movement approximately.

Fig. 12 Finger trajectories for "d" shape

Fig. 13 Estimation result of string parameters

Dynamic manipulation for various desired string shapes

To examine whether other desired shapes can be achieved, we attempted manipulation for four shapes, namely, the letters C, J, o, and s. The same string as before was used for this test. The letters J and C were achieved during the fourth and third manipulations, respectively (Figs. 14 and 15). The movement for J is similar to that for d, but slower and with a longer moving distance. For C, a trajectory in which the robot finger moved substantially in the vertical direction was generated. This reveals that the motion is generated according to the given desired shape. Figures 14c and 15c show that our model cannot completely express the actual string movement but does express it near the moment when the desired shape is achieved. The string parameters affect the string movement more when the movement is large; therefore, the estimated parameters change according to the desired string shape.

Fig. 14 Manipulation result of desired image of string shape: J

Fig. 15 Manipulation result of desired image of string shape: C

In contrast, although motion generation was possible with the initial parameters for o, motion generation with the parameters estimated after the actual manipulation was not. Motion generation became possible when the vertical oval of the desired string shape was changed into a horizontal oval, and the desired shape was achieved in the third manipulation (Fig. 16). Forming the string into a circular shape is difficult because the end of the string is restrained where it is grasped by the robot, and because the movable range of the robot arm is limited, moving the finger directly downward is difficult.

Fig. 16 Effects of correction of desired image of string shape: o

In addition, motion generation did not succeed when the desired image was set to the letter s. We therefore changed the number of control points for the Bezier curve of the joint velocity from 4 to 7 to generate more complicated motion. The manipulation result is illustrated in Fig. 17; the manipulation was achieved in three repetitions. This shows that more control points for the Bezier curve are required to realize more complicated shapes.

Fig. 17 Effect of increasing control points in Bezier curve for joint velocity (desired image of string shape: s)

As indicated, dynamic manipulation could be achieved for various desired string shapes, and the achievable shapes depend on the robot motion performance, the string properties, the robot arm restrictions, and so on. When the desired shape has high curvature, such as a Z, higher robot motion performance is necessary. If the string is longer and more flexible, the possibility of realizing a shape with large curvature increases.

Dynamic manipulation for unknown string with different properties

We examined the usefulness of the proposed method for unknown strings with different properties. For this, we prepared two string types (Strings B and C) that differ from the string used previously (String A). String B is harder than String A, whereas String C is softer (Fig. 18). The desired string shapes were the letters C and J, and the results are shown in Fig. 19. For both string types, the momentary string shape achieved the desired shape. The C shape was generated with approximately the same motion even for different string parameters, whereas the J shape was generated with motions that differed depending on the parameters.

Fig. 18 Strings used in the experiment

Fig. 19 Experimental results of manipulating strings with various properties

Figures 20 and 21 indicate typical results of the estimated parameters for the manipulations using the three types of strings. Some differences in the estimated properties were observed according to the type of string. For instance, a comparison of the parameters at which the desired shape was achieved shows that, for the softest string, String C, the lowest hinge spring constant \(k_{h}\) was estimated regardless of the desired string shape, whereas for the hardest string, String B, the highest hinge spring constant \(k_{h}\) among the three strings was estimated. Based on these results, it can be said that the parameter estimation was successful.

Fig. 20 Estimated parameters for each string (desired image of string: C)

Fig. 21 Estimated parameters for each string (desired image of string: J)

If the same string is used, it is expected that the same parameters would be estimated regardless of the desired string shape. However, the parameter estimates were observed to vary greatly depending on the desired string shape. A comparison of the hinge spring constants shows that the values estimated for the desired shape C, which involves a gentle curvature of the string, are high, whereas those for J, which involves sharp curves, are low. Possible explanations are that some parameters have little effect for a particular desired string shape, that different parameters have overlapping effects, or that the nonlinearity caused by velocity and large deformation affects the apparent properties of the string itself. Nevertheless, we have shown that even when the true parameter values are unknown, it is possible to adjust them in accordance with the desired string shape and generate motion.

Reproducibility of estimated parameters and motion generation

To confirm the reproducibility, we conducted the experiment repeatedly for realizing the momentary shape of J. Figure 22a shows the profile of the achievement in real manipulation in each trial. The number of repetitions required to satisfy the threshold of successful manipulation (\(A_{r}=0.7\)) varied from 2 to 8, but the achievement \(A_{r}\) tended to increase over multiple repetitions, and the motion objective was achieved in all trials. However, the final achievement \(A_{r}\) differed between trials. Because the parameters are estimated by a random search algorithm, the estimated parameter values remain at a local solution, and the final achievement \(A_{r}\) depends on the property of that local solution. In addition, the achievement is also considered to be affected by the variation of the real string movement.

Fig. 22 Reproducibility of manipulation (desired image of string shape: J)

The estimated parameters are not the same in each trial, as shown in Table 4. The first cause is model redundancy: even when the same string movement is expressed, several combinations of parameter values can express it. Second, the parameter estimation method does not estimate parameters that express arbitrary movements, but only the specific movement; therefore, the estimated parameters depend on the specific movement of the string. Figure 22b shows the finger trajectories for each trial. Even when the momentary shape is the same, the string movement used to reach it differs between trials, which causes the difference in the estimated parameters. As a result, although the parameter set estimated in each trial is not reproducible, the desired manipulation is achieved successfully because the estimated parameter set expresses the string movement acceptably.

Table 4 Reproducibility of estimated parameters (desired image of string: J)

To confirm the compatibility of the estimated parameters, an experiment for achieving the desired image of the J shape was conducted using the parameters estimated from the C-shape manipulation. Figure 23a, b depict the manipulation result and achievement, respectively. The first manipulation does not realize the string movement simulated with the parameters from the C shape. As mentioned previously, the parameter estimation method does not estimate parameters that reproduce arbitrary movements but only the specific movement. Even if string parameters are obtained from manipulation for another desired shape, the manipulation performance is not always satisfactory because the obtained parameters are tuned to that specific movement.

Fig. 23 Manipulation result for desired image of string shape J with the estimated parameters for the C shape

Our proposed method accumulates experience within a trial. Previous experience (estimated parameters) does not always work well for other manipulations; therefore, parameters estimated from one specific manipulation do not transfer reliably. It should be noted that known string parameters can be used as the initial parameters, but, like parameters estimated for another desired shape, they do not always work well either.

Conclusion

The present study proposed a method for the dynamic manipulation of unknown strings by repeating motion generation, actual manipulation, and parameter estimation with a mass-spring model. The desired string shape was given by images representing the momentary string shape. For motion generation, the angular velocity of each joint was represented by a Bezier curve, the string movement was simulated for each randomly generated motion, and the achievement in simulation was evaluated to select the generated motion. For the actual manipulation, we calculated the achievement by comparing the actual string shape obtained from camera images with the desired shape images. The string parameters were estimated by a random search that compares the real string movement with the simulated string movement.

When we used the proposed method to perform manipulations in a two-dimensional plane for intended momentary string shapes, we were able to achieve five desired shapes. Moreover, we confirmed that the desired shapes could be achieved when manipulating three types of strings with different properties. However, the string parameters were not uniquely estimated, despite the same desired shape and the same string, and the desired manipulation could not be achieved with parameters estimated from another manipulation. Although our proposed method lacks reproducibility and compatibility in the parameter estimation, the estimated parameters express the specific string movement and realize the desired momentary string shape. We thus demonstrated that manipulation can be achieved even without prior testing to examine the string properties.

This method can be expected to apply to other manipulation types apart from momentary string shaping. In addition to extending this study to three-dimensional manipulation, our future tasks include applying it to cyclic movements, such as string turning, and to manipulations that control the shape and trajectory of a string after it is thrown.

References

  1. Takamatsu J, Morita T, Ogawara K, Kimura H, Ikeuchi K (2005) Representation of knot tying tasks for robot execution. J Robot Soc Jpn 23(5):572–582 (in Japanese)


  2. Wakamatsu H, Tsumaya A, Arai E, Hirai S (2005) Linear object manipulation including knotting/unknotting. J Robot Soc Jpn 23(3):344–351 (in Japanese)


  3. Katano K, Gomi T, Tomizawa T, Kudoh S, Suehiro T (2015) Realization of five types of tabletop knotting with Dual-Arm robot. J Robot Soc Jpn 33(7):505–513 (in Japanese)


  4. Nair A, Chen D, Agrawal P, Isola P, Abbeel P, Malik J, Levine S (2017) Combining self-supervised learning and imitation for vision-based rope manipulation. In: 2017 IEEE international conference on robotics and automation (ICRA), pp 2146–2153

  5. Rambow M, Schauss T, Buss M, Hirche S (2012) Autonomous manipulation of deformable objects based on teleoperated demonstrations. In: International conference on intelligent robots and systems (IEEE/RSJ), Paper No. IROS-2012.6386002

  6. Desbrun M, Schröder P, Barr AH (1999) Interactive animation of structured deformable objects. In: Graphics Interface '99, pp 1–8

  7. Sawada Y, Watanabe T (2019) Casting behavior of the string structure considering axial elongation. J Jpn Soc Mech Eng. https://doi.org/10.1299/transjsme.19-00008 (in Japanese)


  8. Lloyd BA, Kirac S, Székely G, Harders M (2008) Identification of dynamic mass-spring parameters for deformable body simulation. Eurographics 2008—short papers, pp 131–134

  9. Yamakawa Y, Namiki A, Ishikawa M (2016) Simplified deformation model and shape generation of a rhythmic gymnastics ribbon using a high-speed multi-jointed manipulator. Mech Eng J. https://doi.org/10.1299/mej.15-00510


  10. Yamakawa Y, Namiki A, Ishikawa M (2010) Motion planning for dynamic knotting of a flexible rope with a high-speed robot arm. IEEE/RSJ Int Conf Intell Robots Syst 2010:49–54


  11. Yamakawa Y, Namiki A, Ishikawa M (2011) Dynamic manipulation of a cloth by high-speed robot system using high-speed visual feedback. In: Proceedings of the 18th international federation of automatic control (IFAC-18), pp 8076–8081

  12. Yoshida E, Ayusawa K, Ramirez-Alpizar I G, Harada K, Duriez C, Kheddar A (2015) Simulation-based optimal motion planning for deformable object. In: 2015 IEEE international workshop on advanced robotics and its social impacts (ARSO), pp 1–6

  13. Jangir R, Alenyà G, Torras C (2020) Dynamic cloth manipulation with deep reinforcement learning. In: 2020 IEEE international conference on robotics and automation, pp 4630–4636

  14. Yan M, Zhu Y, Jin N, Bohg J (2020) Self-supervised learning of state estimation for manipulating deformable linear objects. IEEE Robot Autom Lett 5:2372–2379

  15. Yabuuchi T, Kakusho K, Minoh M (2007) Incremental estimation of model parameter for deformable object in manipulation from sequential observation. IEICE Trans Inf Syst 90(1):94–105 (in Japanese)


  16. Caldwell T, Coleman D, Correll N (2014) Optimal parameter identification for discrete mechanical systems with application to flexible object manipulation. In: IEEE international conference on intelligent robots and systems. https://doi.org/10.1109/IROS.2014.6942666

  17. Alvarez N, Yamazaki K, Matsubara T (2016) An approach to realistic physical simulation of digitally captured deformable linear objects. In: IEEE international conference on simulation, modeling, and programming for autonomous robots (SIMPAR), pp 135–140


Author information


Contributions

The first author designed and conducted the study, analyzed and interpreted the data, and wrote the manuscript. The second author supported the idea and the programming. The third author supported the programming and provided ideas for this study. The fourth and fifth authors provided ideas for this study. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Kenta Tabata.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Tabata, K., Seki, H., Tsuji, T. et al. Dynamic manipulation of unknown string by robot arm: realizing momentary string shapes. Robomech J 7, 39 (2020). https://doi.org/10.1186/s40648-020-00187-w
