Dexterous object manipulation by a multi-fingered robotic hand with visual-tactile fingertip sensors

In this paper, a novel visual-tactile sensor is proposed, together with an object manipulation method for a multi-fingered robotic hand that detects the contact position using the sensor. The visual-tactile sensor is composed of a hemispheric fingertip made of soft silicone with a hollow interior and a general USB camera located inside the fingertip to detect the displacement of the many point markers embedded in the silicone. The deformation of each point marker due to a contact force is measured, and the contact position is estimated reliably through a novel method of creating virtual points to determine the point clouds. The aim is to demonstrate both the estimation performance of the new visual-tactile sensor and its usefulness in a grasping and manipulation task. By using the contact position obtained from the proposed sensor and the position of each fingertip obtained from kinematics, the position and orientation of a grasped object are estimated and controlled. The effectiveness of the method is illustrated through numerical simulation, and its practical use is demonstrated through grasping and manipulation experiments.


Introduction
A human hand can detect a contact position and force. This tactile sensing skill helps to recognize the external environment and makes it possible to manipulate objects within the hand. To make a multi-fingered robotic hand perform human-like object grasping and manipulation, many types of information, such as each joint angle, contact force, and contact position, should be acquired through an encoder, a force sensor, or a tactile sensor. There have been many attempts to realize human-like grasping and manipulation tasks using such sensors [1][2][3][4][5]. Among them, tactile and force sensors for determining the contact force and position are particularly important because they make it possible to detect the grasping force, the position and orientation of a grasped object, and finger slippage from the contact position. Additionally, these sensors contribute to changing the position and orientation of a grasped object by in-hand manipulation.
There are various methods to obtain the contact force and position at each fingertip. Many present tactile sensors use electrical resistance, electromagnetics, piezoelectricity, ultrasonics, optics, or strain gauges. However, they need many sensor arrays attached to the surface of each fingertip to determine a contact position, and such arrays tend to be fragile under impact and costly [6][7][8]. A large number of sensor arrays also increases the number of wires, and it may be difficult to satisfy performance or sampling-rate requirements because of the computational burden of processing a large amount of sensor information [8]. Recently, several tactile sensors using a visual sensor, hereinafter called visual-tactile sensors, have been proposed [9][10][11][12]. A visual-tactile sensor is mainly based on image processing and is therefore easily affected by ambient light conditions, and it requires relatively high programming skills. On the other hand, a visual sensor does not need to physically contact an object; thus, it is robust against impulsive contact breakdown, and it can be built at low cost if a generally available USB camera is utilized. Several studies have observed the deformation of a soft finger by placing a point cloud inside the finger to estimate the contact position and force using a visual sensor [10][11][12]. These studies demonstrated the effectiveness of a visual-tactile sensor. However, they mainly evaluated the performance of the sensor itself and did not mention how to use the obtained information in grasping and manipulation control, or how it works in grasping and manipulation tasks. Yamaguchi et al. [13] evaluated the cutting force applied to an object by a robotic gripper with a visual-tactile sensor. From the viewpoint of grasping and manipulation control, one study demonstrated reliable grasping by increasing the contact area at an initial contact position [14].
Although that study applied a visual-tactile sensor to a robotic gripper to grasp an object reliably, it did not address controlling the position and orientation of a grasped object with a multi-fingered robotic hand.
In contrast, several works using conventional tactile sensors have demonstrated reliable control of the position and orientation of a grasped object based on the sensor information. When changing the position and orientation of a grasped object in the hand, humans realize it with small motions of each finger through a rolling contact between the object surface and each fingertip. Several studies have achieved reliable grasping and manipulation through the rolling constraint [16,17]. Tahara et al. proposed a blind grasping and manipulation controller capable of controlling the position and orientation of a grasped object without using a force or tactile sensor [16]. They introduced a virtual object frame, defined by the fingertip positions of a three-fingered robotic hand, to express the position and orientation of a grasped object virtually instead of the real object position and orientation. The fingertip of the robotic hand is made of flexible hemispheric silicone, and by utilizing its rolling contact, the position and orientation of the object can be changed with small motions of each finger. This method is quite advantageous because force and tactile sensors are unnecessary when controlling a grasped object. However, the controlled position and orientation are not accurate because there is no external sensor. To achieve more accurate position and orientation control, it is necessary to use a force or tactile sensor that can detect the real contact force and position at each fingertip.
In this study, a new visual-tactile fingertip sensor and a method for estimating the contact position and force from its measurements are proposed, and a new object grasping and manipulation controller based on the blind grasping and manipulation controller is designed. First, the newly developed visual-tactile sensor is introduced, and its performance is demonstrated through several experiments. Next, the new object grasping and manipulation controller, which uses the detected contact position information, is designed. Subsequently, the effectiveness of the proposed controller is evaluated through numerical simulations by comparing it with the conventional blind grasping and manipulation controller.

Visual-tactile sensor design
The proposed visual-tactile sensor equipped on a fingertip is shown in Fig. 1.
Several point markers for detecting fingertip deformation are embedded inside the hemispheric silicone fingertip. The point markers are arranged at a constant angular spacing along the spherical surface so that they reveal the deformation when the silicone fingertip contacts an object. Each point marker is a 3-mm bead without a hole, colored to be easy to distinguish. The mold for the silicone fingertip is made using a 3D printer, which facilitates creating its complex shape, as shown in Fig. 2.
The camera used in the sensor is a general USB camera capable of 30 FPS, which is readily available and low cost. The configuration of the sensor is similar to an endoscopic camera, and it possesses a brightness-adjustable LED. Therefore, the influence of ambient light can be controlled to some extent. It would be reasonable to use a faster camera to detect fingertip information quickly through the movement of the markers; however, this study used a 30 FPS USB camera on purpose because of its cost and built-in LED. The camera base is made with a 3D printer and is designed so that the silicone finger can be detached for easy repair, as shown in Fig. 1. The base does not require any adhesive or screws.

Contact position and force estimation through the displacement of point markers
Several studies have developed visual-tactile sensors in which multiple small markers are arranged inside a soft material; these mainly measured the contact force and position by detecting the displacement of the markers from their initial positions when an external force acted on the fingertip [10][11][12][13]. In most cases, the contact surface was flat and the markers were arranged in a square grid in the soft material. However, these studies did not consider a rolling contact. In our proposed visual-tactile sensor, each point marker is arranged radially according to the hemispheric shape.
The radial arrangement is reasonable for a hemispheric fingertip even though the displacement of a point marker becomes more complex than in the standard flat configuration. Similar studies on visual-tactile force sensors have been performed (e.g., [14,15]). However, the advantage of our proposed tactile sensor over similar studies is the use of the position information of virtual point markers instead of that of the measured point markers. This makes the labeling of each point marker robust against changes in the external lighting environment. When labeling each point marker continuously, incorrect labeling sometimes occurs because the detection of individual point markers is lost. Using virtual point markers instead of the detected point markers delivers more reliable labeling even if point markers are mis-detected in real time. Moreover, the most advantageous point of this study compared with other related studies is that, in addition to developing the sensor itself, it also proposes an object grasping and attitude control method. Many similar studies focus on the specification and performance of the sensor itself and hardly mention how to use it in a manipulation task. However, to utilize such a tactile sensor in practice, it is also important to know how to use it in a manipulation task. In this study, the proposed sensor and controller are designed simultaneously and eventually integrated as a system. Namely, it can be said that the specification of the proposed tactile sensor is dictated by the proposed controller, and vice versa. Its effectiveness is evaluated in both numerical simulations and experiments using a prototype of the multi-fingered robotic hand with the proposed tactile fingertip sensors. The displacement of each radially arranged point marker can be observed in real time at 30 FPS through the USB camera.
The next section presents a new estimation method for the contact position and force using the geometric relation between the point markers.

Changes in the inside markers due to external forces
The shape of the fingertip is changed by external forces, which causes the positions of the markers to move. The location where the external force acts on the surface of the fingertip can be identified by comparing the initial and present states of the moving markers. This deformation is captured by the USB camera when an actual external force is applied, as shown in Fig. 3.
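As a minimal sketch of this comparison step (the coordinates and function names below are our own illustration, not the paper's LabVIEW implementation), the displacement of each marker between the initial and present frames can be computed as:

```python
import numpy as np

def marker_displacements(initial: np.ndarray, current: np.ndarray) -> np.ndarray:
    """Per-marker displacement vectors between the initial and present states."""
    return current - initial

def most_displaced_marker(initial: np.ndarray, current: np.ndarray) -> int:
    """Index of the marker that moved the most (a rough cue for the contact region)."""
    d = np.linalg.norm(marker_displacements(initial, current), axis=1)
    return int(np.argmax(d))

# Hypothetical pixel positions of three markers before and after contact
initial = np.array([[100.0, 100.0], [150.0, 100.0], [125.0, 140.0]])
current = np.array([[101.0, 100.0], [158.0, 104.0], [125.0, 141.0]])
print(most_displaced_marker(initial, current))  # marker 1 moved the most
```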

Delaunay triangulation and virtual point markers implementation
To estimate a contact position and force, the camera recognizes the markers placed on the hemisphere of the fingertip; the three-dimensional location of each marker is projected onto a two-dimensional visual plane, and the markers can be classified into several meaningful groups. The Delaunay triangulation and the Voronoi space division are well-known methods that can divide a space into several small regions. However, when using the Voronoi division, the cells at the edge of the observed markers become infinite in size, so it is necessary to set separate closing markers, as shown in Fig. 4e. In this study, the Delaunay triangulation was used to divide the markers into several meaningful parts. The Delaunay triangulation divides a point cloud into triangles such that the circumcircle of each triangle contains no other vertices. By dividing the observed point cloud into several triangles, it is possible to measure the change in contact position or force by comparing the area of each triangle.
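Assuming 2-D marker positions extracted from the camera image, the triangulation and per-triangle areas described above can be sketched as follows (scipy's `Delaunay` is used for illustration; the paper's implementation is in LabVIEW):

```python
import numpy as np
from scipy.spatial import Delaunay

def triangulate(points: np.ndarray) -> np.ndarray:
    """Delaunay triangulation; returns an (m, 3) array of vertex indices."""
    return Delaunay(points).simplices

def triangle_areas(points: np.ndarray, tris: np.ndarray) -> np.ndarray:
    """Area of each triangle via the 2-D cross product."""
    a, b, c = points[tris[:, 0]], points[tris[:, 1]], points[tris[:, 2]]
    return 0.5 * np.abs(np.cross(b - a, c - a))

# Synthetic marker cloud standing in for the projected fingertip markers
rng = np.random.default_rng(0)
markers = rng.uniform(0.0, 1.0, size=(20, 2))
tris = triangulate(markers)
areas = triangle_areas(markers, tris)
# Contact deforms the cloud and changes `areas`; compare against a reference frame.
```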
When an external force acts on the silicone finger, the labeling order changes depending on the moved position of each marker, as shown in Fig. 4. In such a case, the present divided areas cannot directly be compared with the previously collected areas because the labeling order is different. To overcome this problem, the observed markers are fitted to an ellipse using the least-squares method in the LabVIEW ellipse-fitting function. From the fitted ellipse, the major and minor axes and the center point can be obtained. Subsequently, the virtual point markers on the ellipse can be generated using Eq. (1):

[x_n, y_n]^T = [x_c + a cos θ_n, y_c + b sin θ_n]^T, θ_n = 2πn/t_n, (1)

where [x_n, y_n]^T denotes the position of the n-th virtual point marker located on the ellipse, θ_n denotes its angle parameter, and [x_c, y_c]^T denotes the center position of the fitted ellipse. The total number of sampled points on the ellipse is denoted by t_n, and a and b denote the major and minor axes of the fitted ellipse, respectively. The labeling order of the virtual point markers is robust even if the lighting environment changes. The process is shown visually in Fig. 5.
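The virtual-marker generation can be sketched as below; the parameter names (`xc`, `yc`, `a`, `b`, `t_n`) follow the text, while the fixed angular sampling is our reading of the method:

```python
import numpy as np

def virtual_markers(xc: float, yc: float, a: float, b: float, t_n: int = 24) -> np.ndarray:
    """Virtual point markers on the fitted ellipse, labelled 0..t_n-1.

    Because the angles are fixed, the labelling order is deterministic
    regardless of which physical markers were actually detected."""
    theta = 2.0 * np.pi * np.arange(t_n) / t_n
    return np.column_stack((xc + a * np.cos(theta), yc + b * np.sin(theta)))

vp = virtual_markers(0.0, 0.0, 2.0, 1.0, t_n=4)
print(vp)  # points on the ellipse at angles 0, 90, 180, 270 deg
```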
Another advantage of the virtual point markers is robustness to changes in the external environment. The outer layer of the fingertip sensor is made of silicone and is thus affected by changes in the external lighting conditions, as shown in Fig. 6.
The positions of several markers may not be detected because of the external environment. Each triangle changes according to the positions of the markers, and if the triangulation changes frequently, the force or contact position estimation may become unstable and inaccurate. However, the virtual point markers are created from the fitted ellipse; thus, the Delaunay triangulation can be performed reliably even when the positions of several markers are unknown.

Relocation of the area change and estimation of the contact point position
As mentioned before, the issue of the labeling order of each divided part is solved by introducing virtual point markers. However, the virtual point markers are always uniformly distributed over an area. This causes another issue: if the area ratios of the triangles remain almost identical before and after an external force acts, the change in triangle area cannot be detected even if the shape of the ellipse is changed by the external force, as shown in Fig. 7a. To overcome this issue, instead of comparing individual triangles, we compare the average area of the triangles incident to each virtual point marker, as in Fig. 7b. The reason for taking the average is that an edge point, especially around the upper edge of the fingertip sensor, belongs to only about two triangles, whereas a center point belongs to about six. This difference in the number of triangles can lead to a considerable difference in the total summed area. By using the area average, center points containing around six triangles can be compared fairly with edge points containing as few as two triangles, and this undesirable estimation bias can be largely avoided.
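A sketch of this averaging step, with hypothetical triangle indices and areas (vertex 4 plays the role of a center point incident to four triangles here):

```python
import numpy as np

def mean_incident_area(n_points: int, tris: np.ndarray, areas: np.ndarray) -> np.ndarray:
    """Average area of the triangles incident to each vertex.

    Averaging (rather than summing) keeps edge vertices, which belong to
    few triangles, comparable with center vertices, which belong to many."""
    total = np.zeros(n_points)
    count = np.zeros(n_points)
    for (i, j, k), area in zip(tris, areas):
        for v in (i, j, k):
            total[v] += area
            count[v] += 1
    return total / np.maximum(count, 1)

tris = np.array([[0, 1, 4], [1, 2, 4], [2, 3, 4], [3, 0, 4]])
areas = np.array([0.1, 0.2, 0.3, 0.4])
print(mean_incident_area(5, tris, areas))  # [0.25 0.15 0.25 0.35 0.25]
```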
When the contact position is estimated from the change in triangle area around each marker, the spatial resolution depends on the number of markers. In this study, the proposed visual-tactile sensor has 85 point markers, which is not very dense, and thus the spatial resolution is not high. To compensate for the low spatial resolution, the contact position P_c is estimated using Eq. (2), where vp_1a denotes the area size at the center point considered to be in contact, vp_2a-vp_7a denote the area sizes at the peripheral points around the contact, and vp_2x-vp_7x and vp_2y-vp_7y stand for the x- and y-positions of those points. This calculation enables continuous retrieval of the contact position even though the spatial resolution is low. Additionally, this choice makes the sensor easy to assemble and reduces computational costs.
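Since Eq. (2) combines the points around the contact, one plausible reading (the variable names and the exact weighting are our assumption, not the paper's published formula) is an area-change-weighted average, which yields sub-marker resolution:

```python
import numpy as np

def contact_position(weights, xs, ys) -> np.ndarray:
    """Weighted average of candidate points; `weights` are per-point area changes.

    Larger area change pulls the estimate toward that point, so the result
    moves continuously even though the marker grid is coarse."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return np.array([np.dot(w, xs), np.dot(w, ys)])

# Two candidate points; the first saw three times the area change of the second
p = contact_position([3.0, 1.0], [0.0, 2.0], [0.0, 0.0])
print(p)  # pulled toward the higher-weight point: (0.5, 0.0)
```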

Experiments on the comparison between the estimated and actual contact positions
It is necessary to evaluate the estimation error of the contact position estimated by the proposed sensor. In this study, transparent silicone rubber is used for the outer layer of the visual-tactile sensor, so the actual contact position can be determined by marking the contact point, as shown in Fig. 8a. To evaluate the performance of the sensor according to the contact position, the measured area is sectioned into five parts, as shown in Fig. 8b. The experimental device, which uses a linear guide for accurate contact-angle division and force measurement, is shown in Fig. 9. The estimation error is evaluated by comparing the actual contact position with the contact position estimated by Eq. (2). A total of 30 trials, in which an external force is applied to each area, are performed, and the error is averaged over the trials.
It can be seen from Table 1 that the overall average error is 1.475 (mm). The error was smaller on the side areas, presumably because the contact position estimate improves where the variation in the average area is larger. The error is small compared with the 60 (mm) diameter of the proposed visual-tactile sensor.

Contact force estimation
In this study, we model a contact force acting on the fingertip such that the change in the contact area when the silicone is deformed due to an external force induces a spring-like force.
It can be seen from Fig. 10 that the change in triangle area decreases as the contact position approaches the upper sensor edge. This indicates that the triangle area change differs according to where the external force acts, even if the applied force is constant. Namely, the relationship between the applied contact force and the change in triangle area is not linear, and it varies with position. To address this non-linearity, we first numerically analyzed the relationship between the displacement of the point markers and the contact force using the finite element method (FEM) offline. The numerical analysis of the fingertip deformation due to a static contact force at different contact angles is shown in Fig. 11. In this analysis, we assume that the upper end of the hemispherical fingertip is fixed and that a rectangular steel plate is pushed up against it from below by an external force. The hemispherical fingertip is deformed by the external force applied by the steel plate, and the position of each inner point changes depending on the external force and the contact angle. From the FEM results, we determined that the distance between the side point markers spreads linearly when an external force acts on the fingertip, as shown in Table 2.

Experiment for determining the revision of the distance between each point marker
The correction function to revise the non-linearity can be determined through numerical analysis as shown in Fig. 11. As a result, we can obtain an almost linearized relationship between the contact position and force using the correction function as shown in Fig. 12.
When the contact position approaches either upper edge, the spread returns toward its original value. In the case of distances 4 and 5, the distance between the point markers on both sides can be kept constant even as the contact position approaches the boundary. Therefore, instead of using the change in triangle area, an external force can be estimated by applying the correction function to the distance between the point markers on both edge sides, as shown in Eq. (3), where P_nc denotes the nearest estimated point around location ⑤ in Fig. 12, P_nc+1 and P_nc-1 are the points in front of and behind P_nc, F_est denotes the estimated force, and K_est denotes a stiffness coefficient depending on the position and the fingertip material.
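A hedged sketch of this spring-like estimate in the spirit of Eq. (3): the correction factor and all names below are our assumptions, with the position-dependent correction imagined as identified offline from the FEM analysis:

```python
def estimate_force(d_now: float, d_rest: float, k_est: float, correction: float) -> float:
    """Spring-like force estimate from the spread of the side point markers.

    d_now / d_rest : marker distance now and at rest (pixels or mm)
    k_est          : stiffness coefficient (position- and material-dependent)
    correction     : position-dependent linearization factor from the FEM analysis
    """
    return k_est * correction * (d_now - d_rest)

# Hypothetical numbers: 2 units of extra spread, k_est = 0.8, correction = 1.25
print(estimate_force(12.0, 10.0, 0.8, 1.25))  # 2.0 (arbitrary units)
```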

Simulation
In this section, numerical simulations of grasping an object and controlling its orientation are conducted using a newly redesigned virtual object frame. The object grasping and manipulation controller using the virtual frame, proposed by Tahara et al. [16], is an externally sensorless controller. It is robust because it is unaffected by noise and time delay in sensing information, while its accuracy depends on the initial contact position and the object shape. Kawamura et al. [17] have also proposed an object grasping and orientation control method using visual information obtained from a camera to detect the orientation of a grasped object. In Kawamura's method, the desired virtual object frame is updated by the visual information from a camera used as an external sensor. However, detecting the orientation of a grasped object with sufficient accuracy is generally still difficult, even with a very fast camera of dense spatial resolution. In this study, unlike Kawamura's method, the virtual object frame is composed using the contact position information obtained from the proposed sensor, which makes it easy to compose the desired virtual object frame. Additionally, our proposed visual-tactile sensor can estimate the contact force if necessary, while Kawamura's method cannot. Even if the sensing information is not sufficiently accurate, our proposed controller is robust to sensing error, noise, and time delay because the contact position information is not used directly in a feedback loop; it is only used for composing and updating the virtual object frame.

Dynamics of the object-fingers system
The multi-fingered robotic hand in our research has hemispherical, soft fingertip contact surfaces. A nonholonomic rolling constraint arises between the fingertips and the object when the hand holds it. The object and the multi-fingered hand are modeled, and their dynamics are defined. In addition, the conditions on the contact surfaces of the soft fingertips are defined to satisfy the physical laws of the real world. The soft-fingertip robotic hand is illustrated in three dimensions in Fig. 13. We assume that the robotic hand has three fingers, each with five degrees of freedom (DOFs), that the grasped object is a triangular prism, and we let i (= 1, 2, 3) be the index of each finger. Therefore, this robotic hand has fifteen DOFs in total. The dynamics of the object-finger system have already been modeled in [16,17]. In this simulation, we use the same dynamics for the multi-fingered hand and for the grasped object, with the following notation: H(q) ∈ R^15×15 denotes the inertia matrix of all the fingers; M ∈ R^3×3 is the mass matrix of the object; I ∈ R^3×3 is the inertia tensor of the object; q = [q_11, ..., q_15, q_21, ..., q_25, q_31, ..., q_35]^T ∈ R^15 stands for the joint angle vector of all fingers; q̇ ∈ R^15 and q̈ ∈ R^15 are the joint angular velocity and acceleration vectors, respectively; x ∈ R^3, ẋ ∈ R^3, and ẍ ∈ R^3 are the position, velocity, and acceleration vectors of the object, respectively; and ω ∈ R^3 and ω̇ ∈ R^3 are the angular velocity and acceleration vectors of the object, respectively. Furthermore, S_q ∈ R^15×15 and S_ω ∈ R^3×3 denote skew-symmetric matrices including the Coriolis and centrifugal terms for the robotic hand and the object, respectively. ∂T_i/∂q̇ and ∂T_i/∂ω are the viscous terms between the contact surface of the object and each fingertip; these terms affect the torsional motion of the fingertip on the object surface.
Fig. 11 Numerical analysis of the change in distance between the point markers: the distance between the side point markers spreads when an external force acts on the fingertip.
Table 2 Distance between the side point markers when a static external force is applied at different positions on the fingertip, and the ratio between the distances before and after contact; the contact angle is defined in Fig. 8b.
The energy dissipation function of the torsional viscosity of the fingertip is modeled as follows, where ω_i indicates the angular velocity vector of each fingertip, which can also be expressed by the joint angular velocity vector q̇, and b is a viscosity coefficient that depends on the contact area and the fingertip material. The contact position of each fingertip on the object surface is denoted by x_i ∈ R^3, and C_i = [C_iX, C_iY, C_iZ] ∈ SO(3) denotes a rotation matrix expressing the orientation of each contact surface on the object with respect to Cartesian coordinates. In the dynamics, two constraints should be considered. One is the contact condition in the normal direction on the contact surface. The constraint force f_i induced by the contact condition represents the grasping force. We assume that there is only one contact point on each fingertip. Moreover, the contact force changes according to the deformation of the soft fingertip, as shown in Fig. 14. The relationship between the displacement of the fingertip and its reproducing force is defined as Eq. (8), which was proposed by Arimoto et al. [18].
where ξ denotes a positive damping coefficient and k denotes a positive elastic coefficient, both of which depend on the soft fingertip material, and r_i denotes the deformation displacement of the i-th soft fingertip. The other constraint is the rolling condition in the tangential direction on the contact surface. The rolling constraint forces λ_iX and λ_iZ represent tangential contact forces; X_iq, X_ix, X_iω, and Z_iω are the rolling constraint Jacobians; and u ∈ R^15 denotes the input torque vector for the joints. The details of these constraint Jacobians can be found in [16,17].

Design of the control input
The object orientation controller using the contact position information and the stable grasping controller, based on Tahara's and Kawamura's controllers, are designed here. These two controllers are eventually combined into one control input, u = u_sg + u_at, where u_sg denotes the torque control input to each actuator for reliable grasping and u_at denotes the torque control input to each actuator for the object orientation. The control input u_sg generates a grasping force at the center of each fingertip so that the fingertips approach each other, as shown in Fig. 15a. Here, B ∈ R^15×15 stands for the positive damping coefficient matrix and J ∈ R^15×3 stands for the Jacobian matrix. Based on Tahara's and Kawamura's controllers [16,17], a new object grasping and orientation controller is designed, where u_at denotes the newly proposed object orientation control input. The orientation of the object is regulated by adding the computed desired contact force F_di to the object from each fingertip through the Jacobian of each finger. The desired contact force F_di is given in terms of r_x_vir, r_y_vir, and r_z_vir, which denote the column vectors of R_vir as shown in Eq. (12); the physical meaning of each vector composing the object orientation controller is illustrated in Fig. 15b. The desired rotational axis ω_di of the grasped object is updated in real time from the desired object orientation and the present virtual object frame as

ω_di = K_ri (r_x_vir × r_xd_vir + r_y_vir × r_yd_vir + r_z_vir × r_zd_vir). (13)

The distance l_di between the rotational axis and the contact position of each fingertip, and the desired contact force F_di that generates the desired torque around ω_di, are then computed.
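The cross-product axis-error computation described above can be sketched as follows, treating the virtual and desired frames as rotation matrices whose columns are the unit vectors named in the text (the function name and the test frames are our illustration):

```python
import numpy as np

def desired_rotation_axis(r_vir: np.ndarray, r_d: np.ndarray, k_r: float) -> np.ndarray:
    """Gain-scaled sum of cross products between the columns of the present
    virtual frame and the desired frame; zero when the frames coincide."""
    return k_r * sum(np.cross(r_vir[:, i], r_d[:, i]) for i in range(3))

# A 90-degree desired yaw about z: the resulting axis points along +z.
R = np.eye(3)
Rd = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
print(desired_rotation_axis(R, Rd, 0.5))  # [0. 0. 1.]
```

When the two frames are identical, every cross product vanishes, so the controller stops commanding rotation at the goal.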
Orientation control using contact position information with a time delay

Figure 16 shows the coordinates of the robotic hand, the grasped object, and the contact position used in the simulations. It is known that there is a considerable time delay caused by visual image acquisition from a visual sensor and its processing cost. The contact position on the object is observed using the visual-tactile sensor with a camera of up to 30 FPS. In this study, the time delay can be measured, and it corresponds to an effective rate of 20-30 FPS: the visual information from the USB camera is processed in 33-50 (ms) in LabVIEW. Namely, the detection of the object orientation can be performed within 50 (ms) at worst. To overcome the time delay, Kawamura's method is introduced, which controls the virtual object position and orientation (not the real object position and orientation) and updates them according to sensing information that includes the considerable time delay. Let t_delay be the time delay. The virtual object frame R_vir(t − t_delay) consists of the position of the center of each hemispheric fingertip, computed from the joint angles measured by the encoder embedded in each joint [16].
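One plausible construction of a virtual object frame from the three fingertip centers, in the spirit of [16] (this is our sketch under assumed conventions, not the exact definition used in the paper): the origin is the centroid of the fingertip centers and the axes are built from the triangle they span.

```python
import numpy as np

def virtual_object_frame(x1: np.ndarray, x2: np.ndarray, x3: np.ndarray):
    """Virtual object frame from three fingertip centre positions.

    Origin: centroid of the fingertip centres.
    Axes:   one edge of the fingertip triangle, the triangle normal,
            and their cross product (a right-handed orthonormal frame)."""
    origin = (x1 + x2 + x3) / 3.0
    ex = (x2 - x1) / np.linalg.norm(x2 - x1)   # along one edge of the triangle
    n = np.cross(x2 - x1, x3 - x1)             # triangle normal
    ez = n / np.linalg.norm(n)
    ey = np.cross(ez, ex)                      # completes the right-handed frame
    return origin, np.column_stack((ex, ey, ez))

o, R = virtual_object_frame(np.array([0.0, 0.0, 0.0]),
                            np.array([1.0, 0.0, 0.0]),
                            np.array([0.0, 1.0, 0.0]))
print(np.allclose(R.T @ R, np.eye(3)))  # True: the frame is orthonormal
```

Because the frame is built only from fingertip positions (via joint encoders), it can be evaluated at the delayed time t − t_delay without any external sensor.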

Initial condition of numerical simulation and results
The orientation of the object is regulated by adding the computed desired contact force F_di to the object from each fingertip through the Jacobian of each finger, as shown in Eq. (13). The desired, delayed virtual, and delayed contact frames are written column-wise as

R_d_vir = [r_xd_vir, r_yd_vir, r_zd_vir], R_vir(t − t_delay) = [r_x_vir, r_y_vir, r_z_vir], R_c(t − t_delay) = [r_cx, r_cy, r_cz]. (15)

The total control input torque for each finger u_i is composed of the summation of the orientation control input and the blind grasping control input that realizes reliable object grasping, as already used in Tahara's and Kawamura's methods [16,17]. The conditions of the numerical simulations are shown in Table 3.
The numerical simulation results of the object grasping and orientation control when using our proposed method and when using Tahara's method are shown in Fig. 17.
In our controller, the desired and present virtual frames are expressed as the unit vectors of a rotation matrix, but it is not easy to ascertain the orientation intuitively from them. In the simulation results, the orientation of the object is therefore expressed using the roll-pitch-yaw angle expression. It should be noted that the roll-pitch-yaw expression is only used for illustrating the results and never in the control input. It can be seen from these figures that the desired orientation can be realized using the proposed method, and that its error is clearly reduced compared with Tahara's method. There is an orientation error in Tahara's method because that controller does not use any external sensing information, including the contact position; therefore, the virtual object frame is only an approximate expression of the object orientation, giving a gap between the actual and virtual object orientations. Instead, our new controller uses the contact position information, which reduces the error. However, a small error still exists even when our proposed controller is used, which is induced by the design of the virtual object frame shown in Eq. (12). In this design, we never use information about the object shape. If the object shape were used in the controller, the controller would rely on prior knowledge of the object, and one of the advantages of our controller, requiring no prior knowledge, would vanish. In this regard, we can confirm that our proposed controller is advantageous compared with Tahara's method.

Experiments
In this section, several grasping and manipulation experiments were performed using a three-fingered robotic hand, in which each finger has three DOFs, equipped with the proposed visual-tactile sensors. Because of the difference in degrees of freedom, the movement of the experimental robotic hand was restricted compared with the simulated model. Therefore, in the experiments, in-hand manipulation was performed within a kinematic region in which the reduced number of DOFs would not seriously affect performance. To demonstrate the effectiveness of the proposed method, an external sensor was used to measure a ground-truth position and orientation of the object, independently of the proposed visual-tactile sensors.
As shown in Fig. 18, the robotic hand was fixed to a fixture on the ground, and an ArUco marker [19] was placed on the grasped object. The position and orientation of the ArUco marker could be determined through an external USB camera. Even though some error remains in the detection of the object position and orientation by the camera, the attitude of the grasped object is compared using only the ArUco marker; thus, the camera measurement is regarded as the ground truth in the experiment. The overall experimental setup is shown in Fig. 19.
Fig. 17 Simulation results of the object orientation control: (a, c, e) indicate the actual object orientation using Tahara's method (without the visual-tactile sensor); (b, d, f) indicate the actual object orientation using the proposed method (with the visual-tactile sensor).
The LabVIEW software by National Instruments was used to process the visual information from the visual-tactile sensor. The control signal was output to the motor driver by the CompactRIO system by National Instruments. The position and orientation of the grasped object used as the ground truth were detected by the external camera and LabVIEW. Table 4 shows the initial conditions of the experiment. The experimental results of the object orientation using our proposed method and using Tahara's method are shown in Fig. 20. In these figures, the orientation of the object is expressed by roll-pitch-yaw angles for convenience; it should be noted that the roll-pitch-yaw expression is not used in the controller. We see from Fig. 20a, b that the proposed method is more accurate in the roll and pitch directions than Tahara's sensorless control method, even though there is a time delay. Figure 20c-e show the yaw angle of the object. For the yaw angle, the accuracy of the proposed method is almost the same as that of Tahara's sensorless method.
For the yaw angle, the proposed method can still eventually achieve the desired angle. The limit in the yaw direction is only about 30 (deg) due to the mechanical restriction, and the yaw direction is expected to be relatively insensitive because the updated frame does not differ much from Tahara's virtual frame there. On the other hand, the accuracy of Tahara's method is better than that of the proposed method for some initial conditions because it acts like a feed-forward controller and is thus not affected by the accuracy of the feedback information. We can see from Fig. 20e that, in the beginning phase, there is an error in the proposed method, but it gradually decreases because the desired virtual object frame is updated by the controller. Overall, it is evident from these figures that the proposed method achieves a certain level of accuracy regardless of the initial conditions.
Through these experimental results, we can conclude that the proposed method is more accurate than Tahara's method irrespective of initial conditions.