  • Research Article
  • Open access

Kinematics of platform stabilization using a 3-PRS parallel manipulator

Abstract

In this paper, a 3-PRS (prismatic, revolute, and spherical) parallel manipulator for platform stabilization is designed. The main purpose of this device is to stabilize visual equipment placed on top of a car to inspect electrical transmission cables as part of routine maintenance. Because the infrared cameras used during inspections are bulky and heavy, the stabilizer platform has been designed to carry camera equipment weighing up to 10 kg. The device consists of two major mechanisms. The first mechanism adjusts the angle of the camera so that the user can focus it along the electric transmission lines. The second mechanism performs stabilization: it stabilizes the orientation and position of the camera in the roll, pitch, and heave directions. To test the performance of the stabilization mechanism, known angles were fed to the device as input, and the device compensated for the change in angle. The results show that the errors between the input angles and compensated angles are in the range of 0.4–3%. These errors are within an acceptable range and do not significantly affect the orientation of the camera.

Introduction

This paper introduces the use of a parallel manipulator for the inspection of electrical transmission cables. As part of routine maintenance, an infrared camera is used to monitor damage to overheated cables. Without a stabilization system, the infrared camera captures snapshots of vibrating wires, resulting in blurry images. Thus, a novel design is proposed: a stabilization platform that can hold a camera and equipment weighing up to 10 kg. Conventional practice for inspecting electric transmission lines is for an inspector to walk along the lines while holding a thermal camera focused on them. This practice requires a large workforce and considerable time. Installing a camera on top of a car solves this problem. However, when inspection is carried out while the vehicle is moving, the images and videos captured by the camera are of poor quality, because the orientation of the camera changes continuously as the vehicle moves. Whenever the vehicle encounters a bump or hole in the road, the result is jittery images and videos, and significant errors can occur, especially when a thermal camera is used. To counteract this, a stabilization model has been suggested that removes the effect of vertical motion when the vehicle experiences road bumps [1]. Since such a model must be developed specifically for a vehicle’s suspension system, this method requires the parameters of that system to be known prior to design. The gimbal mechanism is generally used to stabilize camera systems on most autonomous vehicles and UAVs [2, 3]. However, gimbal car mounts and vibration isolators, which are designed to carry a camera on the top or side of a vehicle, can only handle a single camera at a time.
When dealing with a larger system of multiple cameras, a more complicated stabilization approach is required that can activate a number of controls. The system requires multiple actuators to drive each degree of freedom of the camera holder along with stabilizing the system. Heya et al. [4] developed a three-degree-of-freedom electromagnetic actuator to perform image stabilization.

Pulli et al. [5] applied a nonlinear filter to perform real-time video stabilization using a gyroscope. The algorithm used can reduce the amount of small motion and rolling shutter distortion and smooth out large motion, thus enabling videos to be taken by mobile devices. A variety of methods can be applied to stabilize the images and videos taken [6, 7]. Sensors such as gyroscopes [8] and an inertial measurement unit (IMU) are used for hardware stabilization [9]. Votrubec [10] also introduced a control system for platform stabilization using a gyroscope. An image stabilizer, introduced by [11], which applied a template matching method with gyrosensors, has shown promising results while reducing oscillation during the dynamic motion of a walking robot.

In robotics, parallel manipulators have been widely applied in areas such as industrial robots, space exploration, satellite tracking, and military robots. The advantages of parallel robots over serial or chain manipulators are that they can handle large loads, respond faster, and achieve better accuracy [12]. Thus, to handle a weight of 10 kg, a parallel manipulator is applied to stabilize the camera system. The Stewart platform, a six degree-of-freedom (DOF) parallel mechanism, has proved to be highly effective [13]. In some applications, less motion or workspace is needed, and a modified Stewart platform offers an alternative as a reduced-DOF parallel manipulator [14]. With fewer actuators, the kinematics of three-DOF parallel manipulators have been explored [14]; examples include the DELTA robot, which provides three translational motions, and the 3-RPS (revolute-prismatic-spherical) parallel manipulator [15, 16]. A large-scale parallel robot with 5 DOF [17] was designed based on topology and dimensional synchronous optimization to perform in situ machining of steel components. As part of manipulator design, kinematic and dynamic analyses are conducted to assess the capability of the manipulator, and the kinematics of parallel robots have been analyzed to compare the advantages and performance of various designs [16, 18]. Thomas et al. [19] performed kinematic and dynamic analyses of a 3-PRUS parallel manipulator, which led to trajectory planning for the end effector. Throughout these previous designs, many attempts have been geared toward finding the best design: one that overcomes the many challenges of developing a multiple-DOF manipulator with a simple architecture and easy-to-determine inverse kinematics.

In this paper, a 3-PRS parallel robot has been designed to stabilize a camera system for an inspection vehicle. A system of multiple cameras can be controlled remotely. The stabilizer can be applied on any type of vehicle, and a model parameter of the vehicle is then not needed. The parallel manipulator has been designed to handle high acceleration and speed changes of a vehicle with the inertia of the camera system. Hence, compensation for traveling over rough terrain is performed by the manipulator.

Method

The structure of most camera stabilizers found on the market is a serial linkage. However, due to the workload limitation of such chain manipulators, a parallel linkage mechanism is more suitable here; it also has a faster response than a serial linkage because it has fewer moving parts. The device consists of two main mechanisms. First, the parallel linkage has an angle-adjusting mechanism for the platform, which helps the user find focus along the electrical transmission cables. This mechanism provides flip and yaw manipulation, designed to adjust the camera angle in the pitch (θ) and yaw (ϕ) directions, respectively. The second function of the parallel linkage is to stabilize the platform. The structure of this mechanism is a 3-PRS (prismatic-revolute-spherical) parallel mechanism originally designed by Carretero et al. [14]. Primarily, the mechanism serves to stabilize the orientation and position of the camera in the roll (ψ), pitch (θ), and heave (z) directions. In Fig. 1, an overview of the design is shown.

Fig. 1
figure 1

Overview of the device

Inverse kinematics

The 3-PRS parallel linkage consists of three identical legs. The links of each leg are connected through a prismatic, a revolute, and a spherical joint. The prismatic joint is active, while the revolute and spherical joints are passive. In Fig. 2, the vector diagram representing one leg of the 3-PRS is shown. The 3-PRS parallel linkage has three degrees of freedom (DOFs), namely, the roll (ψ), pitch (θ), and heave (z) directions. These three DOFs are sufficient to stabilize the camera platform, which is placed on top of the vehicle during use. To manipulate the moving platform of the device into the desired orientation or position, an inverse kinematic equation must be obtained.

Fig. 2
figure 2

Schematic diagram of a the joints and links of the system, b the vector diagram representing one leg of the 3-PRS, and c the operating range of the ball screw set

As shown in Fig. 2, each vector represents:

  • \({\varvec{P}}={\left[\begin{array}{ccc}x& y& z\end{array}\right]}^{T}\) is the position vector from the origin of the base frame to the origin of the moving frame.

  • \({{\varvec{a}}}_{{\varvec{i}}}={\left[\begin{array}{ccc}{a}_{ix}& {a}_{iy}& {a}_{iz}\end{array}\right]}^{T}\) is the vector from the origin of the moving frame to each attachment point of the spherical joint, \({{\varvec{S}}}_{{\varvec{i}}}\).

  • \({{\varvec{r}}}_{{\varvec{i}}}={\left[\begin{array}{ccc}{r}_{ix}& {r}_{iy}& {r}_{iz}\end{array}\right]}^{T}\) is the vector from the base frame origin to \({{\varvec{S}}}_{{\varvec{i}}}\).

  • \({{\varvec{b}}}_{{\varvec{i}}}={\left[\begin{array}{ccc}{b}_{ix}& {b}_{iy}& {b}_{iz}\end{array}\right]}^{T}\) is the vector from the origin of the base frame to the origin of the moving path of the prismatic joint. Note that the magnitude of this vector \(\Vert {{\varvec{b}}}_{{\varvec{i}}}\Vert\) is equal to \({r}_{B}\) and constant.

  • \({{\varvec{q}}}_{{\varvec{i}}}={\left[\begin{array}{ccc}{q}_{ix}& {q}_{iy}& {q}_{iz}\end{array}\right]}^{T}\) is the vector from the origin of the moving path of the prismatic joint to each attachment point of the revolute joint, \({{\varvec{R}}}_{{\varvec{i}}}\). Note that the magnitude of this vector, \(\Vert {{\varvec{q}}}_{{\varvec{i}}}\Vert\), is equal to the length of each actuator, \({Q}_{i}\), which is an acquired value from the inverse kinematic equation.

  • \({{\varvec{l}}}_{{\varvec{i}}}={\left[\begin{array}{ccc}{l}_{ix}& {l}_{iy}& {l}_{iz}\end{array}\right]}^{T}\) is the vector from \({{\varvec{R}}}_{{\varvec{i}}}\) to \({{\varvec{S}}}_{{\varvec{i}}}\). Note that the magnitude of this vector, \(\Vert {{\varvec{l}}}_{{\varvec{i}}}\Vert\), is equal to the length of the transmitted link, \(L\), and constant.

In Fig. 2, the first actuator is placed on the base of the frame. The second and third actuators are placed at angles of \(\alpha\) and \(\beta\), respectively, with respect to the x-axis. The moving path of each actuator inclines from the horizontal plane at an angle \(\gamma\). Each \({{\varvec{a}}}_{{\varvec{i}}}\) vector is expressed in the moving frame and can be written in terms of constant values, as shown below:

$${{\varvec{a}}}_{1}={\left[\begin{array}{ccc}{a}_{1x}& {a}_{1y}& {a}_{1z}\end{array}\right]}^{T}={\left[\begin{array}{ccc}{r}_{P}& 0& 0\end{array}\right]}^{T}$$
$${{\varvec{a}}}_{2}={\left[\begin{array}{ccc}{a}_{2x}& {a}_{2y}& {a}_{2z}\end{array}\right]}^{T}={\left[\begin{array}{ccc}{r}_{P}{c}_{\alpha }& {r}_{P}{s}_{\alpha }& 0\end{array}\right]}^{T}$$
$${{\varvec{a}}}_{3}={\left[\begin{array}{ccc}{a}_{3x}& {a}_{3y}& {a}_{3z}\end{array}\right]}^{T}={\left[\begin{array}{ccc}{r}_{P}{c}_{\beta }& {r}_{P}{s}_{\beta }& 0\end{array}\right]}^{T}$$

A vector can be transformed from one frame to another by multiplying it by a rotation matrix. The rotation matrix suitable for this task follows Euler’s ZXY order. Note that \(c\) denotes \(cos\) and \(s\) denotes \(sin\):

$$T={R}_{y,\theta }{R}_{x,\psi }{R}_{z,\phi }$$
$$T=\left[\begin{array}{ccc}{c}_{\theta }{c}_{\phi }+{s}_{\psi }{s}_{\theta }{s}_{\phi }& -{c}_{\theta }{s}_{\phi }+{s}_{\psi }{s}_{\theta }{c}_{\phi }& {c}_{\psi }{s}_{\theta }\\ {c}_{\psi }{s}_{\phi }& {c}_{\psi }{c}_{\phi }& -{s}_{\psi }\\ -{s}_{\theta }{c}_{\phi }+{s}_{\psi }{c}_{\theta }{s}_{\phi }& {s}_{\theta }{s}_{\phi }+{s}_{\psi }{c}_{\theta }{c}_{\phi }& {c}_{\psi }{c}_{\theta }\end{array}\right]$$
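As a quick numerical check, the explicit matrix above can be compared against the factored product \({R}_{y,\theta }{R}_{x,\psi }{R}_{z,\phi }\). The sketch below (Python with NumPy, an assumed tooling choice rather than the authors' implementation) does this for an arbitrary set of angles:

```python
import numpy as np

def rot_x(a):
    """Elementary rotation about the x-axis by angle a (rad)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    """Elementary rotation about the y-axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    """Elementary rotation about the z-axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def T_zxy(psi, theta, phi):
    """Explicit ZXY Euler rotation matrix, entry by entry as printed above."""
    c, s = np.cos, np.sin
    return np.array([
        [c(theta)*c(phi) + s(psi)*s(theta)*s(phi),
         -c(theta)*s(phi) + s(psi)*s(theta)*c(phi),
         c(psi)*s(theta)],
        [c(psi)*s(phi), c(psi)*c(phi), -s(psi)],
        [-s(theta)*c(phi) + s(psi)*c(theta)*s(phi),
         s(theta)*s(phi) + s(psi)*c(theta)*c(phi),
         c(psi)*c(theta)],
    ])

# The explicit form must equal the factored form for any angles
psi, theta, phi = 0.1, -0.2, 0.3
assert np.allclose(T_zxy(psi, theta, phi),
                   rot_y(theta) @ rot_x(psi) @ rot_z(phi))
```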

From Fig. 2, the relation from the closed-loop vector chain is obtained:

$${{\varvec{r}}}_{{\varvec{i}}}={\varvec{P}}+{\varvec{T}}{{\varvec{a}}}_{{\varvec{i}}}$$
$$\left[\begin{array}{c}{r}_{ix}\\ {r}_{iy}\\ {r}_{iz}\end{array}\right]=\left[\begin{array}{c}x\\ y\\ z\end{array}\right]+\left[\begin{array}{ccc}{T}_{11}& {T}_{12}& {T}_{13}\\ {T}_{21}& {T}_{22}& {T}_{23}\\ {T}_{31}& {T}_{32}& {T}_{33}\end{array}\right]\left[\begin{array}{c}{a}_{ix}\\ {a}_{iy}\\ {a}_{iz}\end{array}\right]$$
$${r}_{ix}=x+{T}_{11}{a}_{ix}+{T}_{12}{a}_{iy}+{T}_{13}{a}_{iz}$$
$${r}_{iy}=y+{T}_{21}{a}_{ix}+{T}_{22}{a}_{iy}+{T}_{23}{a}_{iz}$$
(1)
$${r}_{iz}=z+{T}_{31}{a}_{ix}+{T}_{32}{a}_{iy}+{T}_{33}{a}_{iz}$$

To define the configuration of the structure, constraint equations are required. They follow from the fact that each vector \({{\varvec{r}}}_{{\varvec{i}}}\) must lie in the plane of its actuator’s moving path. The constraint equations for the first, second, and third legs, respectively, are:

$${r}_{1y}=0$$
(2)
$${r}_{2y}={r}_{2x}\mathrm{tan}\left(\alpha \right)$$
(3)
$${r}_{3y}={r}_{3x}\mathrm{tan}\left(\beta \right)$$
(4)

Substituting Eqs. (2)–(4) into Eq. (1) yields Eqs. (5)–(7):

$$y+{T}_{21}{a}_{1x}+{T}_{22}{a}_{1y}+{T}_{23}{a}_{1z}=0$$
(5)
$$y+{T}_{21}{a}_{2x}+{T}_{22}{a}_{2y}+{T}_{23}{a}_{2z}=\left(x+{T}_{11}{a}_{2x}+{T}_{12}{a}_{2y}+{T}_{13}{a}_{2z}\right)\mathrm{tan}\left(\alpha \right)$$
(6)
$$y+{T}_{21}{a}_{3x}+{T}_{22}{a}_{3y}+{T}_{23}{a}_{3z}=\left(x+{T}_{11}{a}_{3x}+{T}_{12}{a}_{3y}+{T}_{13}{a}_{3z}\right)\mathrm{tan}\left(\beta \right)$$
(7)

Substituting vector \({{\varvec{a}}}_{1}\) and rotation matrix \(T\) into Eq. (5) yields:

$$y=-{c}_{\psi }{s}_{\phi }{r}_{p}$$
(8)

Substituting vector \({{\varvec{a}}}_{2}\), rotation matrix \(T\) and Eq. (8) into Eq. (6) gives:

$$x=-\left({c}_{\theta }{c}_{\phi }+{s}_{\psi }{s}_{\theta }{s}_{\phi }\right){r}_{p}{c}_{\alpha }-(-{c}_{\theta }{s}_{\phi }+ {s}_{\psi }{s}_{\theta }{c}_{\phi }){r}_{p}{s}_{\alpha }+\frac{{r}_{p}}{\mathrm{tan}\left(\alpha \right)}({c}_{\psi }{s}_{\phi }\left({c}_{\alpha }-1\right)+{c}_{\psi }{c}_{\phi }{s}_{\alpha })$$
(9)

Substituting vector \({{\varvec{a}}}_{3}\), rotation matrix \(T\), and Eqs. (8) and (9) into Eq. (7) yields:

$$\phi ={\mathrm{tan}}^{-1}\left(\frac{A}{B}\right)$$
(10)

where

$$A=\frac{1}{{c}_{\beta }}\left({s}_{\beta }{c}_{\theta }{c}_{\beta }-{s}_{\beta }{c}_{\theta }{c}_{\alpha }+{s}_{\beta }^{2}{s}_{\psi }{s}_{\theta }-{s}_{\beta }{s}_{\psi }{s}_{\theta }{s}_{\alpha }+{s}_{\beta }{c}_{\alpha }{c}_{\psi }-{c}_{\psi }{s}_{\beta }{c}_{\beta }\right)$$
$$B=\frac{1}{{s}_{\alpha }{c}_{\beta }}\left({s}_{\alpha }{c}_{\psi }{c}_{\beta }^{2}-{s}_{\alpha }{c}_{\beta }{c}_{\psi }-{s}_{\alpha }{s}_{\beta }{s}_{\psi }{s}_{\theta }{c}_{\beta }+{s}_{\alpha }{s}_{\beta }{s}_{\psi }{s}_{\theta }{c}_{\alpha }+{s}_{\alpha }{s}_{\beta }^{2}{c}_{\theta }-{s}_{\alpha }^{2}{s}_{\beta }{c}_{\theta }-{s}_{\beta }{c}_{\psi }{c}_{\alpha }^{2}+{s}_{\beta }{c}_{\psi }{c}_{\alpha }\right)$$

In Fig. 3, the plane of the moving path of the i-th actuator is shown.

Fig. 3
figure 3

The plane of moving path of i-th actuator

From Fig. 3, applying the Pythagorean theorem gives:

$${L}^{2}={\left({r}_{B}-{r}_{i}\mathrm{cos}{\theta }_{L,i}+{q}_{i}\mathrm{cos}\left({90}^{\circ }-\gamma \right)\right)}^{2}+{\left({r}_{i}\mathrm{sin}{\theta }_{L,i}-{q}_{i}\mathrm{sin}\left({90}^{\circ }-\gamma \right)\right)}^{2}$$
(11)

where \({\theta }_{L,i}={\mathrm{tan}}^{-1}\left(\frac{{r}_{iz}}{\sqrt{{r}_{ix}^{2}+{r}_{iy}^{2}}}\right)\)

Solving Eq. (11) as a quadratic in \({q}_{i}\) yields:

$${q}_{i}=\frac{-{\kappa }_{i}\pm \sqrt{{\kappa }_{i}^{2}-4{\varepsilon }_{i}{\nu }_{i}}}{2{\varepsilon }_{i}}$$
(12)

where \({\kappa }_{i}\), \({\varepsilon }_{i}\), and \({\nu }_{i}\) are defined in Eqs. (13)–(15).

$${\kappa }_{i}=2\left({r}_{B}-{r}_{i}\mathrm{cos}{\theta }_{L,i}\right)\mathrm{cos}\left({90}^{\circ }-\gamma \right)-2{r}_{i}\mathrm{sin}{\theta }_{L,i}\mathrm{sin}\left({90}^{\circ }-\gamma \right)$$
(13)
$${\varepsilon }_{i}=1$$
(14)
$${\nu }_{i}={\left({r}_{i}\mathrm{sin}{\theta }_{L,i}\right)}^{2}+{\left({r}_{B}-{r}_{i}\mathrm{cos}{\theta }_{L,i}\right)}^{2}-{L}^{2}$$
(15)

Hence, the position of the platform is defined via the kinematic equations. Equation (12) has two solutions, which implies that there could be two configurations of the mechanism, based on the inward and outward positions of the links. However, the designed mechanism has limitations. The device is designed to work with a regular vehicle and can handle the Watts profile hump with a maximum height of 76.2 mm. The platform can be adjusted within a 20–90 degree range in the yaw direction for applications that track the electrical cables.
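The leg-length computation of Eqs. (11)–(15) can be sketched in a few lines. The geometry values below (\({r}_{B}\), \(L\), \(\gamma\)) are illustrative assumptions, not the authors' actual dimensions; the function solves the quadratic of Eq. (12) for the actuator length \({q}_{i}\), given the position of the spherical joint in the base frame:

```python
import numpy as np

# Illustrative geometry (assumed values, not the authors' actual dimensions)
r_B   = 0.20                  # base radius ||b_i|| (m)
L     = 0.25                  # transmitted link length (m)
gamma = np.deg2rad(60.0)      # incline of the actuator's moving path from horizontal

def actuator_length(r_i):
    """Solve the quadratic of Eq. (12) for the actuator length q_i, given the
    spherical-joint position r_i expressed in the base frame (Eqs. 13-15)."""
    r = np.linalg.norm(r_i)
    # theta_L,i = atan(r_iz / sqrt(r_ix^2 + r_iy^2)), as defined below Eq. (11)
    theta_L = np.arctan2(r_i[2], np.hypot(r_i[0], r_i[1]))
    c = np.cos(np.pi/2 - gamma)   # cos(90 deg - gamma)
    s = np.sin(np.pi/2 - gamma)   # sin(90 deg - gamma)
    kappa = 2*(r_B - r*np.cos(theta_L))*c - 2*r*np.sin(theta_L)*s        # Eq. (13)
    eps   = 1.0                                                          # Eq. (14)
    nu    = (r*np.sin(theta_L))**2 + (r_B - r*np.cos(theta_L))**2 - L**2 # Eq. (15)
    disc = kappa**2 - 4*eps*nu
    if disc < 0:
        raise ValueError("pose is out of the actuator's reach")
    # Eq. (12) has two roots; the '+' root is taken here. Which root matches
    # the physical configuration depends on the inward/outward link placement.
    return (-kappa + np.sqrt(disc)) / (2*eps)
```

A returned value can be verified by substituting it back into Eq. (11), which must recover \({L}^{2}\).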

Orientation and position estimation

To stabilize the orientation and position of the camera, the changing orientation and position of the platform’s base must first be determined. An accelerometer is used to estimate the changes in position, while the IMU sensor is used to estimate the change in orientation. In addition, a sensor fusion process is applied to both data streams to enhance accuracy. The changes in orientation and position are then substituted into the inverse kinematic equations of the 3-PRS mechanism. Accordingly, the actuators drive the platform to the desired position, which compensates for the change in the pose of the platform’s base caused by the motion of the vehicle. Eventually, both the orientation and position of the moving platform reach the desired values.

A. Orientation estimation by sensor fusion

To determine orientation by observing the gravity vector, an accelerometer is used. A gyroscope is used to find the change in orientation via dead reckoning. It is noted, however, that neither the accelerometer nor the gyroscope alone provides data of sufficient quality to be applied directly to stabilize the system. The accelerometer measures not only gravitational acceleration but also acceleration arising from external forces or movement. If any external force or movement acts on the equipment while the sensor is reading, the orientation data are considered unreliable. Regarding the gyroscope, dead reckoning requires the previous orientation to estimate the current one. Noise and bias are usually present in the sensor readings; when they enter the orientation calculation, errors accumulate and eventually cause the orientation to drift over time.

To reduce the undesired effects based on each sensor, sensor fusion can be set in motion. The algorithm that is used to fuse the orientation data from the accelerometer and the gyroscope is a complementary filter. The following equations are used to find the roll and pitch filtered angle by the IMU sensor:

$${\psi }_{i}=\alpha *({\psi }_{i-1}+\dot{\psi }\Delta T)+\left(1-\alpha \right)*{\mathrm{tan}}^{-1}\left(\frac{{G}_{py}}{{G}_{pz}}\right)$$
(16)
$${\theta }_{i}=\alpha *({\theta }_{i-1}+\dot{\theta }\Delta T)+\left(1-\alpha \right)*{\mathrm{tan}}^{-1}\left(\frac{{-G}_{px}}{\sqrt{{G}_{py}^{2}+{G}_{pz}^{2}}}\right)$$
(17)

where \(\alpha\) weight ratio (value between 0 and 1).

\(\Delta T\) time step (s).

\({\psi }_{i},{\theta }_{i}\) roll and pitch angles in the current time step, respectively (rad).

\({\psi }_{i-1},{\theta }_{i-1}\) roll and pitch angles in the previous time step, respectively (rad).

\(\dot{\psi },\dot{\theta }\) rate of change of roll and pitch angle, respectively, acquired from the gyroscope (rad/s).

\({G}_{px},{G}_{py},{G}_{pz}\) gravitational acceleration in the x-, y-, and z-axes, respectively, acquired from the accelerometer (m/s2).
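A minimal implementation of the complementary filter of Eqs. (16)–(17) is sketched below. The 100 Hz sample rate is an assumption for illustration; \(\alpha = 0.96\) is the weight the authors report. `arctan2` is used instead of a plain arctangent to keep the quadrants correct:

```python
import numpy as np

ALPHA = 0.96   # weight ratio selected by the authors via trial and error
DT    = 0.01   # sample time (s); a 100 Hz rate is assumed here

def complementary_filter(psi_prev, theta_prev, gyro, accel):
    """One step of Eqs. (16)-(17).

    gyro  = (psi_dot, theta_dot): roll/pitch rates from the gyroscope (rad/s)
    accel = (Gpx, Gpy, Gpz): gravitational acceleration components (m/s^2)
    Returns the filtered (roll, pitch) for the current time step (rad)."""
    psi_dot, theta_dot = gyro
    gpx, gpy, gpz = accel
    # accelerometer-only angles (observation of the gravity vector)
    psi_acc   = np.arctan2(gpy, gpz)
    theta_acc = np.arctan2(-gpx, np.hypot(gpy, gpz))
    # blend the gyroscope dead-reckoning term with the accelerometer term
    psi   = ALPHA*(psi_prev + psi_dot*DT)     + (1 - ALPHA)*psi_acc
    theta = ALPHA*(theta_prev + theta_dot*DT) + (1 - ALPHA)*theta_acc
    return psi, theta
```

With zero angular rates, repeated filter steps converge geometrically to the accelerometer angle, which is the drift-free behavior the filter is designed for.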

Figure 4 shows that the orientation values from the accelerometer spike when an external force is exerted, while the orientation data from the gyroscope drift over time. Hence, the data from both the accelerometer and the gyroscope are fused through the complementary filter to determine the orientation. The integration is started at the point where the bump is detected. The filtered orientation from sensor fusion appears unaffected by either problem. The weight ratio \(\alpha\) was set to 0.96 based on trial and error.

Fig. 4
figure 4

The orientation data as measured from accelerometer, gyroscope, and complementary filter

B. Position estimation by accelerometer

In this paper, the changing position is acquired by integrating the acceleration data from the accelerometer. Data that have been fed to operate the 3-PRS mechanism must be expressed in an inertial frame, while the reading data from the sensor are expressed in a sensor frame, which is moving with the platform. Therefore, the rotation matrix must be multiplied to transform the coordinates of the data.

The selected rotation sequence from the inertial frame to the sensor frame is ZYX. Therefore, the rotation sequence from the sensor frame to the inertial frame is XYZ. The rotation matrix from the sensor frame to the inertial frame in XYZ order is:

$${R}_{B}^{I}=\left[\begin{array}{ccc}{c}_{\theta }{c}_{\phi }& {c}_{\phi }{s}_{\theta }{s}_{\psi }-{c}_{\psi }{s}_{\phi }& {c}_{\psi }{c}_{\phi }{s}_{\theta }+{s}_{\psi }{s}_{\phi }\\ {c}_{\theta }{s}_{\phi }& {c}_{\phi }{c}_{\psi }+{s}_{\theta }{s}_{\phi }{s}_{\psi } & {c}_{\psi }{s}_{\theta }{s}_{\phi }-{c}_{\phi }{s}_{\psi }\\ -{s}_{\theta }& {c}_{\theta }{s}_{\psi }& {c}_{\theta }{c}_{\psi }\end{array}\right]$$

Acceleration that was integrated to find the changing position must be linear acceleration only (acceleration from movement). Consequently, the gravitational acceleration is compensated:

$${a}_{I}={R}_{B}^{I}{a}_{m}+\left[\begin{array}{c}0\\ 0\\ g\end{array}\right]$$
$$\left[\begin{array}{c}{a}_{Ix}\\ {a}_{Iy}\\ {a}_{Iz}\end{array}\right]={R}_{B}^{I}\left[\begin{array}{c}{a}_{mx}\\ {a}_{my}\\ {a}_{mz}\end{array}\right]+\left[\begin{array}{c}0\\ 0\\ g\end{array}\right]$$

where \({a}_{I}\) acceleration in inertial frame (m/s2).

\({a}_{m}\) acceleration in sensor frame, acquired from accelerometer (m/s2).

\(g\) gravitational acceleration (9.81 m/s2). The velocity and position are then obtained by integrating the gravity-compensated acceleration in the inertial frame:

$${v}_{i}={v}_{i-1}+{a}_{i-1}\Delta T$$
(18)
$${p}_{i}={p}_{i-1}+{v}_{i-1}\Delta T$$
(19)

where \({v}_{i},{p}_{i}\) velocity and position in the current time step, respectively. \({a}_{i-1},{v}_{i-1},{p}_{i-1}\) acceleration, velocity, and position in the previous time step and inertial frame, respectively. \(\Delta T\) time step (s).
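The gravity compensation and the forward-Euler integration of Eqs. (18)–(19) can be combined into one routine. This is a sketch (Python/NumPy is an assumed tooling choice); following the equations literally, each step uses the previous step's acceleration and velocity:

```python
import numpy as np

G = 9.81  # gravitational acceleration (m/s^2)

def R_body_to_inertial(psi, theta, phi):
    """XYZ-order rotation matrix R_B^I (roll psi, pitch theta, yaw phi),
    entry by entry as printed above."""
    c, s = np.cos, np.sin
    return np.array([
        [c(theta)*c(phi), c(phi)*s(theta)*s(psi) - c(psi)*s(phi), c(psi)*c(phi)*s(theta) + s(psi)*s(phi)],
        [c(theta)*s(phi), c(phi)*c(psi) + s(theta)*s(phi)*s(psi), c(psi)*s(theta)*s(phi) - c(phi)*s(psi)],
        [-s(theta),       c(theta)*s(psi),                        c(theta)*c(psi)],
    ])

def integrate_position(samples, dt):
    """samples: iterable of (a_m, (psi, theta, phi)) pairs, with a_m the
    accelerometer reading in the sensor frame. Returns final velocity/position."""
    v = np.zeros(3)
    p = np.zeros(3)
    a_prev = np.zeros(3)
    for a_m, (psi, theta, phi) in samples:
        # gravity compensation: a_I = R_B^I a_m + [0, 0, g]
        a_I = R_body_to_inertial(psi, theta, phi) @ np.asarray(a_m, float) \
              + np.array([0.0, 0.0, G])
        p = p + v*dt        # Eq. (19): p_i = p_{i-1} + v_{i-1} dT
        v = v + a_prev*dt   # Eq. (18): v_i = v_{i-1} + a_{i-1} dT
        a_prev = a_I
    return v, p
```

For a level, stationary sensor the accelerometer reads \((0, 0, -g)\), the compensated acceleration is zero, and the integrated velocity and position stay at zero, as expected.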

The position acquired from the accelerometer suffers from a problem similar to that of the orientation obtained from the gyroscope: over time, the data drift away from the true values. To solve this problem, an algorithm that defines when to start and stop the integration process is presented.

In this paper, a parallel manipulator stabilizes the changing position in the z-axis when a car is passing over road bumps. In Fig. 5, the experimental data show that when the car is passing over road bumps, the angular velocity in the lateral axis of the car (y-axis) spikes. From this information, the threshold of angular velocity in the y-axis is set to categorize the motion of the car and is used to identify whether the car is moving on a normal road surface or passing over a bumpy road. The upper threshold (black dashed lines) in Fig. 5 serves as the cut off line.

Fig. 5
figure 5

Angular velocity while the car is passing over a bump

The bump used in this study is the Watts profile hump, the slope of which is presented in Fig. 6. The Watts profile hump has a maximum height of 76.2 mm and a width of 3650 mm. A test was conducted to record the angular velocity profile while the vehicle passed over a bump. If the angular velocity exceeds the upper threshold (the cut-off line), integration to determine the change in position is initiated. However, as shown in Eqs. (18) and (19), previous time-step terms are needed. Figure 5 shows that point 2 is where the algorithm indicates that the car is passing over a bump, but the actual starting point is point 1, which means that the previous time-step values must be found first. They can be found by the following steps. First, when the angular velocity in the y-axis exceeds the upper threshold (point 2), the program runs backward to find the angular velocity that was within the lower threshold (point 1). Then, the time of point 1 is noted. After that, the acceleration is integrated from point 1 to point 2 to find the velocity and position. The velocity and position resulting from this integration serve as the previous time-step values.

Fig. 6
figure 6

Watts profile hump is used as a model for a bump

The flowchart (Fig. 7) shows the process used to determine the change in position. Integration from point 2 is carried out continuously until the calculated position becomes negative. The algorithm assumes that the heights of the road before and after the bump are approximately the same. The level of the normal road surface serves as a datum: the sign of the height is positive above the datum and negative below it. The calculated position should never be negative, since the device only stabilizes when the car is passing over bumps, excluding the case where the car is passing over potholes.
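The start/stop logic of the flowchart can be sketched as follows. The threshold values and the pure-Python implementation are assumptions for illustration; the actual thresholds were tuned from the recorded data in Fig. 5:

```python
# Assumed thresholds (rad/s); the actual values depend on the vehicle and sensor
UPPER = 0.3    # upper threshold on the y-axis angular velocity (bump detected)
LOWER = 0.05   # lower threshold used to locate the true start of the bump

def heave_during_bump(gyro_y, accel_z, dt):
    """Sketch of the flowchart in Fig. 7: start integrating vertical acceleration
    when gyro_y crosses the upper threshold, backtrack to where it last sat below
    the lower threshold (point 1), and stop once the height returns below the datum."""
    # 1. detect the bump: first sample whose |angular velocity| exceeds UPPER (point 2)
    i = next((k for k, w in enumerate(gyro_y) if abs(w) > UPPER), None)
    if i is None:
        return []          # no bump detected: no integration is performed
    # 2. run backward to the last sample within the lower threshold (point 1)
    start = i
    while start > 0 and abs(gyro_y[start - 1]) > LOWER:
        start -= 1
    # 3. integrate the vertical acceleration from point 1 onward
    v = z = 0.0
    heights = []
    for a in accel_z[start:]:
        z += v * dt        # Eq. (19)
        v += a * dt        # Eq. (18)
        if z < 0:
            break          # road level regained: stop integrating
        heights.append(z)
    return heights
```

On a synthetic trace (a quiet stretch, a spike in the pitch rate, then an up/down acceleration pulse), the routine starts at the backtracked sample and cuts off as soon as the computed height crosses below the datum.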

Fig. 7
figure 7

Flowchart of algorithm for finding changing position via the accelerometer

Results and discussion

Two experiments were conducted to test the performance of the stabilization mechanism. In Fig. 8, the working process of the stabilization mechanism is shown. The allowed difference between the actual orientation and the desired orientation of the moving platform was set at 1 degree. This value is based on the acceptable error while tracking electrical transmission cables at a height of 10 m, located 3–6 m away from the vehicle.

Fig. 8
figure 8

The flow chart shows the process whereby the stabilization mechanism works to stabilize the camera platform

1) Feeding the known values of angle as input

For the first experiment, the device was fed with the known values of the angles as input. The angles of the moving platform are measured by the IMU sensor. The measured angles and input were then compared. In Fig. 9, the testing procedure is shown. In Table 1, the results of the measured angles and the input angles are shown.

Fig. 9
figure 9

a The device was tested with the known angles and b the moving platform

Table 1 The measured angles compared to the input angles along with the errors

In Table 1, the errors between the input angles and measured angles are shown to be in the range of 0.4–3%. This range of errors is acceptable, since errors of this extent do not significantly affect the orientation of the camera. For example, if the orientation of the base platform changes by 6 degrees, the moving platform compensates by 6.18 degrees on average (from the experiment). The resulting error of 0.18 degrees does not significantly affect the orientation of the camera.

Errors can arise from two sources: the friction and clearance of the mechanism, and the resolution of the motors. On the hardware side, errors are also caused by inaccuracy in measuring the architecture parameters of the 3-PRS structure; for example, vector \({b}_{i}\) in Fig. 2 is quite difficult to measure by hand since some parts of the structure are obstructed. Regarding the resolution of the motors, the motors are programmed to drive only when the difference between the actual and desired length of an actuator exceeds 1 mm. If more resolution is needed, this value can be reduced, but it should not be made too small, or the device will respond to noise instead of real signals.

2) Testing on the road

For the second experiment, the device was installed on the roof of the car, with the camera attached to the moving platform of the device. While the car passed over bumps at a constant speed of 10 km/hr, pictures were captured by the camera. In Fig. 10, the installation of the device on top of the car is shown.

Fig. 10
figure 10

The installation of the device on top of the vehicle during a test

In Fig. 11, pictures captured by the camera are displayed. Taking the roof of the car parking structure as a reference line (the red line in Fig. 11), it can be seen that the device compensates for the change in orientation and position of the camera when the front wheels pass over the bumps, as shown in Fig. 11b: the black dashed line, which represents the image of the roof, remains parallel to the red line. However, when the rear wheels pass over the bumps, the device stops compensating for the change in position (the black dashed line and the red line are no longer parallel) because the algorithm used to find the changing position halts the integration of acceleration. As noted in Sect. 3.2, the integration of acceleration stops when the sign of the position becomes negative. This behavior shows that the positions acquired from integration are rather inaccurate: when the car is passing over bumps, no point of the car can truly have a negative position, that is, be below the road surface.

Fig. 11
figure 11

Captured images from the camera when the car is passing over bumps

The inaccuracy of the changing position acquired from the accelerometer is caused by two factors. The first is that the integrated acceleration is not purely linear acceleration: the accelerometer attached to the car is affected by external forces, viz. vibration from the engine, vibration from the roughness of the road surface, and the impact between the car and the bumps. These forces degrade the accuracy of the calculated change in position.

The second factor is that, as seen in the gravity-compensation equation, the inertial acceleration is a function of the rotation matrix \({R}_{B}^{I}\). If the orientation acquired from sensor fusion is inaccurate, the accuracy of the calculated change in position suffers as well.

Conclusion

In this paper, a parallel manipulator for platform stabilization was designed and implemented, and experiments were conducted to test the performance of the stabilization mechanism. In the first experiment, known angles were fed to the device as input, and the angle of the moving platform was measured. The errors between the input angles and platform angles were found to be in the range of 0.4–3%; they were caused by inaccuracy in the sensor data, in the structural parameters, and in the driven motors. In the second experiment, the camera was attached to the moving platform of the device, which was installed on top of a car moving at a constant speed of 10 km/hr over bumps. The images and videos captured by the camera show that the device was able to compensate for the change in orientation and position when the front wheels passed over the bumps. However, when the rear wheels passed over the bumps, the device stopped compensating for the change in position due to the inaccuracy of the calculated changing position. That accuracy depends on two main factors. First, the acceleration integrated to find the position must be linear acceleration only; any external force acting on the sensor degrades the accuracy of the calculated change in position. Second, the accuracy of the orientation also affects the calculated change in position, since the inertial acceleration is a function of the rotation matrix.

After testing the device, it is noted that there is still room for improvement; the following paragraphs present possible methods for improving its performance. The current algorithm used to estimate orientation is a complementary filter. This algorithm suits a static system, because the orientation acquired from an accelerometer is valid only when the sensor is not accelerating. The device, however, becomes dynamic when the vehicle is moving. A Kalman filter may therefore be more suitable, because its prediction stage incorporates a dynamic model and corrects the prediction with measurements from the sensors, which can yield a more accurate orientation estimate.
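The complementary filter discussed above can be sketched for a single tilt angle as follows; the function name, gain value, and single-axis form are illustrative assumptions, not the authors' implementation.

```python
def complementary_filter(theta_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    """One update step of a complementary filter for a single tilt angle.

    theta_prev  : previous angle estimate [rad]
    gyro_rate   : angular rate from the gyroscope [rad/s]
    accel_angle : angle computed from the accelerometer, valid only
                  when the sensor is quasi-static [rad]
    alpha       : blend factor; acts as a high-pass on the integrated
                  gyro angle and a low-pass on the accelerometer angle
    """
    return alpha * (theta_prev + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# One step with no gyro motion: the estimate moves 2% of the way
# toward the accelerometer angle.
theta = complementary_filter(0.0, 0.0, 0.1, 0.01)
```

In a moving vehicle, `accel_angle` is corrupted by linear acceleration, which is exactly the weakness that motivates replacing this filter with a Kalman filter whose prediction stage models the vehicle dynamics.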

The change in position of the platform was estimated by integrating the acceleration read from the accelerometer. For this to work, the integrated acceleration must be linear acceleration only; when the sensor is attached to a moving car, many external forces act on it and degrade the accuracy of the calculated change in position. Even during a smooth ride along the road, changes in acceleration can arise from motor noise and other disturbances. To eliminate or reduce this problem, other types of sensors can be added to improve accuracy. For example, autonomous cars visualize objects around the vehicle with a LIDAR sensor; our device could use the same principle to detect a bump or pothole ahead and predict the change in position before it occurs. It is also recommended that a vibration isolator be added to the platform to attenuate small-amplitude noise. An alternative method for measuring the in-progress response of the moving platform is an optical motion tracker.
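The double integration described above can be sketched as follows (the function, the trapezoidal rule, and the sample values are illustrative assumptions, not taken from the paper); it also makes the error growth concrete, since a constant bias b in the acceleration produces a displacement error of b·t²/2.

```python
import numpy as np

def integrate_position(accel, dt):
    """Estimate displacement by twice integrating a uniformly sampled
    linear-acceleration series with the trapezoidal rule.

    accel : (N,) linear acceleration samples [m/s^2]
    dt    : sampling period [s]
    Returns the (N,) displacement series starting from rest at 0.
    """
    # velocity: trapezoidal integration of acceleration
    v = np.concatenate(([0.0], np.cumsum((accel[1:] + accel[:-1]) * dt / 2.0)))
    # displacement: trapezoidal integration of velocity
    x = np.concatenate(([0.0], np.cumsum((v[1:] + v[:-1]) * dt / 2.0)))
    return x

# 1 m/s^2 held for 1 s from rest should give x = 0.5 m at t = 1 s.
x = integrate_position(np.ones(101), 0.01)
```

Feeding the same function a small constant bias instead of a true signal shows why the estimate drifts: the quadratic growth means even millinewton-scale disturbances from the motors or road surface accumulate quickly.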

Availability of data and materials

Not applicable.

References

  1. Shih FY, Stone A (2010) A new image stabilization model for vehicle navigation. Positioning 1(1):8–17

  2. Jędrasiak K, Bereska D, Nawrat A (2013) The Prototype of Gyro-Stabilized UAV Gimbal for Day-Night Surveillance. In: Nawrat A, Simek K, Świerniak A (eds) Advanced Technologies for Intelligent Systems of National Border Security. Springer, Heidelberg, Berlin

  3. Gašparović M, Jurjević L (2017) Gimbal influence on the stability of exterior orientation parameters of UAV acquired images. Sensors. https://doi.org/10.3390/s17020401

  4. Heya A, Hirata K (2020) Experimental verification of three-degree-of-freedom electromagnetic actuator for image stabilization. Sensors 20(9):2485

  5. Bell S, Troccoli A, Pulli K (2014) A non-linear filter for gyroscope-based video stabilization. In: 13th European conference on computer vision (ECCV-2014) Zurich, Switzerland, 6–12 September 2014, pp 294–308

  6. Jia C, Evans B (2013) 3D rotational video stabilization using manifold optimization. In: 2013 IEEE international conference on acoustics, speech and signal processing, 26–31 May 2013, pp 2493–2497

  7. Han F, Xie L, Yin Y, Zhang H, Chen G, Lu S (2021) Video stabilization for camera shoot in mobile devices via inertial-visual state tracking. IEEE Trans Mob Comput 20:1714–1729

  8. Sivčák M, Škoda J (2011) Substitution of Gyroscopic Stabilizer Correction Motor by Active Control of Pneumatic Spring. Springer, Dordrecht

  9. Zong Y-t, Jiang X-y, Song X-s, Wang X, Liu Z-x (2010) Design of a two axes stabilization platform for vehicle-borne opto-electronic imaging system. In: International conference on computational problem-solving, 3–5 December 2010, Li Jiang, China

  10. Votrubec R (2014) Stabilization of Platform Using Gyroscope. Procedia Engineering 69:410–414

  11. Kurazume R, Hirose S (2000) Development of image stabilization system for remote operation of walking robots. In: Proceedings 2000 ICRA millennium conference IEEE international conference on robotics and automation symposia proceedings, 24–28 April 2000, San Francisco, CA, USA

  12. Fatehi MH, Vali AR, Eghtesad M, Fatehi AA, Zarei J (2011) Kinematic analysis of 3-PRS parallel robot for using in satellites tracking system. In: The 2nd international conference on control, instrumentation and automation, 27–29 December 2011, Shiraz, Iran

  13. Gexue R, Qiuhai L, Ning H, Rendong N, Bo P (2004) On vibration control with Stewart parallel mechanism. Mechatronics 14(1):1–13

  14. Carretero JA, Podhorodeski RP, Nahon MA, Gosselin CM (1999) Kinematic analysis and optimization of a new three degree-of-freedom spatial parallel manipulator. J Mech Des 122(1):17–24

  15. Li Y, Xu Q (2007) Kinematic analysis of a 3-PRS parallel manipulator. Robot Comput-Integr Manuf 23(4):395–408

  16. Hu B (2016) Kinematically identical manipulators for the Exechon parallel manipulator and their comparison study. Mech Mach Theory 103:117–137

  17. Chen K, Wang M, Huo X, Wang P, Sun T (2023) Topology and dimension synchronous optimization design of 5-DoF parallel robots for in-situ machining of large-scale steel components. Mech Mach Theory 179:105105

  18. Hu B, Shi D, Xie T, Hu B, Ye N (2020) Kinematically identical manipulators derivation for the 2-RPU + UPR parallel manipulator and their constraint performance comparison. J Mech Robot. https://doi.org/10.1115/1.4047540

  19. Thomas MJ, Joy ML, Sudheer AP (2020) Kinematic and dynamic analysis of a 3-PRUS spatial parallel manipulator. Chin J Mech Eng 33(1):13

Acknowledgements

The authors would like to thank the Department of Mechanical Engineering at KMUTT for partial funding support and the use of equipment. This research project is supported by Thailand Science Research and Innovation (TSRI). Basic Research Fund: Fiscal year 2021 under project number 64A306000032.

Funding

This research project is supported by Thailand Science Research and Innovation (TSRI). Basic Research Fund: Fiscal year 2021 under project number 64A306000032.

Author information

Authors and Affiliations

Authors

Contributions

All research and experiments were conducted by TU, SC, and SL. TC conceptualized the research, drafted the paper, and verified the design and experiments.

Corresponding author

Correspondence to Teeranoot Chanthasopeephan.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Udomsap, T., Chinchouryrang, S., Liampipat, S. et al. Kinematics of platform stabilization using a 3-PRS parallel manipulator. Robomech J 10, 8 (2023). https://doi.org/10.1186/s40648-023-00247-x
