
A Leader-follower formation control of mobile robots by position-based visual servo method using fisheye camera

Abstract

This paper presents a leader-follower formation control of multiple mobile robots by a position-based method using a fisheye camera. A fisheye camera has a wide field of view and can recognize objects over a wide range. In this paper, the fisheye camera is first modeled on spherical coordinates, and a position estimation technique using an AR marker is then proposed based on the spherical model. This paper furthermore presents a method for estimating the velocity of a leader robot based on a disturbance observer using the obtained position information. The proposed techniques are combined with a formation control based on the virtual structure. The formation controller and velocity estimator can be designed independently, and the stability analysis of the total system is performed by using the Lyapunov theorem. The effectiveness of the proposed method is demonstrated by simulation and experiments using two real mobile robots.

Introduction

Formation control of mobile robots has received much attention in recent years. The purpose of formation control is to realize a specified shape and achieve tasks while maintaining the formation. For example, applying this technology to automated convoy transport by multiple trucks is expected to reduce transportation costs [1]. Satellite formation flying [2] and mapping by multiple mobile robots [3] have also been studied. To realize formation control of mobile robots, it is necessary to measure the relative distance between robots, their velocities, attitude angles, and so on. In recent years, studies have treated mobile robots as multi-agent systems and formulated the formation of a specific shape, achieved by exchanging information among agents through a network, as an inter-agent consensus problem [4, 5]. These studies assume the existence of network communication that enables information exchange between robots. On the other hand, formation control that does not require communication has also been considered, in which the relative position and velocity are measured and estimated by sonar sensors and laser range finders mounted on the robot [6].

With the improvement of image processing technology, the realization of formation control using a camera instead of a distance sensor has been considered [7,8,9]. By applying image recognition technology, flexible formation changes and collision avoidance are expected because not only distance but also object recognition can be utilized. In this paper, we consider formation control based on the image information of a camera. Formation control techniques based on image information are divided into the position-based method [7, 8] and the image-based method [9]. In the former, markers mounted on a mobile robot are detected by the camera; the position and attitude are then estimated and utilized to control the robot. The latter, on the other hand, is robust against camera calibration errors because it does not reconstruct physical quantities such as positions from the image information. However, since the state equation is generally represented based on an image Jacobian matrix, the control system has to be designed taking into account the nonlinear and coupled properties of that matrix. Many studies of formation control based on image information use the standard lenses of commercially available cameras, whose narrow field of view may cause the tracked object to be lost.

To address these issues, this paper presents a formation control using a fisheye lens camera to enlarge the field of view of the robot. Since the image information obtained by the fisheye camera is distorted, the control system design in the image-based method becomes complicated. In this paper, we therefore consider a position estimation method using a fisheye camera and then a formation control method based on the estimated position.

The projection of the fisheye camera is represented by a spherical model in order to deal with the distortion of the image. The paper then estimates the relative position of the mobile robot to be tracked based on this model. Next, we discuss velocity estimation in order to achieve good formation travelling. Here, the velocity of the leader can be regarded as a disturbance in the relative motion model between two robots. A disturbance observer can deal with nonlinear systems without linearization and is easier to implement in control systems than a Kalman filter. There is little reported research on formation control using disturbance observers for velocity estimation based on image information. Therefore, the paper proposes a velocity estimation method based on a disturbance observer. We then present the formation control based on the position-based method. By utilizing the disturbance observer, the control problem and the velocity estimation problem can be solved independently. We perform the stability analysis of the whole system. The proposed method is verified by simulation and experiments using real mobile robots.

This paper is structured as follows. In Sect. "Marker recognition and position estimation using a fisheye camera", we use AR markers recognized by a fisheye camera and propose a method for estimating the relative positions between robots from the marker data. Section "Velocity estimation by using disturbance observer and formation control based on virtual structure" considers formation control of the leader-follower type; the velocity estimation of the leader based on a disturbance observer and the virtual structure formation control are presented. In Sect. "Stability analysis", we perform the stability and boundedness analysis of the proposed formation control. Section "Simulation and experimental evaluations" shows the simulation results of formation control using the proposed method. In addition, experimental results of the velocity estimation by the disturbance observer and the formation control are presented, and the effectiveness of the proposed method is verified. Finally, Sect. "Conclusion" summarizes this paper.

Marker recognition and position estimation using a fisheye camera

Fisheye camera model

Figure 1 shows the projection model of a fisheye camera. The projection model of a camera with a standard lens is modeled based on the principle of a pinhole camera. In this principle, geometric properties such as the similarity of the shape of objects projected onto the image plane are invariant. The fisheye camera increases the field of view as the projection angle changes from \(\theta\) to \(\theta _f\). However, this causes a distorted projection of the object’s shape, and the geometric properties are not preserved.

Fig. 1 Fisheye camera model

Fig. 2 Sphere model

Fig. 3 Marker and sphere model coordinate system

Fig. 4 USB camera attached with a fisheye lens

There are some works that cope with this image distortion. The primary method is to project the feature quantities of the image onto a spherical model [10,11,12]. This paper also adopts the spherical model; its coordinate system, used for the marker position estimation, is shown in Fig. 2. Consider the case where the vector \({{{{\varvec{P}}}}}_f\), viewed from the camera coordinate system, is projected to the point \({{{{\varvec{p}}}}}_f\) on the image plane of the fisheye camera. The projection of a fisheye camera is described by the angle \(\theta _f\) between the projection ray from the object and the optical axis, and by the image height \(r_f\) on the image plane. In this paper, the projection scheme of a fisheye camera is expressed by the following approximate formula.

$$\begin{aligned} r_f \approx f(\theta _f + k_1\theta ^3_f+k_2\theta ^5_f+ k_3\theta ^7_f +k_4\theta ^9_f), \end{aligned}$$
(1)

where f is the camera focal length. The coefficients \(k_1\), \(k_2\), \(k_3\) and \(k_4\) of each term in (1) are derived from camera calibration of the fisheye camera.

We denote \({{{{\varvec{p}}}}}_f=[x_f~~~y_f]^T\) as the coordinate position (feature point) on the image plane of the fisheye camera and \({{{{\varvec{c}}}}}=[c_u~~c_v]^T\) as the optical axis point on the image plane. The image height \(r_f\) of the feature \({{{{\varvec{p}}}}}_f\) and the angle \(\phi _f\) measured from the \(x_f\)-axis can be calculated by the following formulas.

$$\begin{aligned} r_f= & {} \sqrt{(x_f - c_u)^2+(y_f-c_v)^2}, \end{aligned}$$
(2)
$$\begin{aligned} \phi _f = \tan ^{-1}\left( \frac{y_f-c_v}{x_f - c_u}\right) . \end{aligned}$$
(3)

Furthermore, we assume that the image obtained by the fisheye camera is projected onto a spherical surface with a radius of 1. From the feature \({{{{\varvec{p}}}}}_f\), we calculate the image height \(r_f\) and the angle \(\phi _f\) by Eqs. (2) and (3), substitute \(r_f\) into Eq. (1), and solve the polynomial to obtain the angle \(\theta _f\).
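As a concrete illustration, the following minimal sketch (Python with NumPy; the helper name and the root-selection rule are our own choices, not from the paper) maps a pixel to the angles \(\theta _f\) and \(\phi _f\) by Eqs. (1)-(3), solving the ninth-order polynomial of Eq. (1) numerically:

```python
import numpy as np

def pixel_to_angles(p_f, c, f, k):
    """Map an image point to (theta_f, phi_f) following Eqs. (1)-(3).

    p_f : (x_f, y_f) pixel coordinates of the feature point
    c   : (c_u, c_v) principal point (optical axis) in pixels
    f   : focal length from the fisheye calibration
    k   : (k1, k2, k3, k4) distortion coefficients of Eq. (1)
    """
    dx, dy = p_f[0] - c[0], p_f[1] - c[1]
    r_f = np.hypot(dx, dy)                 # Eq. (2)
    phi_f = np.arctan2(dy, dx)             # Eq. (3), quadrant-safe arctangent
    # Eq. (1): r_f = f*(theta + k1*theta^3 + ... + k4*theta^9).
    # numpy.roots takes coefficients from the highest power down.
    coeffs = [f * k[3], 0, f * k[2], 0, f * k[1], 0, f * k[0], 0, f, -r_f]
    roots = np.roots(coeffs)
    # keep the real root inside the physically valid range [0, pi]
    real = roots[np.isclose(roots.imag, 0.0)].real
    theta_f = real[(real >= 0) & (real <= np.pi)].min()
    return theta_f, phi_f
```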

Position estimation based on marker recognition

We describe how to estimate the position and orientation from the marker information recognized by image processing. We assume that the marker is a square and that the length of one side is known. First, the four corner features of the recognized marker are projected onto points on a spherical model of radius 1, as shown in Fig. 3. According to the work of Komagata et al. [12], the following properties of a figure projected onto a spherical model are preserved.

  (i) Linearity: A straight line in space becomes part of a great circle.

  (ii) Parallelism: The group of great circles created by parallel lines in space passes through a single axis passing through the origin (the parallel projection axis).

  (iii) Orthogonality: The parallel projection axes of two sets of orthogonal lines in space are orthogonal.

The proposed position estimation is carried out from the marker recognition by utilizing these properties.

From \(\phi _f\) and \(\theta _f\) calculated by (1), (2) and (3), the point \({{{{\varvec{p}}}}}_f\) on the image plane is assumed to be projected to the point \({{{{\varvec{p}}}}}\) on the spherical model with radius 1. The point \({{{{\varvec{p}}}}}\) is represented by the following equation,

$$\begin{aligned} {{{{\varvec{p}}}}}=\,& {} [\sin \theta _f\cos \phi _f~~\sin \theta _f\sin \phi _f~~\cos \theta _f]^T. \end{aligned}$$
(4)

Next, we consider the spherical model of a fisheye camera and markers as shown in Fig. 3. The vertex \({{{{\varvec{P}}}}}^i_f\) of the marker is assumed to be projected onto the point \({{{{\varvec{p}}}}}_i\) on the spherical model. The vertices \({{{{\varvec{P}}}}}^i_f\) and \({{{{\varvec{P}}}}}^{i+1}_f\) lie on the same line (an edge of the marker). By the nature of the projection, the plane \(H_i\) containing the origin of the spherical model and the points \({{{{\varvec{P}}}}}^i_f\) and \({{{{\varvec{P}}}}}^{i+1}_f\) also contains \({{{{\varvec{p}}}}}_i\) and \({{{{\varvec{p}}}}}_{i+1}\). The normal vector of this plane is calculated as follows:

$$\begin{aligned} {{{{\varvec{n}}}}}_i = \frac{{{{{\varvec{p}}}}}_i \times {{{{\varvec{p}}}}}_{i+1}}{\Vert {{{{\varvec{p}}}}}_i \times {{{{\varvec{p}}}}}_{i+1}\Vert }~~(i=1,\cdots ,4), \end{aligned}$$
(5)

where \(\times\) denotes the cross product and \({{{{\varvec{p}}}}}_5={{{{\varvec{p}}}}}_1\). Then, from property (ii), the planes containing a pair of parallel lines intersect in a line passing through the origin of the spherical model, namely the parallel projection axis. The direction vectors \({{{{\varvec{n}}}}}_{r1}\) and \({{{{\varvec{n}}}}}_{r2}\) of the parallel projection axes corresponding to the two pairs of opposite sides of the square marker can be expressed as follows, respectively:

$$\begin{aligned} \left\{ \begin{array}{lll} {{{{\varvec{n}}}}}_{r1} &{} = &{} \frac{{{{{\varvec{n}}}}}_1 \times {{{{\varvec{n}}}}_3}}{\Vert {{{{\varvec{n}}}}}_1 \times {{{{\varvec{n}}}}_3}\Vert }\\ {{{{\varvec{n}}}}}_{r2} &{} = &{} \frac{{{{{\varvec{n}}}}_2} \times {{{{\varvec{n}}}}}_4}{\Vert {{{{\varvec{n}}}}_2} \times {{{{\varvec{n}}}}}_4\Vert } \end{array} \right. . \end{aligned}$$
(6)

According to property (iii), the two parallel projection axes are orthogonal, which means that the two vectors in Eq. (6) are also orthogonal. Since these two vectors point along the parallel projection axes, i.e., along the side directions of the marker, they represent the orientation of the marker. Therefore, the rotation matrix of the marker coordinate system from the spherical model coordinate system is set as follows:

$$\begin{aligned} R = [{{{{\varvec{n}}}}}_{r1}~~{{{{\varvec{n}}}}}_{r2}~~{{{{\varvec{n}}}}}_{r1}\times {{{{\varvec{n}}}}}_{r2}]^T. \end{aligned}$$
(7)
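The steps of Eqs. (4)-(7) can be sketched as follows (a minimal NumPy illustration; note that cross products fix the axes only up to sign, so in practice the signs must be disambiguated, e.g., so that the marker lies in front of the camera):

```python
import numpy as np

def sphere_point(theta_f, phi_f):
    """Project a feature onto the unit sphere, Eq. (4)."""
    return np.array([np.sin(theta_f) * np.cos(phi_f),
                     np.sin(theta_f) * np.sin(phi_f),
                     np.cos(theta_f)])

def marker_rotation(p):
    """Recover the marker orientation from the four projected corners.

    p : list of four unit vectors p_1..p_4 (corner order as in Fig. 3).
    Returns (R, n_r1, n_r2) following Eqs. (5)-(7).
    """
    def unit(v):
        return v / np.linalg.norm(v)
    # Eq. (5): normals of the four great-circle planes (p_5 = p_1)
    n = [unit(np.cross(p[i], p[(i + 1) % 4])) for i in range(4)]
    # Eq. (6): parallel-projection axes of the two pairs of opposite sides
    n_r1 = unit(np.cross(n[0], n[2]))
    n_r2 = unit(np.cross(n[1], n[3]))
    # Eq. (7): rows are n_r1, n_r2 and their cross product
    R = np.vstack([n_r1, n_r2, np.cross(n_r1, n_r2)])
    return R, n_r1, n_r2
```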

Then, we calculate the translation vector between the spherical model and the coordinates of the center of gravity of the marker. It is assumed that the center of gravity of the marker is the same as the origin of the marker coordinate system. Let l be the length of the edge of the square marker. We set the position coordinates of each vertex in the marker coordinate system as follows:

$$\begin{aligned} \left\{ \begin{array}{lll} {{{{\varvec{P}}}}}^1_f &{} = &{} [-\frac{l}{2}~~\frac{l}{2} ~~0]^T \\ {{{{\varvec{P}}}}}^2_f &{} = &{} [-\frac{l}{2}~~-\frac{l}{2} ~~0]^T \\ {{{{\varvec{P}}}}}^3_f &{} = &{} [\frac{l}{2}~~-\frac{l}{2} ~~0]^T \\ {{{{\varvec{P}}}}}^4_f &{} = &{} [\frac{l}{2}~~\frac{l}{2} ~~0]^T \end{array} \right. . \end{aligned}$$
(8)

The relationship between the marker vertex \({{{{\varvec{P}}}}}^i_f\) and the corresponding point \({{{{\varvec{p}}}}}_i\) on the spherical model can be expressed by the rotation matrix R of Eq. (7) and a translation vector \({{{{\varvec{T}}}}}\) as follows:

$$\begin{aligned} \zeta _i {{{{\varvec{p}}}}}_i = R{{{{\varvec{P}}}}}^i_f + {{{{\varvec{T}}}}}~~(i=1,\cdots , 4), \end{aligned}$$
(9)

where \(\zeta _i\) is a variable that represents scaling. The translation vector \({{{{\varvec{T}}}}}\) is computed by stacking the equations for the four vertices into the following simultaneous linear equations and solving them:

$$\begin{aligned} \left[ \begin{array}{lllll} -I_3 &{} {{{{\varvec{p}}}}}_1 &{} 0 &{} 0 &{} 0 \\ -I_3 &{} 0 &{} {{{{\varvec{p}}}}}_2 &{} 0 &{} 0 \\ -I_3 &{} 0 &{} 0 &{} {{{{\varvec{p}}}}}_3 &{} 0 \\ -I_3 &{} 0 &{} 0 &{} 0 &{} {{{{\varvec{p}}}}}_4 \end{array}\right] \left[ \begin{array}{c} {{{{\varvec{T}}}}} \\ \zeta _1 \\ \zeta _2 \\ \zeta _3 \\ \zeta _4 \end{array} \right] = \left[ \begin{array}{c} \frac{l}{2}(-{{{{\varvec{n}}}}}_{r1}+{{{{\varvec{n}}}}}_{r2}) \\ \frac{l}{2}(-{{{{\varvec{n}}}}}_{r1}-{{{{\varvec{n}}}}}_{r2}) \\ \frac{l}{2}({{{{\varvec{n}}}}}_{r1}-{{{{\varvec{n}}}}}_{r2}) \\ \frac{l}{2}({{{{\varvec{n}}}}}_{r1}+{{{{\varvec{n}}}}}_{r2}) \end{array} \right] . \end{aligned}$$
(10)
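Since the stacked system (10) is overdetermined (twelve equations, seven unknowns) under measurement noise, a least-squares solution is natural. A sketch might look as follows; the corner signs follow the vertex ordering of Eq. (8):

```python
import numpy as np

def marker_translation(p, n_r1, n_r2, l):
    """Solve the stacked system (10) for T and the scales zeta_i.

    p : four unit sphere points, n_r1/n_r2 : axes from Eq. (6),
    l : marker side length.
    """
    A = np.zeros((12, 7))
    b = np.zeros(12)
    # signs of (n_r1, n_r2) in the right-hand side R*P_f^i, per Eq. (8)
    signs = [(-1, +1), (-1, -1), (+1, -1), (+1, +1)]
    for i in range(4):
        A[3*i:3*i+3, 0:3] = -np.eye(3)   # block multiplying T
        A[3*i:3*i+3, 3+i] = p[i]         # block multiplying zeta_i
        sx, sy = signs[i]
        b[3*i:3*i+3] = 0.5 * l * (sx * n_r1 + sy * n_r2)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    T, zeta = sol[:3], sol[3:]
    return T, zeta
```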

In this paper, the mobile robot was equipped with a USB camera, to which a fisheye lens for a smartphone was attached. Figure 4 shows the USB camera equipped with the fisheye lens. Since the fisheye lens is simply clipped onto the USB camera, the performance, such as resolution and delay, is almost the same as that of the USB camera alone. Calibration was performed using the OpenCV camera calibration for fisheye cameras to obtain the focal length f and the coefficients \(k_1\), \(k_2\), \(k_3\) and \(k_4\) of Eq. (1). Markers were attached to the leader robot so that the relative position and velocity of the robots can be estimated by marker recognition. The ArUco [13] C++ library was used to recognize the marker. Figure 5b shows how the markers are recognized by the fisheye camera mounted on the mobile robot shown in Fig. 5a. We set the size of the marker to \(l= 0.12 \, \hbox {[m]}\) and verified the accuracy of our method using this marker. When a marker is placed at \(0.45 \, \hbox {[m]}\) in the Z direction from the camera center, the proposed position estimation method has an estimation error of \(\pm 0.03 \, \hbox {[m]}\); at \(0.75 \, \hbox {[m]}\), the estimation error is \(\pm 0.07 \, \hbox {[m]}\). This confirms that estimation of the relative position between the camera and the marker is feasible. When the distance between the camera and the marker is \(1.0 \, \hbox {[m]}\), the range of viewing angles from which the marker can be identified is \(\pm 70 \, \hbox {[deg]}\). The accuracy of the method depends on the size of the marker and the relative distance between the marker and the camera. In this paper, we assume that the relative distance during formation control varies in the range from 0.45 to 0.75 [m]; from the accuracy verification, the marker length \(l=0.12\) [m] is therefore acceptable.
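For reference, a minimal detection sketch using OpenCV's Python aruco module (shipped with opencv-contrib-python) is shown below. The paper uses the ArUco C++ library directly; the dictionary choice, image path, principal point, and calibration values here are hypothetical placeholders, and the helpers referenced in the comments are the sketches above:

```python
import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

frame = cv2.imread("fisheye_frame.png")          # hypothetical test image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
# In OpenCV >= 4.7, use cv2.aruco.ArucoDetector(dictionary).detectMarkers(gray)
corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)

if ids is not None:
    # corners[0] has shape (1, 4, 2): the four pixel vertices of one marker,
    # which feed the spherical-model pipeline of Eqs. (1)-(10) via
    # pixel_to_angles() and sphere_point() from the sketches above.
    vertices = corners[0][0]
```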

Fig. 5 Marker recognition by a fisheye camera

Fig. 6 The coordinate system of mobile robots

Fig. 7 Virtual structure

For comparison between the fisheye lens and a normal lens, the image captured without the fisheye lens is shown in Fig. 5c. Part of the marker is beyond the viewing area of the camera. Therefore, with a normal lens camera, the formation shape of the robots is limited to a line. On the other hand, the fisheye camera is effective for realizing desired formation shapes of mobile robots, such as triangle and zigzag shapes.

Velocity estimation by using disturbance observer and formation control based on virtual structure

In order to keep multiple robots running while maintaining a specific formation, it is necessary to control not only the relative positions of the robots but also their velocities, so that the follower robot travels at the same velocity as the leader robot. In this section, we consider a method for estimating the velocity of the leader robot using only the relative position information obtained by the follower robot. In the following, the leader robot is abbreviated as "leader" and the follower robot as "follower".

Kinematic model

For simplicity, we consider a relative kinematic model of two robots, a leader and a follower. Each robot is assumed to be a two-wheeled vehicle type robot that cannot move laterally, that is, it has a nonholonomic constraint. Figure 6 shows the coordinate system of the robots. A marker is assumed to be mounted on the back of the leader. The leader's velocity command \({{{{\varvec{u}}}}}_l=[V_l~~~\omega _l]^T\) is given, and the leader travels at an angle \(\theta _e\) relative to the follower's direction of motion. The velocity of the marker is given by the following equation

$$\begin{aligned} {{{{\varvec{V}}}}_r} = \left[ \begin{array}{c} V_{rx} \\ V_{rz} \end{array}\right] =\left[ \begin{array}{c} V_l\sin \theta _e -L\omega _l\cos \theta _e \\ V_l\cos \theta _e + L\omega _l\sin \theta _e \end{array} \right] , \end{aligned}$$
(11)

where L is the distance between the vehicle center of gravity and the marker with respect to the direction of travel. We assume that the center of gravity of the marker and the vehicle center of gravity coincide in the lateral direction, as shown in Fig. 6.

A kinematic model is derived for the relative motion of the leader and the follower equipped with a fisheye camera. The origin of the camera coordinate system is assumed to coincide with the vehicle center of gravity. The follower calculates the relative position of the center of gravity of the marker by the estimation described in the previous section; the estimated translation vector gives the relative position \({{{{\varvec{e}}}}} = [e_x~~ e_z]^T\) between the camera coordinates and the marker. The relative kinematic model of the marker and follower is expressed as follows:

$$\begin{aligned} \dot{{{{{\varvec{e}}}}}}= & {} {{{{\varvec{V}}}}}_r + {g}({{{{\varvec{e}}}}}){{{{\varvec{u}}}}}_f \nonumber \\= & {} {{{{\varvec{V}}}}}_r + \left[ \begin{array}{cc} 0 &{} -e_z \\ -1 &{} e_x \end{array}\right] \left[ \begin{array}{c} V_f \\ \omega _f \end{array}\right] , \end{aligned}$$
(12)

where \({{{{\varvec{u}}}}}_f = [V_f~~~\omega _f]^T\) is the velocity command of the follower, \(V_f\) is the linear velocity, and \(\omega _f\) is the angular velocity. As the above equation shows, the relative motion model is nonlinear.
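In code, the relative model (12) is a few lines (a minimal NumPy sketch reused by the observer and simulation sketches below):

```python
import numpy as np

def g(e):
    """Input matrix of the relative kinematic model (12)."""
    return np.array([[0.0, -e[1]],
                     [-1.0,  e[0]]])

def relative_dynamics(e, V_r, u_f):
    """Right-hand side of Eq. (12): de/dt = V_r + g(e) u_f."""
    return V_r + g(e) @ u_f
```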

Velocity estimation by using disturbance observer

To realize formation travelling, it is necessary to make the followers not only maintain the relative positions but also travel at the same velocity as the leader. In this paper, we consider a method for estimating the velocity of the leader based on the relative position information obtained by the follower itself.

From Eq. (12), the velocity of the marker \({{{{\varvec{V}}}}}_r\) can be regarded as a disturbance. Therefore, we attempt to estimate \({{{{\varvec{V}}}}}_r\) through a disturbance observer. We construct the following disturbance observer based on the method for nonlinear systems by Mohammadi et al. [14].

$$\begin{aligned} \left\{ \begin{array}{ccl} \dot{{{{{\varvec{z}}}}}} &{} = &{} -L_d {{{{\varvec{z}}}}} -L_d({g}({{{{\varvec{e}}}}}){{{{\varvec{u}}}}}_f+{{{{\varvec{p}}}}}({{{{\varvec{e}}}}})) \\ {{{{\varvec{p}}}}}({{{{\varvec{e}}}}})&{} = &{} L_d {{{{\varvec{e}}}}} \\ \hat{{{{{\varvec{V}}}}}}_r &{} = &{} {{{{\varvec{z}}}}} + {{{{\varvec{p}}}}}({{{{\varvec{e}}}}}) \end{array}\right. , \end{aligned}$$
(13)

where \({{{{\varvec{z}}}}}\) is the state variable of the observer, \({{{{\varvec{p}}}}}({{{{\varvec{e}}}}})\) is an auxiliary vector, and \(\hat{{{{{\varvec{V}}}}}}_r\) is the estimated velocity vector. \(L_d\) is the observer gain, a positive-definite matrix.
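A forward-Euler discretization of the observer (13), as used later with \(T_s=0.10\) [s], can be sketched as follows (a Python illustration reusing g(e) from the model sketch above; the class and method names are our own):

```python
import numpy as np

class DisturbanceObserver:
    """Forward-Euler discretization of the observer (13).

    Defaults use the paper's simulation values L_d = 0.8*I_2 and Ts = 0.10 s.
    """
    def __init__(self, Ld=0.8 * np.eye(2), Ts=0.10):
        self.Ld, self.Ts = Ld, Ts
        self.z = np.zeros(2)                  # observer state z

    def estimate(self, e):
        """V_r_hat = z + p(e) with the auxiliary vector p(e) = L_d e."""
        return self.z + self.Ld @ e

    def update(self, e, u_f):
        """One Euler step of the z-dynamics in Eq. (13)."""
        p = self.Ld @ e
        zdot = -self.Ld @ self.z - self.Ld @ (g(e) @ u_f + p)
        self.z = self.z + self.Ts * zdot
        return self.estimate(e)
```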

Formation control based on virtual structure

We consider a virtual mobile robot at a specified position relative to the follower, as shown in Fig. 7. If the follower can be controlled so that the center of gravity of the virtual robot coincides with the center of gravity of the marker, the desired formation can be formed [15]. We realize formation control based on this virtual structure.

Let \({{{{\varvec{l}}}}}_p=[l_x~~l_z]^T\) be the position of the desired formation. The relative error \(\tilde{{{{{\varvec{e}}}}}}\) between the center of gravity of the marker and that of the virtual robot is defined as follows:

$$\begin{aligned} \tilde{{{{{\varvec{e}}}}}}= \left[ \begin{array}{c} \tilde{e}_x \\ \tilde{e}_z \end{array}\right] = \left[ \begin{array}{c} e_x - l_x \\ e_z - l_z \end{array}\right] . \end{aligned}$$
(14)

Given the velocity \({{{{\varvec{u}}}}}_f\) of the follower, the velocity of the virtual robot is as follows:

$$\begin{aligned} {{{{\varvec{V}}}}}^v_f = \left[ \begin{array}{cc} 0 &{} l_z \\ 1 &{} -l_x \end{array}\right] \left[ \begin{array}{c} V_f \\ \omega _f \end{array}\right] =-g({{{{\varvec{l}}}}}_p){{{{\varvec{u}}}}}_f. \end{aligned}$$
(15)

The relative kinematic model of the marker and the virtual robot is expressed in the following

$$\begin{aligned} \dot{\tilde{{{{{\varvec{e}}}}}}}= & {} {{{{\varvec{V}}}}}_r - {{{{\varvec{V}}}}}^v_f = {{{{\varvec{V}}}}}_r + g({{{{\varvec{l}}}}_p}){{{{\varvec{u}}}}}_f. \end{aligned}$$
(16)

If \(l_z \ne 0\), then \(\textrm{det}(g( {{{{\varvec{l}}}}}_p))\ne 0\). The following control law is applied in this paper.

$$\begin{aligned} {{{{\varvec{u}}}}}_f = {g}^{-1}({{{{\varvec{l}}}}}_p)\left( -\hat{{{{{\varvec{V}}}}}}_r - K\tilde{{{{{\varvec{e}}}}}}\right) , \end{aligned}$$
(17)

where \(K=\textrm{diag}(k_x, k_z)\) is the control gain matrix, and \(k_x\) and \(k_z\) are positive numbers.
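The control law (17) reduces to a few lines. Note that \(g({{{{\varvec{l}}}}}_p)\) in Eq. (16) is the same matrix function as in Eq. (12) evaluated at \({{{{\varvec{l}}}}}_p\), so the sketch reuses g() from above; the default gain K is the value used later in the simulations:

```python
import numpy as np

def formation_control(e, V_r_hat, l_p, K=np.diag([0.7, 2.0])):
    """Control law (17): u_f = g(l_p)^{-1} (-V_r_hat - K*(e - l_p)).

    Requires l_z != 0 so that g(l_p) is invertible.
    """
    e_tilde = e - l_p                           # formation error, Eq. (14)
    return np.linalg.solve(g(l_p), -V_r_hat - K @ e_tilde)
```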

From Eqs. (16) and (17), the error system in formation control can be expressed as follows:

$$\begin{aligned} \dot{\tilde{{{{{\varvec{e}}}}}}}= & {} -K\tilde{{{{{\varvec{e}}}}}}-(\hat{{{{{\varvec{V}}}}}}_r-{{{{\varvec{V}}}}}_r). \end{aligned}$$
(18)

Figure 8 shows the block diagram of the proposed method. We implement the control system based on the block diagram.

Fig. 8 The block diagram of formation control

Fig. 9 The simulation result obtained by our method

Fig. 10 The trajectory obtained by the simulation

Stability analysis

In this section, we verify the stability and boundedness of the proposed control system. First, the estimation error in the disturbance observer is defined as follows:

$$\begin{aligned} {{{{\varvec{e}}}}}_v = \hat{{{{{\varvec{V}}}}}}_r - {{{{\varvec{V}}}}}_r. \end{aligned}$$
(19)

Then, the error system of the disturbance observer is expressed as the following:

$$\begin{aligned} \dot{{{{{\varvec{e}}}}}}_v= & {} \dot{\hat{{{{{\varvec{V}}}}}}}_r -\dot{{{{{\varvec{V}}}}}}_r = -L_d {{{{\varvec{e}}}}}_v -\dot{{{{{\varvec{V}}}}}}_r. \end{aligned}$$
(20)

From the Eqs. (18) and (20), we obtain the following error system.

$$\begin{aligned} \left\{ \begin{array}{ccl} \dot{{{{{\varvec{e}}}}}}_v &{}=&{} -L_d {{{{\varvec{e}}}}}_v -\dot{{{{{\varvec{V}}}}}}_r \\ \dot{\tilde{{{{{\varvec{e}}}}}}} &{} =&{} -K\tilde{{{{{\varvec{e}}}}}}-{{{{\varvec{e}}}}}_v \end{array} \right. \end{aligned}$$
(21)

From the above equation, we see that the stability of the error system depends on the leader acceleration \(\dot{{{{{\varvec{V}}}}}}_r\). In this paper, the stability and boundedness of the formation control are examined for the error system (21) based on Lyapunov's stability theory.

Stability analysis for step changes in the leader’s velocity

We consider the case where the leader's velocity changes in a stepwise manner, i.e., the leader travels with \(\dot{{{{{\varvec{V}}}}}}_r={{{{\varvec{0}}}}}\) except at the instants of change. First, consider the following Lyapunov function candidate.

$$\begin{aligned} \mathcal {V} = \frac{1}{2}{\tilde{{{{{\varvec{e}}}}}}^T}{\tilde{{{{{\varvec{e}}}}}}} +\frac{1}{2}{{{{{\varvec{e}}}}}_v^T}{{{{{\varvec{e}}}}}_v}. \end{aligned}$$
(22)

Next, by differentiating Eq. (22) with respect to time along the solution of the error system (21), the following equation is obtained:

$$\begin{aligned} \dot{\mathcal {V}}=\,& {} {\tilde{{{{{\varvec{e}}}}}}^T}{\dot{\tilde{{{{{\varvec{e}}}}}}}} +{{{{{\varvec{e}}}}}_v^T}{\dot{{{{{\varvec{e}}}}}}_v} = {\tilde{{{{{\varvec{e}}}}}}^T}(-{{{{\varvec{e}}}}}_v - K\tilde{{{{{\varvec{e}}}}}}) -L_d{{{{{\varvec{e}}}}}_v^T}{{{{\varvec{e}}}}}_v. \end{aligned}$$
(23)

We use Young’s inequality for the following vectors

$$\begin{aligned} {{{{\varvec{a}}}}}^T {{{{\varvec{b}}}}}\le & {} \frac{\varepsilon \Vert {{{{\varvec{a}}}}}\Vert ^2}{2}+\frac{\Vert {{{{\varvec{b}}}}}\Vert ^2}{2\varepsilon } , \end{aligned}$$
(24)

where \(\varepsilon\) is an arbitrary positive constant. Then, the following inequality is obtained:

$$\begin{aligned} \dot{\mathcal {V}}\le & {} -(K-\frac{\varepsilon }{2}I)\Vert \tilde{{{{{\varvec{e}}}}}}\Vert ^2 - (L_d-\frac{1}{2\varepsilon }I)\Vert {{{{\varvec{e}}}}}_v\Vert ^2 \nonumber \\\le & {} -(\lambda ^{K}_{min}-\frac{\varepsilon }{2})\Vert \tilde{{{{{\varvec{e}}}}}}\Vert ^2 -(\lambda ^{L}_{min}-\frac{1}{2 \varepsilon })\Vert {{{{\varvec{e}}}}}_v\Vert ^2, \end{aligned}$$
(25)

where \(\lambda ^{K}_{min}\) and \(\lambda ^{L}_{min}\) are the minimum eigenvalues of control gain K and observer gain \(L_d\), respectively.

If the control and observer gains are chosen so that \(\lambda ^{K}_{min}>\frac{\varepsilon }{2}\) and \(\lambda ^{L}_{min}>\frac{1}{2 \varepsilon }\), then \(\dot{\mathcal {V}}\le 0\) and \(\mathcal {V}\) is a Lyapunov function of the system (21). For example, the simulation gains \(K=\textrm{diag}(0.7, 2.0)\) and \(L_d=0.8I_2\) used later satisfy both conditions with \(\varepsilon =1\). Also, \(\dot{\mathcal {V}} < 0\) except at the origin \([\tilde{{{{{\varvec{e}}}}}}^T~~{{{{\varvec{e}}}}}_v^T]^T={{{{\varvec{0}}}}}\). Therefore, when \(\dot{{{{{\varvec{V}}}}}}_r={{{{{\varvec{0}}}}}}\), the error system is asymptotically stable.

Next, we verify the zero dynamics of \(\theta _e\). From the control input (17), the following equation is obtained:

$$\begin{aligned} \dot{\theta }_e=\,& {} \dot{\theta }_l - \dot{\theta }_f = \omega _l - \frac{1}{l_z}(V_{rx}-k_x\tilde{e}_x). \end{aligned}$$
(26)

Then, substituting Eq. (11) into the above equation yields:

$$\begin{aligned} \dot{\theta }_e=\,& {} \omega _l - \frac{1}{l_z}(V_l\sin \theta _e - L\omega _l\cos \theta _e-k_x\tilde{e}_x). \end{aligned}$$
(27)

We assume that the leader robot is moving straight ahead at a constant velocity, i.e., \(\omega _l=0\) and \(V_l>0\) is constant. Also, since \(\lim _{t\rightarrow \infty }\tilde{{{{{\varvec{e}}}}}} = {{{{\varvec{0}}}}}\) from the above discussion, the zero dynamics of \(\theta _e\) is as follows:

$$\begin{aligned} \dot{\theta }_e= & {} - \frac{V_l}{l_z}\sin \theta _e. \end{aligned}$$
(28)

Therefore, if \(|\theta _e|<\pi /2\), \(\theta _e\) moves toward the origin and converges. Thus, if the leader moves straight ahead at a constant velocity, the follower converges to the same attitude angle as the leader.

Boundedness analysis in the presence of acceleration

We examine the case where \(\dot{{{{{\varvec{V}}}}}}_r \ne {{{{\varvec{0}}}}}\), i.e., where the leader travels with a certain acceleration. Let \(a_{max}\) be the maximum norm of \(\dot{{{{{\varvec{V}}}}}}_r\). As in the previous section, we consider the Lyapunov function candidate (22). By differentiating Eq. (22) with respect to time along the solution of the error system (21), the following equation is obtained:

$$\begin{aligned} \dot{\mathcal {V}}=\,& {} {\tilde{{{{{\varvec{e}}}}}}^T}{\dot{\tilde{{{{{\varvec{e}}}}}}}} +{{{{{\varvec{e}}}}}_v^T}{\dot{{{{{\varvec{e}}}}}}_v} \nonumber \\=\,& {} {\tilde{{{{{\varvec{e}}}}}}^T}({{{{\varvec{V}}}}}_r + g({{{{\varvec{l}}}}_p}){{{{\varvec{u}}}}}_f)+{{{{{\varvec{e}}}}}_v^T} (\dot{\hat{{{{{\varvec{V}}}}}}}_r-\dot{{{{{\varvec{V}}}}}}_r) \nonumber \\=\,& {} {\tilde{{{{{\varvec{e}}}}}}^T}(-{{{{\varvec{e}}}}}_v - K\tilde{{{{{\varvec{e}}}}}})+{{{{{\varvec{e}}}}}_v^T}(-L_d {{{{\varvec{e}}}}}_v-\dot{{{{{\varvec{V}}}}}}_r). \end{aligned}$$
(29)

By using Young’s inequality for the vector in Eq. (24), the following inequality holds

$$\begin{aligned} \dot{\mathcal {V}}\le & {} -(K-\frac{\varepsilon }{2}I)\Vert \tilde{{{{{\varvec{e}}}}}}\Vert ^2 - (L_d-\frac{1}{\varepsilon }I)\Vert {{{{\varvec{e}}}}}_v\Vert ^2+\frac{\varepsilon a^2_{max}}{2}. \end{aligned}$$
(30)

If K and \(L_d\) are chosen so that \(c = \min \{\lambda ^{K}_{min}-\frac{\varepsilon }{2}, \lambda ^{L}_{min}-\frac{1}{\varepsilon }\}\) and \(c>0\), then the following holds

$$\begin{aligned} \dot{\mathcal {V}}\le & {} -c(\Vert \tilde{{{{{\varvec{e}}}}}}\Vert ^2 + \Vert {{{{\varvec{e}}}}}_v\Vert ^2) + \frac{\varepsilon a^2_{max}}{2}. \end{aligned}$$
(31)

\(\dot{\mathcal {V}}\) is negative outside the set \(\Omega _c =\{[\tilde{{{{{\varvec{e}}}}}}^T~~{{{{\varvec{e}}}}}_v^T]\in \mathcal {R}^4~|~(\Vert \tilde{{{{{\varvec{e}}}}}}\Vert ^2 + \Vert {{{{\varvec{e}}}}}_v\Vert ^2)\le \frac{\varepsilon a^2_{max}}{2c}\}\). It then follows from Lyapunov's stability theory [14, 16] that the error system (21) is ultimately bounded when the leader travels with acceleration.

Simulation and experimental evaluations

In this section, the effectiveness of the proposed method is demonstrated through simulations and experiments. In the simulations, velocity estimation by the disturbance observer and formation control based on the virtual structure are evaluated. Note that the simulations assume that the relative positions of the leader and follower are available, i.e., the marker position estimation is not implemented. In the experiments, we perform the marker position estimation and test the formation control of the proposed method based on the estimated marker positions.

Simulation results

Simulation of formation control with one leader and two followers was performed. The leader's center of gravity was set as the origin, and the distance L between the vehicle center of gravity and the marker was set to \(0.25~\hbox {[m]}\). The observer gain was set to \(L_d = 0.8I_2\) and the control gain to \(K=\textrm{diag}(0.7, 2.0)\). In the simulation, we discretized the kinematic model and the control system with a sampling time \(T_s=0.10~\hbox {[s]}\) using the Euler approximation. The goal of the simulation is to control three robots running in an equilateral triangle formation with a side length of 0.4 [m]. The leader first runs at \({{{{\varvec{u}}}}}_l=[0.10~~0]^T\) and then curves to face the opposite direction at \({{{{\varvec{u}}}}}_l=[0.08~~5]^T\). Finally, the leader was given an acceleration of \(0.010\,[\hbox {m}/\hbox {s}^{2}]\) until \(V_l\,=\,0.15\,[\hbox {m}/\hbox {s}]\), and then it was made to run straight. The initial positions and postures of the followers were \([z_1~x_1~\theta _1]^T=[-0.80~-0.50~-15\pi /180]^T\) and \([z_2~x_2~\theta _2]^T=[-0.90~-0.40~30\pi /180]^T\), respectively.
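The simulation loop can be sketched for a single follower as follows, combining the sketches above. The initial values and formation offset here are illustrative, and the angular rate 5 in \({{{{\varvec{u}}}}}_l\) is interpreted as [deg/s], which is our assumption:

```python
import numpy as np

Ts, L = 0.10, 0.25                      # sampling time [s], marker offset [m]
l_p = np.array([0.2, 0.4])              # hypothetical formation offset (l_z != 0)
e = np.array([-0.5, 0.8])               # hypothetical initial relative position
theta_e = np.deg2rad(-15.0)             # initial relative heading
obs = DisturbanceObserver()

for k in range(600):                    # 60 s run
    V_r_hat = obs.estimate(e)           # leader-velocity estimate, Eq. (13)
    u_f = formation_control(e, V_r_hat, l_p)
    # leader command: straight, then a curve (angular rate assumed in deg/s)
    V_l, w_l = (0.10, 0.0) if k < 300 else (0.08, np.deg2rad(5.0))
    # marker velocity seen from the follower frame, Eq. (11)
    V_r = np.array([V_l * np.sin(theta_e) - L * w_l * np.cos(theta_e),
                    V_l * np.cos(theta_e) + L * w_l * np.sin(theta_e)])
    obs.update(e, u_f)                            # advance the observer state
    e = e + Ts * relative_dynamics(e, V_r, u_f)   # Euler step of Eq. (12)
    theta_e = theta_e + Ts * (w_l - u_f[1])       # relative heading dynamics
```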

The simulation results are shown in Figs. 9 and 10.

\(e1_x\), \(e1_z\), \(e2_x\), and \(e2_z\) in Fig. 9c show the relative errors in the X and Z directions between the leader and followers 1 and 2, respectively. The black triangles in Fig. 10 indicate the relative distances between the robots at 5 s intervals, representing the formation. Figure 9 shows that good velocity estimation is achieved, since the estimated velocities of both followers 1 and 2 are close to the velocity of the leader. When the leader is accelerating, there is a slight estimation error, because the disturbance model of the observer assumes a step-like change with \(\dot{{{{{\varvec{V}}}}}}_r={{{{\varvec{0}}}}}\). As shown in Fig. 9d, when the leader is moving at a constant velocity, the estimated velocities and attitude angles of the followers converge to the velocity and angle of the leader, respectively. One can see that the relative position of each robot converges close to the target position. The trajectories in Fig. 10 show that good formation control has been achieved.

Experimental results

The effectiveness of the proposed method is verified using two Pioneer 3DX mobile robots. In the experiments, the two robots are oriented in the same direction and placed so that the relative position between them is \([e_x~~e_z]^T=[0.3~~0.6]^T\). The desired formation shape was set to \({{{{\varvec{l}}}}}_p=[0.2~~0.5]^T\). First, the velocity estimation of linear motion by the disturbance observer is verified, and then the formation control experiments are conducted. There are two cases in the formation control experiments: in the first case, the initial attitude angles of the leader and the follower are the same (Case 1); in the second, the initial relative angle between the robots is different (Case 2). The sampling time in the experiments is \(T_s=0.10~\hbox {[s]}\), the same as in the simulation.

Velocity estimation

We show the experimental results of velocity estimation by the disturbance observer. At first, both the leader and the follower were made to move straight ahead at \(V_l=V_f=0.10~\hbox {[m/s]}\), and only the leader increased its velocity to \(V_l=0.12~\hbox {[m/s]}\) at about 15 [s]. The observer gain of the disturbance observer was set to the same value as in the simulation. The experimental results of velocity and relative position estimation are shown in Fig. 11.

Fig. 11 The result of estimated velocities by our method

Fig. 12 The result of formation control by our method (Case 1)

Fig. 13 The captured images by the fisheye camera (Case 1)

Figure 11a shows that the disturbance observer estimates the leader velocity almost exactly. After the leader velocity changes to \(V_l=0.12~\hbox {[m/s]}\), the estimated value becomes oscillatory because the distance between the robots increases and the error in the marker position estimation grows, as shown in Fig. 11b. The relative position in the X direction was also observed to increase, which may be due to a slight difference in attitude angle between the leader and the follower.

Formation control

Next, the formation control experiments were conducted with the two robots. The control gain was set to \(K=\textrm{diag}(0.35, 2.0)\) to account for the effect of the marker position estimation error. The initial robot placement is the same as in the velocity estimation experiment. In Case 1, the leader was made to move straight at a velocity \({{{{\varvec{u}}}}}_l=[0.10~~0]^T\) and then curve at \({{{{\varvec{u}}}}}_l=[0.080~~25]^T\). When the leader's attitude angle reached about 90 [deg], it was made to move straight again at \({{{{\varvec{u}}}}}_l=[0.10~~0]^T\).

The experimental results of Case 1 are shown in Fig. 12. Images taken by the fisheye camera of the follower are shown in Fig. 13.

Fig. 14 The trajectories of the formation control (Case 1)

Fig. 15 The result of formation control by our method (Case 2)

Fig. 16 The captured images by the fisheye camera (Case 2)

Fig. 17 The trajectories of the formation control (Case 2)

From the estimated relative position in Fig. 12b, the target formation position \({{{{\varvec{l}}}}}_p\) is almost achieved in the first straight-motion stage. While the relative error in the Z direction becomes larger during the curved motion, it converges to the desired position \({{{{\varvec{l}}}}}_p\) in the subsequent straight-line motion. A large pulse can also be seen in the relative position result at about 22 [s]; this is because another marker is falsely recognized inside the marker, as shown in Fig. 13c. Figure 14 shows the result of odometry measurements by the ARIA library for the Pioneer 3DX. The "Marker" in the legend is the estimated relative position of the marker added to the follower's trajectory. The red and blue circles in Fig. 14 show the start positions of the leader and the follower, and the red and blue arrows show the initial traveling direction of each robot, respectively. The target formation was formed during the initial linear motion, but the error increased after the curvilinear motion. This may be due to tire slippage caused by the curvilinear motion; a three-dimensional measurement system would be required to accurately measure the trajectory.

Furthermore, we show the experimental results of Case 2. In this experiment, the initial relative position and posture of the follower were \([e_x~~e_z]^T=[-0.20~~0.65]^T\) and \(\theta = -42.3\hbox {[deg]}\), respectively. The leader was only made to move straight at \({{{{\varvec{u}}}}}_l=[0.10~~0]^T\). The experimental results of Case 2 are shown in Figs. 15, 16 and 17. In Fig. 17, the circles and arrows show the start positions and the initial traveling directions of each robot, respectively. Figure 15a shows that the velocity of the leader is correctly estimated by the disturbance observer. Figure 15b shows that the relative positions converge to the desired formation position \({{{{\varvec{l}}}}}_p\), that is, the formation shape is realized. While the attitude angle converges to a constant angle in Fig. 15c, the converged angle is not \(0 \hbox {[deg]}\) but about \(-4\hbox {[deg]}\). The attitude angle is also measured by ARIA odometry. This angle error may be due to tire sliding, as in Case 1. However, Fig. 16e and f show that the two robots are traveling at the same attitude angle.

Conclusion

This paper has presented formation control of mobile robots using image information from a fisheye camera. A marker position estimation method that takes into account the distortion characteristics of the fisheye camera was studied. Furthermore, a velocity estimation method based on a disturbance observer was realized, and a formation control system based on the position-based method was constructed. The stability and boundedness analysis based on Lyapunov's stability theory was performed on the constructed control system. Finally, experiments were conducted to verify the effectiveness of the velocity estimation and formation control using two robots.

This paper has not considered collisions between robots; the realization of collision avoidance control is one of the issues for future research. Furthermore, since more robust position estimation of the leader is required to achieve reliable formation control, recognition of multiple markers, as in Evangeliou et al. [17], is also future work.

Availability of data and materials

Not applicable.

References

  1. New Energy and Industrial Technology Development Organization (NEDO) (2013) NEDO's research and development achievements on ITS. https://www.nedo.go.jp/content/100552007.pdf. Accessed 20 Oct 2022

  2. Wong H, Kapila V, Sparks AG (2002) Adaptive output feedback tracking control of spacecraft formation. Int J Robust Nonlinear Ctrl 12:117–139


  3. Vincent R, Fox D, Ko J, Konolige K, Limketkai B, Ortiz C, Schulz D, Stewart B (2008) Distributed multirobot exploration, mapping, and task allocation. Ann Math Artif Intell 52:229–255


  4. Ren W, Beard RW, Atkins EM (2007) Information consensus in multivehicle cooperative control. IEEE Control Syst Mag 27(2):71–82. https://doi.org/10.1109/MCS.2007.338264


  5. Kuriki Y, Namerikawa T (2013) Control of formation configuration using leader-follower structure. J Syst Design Dyn 7(3):254–264. https://doi.org/10.1299/jsdd.7.254


  6. Fujimori A, Kubota H, Shibata N, Tezuka Y (2014) Leader-follower formation control with obstacle avoidance using sonar-equipped mobile robots. Proc Inst Mech Eng Part I J Syst Ctrl Eng 228(5):303–315. https://doi.org/10.1177/0959651813517682


  7. Poonawal H, Satici AC, Gans N, Spong MW (2012) Formation control of wheeled robots with vision-based position measurement. In: 2012 American Control Conference (ACC), pp. 3173–3178. https://doi.org/10.1109/ACC.2012.6315000

  8. Dani AP, Gans N, Dixon WE (2009) Position-based visual servo control of leader-follower formation using image-based relative pose and relative velocity estimation. In: 2009 American Control Conference, pp. 5271–5276. https://doi.org/10.1109/ACC.2009.5160698

  9. Lin J, Miao Z, Zhong H, Peng W, Wang Y, Fierro R (2021) Adaptive image-based leader-follower formation control of mobile robots with visibility constraints. IEEE Trans Industr Electron 68(7):6010–6019. https://doi.org/10.1109/TIE.2020.2994861


  10. Kannala J, Brandt SS (2006) A generic camera model and calibration method for conventional, wide-angle, and fish-eye lenses. IEEE Trans Pattern Anal Mach Intell 28(8):1335–1340


  11. Kase S, Mitsumoto H, Aragaki Y, Shimomura N, Umeda K (2009) A method to construct overhead view images using multiple fish-eye cameras. J Jpn Soc Precis Eng 75(2):251–255. https://doi.org/10.2493/jjspe.75.251


  12. Komagata H, Ishii I, Takahashi A, Wakatsuki D, Imai H (2006) A geometric calibration method of internal camera parameter for fish-eye lenses. IEICE Trans Inf Syst J89–D–I(1):64–73


  13. Garrido-Jurado S, Munoz-Salinas R, Madrid-Cuevas FJ, Marn-Jimenez MJ (2014) Automatic generation and detection of highly reliable fiducial markers under occlusion. Pattern Recogn 47(6):2280–2292. https://doi.org/10.1016/j.patcog.2014.01.005


  14. Mohammadi A, Marquez HJ, Tavakoli M (2017) Nonlinear disturbance observers: design and applications to Euler-Lagrange systems. IEEE Control Syst Mag 37(4):50–72. https://doi.org/10.1109/MCS.2017.2696760


  15. Ikeda T, Jongusuk J, Ikeda T, Mita T (2006) Formation control of multiple nonholonomic mobile robots. Electr Eng Jpn 157(3):814–819


  16. Khalil HK (2001) Nonlinear systems, 3rd edn. Prentice Hall, USA


  17. Evangeliou N, Chaikalis D, Tsoukalas A, Tzes A (2022) Visual collaboration leader-follower UAV-formation for indoor exploration. Front Robot AI. https://doi.org/10.3389/frobt.2021.777535



Acknowledgements

Not applicable.

Funding

This work was supported by JSPS KAKENHI Grant 18K04046.

Author information


Contributions

SO conducted all of the research and experiments. AF provided advice based on his knowledge of the research. Both authors discussed the results and wrote this manuscript.

Corresponding author

Correspondence to Shinsuke Oh-hara.

Ethics declarations

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.



About this article


Cite this article

Oh-hara, S., Fujimori, A. A Leader-follower formation control of mobile robots by position-based visual servo method using fisheye camera. Robomech J 10, 30 (2023). https://doi.org/10.1186/s40648-023-00268-6

