 Research Article
 Open access
A leader-follower formation control of mobile robots by position-based visual servo method using fisheye camera
ROBOMECH Journal volume 10, Article number: 30 (2023)
Abstract
This paper presents a leader-follower formation control of multiple mobile robots by a position-based method using a fisheye camera. A fisheye camera has a wide field of view and can recognize objects over a wide range. In this paper, the fisheye camera is first modeled in spherical coordinates, and a position estimation technique using an AR marker is then proposed based on the spherical model. The paper furthermore presents a method for estimating the velocity of a leader robot based on a disturbance observer using the obtained position information. The proposed techniques are combined with a formation control based on the virtual structure. The formation controller and velocity estimator can be designed independently, and the stability analysis of the total system is performed by using Lyapunov's theorem. The effectiveness of the proposed method is demonstrated by simulation and by experiments using two real mobile robots.
Introduction
Formation control of mobile robots has received much attention in recent years. The purpose of formation control is to realize a specified shape and to achieve tasks while maintaining the formation. For example, this technology is expected to reduce transportation costs when applied to automatic convoy transport by multiple trucks [1]. Satellite formation flight [2] and mapping by multiple mobile robots [3] have also been studied. To realize formation control of mobile robots, it is necessary to measure the relative distance between robots, their velocities, attitude angles, and so on. In recent years, research has been conducted that regards the mobile robots as a multi-agent system and treats the achievement of a specific formation, with information exchanged among agents through a network, as an inter-agent consensus problem [4, 5]. These studies assume the existence of network communication that enables information exchange between robots. On the other hand, formation control that does not require communication, based on measurement and estimation of relative position and velocity by sonar sensors and laser range finders mounted on the robot, has also been considered [6].
With the improvement of image processing technology, the realization of formation control using a camera instead of a distance sensor has been considered [7,8,9]. By applying image recognition technology, flexible formation changes and collision avoidance are expected, since not only distance but also object recognition can be utilized. In this paper, we consider formation control based on the image information of a camera. Formation control techniques based on image information are divided into the position-based method [7, 8] and the image-based method [9]. In the former, markers mounted on a mobile robot are detected by the camera; the position and attitude are then estimated and used to control the robot. The latter, on the other hand, is robust against camera calibration error because it does not require physical quantities such as positions to be recovered from image information. However, since the state equation is generally represented by an image Jacobian matrix, the control system has to be designed taking into account the nonlinear and coupled properties of that matrix. Many studies of formation control based on image information use standard lenses, such as those in commercially available cameras. Their narrow field of view may cause the tracked object to be lost.
To address this limitation, this paper presents a formation control using a fisheye lens camera to enlarge the field of view of the robot. Since the image information obtained by a fisheye camera is distorted, the control system design in the image-based method becomes complicated. In this paper, we therefore consider a position estimation method using a fisheye camera and then a formation control method based on the estimated position.
The projection of the fisheye camera is represented by a spherical model in order to deal with the distortion of the image. The paper then estimates the relative position of the tracked mobile robot based on this model. Next, we discuss velocity estimation in order to achieve good formation travel. Here, the leader's velocity can be regarded as a disturbance in the relative motion model between the two robots. A disturbance observer can deal with nonlinear systems without linearization and is easier to implement in control systems than a Kalman filter. Little research has been reported on formation control that uses disturbance observers for velocity estimation based on image information. Therefore, the paper proposes a velocity estimation method based on a disturbance observer. We then present formation control based on the position-based method. By utilizing the disturbance observer, the control problem and the velocity estimation problem can be solved independently. We perform a stability analysis of the whole system. The proposed method is verified by simulation and by experiments using real mobile robots.
This paper is structured as follows. In Sect. "Marker recognition and position estimation using a fisheye camera", we use AR markers recognized by the fisheye camera and propose a method for estimating the relative positions between robots from the marker data. Section "Velocity estimation by using disturbance observer and formation control based on virtual structure" considers leader-follower formation control; the velocity estimation of the leader based on a disturbance observer and the virtual-structure formation control are presented. In Sect. "Stability analysis", we present the stability and boundedness analysis of the proposed formation control. Section "Simulation and experimental evaluations" shows simulation results of formation control using the proposed method; in addition, the velocity estimation by the disturbance observer and the formation control experiments are presented, and the effectiveness of the proposed method is verified. Finally, Sect. "Conclusion" summarizes this paper.
Marker recognition and position estimation using a fisheye camera
Fisheye camera model
Figure 1 shows the projection model of a fisheye camera. The projection model of a camera with a standard lens is modeled based on the principle of a pinhole camera. In this principle, geometric properties such as the similarity of the shape of objects projected onto the image plane are invariant. The fisheye camera increases the field of view as the projection angle changes from \(\theta\) to \(\theta _f\). However, this causes a distorted projection of the object’s shape, and the geometric properties are not preserved.
There are some works that cope with this image distortion. The primary method is to project the feature quantities of the image onto a spherical model [10,11,12]. This paper also adopts the coordinate system of the spherical model, considering marker position estimation using the spherical model as in Fig. 2. Consider the case where the vector \({{{{\varvec{P}}}}}_f\), viewed from the camera coordinate system, is projected to the point \({{{{\varvec{p}}}}}_f\) on the image plane of the fisheye camera. The projection of a fisheye camera is described by the angle \(\theta _f\) between the projection ray from the object and the optical axis, and by the image height \(r_f\) on the image plane. In this paper, the projection scheme of the fisheye camera is expressed by the following approximate formula.
where f is the camera focal length. The coefficients \(k_1\), \(k_2\), \(k_3\) and \(k_4\) of each term in (1) are derived from camera calibration of the fisheye camera.
We denote \({{{{\varvec{p}}}}}_f=[x_f~~~y_f]^T\) as the coordinate position (feature point) on the image plane of a fisheye camera and \({{{{\varvec{c}}}}}=[c_u~~c_v]^T\) as the optical axis point on the image plane. The angle \(\phi _f\) between the image height \(r_f\) and the \(x_f\)-axis of the feature \({{{{\varvec{p}}}}}_f\) can be calculated by the following formula.
Furthermore, we assume that the image obtained by the fisheye camera is projected onto a spherical surface with a radius of 1. From the feature \({{{{\varvec{p}}}}}_f\), we calculate the image height \(r_f\) and the angle \(\phi _f\) from Eqs. (2) and (3), then substitute them into Eq. (1) and solve the polynomial to obtain the angle \(\theta _f\).
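Equation (1) is not reproduced above. Assuming it is the OpenCV-style fisheye polynomial \(r_f = f(\theta _f + k_1\theta _f^3 + k_2\theta _f^5 + k_3\theta _f^7 + k_4\theta _f^9)\), which is consistent with a focal length f and four calibration coefficients, the computation of \(r_f\), \(\phi _f\), and \(\theta _f\) from a feature point can be sketched as follows. The function name and the Newton iteration are illustrative, not from the paper.

```python
import math

def pixel_to_angles(pf, c, f, k):
    """Convert a fisheye image point to (r_f, phi_f, theta_f).

    Assumes the OpenCV-style fisheye model
        r_f = f * (theta + k1*theta^3 + k2*theta^5 + k3*theta^7 + k4*theta^9),
    matching the calibration coefficients k1..k4 mentioned in the text.
    """
    xf, yf = pf[0] - c[0], pf[1] - c[1]   # shift by the optical-axis point
    r_f = math.hypot(xf, yf)              # image height, cf. Eq. (2)
    phi_f = math.atan2(yf, xf)            # angle from the x_f-axis, cf. Eq. (3)

    k1, k2, k3, k4 = k
    target = r_f / f
    theta = target                        # initial guess: the pinhole angle
    for _ in range(20):                   # Newton's method on the odd polynomial
        g = theta + k1*theta**3 + k2*theta**5 + k3*theta**7 + k4*theta**9 - target
        dg = 1 + 3*k1*theta**2 + 5*k2*theta**4 + 7*k3*theta**6 + 9*k4*theta**8
        theta -= g / dg
    return r_f, phi_f, theta
```

Since the polynomial is monotonically increasing for the usual small positive coefficients, the Newton iteration converges in a few steps.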
Position estimation based on marker recognition
We describe how to estimate the position and orientation from the marker information recognized by image processing. We assume that the marker is a square and that the length of one side is known. First, the four corner features of the recognized marker are projected onto a point on a spherical model of radius 1 as shown in Fig. 3. According to the work of Komagata et al. in [12], it is known that the following properties of a figure projected onto a spherical model are preserved.
(i) Linearity: A straight line in space becomes part of a great circle.

(ii) Parallelism: The great circles created by parallel lines in space pass through a single axis through the origin (the parallel projection axis).

(iii) Orthogonality: The projection axes of two sets of orthogonal lines in space are orthogonal.
The proposed position estimation is carried out from the marker recognition by utilizing these properties.
From \(\phi _f\) and \(\theta _f\) calculated by (1), (2) and (3), the point \({{{{\varvec{p}}}}}_f\) on the image plane is assumed to be projected to the point \({{{{\varvec{p}}}}}\) on the spherical model with radius 1. The point \({{{{\varvec{p}}}}}\) is represented by the following equation,
Next, we consider the spherical model of the fisheye camera and the marker as shown in Fig. 3. The vertex \({{{{\varvec{P}}}}}^i_f\) of the marker is assumed to be projected onto the point \({{{{\varvec{p}}}}}_i\) on the spherical model. The points \({{{{\varvec{P}}}}}^i_f\) and \({{{{\varvec{P}}}}}^{i+1}_f\) lie on the same line. By the nature of the projection, the plane \(H_i\) contains the origin of the spherical model, the points \({{{{\varvec{P}}}}}^i_f\) and \({{{{\varvec{P}}}}}^{i+1}_f\), and the projected points \({{{{\varvec{p}}}}}_i\) and \({{{{\varvec{p}}}}}_{i+1}\). From this, the normal vector of the plane is calculated as follows:
where \(\times\) denotes the outer product and \({{{{\varvec{p}}}}}_5={{{{\varvec{p}}}}}_1\). Then, by considering a plane containing multiple parallel lines from the property (ii), a line passing through the origin of the spherical model and the parallel projection axis can be calculated. The normal vectors \({{{{\varvec{n}}}}}_{r1}\) and \({{{{\varvec{n}}}}}_{r2}\) of the parallel projection axes to the opposite sides of the square marker can be expressed as follows, respectively:
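Equations (5) and (6) are not shown above. A sketch consistent with properties (i)-(iii), under the assumption that Eq. (5) takes the plane normal as \({{{{\varvec{n}}}}}_i = {{{{\varvec{p}}}}}_i \times {{{{\varvec{p}}}}}_{i+1}\) and that Eq. (6) crosses the normals of the two pairs of opposite sides, is the following:

```python
import numpy as np

def marker_axes(p):
    """Parallel-projection axes of a square marker on the unit sphere.

    p: 4x3 array of the marker corners projected onto the unit sphere,
    in order around the square. A sketch assuming Eq. (5) is the cross
    product of consecutive sphere points and Eq. (6) crosses the plane
    normals of opposite (parallel) sides; degenerate inputs where two
    plane normals are parallel are not handled.
    """
    p = np.asarray(p, dtype=float)
    # Normal of each great-circle plane H_i (indices wrap, p_5 = p_1).
    n = [np.cross(p[i], p[(i + 1) % 4]) for i in range(4)]
    # Opposite sides (1,3) and (2,4) are parallel in space; both of their
    # plane normals are orthogonal to the shared parallel-projection axis,
    # so the axis direction is the cross product of the two normals.
    n_r1 = np.cross(n[0], n[2])
    n_r2 = np.cross(n[1], n[3])
    return n_r1 / np.linalg.norm(n_r1), n_r2 / np.linalg.norm(n_r2)
```

For a square marker facing the camera, the two returned axes come out orthogonal and aligned with the marker's edge directions, as property (iii) predicts.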
According to property (iii), the two sets of parallel lines of the square marker are orthogonal, which means that the two vectors in Eq. (6) are also orthogonal. Since these vectors are directed along the parallel projection axes, they represent the orientation of the marker. Therefore, the rotation matrix from the spherical model coordinate system to the marker coordinate system is set as follows:
Then, we calculate the translation vector between the spherical model and the coordinates of the center of gravity of the marker. It is assumed that the center of gravity of the marker is the same as the origin of the marker coordinate system. Let l be the length of the edge of the square marker. We set the position coordinates of each vertex in the marker coordinate system as follows:
The relationship between the marker vertex \({{{{\varvec{P}}}}}^i_f\) and the corresponding point \({{{{\varvec{p}}}}}_i\) on the spherical model can be expressed by the rotation matrix R and the translation vector T in Eq. (7) as follows:
where \(\zeta _i\) is a variable that represents scaling. The translation vector \({{{{\varvec{T}}}}}\) is computed by collecting the equations for the four vertices into the following simultaneous equations and finding their solution
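The simultaneous equations themselves are not reproduced above. Assuming Eq. (8) has the form \(\zeta _i {{{{\varvec{p}}}}}_i = R {{{{\varvec{m}}}}}_i + {{{{\varvec{T}}}}}\) for marker-frame vertices \({{{{\varvec{m}}}}}_i\), stacking the four vertices gives a linear system in the four scalings \(\zeta _i\) and the three components of \({{{{\varvec{T}}}}}\), which can be solved by least squares. A sketch under that assumption:

```python
import numpy as np

def marker_translation(p, m, R):
    """Solve zeta_i * p_i = R @ m_i + T for the translation T.

    p: 4x3 sphere points, m: 4x3 marker-frame vertex coordinates,
    R: 3x3 rotation from Eq. (7). Unknowns are [zeta_1..zeta_4, T],
    i.e. 7 unknowns in 12 equations, solved by least squares.
    """
    p = np.asarray(p, dtype=float)
    m = np.asarray(m, dtype=float)
    A = np.zeros((12, 7))
    b = np.zeros(12)
    for i in range(4):
        rows = slice(3 * i, 3 * i + 3)
        A[rows, i] = p[i]              # zeta_i * p_i ...
        A[rows, 4:7] = -np.eye(3)      # ... - T
        b[rows] = R @ m[i]             # = R m_i
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[4:7]                    # the translation vector T
```

With noise-free sphere points the least-squares solution reproduces the exact translation; with real detections it averages the per-vertex errors.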
In this paper, the mobile robot was equipped with a USB camera, to which a clip-on fisheye lens for smartphones was attached. Figure 4 shows the USB camera equipped with the fisheye lens. Since the fisheye lens is simply clipped onto the camera, the performance, such as resolution and delay, is almost the same as that of the USB camera. Calibration was performed using the OpenCV camera calibration for fisheye cameras to obtain the focal length f and the coefficients \(k_1\), \(k_2\), \(k_3\), and \(k_4\) of Eq. (1). Markers were attached to the leading robot to estimate the relative positions and velocities of the robots by marker recognition. The ArUco [13] C++ library was used to recognize the marker. Figure 5b shows markers recognized by the fisheye camera mounted on the mobile robot shown in Fig. 5a. We set the size of the marker to \(l= 0.12 \, \hbox {[m]}\) in this paper and verified the accuracy of our method using this marker. When the marker is placed at \(0.45 \, \hbox {[m]}\) in the Z direction from the camera center, the proposed position estimation method has an estimation error of \(\pm 0.03 \, \hbox {[m]}\); at \(0.75 \, \hbox {[m]}\), the estimation error is \(\pm 0.07 \, \hbox {[m]}\). This confirms that estimation of the relative position between the camera and the marker is feasible. When the distance between the camera and the marker is \(1.0 \, \hbox {[m]}\), the range of viewing angles from which the marker can be identified is \(\pm 70 \, \hbox {[deg]}\). The accuracy of the method depends on the size of the marker and the relative distance between the marker and the camera. We assume that the relative distance during formation control varies in the range from 0.45 to 0.75 [m] in this paper. From the verification of the accuracy, the marker length of \(l=0.12 \, \hbox {[m]}\) is acceptable.
For comparison between the fisheye lens and a normal lens, the image captured without the fisheye lens is shown in Fig. 5c. Part of the marker falls outside the viewing area of the camera. Therefore, the formation shape of the robots is limited to a line in the case of the normal lens camera. The fisheye camera, on the other hand, is valid for realizing desired formation shapes for mobile robots, such as triangle and zigzag shapes.
Velocity estimation by using disturbance observer and formation control based on virtual structure
In order to keep multiple robots traveling while maintaining a specific formation, it is necessary to control not only the relative positions of the robots but also their velocities, so that the velocity of the follower robot matches that of the leader robot. In this section, we consider a method by which the follower robot alone estimates the velocity of the leader robot from relative position information. In the following, the leader robot is abbreviated as “leader” and the follower robot as “follower”.
Kinematic model
For simplicity, we consider a relative kinematic model of two robots, a leader and a follower. Each robot is assumed to be a two-wheeled vehicle type robot that cannot move laterally, that is, one with a nonholonomic constraint. Figure 6 shows the coordinate system of the robots. A marker is assumed to be mounted behind the leader. The leader’s velocity command \({{{{\varvec{u}}}}}_l=[V_l~~~\omega _l]^T\) is given, and the leader travels at an angle \(\theta _e\) to the follower’s direction of motion. The velocity of the marker is given by the following equation
where L is the distance, along the direction of travel, between the vehicle center of gravity and the marker. We assume that the center of gravity of the marker coincides with the vehicle center of gravity in the lateral direction, as shown in Fig. 6.
A kinematic model is derived for the relative motion of the leader and the follower equipped with the fisheye camera. The origin of the camera coordinates and the vehicle center of gravity are assumed to coincide. The follower calculates the relative position of the center of gravity of the marker by the marker estimation described in the previous section, and the estimated translation vector gives the relative position between the camera coordinates and the marker, \({{{{\varvec{e}}}}} = [e_x~~ e_z]^T\). The relative kinematic model of the marker and the follower is expressed as follows:
where \({{{{\varvec{u}}}}}_f = [V_f~~~\omega _f]^T\) is the velocity command of the follower, with \(V_f\) the linear velocity and \(\omega _f\) the angular velocity. As the above equation shows, the relative motion model is nonlinear.
Velocity estimation by using disturbance observer
To realize formation traveling, it is necessary to make the follower not only maintain the relative position but also travel at the same velocity as the leader. In this paper, we consider a method for estimating the velocity of the leader based on the relative position information obtained by the follower itself.
From Eq. (12), the velocity of the marker \({{{{\varvec{V}}}}}_r\) can be regarded as a disturbance. Therefore, we estimate \({{{{\varvec{V}}}}}_r\) through a disturbance observer, constructed following the method for nonlinear systems by Mohammadi et al. [14].
where \({{{{\varvec{z}}}}}\) is the state variable of the observer, \({{{{\varvec{p}}}}}({{{{\varvec{e}}}}})\) is the auxiliary vector, and \(\hat{{{{{\varvec{V}}}}}}_r\) is the estimated velocity vector. \(L_d\) is the observer gain, a positive-definite matrix.
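The observer equations themselves are not reproduced above. The sketch below rests on two assumptions of ours, not the paper's: the relative model (12) can be written as \(\dot{{{{{\varvec{e}}}}}} = {{{{\varvec{h}}}}}({{{{\varvec{e}}}}},{{{{\varvec{u}}}}}_f) + {{{{\varvec{V}}}}}_r\) with \({{{{\varvec{h}}}}} = [e_z\omega _f~~ -V_f - e_x\omega _f]^T\) (one possible sign convention), and the auxiliary vector is chosen as \({{{{\varvec{p}}}}}({{{{\varvec{e}}}}}) = L_d {{{{\varvec{e}}}}}\). This choice yields estimation-error dynamics \(\dot{{{{{\varvec{e}}}}}}_v = -L_d {{{{\varvec{e}}}}}_v + \dot{{{{{\varvec{V}}}}}}_r\), the form used in the stability analysis.

```python
import numpy as np

def ndo_step(z, e, u_f, Ld, dt):
    """One Euler step of a Mohammadi-style nonlinear disturbance observer.

    Assumes e_dot = h(e, u_f) + V_r with h = [e_z*w_f, -V_f - e_x*w_f]
    (a convention of ours) and auxiliary vector p(e) = Ld @ e. Returns
    the updated observer state z and the current velocity estimate V_hat.
    """
    V_f, w_f = u_f
    h = np.array([e[1] * w_f, -V_f - e[0] * w_f])  # known part of the model
    V_hat = z + Ld @ e                  # estimated marker velocity
    z_dot = -Ld @ (V_hat + h)           # observer dynamics
    return z + dt * z_dot, V_hat
```

For a constant disturbance, the estimate converges at the rate set by the eigenvalues of \(L_d\), consistent with the step-change analysis later in the paper.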
Formation control based on virtual structure
We consider a virtual mobile robot at a certain position from the follower as shown in Fig. 7. If the follower can be controlled so that the center of gravity of the virtual robot coincides with the center of gravity of the marker, the desired formation can be formed [15]. We realize formation control based on the virtual structure.
Let \({{{{\varvec{l}}}}}_p=[l_x~~l_z]^T\) be the position defining the desired formation. The relative error between the center of gravity of the marker and that of the virtual robot, \(\tilde{{{{{\varvec{e}}}}}}\), is defined as follows:
Given the velocity \({{{{\varvec{u}}}}}_f\) of the follower, the velocity of the virtual robot is as follows:
The relative kinematic model of the marker and the virtual robot is expressed in the following
If \(l_z \ne 0\), then \(\textrm{det}(g( {{{{\varvec{l}}}}}_p))\ne 0\). The following control law is applied in this paper.
where \(K=\textrm{diag}(k_x, k_z)\) is the control gain matrix, and \(k_x\) and \(k_z\) are positive constants.
From Eqs. (16) and (17), the error system of the formation control can be expressed as follows:
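Equation (18) is not reproduced above. A form consistent with the control law canceling the known kinematics through \(g({{{{\varvec{l}}}}}_p)\) and feeding back the estimated velocity \(\hat{{{{{\varvec{V}}}}}}_r\), and consistent with the gain conditions used in the stability analysis, is the following sketch (our reconstruction, not the paper's equation):

```latex
\dot{\tilde{\boldsymbol{e}}} \;=\; -K\tilde{\boldsymbol{e}} \;+\; \bigl(\boldsymbol{V}_r - \hat{\boldsymbol{V}}_r\bigr)
```

so that the formation error decays through \(-K\tilde{{{{{\varvec{e}}}}}}\) and is driven only by the velocity-estimation error.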
Figure 8 shows the block diagram of the proposed method. We implement the control system based on the block diagram.
Stability analysis
In this section, we verify the stability and boundedness of the proposed control system. First, the estimation error in the disturbance observer is defined as follows:
Then, the error system of the disturbance observer is expressed as the following:
From Eqs. (18) and (20), we obtain the following error system.
The above equation shows that the stability of the error system depends on the leader's acceleration \(\dot{{{{{\varvec{V}}}}}}_r\). In this paper, the stability and boundedness of the formation control are examined for the error system (21) based on Lyapunov’s stability theory.
Stability analysis for step changes in the leader’s velocity
We consider the case where the leader’s velocity changes in a stepwise manner, i.e., the leader travels with \(\dot{{{{{\varvec{V}}}}}}_r={{{{\varvec{0}}}}}\). First, consider the following Lyapunov function candidate.
Next, differentiating Eq. (22) with respect to time along the solution of the error system (21), the following equation is obtained:
We apply Young’s inequality to the following vectors
where \(\varepsilon\) is any positive constant. The following inequality is then obtained:
where \(\lambda ^{K}_{min}\) and \(\lambda ^{L}_{min}\) are the minimum eigenvalues of control gain K and observer gain \(L_d\), respectively.
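The inequality referred to above is not reproduced. Applying Young's inequality \(\tilde{{{{{\varvec{e}}}}}}^T {{{{\varvec{e}}}}}_v \le \frac{\varepsilon }{2}\Vert \tilde{{{{{\varvec{e}}}}}}\Vert ^2 + \frac{1}{2\varepsilon }\Vert {{{{\varvec{e}}}}}_v\Vert ^2\) to the cross term of \(\dot{\mathcal {V}}\), a bound consistent with the stated gain conditions is the following sketch (our reconstruction):

```latex
\dot{\mathcal{V}} \;\le\; -\Bigl(\lambda^{K}_{min}-\frac{\varepsilon}{2}\Bigr)\|\tilde{\boldsymbol{e}}\|^{2}
\;-\;\Bigl(\lambda^{L}_{min}-\frac{1}{2\varepsilon}\Bigr)\|\boldsymbol{e}_v\|^{2}
```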
If the control and observer gains are chosen so that \(\lambda ^{K}_{min}>\frac{\varepsilon }{2}\) and \(\lambda ^{L}_{min}>\frac{1}{2 \varepsilon }\), then \(\dot{\mathcal {V}}\le 0\) and \(\mathcal {V}\) is a Lyapunov function of the system (21). Moreover, \(\dot{\mathcal {V}} < 0\) except at the origin \([\tilde{{{{{\varvec{e}}}}}}^T~~{{{{\varvec{e}}}}}_v^T]^T={{{{\varvec{0}}}}}\). Therefore, when \(\dot{{{{{\varvec{V}}}}}}_r={{{{{\varvec{0}}}}}}\), the error system is asymptotically stable.
Next, we verify the zero dynamics of \(\theta _e\). From the control input (15), the following equation is obtained:
Then, we substitute the Eq. (11) into the above equation.
We assume that the leader robot moves straight ahead at a constant velocity, i.e., \(\omega _l=0\), \(V_l>0\) and \(\dot{V}_l=0\). Also, since \(\lim _{t\rightarrow \infty }\tilde{{{{{\varvec{e}}}}}} = {{{{\varvec{0}}}}}\) from the above discussion, the zero dynamics of \(\theta _e\) is as follows:
Therefore, if \(|\theta _e|<\pi /2\), \(\theta _e\) moves toward the origin and converges. Thus, if the leader moves straight ahead at a constant velocity, the follower converges to the same attitude angle as the leader.
Boundedness analysis in the presence of acceleration
We examine the case where \(\dot{{{{{\varvec{V}}}}}}_r \ne {{{{\varvec{0}}}}}\), i.e., where the leader travels with some acceleration. Let \(a_{max}\) be the maximum acceleration, i.e., the bound on \(\Vert \dot{{{{{\varvec{V}}}}}}_r\Vert\). As in the previous section, we consider the Lyapunov function candidate (22). Differentiating Eq. (22) with respect to time along the solution of the error system (21), the following equation is obtained:
By using Young’s inequality for the vector in Eq. (24), the following inequality holds
If K and \(L_d\) are chosen so that \(c = \min \{\lambda ^{K}_{min}\frac{\varepsilon }{2}, \lambda ^{L}_{min}\frac{1}{\varepsilon }\}\) and \(c>0\), then the following holds
\(\dot{\mathcal {V}}\) is negative outside the set \(\Omega _c =\{[\tilde{{{{{\varvec{e}}}}}}^T~~{{{{\varvec{e}}}}}_v^T]^T\in \mathcal {R}^4 : \Vert \tilde{{{{{\varvec{e}}}}}}\Vert ^2 + \Vert {{{{\varvec{e}}}}}_v\Vert ^2\le \frac{\varepsilon a^2_{max}}{2c}\}\). It follows from Lyapunov stability theory [14, 16] that the error system (21) is ultimately bounded when the leader travels with acceleration.
Simulation and experimental evaluations
In this section, the effectiveness of the proposed method is demonstrated through simulations and experiments. In the simulations, the velocity estimation by the disturbance observer and the formation control based on the virtual structure are evaluated. Note that the simulations assume that the relative positions of the leader and follower are available, i.e., the marker position estimation is not simulated. In the experiments, we perform marker position estimation and conduct formation control experiments of the proposed method based on the estimated marker positions.
Simulation results
A simulation of formation control with one leader and two followers was performed. The leader’s center of gravity was set as the origin, and the distance L to the marker center of gravity was set to \(0.25~\hbox {[m]}\). The observer gain was set to \(L_d = 0.8I_2\) and the control gain to \(K=\textrm{diag}(0.7, 2.0)\). In the simulation, we discretized the kinematic model and the control system with a sampling time \(T_s=0.10~\hbox {[s]}\) using an Euler approximation. The goal of the simulation is to control the three robots so that they run in an equilateral triangle formation with a distance of 0.4 [m]. The leader first runs at \({{{{\varvec{u}}}}}_l=[0.10~~0]^T\) and then curves to face the opposite direction at \({{{{\varvec{u}}}}}_l=[0.08~~5]^T\). Finally, the leader was given an acceleration of \(0.010\,[\hbox {m}/\hbox {s}^{2}]\) until \(V_l\,=\,0.15\,[\hbox {m}/\hbox {s}]\), and then it was made to run straight. The initial positions and postures of the followers were \([z_1~x_1~\theta _1]^T=[0.80~0.50~15\pi /180]^T\) and \([z_2~x_2~\theta _2]^T=[0.90~0.40~30\pi /180]^T\), respectively.
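The closed loop described in the preceding sections can be reproduced in outline. The sketch below is a minimal one-leader, one-follower Euler simulation under our own conventions (body z forward, x to the right, marker fixed behind the leader, a virtual-point map \(g({{{{\varvec{l}}}}}_p)\) derived from these conventions, and the control choice \(g({{{{\varvec{l}}}}}_p){{{{\varvec{u}}}}}_f = K\tilde{{{{{\varvec{e}}}}}} + \hat{{{{{\varvec{V}}}}}}_r\)); it is not the authors' simulation code, and the initial follower pose is illustrative.

```python
import numpy as np

def step_unicycle(pose, V, w, dt):
    """Euler step of a unicycle pose [x, y, theta] in the world frame."""
    x, y, th = pose
    return np.array([x + dt * V * np.cos(th), y + dt * V * np.sin(th), th + dt * w])

def to_body(pose, point):
    """World point -> body coordinates [e_x (right), e_z (forward)]."""
    d = point - pose[:2]
    c, s = np.cos(pose[2]), np.sin(pose[2])
    return np.array([s * d[0] - c * d[1], c * d[0] + s * d[1]])

def simulate(T=40.0, dt=0.01):
    L = 0.25                                   # marker offset behind the leader
    lp = np.array([0.2, 0.5])                  # desired formation offset [l_x, l_z]
    K = np.diag([0.7, 2.0])                    # control gains from this section
    Ld = 0.8 * np.eye(2)                       # observer gain from this section
    leader = np.array([0.0, 0.0, 0.0])
    follower = np.array([-0.9, 0.35, 0.2])     # illustrative initial pose
    V_l, w_l = 0.10, 0.0                       # leader runs straight

    marker = leader[:2] - L * np.array([np.cos(leader[2]), np.sin(leader[2])])
    z = -Ld @ to_body(follower, marker)        # start the observer at V_hat = 0
    for _ in range(int(T / dt)):
        marker = leader[:2] - L * np.array([np.cos(leader[2]), np.sin(leader[2])])
        e = to_body(follower, marker)          # measured relative position
        V_hat = z + Ld @ e                     # disturbance-observer output
        et = e - lp                            # formation error, cf. Eq. (14)
        v_des = K @ et + V_hat                 # desired virtual-point velocity
        w_f = -v_des[0] / lp[1]                # invert g(l_p): v_x = -l_z * w_f
        V_f = v_des[1] - lp[0] * w_f           #               v_z = V_f + l_x * w_f
        h = np.array([e[1] * w_f, -V_f - e[0] * w_f])
        z = z + dt * (-Ld @ (V_hat + h))       # observer state update
        leader = step_unicycle(leader, V_l, w_l, dt)
        follower = step_unicycle(follower, V_f, w_f, dt)
    return et, follower[2] - leader[2]
```

Running this, the formation error and the heading difference both decay toward zero, matching the behavior reported for the straight-motion phase: the observer estimates the marker velocity, the controller drives the virtual point onto the marker, and the zero dynamics pull the follower's heading to the leader's.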
The simulation results are shown in Figs. 9 and 10.
\(e1_x\), \(e1_z\), \(e2_x\), and \(e2_z\) in Fig. 9c show the relative errors in the X and Z directions between the leader and followers 1 and 2, respectively. The black triangles in Fig. 10 indicate the relative distances between the robots at 5 s intervals, representing the formation. Figure 9 shows that good velocity estimation is achieved, since the estimated velocities of both follower 1 and follower 2 are close to the velocity of the leader. When the leader is accelerating, there is a slight estimation error; this is because the observer's disturbance model assumes a step-like change with \(\dot{{{{{\varvec{V}}}}}}_r={{{{\varvec{0}}}}}\). As shown in Fig. 9d, when the leader is moving at a constant velocity, the estimated velocities and attitude angles of the followers converge to the velocity and angle of the leader, respectively. The relative position of each robot is close to the target position. The trajectories in Fig. 10 show that good formation control has been achieved.
Experimental results
The effectiveness of the proposed method is verified using two Pioneer 3-DX mobile robots. In the experiments, the two robots are oriented in the same direction and placed so that the relative position between them is \([e_x~~e_z]^T=[0.3~~0.6]^T\). The desired formation shape was set to \({{{{\varvec{l}}}}}_p=[0.2~~0.5]^T\). First, the velocity estimation for linear motion by the disturbance observer is verified, and then the formation control experiments are conducted. There are two cases in the formation control experiments: in Case 1, the initial attitude angles of the leader and the follower are the same; in Case 2, the initial relative angle between the robots is different. The sampling time in the experiments is \(T_s=0.10~\hbox {[s]}\), the same as in the simulation.
Velocity estimation
We show the experimental results of velocity estimation by the disturbance observer. At first, both the leader and the follower moved straight ahead at \(V_l=V_f=0.10~\hbox {[m/s]}\); the leader alone then increased its velocity to \(V_l=0.12~\hbox {[m/s]}\) at about 15 [s]. The observer gain was set to the same value as in the simulation. The experimental results of velocity and relative position estimation are shown in Fig. 11.
Figure 11a shows that the disturbance observer is able to estimate the leader's velocity almost exactly. After the leader's velocity changes to \(V_l=0.12~\hbox {[m/s]}\), the estimated value becomes oscillatory, because the distance between the robots increases and the error in the marker position estimation grows, as shown in Fig. 11b. The relative position in the X direction was also observed to increase. This may be due to a slight difference in attitude angle between the leader and the follower.
Formation control
Next, the formation control experiments were conducted with the two robots. The control gain was set to \(K=\textrm{diag}(0.35, 2.0)\) to account for the effect of marker position estimation error. The initial robot placement is the same as in the velocity estimation experiment. In Case 1, the leader was made to move straight at a velocity \({{{{\varvec{u}}}}}_l=[0.10~~0]^T\) and then curve at \({{{{\varvec{u}}}}}_l=[0.080~~25]^T\). When the leader’s attitude angle reached about 90 [deg], it was again made to move straight at \({{{{\varvec{u}}}}}_l=[0.10~~0]^T\).
The experimental results of Case 1 are shown in Fig. 12. Images taken by the fisheye camera of the follower are shown in Fig. 13.
From the estimated relative position in Fig. 12b, the target formation position \({{{{\varvec{l}}}}}_p\) is almost achieved in the first straight-motion stage. While the relative error in the Z direction becomes larger during the curved motion, it converges to the desired position \({{{{\varvec{l}}}}}_p\) in the subsequent straight-line motion. A large pulse can be seen in the relative position at about 22 [s]; this is because another marker is recognized inside the marker, as shown in Fig. 13c. Figure 14 shows the result of odometry measurements by the ARIA library for the Pioneer 3-DX. The “Marker” in the legend is the estimated relative position of the marker added to the follower’s trajectory. The red and blue circles in Fig. 14 show the start positions of the leader and the follower, and the red and blue arrows show the initial traveling directions of the robots, respectively. The target formation was formed during the initial linear motion, but the error increased after the curved motion. This may be due to tire slippage caused by the curved motion. A three-dimensional measurement system would be required to measure the trajectory accurately.
Furthermore, we show the experimental results of Case 2. In this experiment, the initial relative position and posture of the follower were \([e_x~~e_z]^T=[0.20~~0.65]^T\) and \(\theta = 42.3\,\hbox {[deg]}\), respectively. The leader was only made to move straight at \({{{{\varvec{u}}}}}_l=[0.10~~0]^T\). The experimental results of Case 2 are shown in Figs. 15, 16 and 17. In Fig. 17, the circles and arrows show the start positions and the initial traveling directions of the robots, respectively. Figure 15a shows that the velocity of the leader is correctly estimated by the disturbance observer. Figure 15b shows that the relative position converges to the desired formation position \({{{{\varvec{l}}}}}_p\), that is, the formation shape is realized. While the attitude angle converges to a constant value in Fig. 15c, the converged angle is not \(0\,\hbox {[deg]}\) but about \(4\,\hbox {[deg]}\). The attitude angle is also measured by ARIA odometry. This angle error may be due to tire slippage, as in Case 1. However, Fig. 16e and f show that the two robots travel at the same attitude angle.
Conclusion
This paper has presented formation control of mobile robots using image information from a fisheye camera. A marker position estimation method that takes into account the distortion characteristics of the fisheye camera was studied. Furthermore, a velocity estimation method based on a disturbance observer was realized, and a formation control system based on the position-based method was constructed. Stability and boundedness analyses based on Lyapunov’s stability theory were performed on the constructed control system. Finally, experiments were conducted to verify the effectiveness of the velocity estimation and formation control using two robots.
This paper does not consider collisions between robots; the realization of collision avoidance control is one of the future issues to research. Furthermore, since more robust position estimation of the leader is required to achieve reliable formation control, recognition of multiple markers, as in Evageliou et al. [17], is also future work.
Availability of data and materials
Not applicable.
References
New Energy and Industrial Technology Development Organization (NEDO) (2013) NEDO’s research and development achievements on ITS. https://www.nedo.go.jp/content/100552007.pdf. Accessed 20 Oct 2022
Wong H, Kapila V, Sparks AG (2002) Adaptive output feedback tracking control of spacecraft formation. Int J Robust Nonlinear Ctrl 12:117–139
Vincent R, Fox D, Ko J, Konolige K, Limketkai B, Ortiz C, Schulz D, Stewart B (2008) Distributed multi-robot exploration, mapping, and task allocation. Ann Math Artif Intell 52:229–255
Ren W, Beard RW, Atkins EM (2007) Information consensus in multi-vehicle cooperative control. IEEE Control Syst Mag 27(2):71–82. https://doi.org/10.1109/MCS.2007.338264
Kuriki Y, Namerikawa T (2013) Control of formation configuration using leader-follower structure. J Syst Design Dyn 7(3):254–264. https://doi.org/10.1299/jsdd.7.254
Fujimori A, Kubota H, Shibata N, Tezuka Y (2014) Leader-follower formation control with obstacle avoidance using sonar-equipped mobile robots. Proc Inst Mech Eng Part I J Syst Ctrl Eng 228(5):303–315. https://doi.org/10.1177/0959651813517682
Poonawal H, Satici AC, Gans N, Spong MW (2012) Formation control of wheeled robots with vision-based position measurement. In: 2012 American Control Conference (ACC), pp. 3173–3178. https://doi.org/10.1109/ACC.2012.6315000
Dani AP, Gans N, Dixon WE (2009) Position-based visual servo control of leader-follower formation using image-based relative pose and relative velocity estimation. In: 2009 American Control Conference, pp. 5271–5276. https://doi.org/10.1109/ACC.2009.5160698
Lin J, Miao Z, Zhong H, Peng W, Wang Y, Fierro R (2021) Adaptive image-based leader-follower formation control of mobile robots with visibility constraints. IEEE Trans Industr Electron 68(7):6010–6019. https://doi.org/10.1109/TIE.2020.2994861
Kannala J, Brandt SS (2006) A generic camera model and calibration method for conventional, wide-angle, and fish-eye lenses. IEEE Trans Pattern Anal Mach Intell 28(8):1335–1340
Kase S, Mitsumoto H, Aragaki Y, Shimomura N, Umeda K (2009) A method to construct overhead view images using multiple fisheye cameras. J Jpn Soc Precis Eng 75(2):251–255. https://doi.org/10.2493/jjspe.75.251
Komataga H, Ishii I, Takahashi A, Wakatsuki D, Imai H (2006) A geometric calibration method of internal camera parameter for fisheye lenses. IEICE Trans Inf Syst J89–D–I(1):64–73
GarridoJurado S, MunozSalinas R, MadridCuevas FJ, MarnJimenez MJ (2014) Automatic generation and detection of highly reliable fiducial markers under occlusion. Pattern Recogn 47(6):2280–2292. https://doi.org/10.1016/j.patcog.2014.01.005
Mohammadi A, Marquez HJ, Tavakoli M (2017) Nonlinear disturbance observers: design and applications to EulerLagrange systems. IEEE Control Syst Mag 37(4):50–72. https://doi.org/10.1109/MCS.2017.2696760
Ikeda T, Jongusuk J, Ikeda T, Mita T (2006) Formation control of multiple nonholonomic mobile robots. Electr Eng Jpn 157(3):814–819
Khalil HK (2001) Nonlinear systems, 3rd edn. Prentice Hall, USA
Evangeliou N, Chaikalis D, Tsoukalas A, Tzes A (2022) Visual collaboration leaderfollower uavformation for indoor exploration. Front Robot AI. https://doi.org/10.3389/frobt.2021.777535
Acknowledgements
Not applicable.
Funding
This work was supported by JSPS KAKENHI Grant 18K04046.
Author information
Contributions
SO conducted all of research and experiments. AF provided advice based on knowledge of the research. Both authors discussed the results and wrote this manuscript.
Ethics declarations
Consent for publication
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Ohhara, S., Fujimori, A. A Leader-follower formation control of mobile robots by position-based visual servo method using fisheye camera. Robomech J 10, 30 (2023). https://doi.org/10.1186/s40648-023-00268-6
DOI: https://doi.org/10.1186/s40648-023-00268-6