
Generalization of movements in quadruped robot locomotion by learning specialized motion data


Machines that are sensitive to environmental fluctuations, such as autonomous and pet robots, are currently in demand, making the ability to control large and complex systems crucial. However, controlling such a system in its entirety using only one control device is difficult; for this purpose, a system must be both diverse and flexible. Herein, we derive and analyze the feature values of robot sensor and actuator data, thereby investigating the role that each feature value plays in robot locomotion. We conduct experiments using a quadruped robot developed in-house, from which we acquire multi-point motion information as the movement data; we extract the features of these movement data using an autoencoder. Next, we decompose the movement data into three features and extract various gait patterns. Despite learning only the “walking” movement, the movement patterns of trotting and bounding are also extracted, which suggests that movement data obtained via hardware contain various gait patterns. Although the present robot cannot locomote with these extracted movements, this research suggests the possibility of generating unlearned movements.


In nature, animals adapt their behavior to changes in the surrounding environment. This ability is not limited to animals: it is also required for robots. Thus, machines that are sensitive to environmental fluctuations, such as autonomous robots and pet robots, are now in demand, making it increasingly important to be able to control systems that are both large and complex. However, it is difficult to control the whole of such a system with a single control device, so the system must be both diverse and flexible. Studies have been conducted into system configurations and control methods to address those issues, and many of those studies focused on robot locomotion. In particular, gait-pattern transitions corresponding to the surrounding environment of quadruped (four-legged) animals, the gaits of which have long been observed and analyzed in detail [1,2,3,4,5,6,7,8,9], have attracted much attention in recent years [10,11,12,13,14,15].

Quadruped locomotion is classified into various gait patterns such as walking, trotting, and bounding (Fig. 1) [1]. The simple basic motions (e.g., walking, flying) of animals have multiple patterns; however, most are combinations of multiple periodic phenomena. Additionally, rich repertoires of complex behaviors are created from the flexible combination of a small set of modules [16,17,18,19,20,21]. The very fact that these movements are classified under different names already suggests that they are qualitatively different. Moreover, it is known that a quadruped tends to select the optimum gait pattern according to the speed at which it moves [2, 3, 22]. In addition, although the locomotion speed changes continuously whereas the change in locomotion pattern is discontinuous, the animal suddenly changes its gait without falling over. The investigation of gait transitions in quadrupeds has a long history. Various observation methods have been used to study limb coordination in different gaits at the macroscopic level [1,2,3, 22,23,24,25]. For example, the reported factors causing gait switching include energy-related ones (i.e., minimizing energy costs) [2], durability-related ones (i.e., protection from overload) [23], environmental and morphometric ones [22], and mathematical ones [26]. Furthermore, the neural control of locomotion and coordination has been studied [5,6,7,8,9, 27, 28]. Experiments using decerebrate cats can be cited as examples of studies to understand gait transitions in quadrupeds [7, 8]. In such experiments, although the cats cannot walk voluntarily, they can walk on a treadmill when electrical stimulation is applied below the midbrain; furthermore, they change their gait pattern according to their speed. Those experiments suggest that locomotion occurs autonomously at a rather low level, such as in the cerebellum and the spine, rather than being instructed at a high level such as in the cerebrum.
The process whereby animal gait patterns are generated will remain an interesting topic of debate, given its relation to the generation of robot behavior.

Fig. 1

Timing diagram for each leg in different gaits. The left panel illustrates the order of stance and swing phases in the typical gaits. The right panel indicates the leg layout

Many studies of robot systems refer to biological control systems and attempt to create various movements by configuring the robot controller with either a central pattern generator (CPG) or some type of oscillator [10,11,12,13,14,15, 29, 30]. Indeed, a quadruped robot for which the CPG model was used successfully generated and transitioned to various gait patterns [10, 11]. However, although a CPG makes it possible to realize periodic motion that synchronizes with the outside world, it is difficult to adjust its parameters to realize a desired movement. In particular, in a large-scale system such as one required for locomotion, innumerable variables are intertwined in complicated ways, and it is difficult to select a priori the required input/output relationship from among those relationships. The issue of input/output determination cannot be ignored when a target motion is regarded as the outcome of some control system; it is therefore worth investigating not only the control system but also the motion dataset itself.

Meanwhile, by using system theory to focus on the system design and by analyzing possible system behaviors and trajectories, a behavioral approach has been proposed [31,32,33]. In this approach, system design and analysis are performed using a set of temporal trajectories of the physical variables of the system without assuming an input/output relationship. Herein, we use the behavioral approach to construct a natural and flexible theoretical framework for analyzing the input/output relationship by means of machine learning. First, we give some examples of previous research that used system behaviors and trajectories [34,35,36,37,38,39,40,41,42]. In one case, focusing on only human motion trajectories, an unlearned motion pattern was generated by learning two types of movement [37, 40, 41]. In another case, the two basic stepping patterns in neonates were retained through development, augmented by two new patterns first revealed in toddlers [34]. In the aforementioned studies, motion experiments were used to produce angle or electromyography data, and only those data were used to extract movement features. However, motion involves many other parameters, including acceleration and leg sensing. In the present research, we use not only trajectories but also hardware that can acquire force/speed information online at high speed from sensors and actuators. We then decompose the gait patterns of a quadruped robot into lower levels (e.g., latent features) by performing feature-quantity analysis based on machine learning.

In the present paper, we report on a theoretical study of the coordination patterns that are inherent in the gaits of quadrupeds. We concentrate specifically on inter-limb coordination by introducing a description that captures the relative timing of the rhythmic movements of the four limbs. Our aims are to derive and analyze the feature values of robot sensor and actuator data and to investigate the role that each feature value plays in robot locomotion. We also conduct experiments using a developed quadruped robot from which we acquire multi-point motion information as the movement data, and we extract the features of those movement data using an autoencoder. From this, we decompose the movement data into three features and extract multiple different gait patterns. Despite learning only walking movement, the movement patterns of trotting and bounding are also extracted, which suggests that movement data obtained via hardware contain various gait patterns. Although the present robot can neither trot nor bound, this research suggests the possibility that one specific motion can reveal unlearned movements via hardware experiments. However, we use language that is operational in nature so that the basic concepts are well-defined experimentally and can be applied also to the study of actual (i.e., non-idealized) gaits.

Gait patterns

Animal gait patterns

In this section, as preparation for the reported research, we describe those movements that are common to both quadruped animals and the present robot. The gait patterns are defined by the leg patterns shown in Fig. 1; each gait pattern shows the differences among the left foreleg (LF), the right foreleg (RF), the left hind leg (LH), and the right hind leg (RH). The movement of each leg is classified as being in either the stance phase (in which the foot is in contact with the ground) or the swing phase (in which the foot is lifted and moved forward).

The classification of locomotion patterns is stated below with reference to previously conducted studies in the literature [1]. Walking is a gait pattern that appears at low speed, wherein two or more legs are always in the stance phase. The legs operate in the sequence RH, RF, LH, LF, and the phase difference between successive legs is a quarter cycle. Pacing appears at a slightly higher speed than walking. At any time in this gait pattern, one fewer leg is in the stance phase compared with walking. Furthermore, the legs on the left side move in unison, as do those on the right side but in the opposite phase to those on the left. Trotting is a gait pattern that appears at medium speed. It is similar to pacing except that it is now the diagonal pairs of legs that move in unison and operate in antiphase. Bounding (and galloping) is a gait pattern that appears at high speed. The specific order in which the legs move depends on the species of animal being considered; however, in each case the two forelegs move with almost the same phase, as do the two hind legs. Strictly speaking, motion in which the forelegs and the hind legs move pairwise with almost the same phase is classed as bounding. There is also a gait pattern known as pronking, in which all the legs operate in phase (e.g., springboks, kangaroos). There are many other gait patterns besides the aforementioned ones; however, herein we focus only on these.

Robot gait patterns

Here, we define the robot gait patterns that determine how the robot is to be moved. In this study, a gait pattern (movement) is evaluated by focusing only on the phase differences among the leg motions. Because the phase relations that define a given gait pattern do not consider whether the robot can actually locomote, we treat these phase relations as elements of the gait and define each phase accordingly.

In practice, locomotion patterns are often defined through control schemes that realize feasible locomotion. In contrast, our method for investigating the resultant behaviors starts not from feasible locomotion itself but from the possibility of feasible locomotion. With this in mind, we classify the resultant movements based on the relative phase differences between the rhythmic movements of the robot’s legs.

When the legs are operated in the sequence RH, RF, LH, LF, this is taken as walking. When the legs on the left side are moved in unison, as are those on the right side but in the opposite phase to those on the left, this is taken as pacing. When the same is done but with the two diagonal pairs of legs, this is taken as trotting. When the two forelegs move with almost the same phase, as do the two hind legs, this is taken as bounding. Finally, when all the legs move with almost the same phase, this is taken as pronking.
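The phase-based definitions above can be tabulated compactly. The following sketch (our own tabulation, with phases expressed as idealized fractions of a gait cycle) is illustrative only; real gaits show some variability around these values.

```python
# Nominal relative phase of each leg (fraction of a gait cycle), tabulated
# from the gait definitions above. The exact values are idealizations.
GAIT_PHASES = {
    "walk":  {"RH": 0.00, "RF": 0.25, "LH": 0.50, "LF": 0.75},  # quarter-cycle sequence
    "pace":  {"LF": 0.00, "LH": 0.00, "RF": 0.50, "RH": 0.50},  # lateral pairs in phase
    "trot":  {"LF": 0.00, "RH": 0.00, "RF": 0.50, "LH": 0.50},  # diagonal pairs in phase
    "bound": {"LF": 0.00, "RF": 0.00, "LH": 0.50, "RH": 0.50},  # fore pair vs. hind pair
    "pronk": {"LF": 0.00, "RF": 0.00, "LH": 0.00, "RH": 0.00},  # all legs in phase
}

def phase_difference(gait, leg_a, leg_b):
    """Relative phase between two legs, wrapped into [0, 1)."""
    return (GAIT_PHASES[gait][leg_a] - GAIT_PHASES[gait][leg_b]) % 1.0
```

For example, in the trot pattern the diagonal pair LF–RH has zero phase difference, while in the pace pattern the left–right pair LF–RF is a half cycle apart.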


Hardware development

Quadruped robot

To investigate the movement of the robot accurately, we need to realize various gait patterns, which requires hardware with high-speed motor control and sensing capabilities. In this research, we conduct experiments using a quadruped robot developed in-house (Fig. 2a), from which we acquire multi-point movement information (Table 1). When stabilizing the posture of the robot and controlling the positions of its legs, the necessary information becomes very complicated if the number of degrees of freedom (DOFs) is large. Therefore, in this study, we set two DOFs for each leg.

Fig. 2

Developed quadruped robot: a Overview of robot; b Circuit diagram of load-cell on toe

Table 1 List of sensors

Position control of toes

To control a leg by specifying its toe position, each joint angle is obtained from the leg position by inverse kinematics. There are two types of leg, namely left-handed and right-handed ones (Fig. 3, Table 2), and an expression was obtained for each using inverse kinematics. Furthermore, the leg trajectory is set to a semi-ellipse whose major axis is parallel to the ground.
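The inverse kinematics and semi-elliptical toe trajectory described above can be sketched as follows for one two-DOF planar leg. The link lengths, knee-bend convention, and trajectory parameters below are illustrative assumptions, not the actual robot parameters of Table 2.

```python
import math

# Hypothetical link lengths [m]; the real values are given in Table 2.
L1, L2 = 0.10, 0.10

def leg_ik(x, z):
    """Joint angles (hip, knee) [rad] placing the toe at (x, z) in the hip
    frame, with z measured downward from the hip (two-link planar leg)."""
    r2 = x * x + z * z
    # Law of cosines for the knee angle.
    c_knee = (r2 - L1 * L1 - L2 * L2) / (2.0 * L1 * L2)
    c_knee = max(-1.0, min(1.0, c_knee))  # clamp for numerical safety
    knee = math.acos(c_knee)
    hip = math.atan2(x, z) - math.atan2(L2 * math.sin(knee),
                                        L1 + L2 * math.cos(knee))
    return hip, knee

def toe_trajectory(phase, stride=0.06, height=0.03, z0=0.16):
    """Toe position over one cycle: straight stance segment on the ground,
    then a semi-ellipse (major axis parallel to the ground) for the swing."""
    if phase < 0.5:                       # stance: toe moves backward
        x = stride / 2.0 - stride * (phase / 0.5)
        z = z0
    else:                                 # swing: semi-elliptical return
        s = (phase - 0.5) / 0.5           # 0 -> 1 over the swing
        x = -stride / 2.0 + stride * s
        z = z0 - height * math.sin(math.pi * s)
    return x, z
```

In use, `toe_trajectory` would be sampled at the control rate and each sampled point passed through `leg_ik` to obtain the joint commands; the left/right leg variants differ only in sign conventions.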

Fig. 3

Kinematics and coordination of legs

Table 2 Leg parameters

Sensors and actuators

Control of the actuators and monitoring of the sensor data are performed by the main microcontroller mounted on the robot, as shown in Fig. 4. The actuator of each joint (Dynamixel MX-64; Robotis) has a high maximum torque of 6 Nm, allows serial communication, and has a maximum control cycle of 10 kHz. The absolute angle of each motor is measured by a built-in absolute encoder, and the angular velocity and the torque applied to the motor (as the current value) are also estimated and measured. A gyro sensor and an acceleration sensor are fixed near the center of the trunk and provide measurements along each of the X, Y, and Z axes; the positive direction of the X axis points toward the front of the body. A load cell (Nitta Co., Ltd.) is mounted on each leg, and the sensor value is obtained using a voltage-divider circuit; the circuit diagram of the leg pressure sensor is shown in Fig. 2b. The resistance in the voltage-divider circuit is set to 10 kΩ. Overall, the developed robot provides 34 types of sensor value, and all the movement information is obtained online at intervals of 25 ms. The whole experiment is premised on the robot moving on a treadmill, as shown in Fig. 5.
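The load-cell readout via the 10 kΩ voltage divider can be sketched as follows. The supply voltage, ADC resolution, and divider arrangement (output taken across the fixed resistor) are assumptions for illustration; the actual circuit is the one in Fig. 2b.

```python
# Recovering the resistive load-cell value from an ADC reading through a
# 10 kOhm voltage divider. VCC, ADC_MAX, and the divider arrangement are
# assumed here, not taken from the actual circuit.
VCC = 5.0           # supply voltage [V] (assumed)
R_FIXED = 10_000.0  # fixed divider resistor [Ohm], per the text
ADC_MAX = 1023      # 10-bit ADC full scale (assumed)

def sensor_resistance(adc_value):
    """Sensor resistance [Ohm], assuming V_out is measured across R_FIXED."""
    v_out = VCC * adc_value / ADC_MAX
    if v_out <= 0.0:
        return float("inf")  # no load: divider output at ground
    return R_FIXED * (VCC - v_out) / v_out
```

With this arrangement, a mid-scale reading corresponds to a sensor resistance near 10 kΩ, and a full-scale reading to near-zero resistance (heavy load).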

Fig. 4

Device configuration of the developed robot system. The microcontroller receives all sensor data. The angular positions, velocities, accelerations, and current values (proportional to motor torques) are also monitored via the motor driver board over RS485

Fig. 5

The experimental system consists of a treadmill that is controlled manually

Analysis methods

Extraction of feature quantities

In this research, we use an autoencoder [43] to extract the features of the movement data. An autoencoder is a neural network that acquires the characteristics of data by learning to make the input data and the output data the same. The effect of an autoencoder is broadly the same as that of principal component analysis; however, an autoencoder has much greater representational capabilities because it can perform nonlinear feature extraction. Also, an autoencoder is not limited to normally distributed data and does not assume that the “principal components” [44] are perpendicular, thereby making it possible to extract the feature quantities of data while losing little of the quality of the original data.

The structure of the autoencoder was determined empirically through pretests (as explained in the “Neural-network structure of autoencoder” section). In most cases, several unusual gaits appeared depending on the structure. In practice, we continued adjusting the structure until an interpretable motion dataset was obtained, and we then chose a specific autoencoder that displayed well-known gait patterns. Furthermore, we recorded datasets from the same 12 experiments for each condition. This methodological sequence follows from the research objective of decomposing one movement into several notable movements that display significant phase differences between the robot’s leg movements.

Neural-network structure of autoencoder

Figure 6 shows the structure of the autoencoder used in this study. Comprising encoder networks and decoder networks, an autoencoder is a neural network that learns features from unlabeled data. As Fig. 6 shows, this network contains seven layers, namely an input layer, hidden-layers 1–5, and an output layer. The weights used to encode the entire network and those used for decoding are related by transposition. Furthermore, L2-norm regularization [45], which encourages representation with fewer feature quantities, is applied to the entire network. The weight between the input layer and hidden-layer 1 is defined as W1, that between hidden-layers 1 and 2 as W2, and so on for weights W3, W4, W5, and W6. We employed a symmetric autoencoder (i.e., W6 = W1^T, W5 = W2^T, and W4 = W3^T). A sigmoid function is used as the activation function, and the mean squared error of the outputs is used as the loss function.
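The tied-weight ("symmetric") scheme above can be sketched in plain NumPy, consistent with the paper's from-scratch implementation: sigmoid activations, mean-squared-error loss, L2 regularization, and decoder weights fixed to the transpose of the encoder weights. For brevity this sketch has a single hidden layer rather than the paper's five, and all sizes and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TiedAutoencoder:
    """One-hidden-layer autoencoder with decoder weights tied to W^T."""
    def __init__(self, n_in=34, n_hidden=3, l2=1e-4, lr=0.5):
        self.W = rng.normal(0.0, 0.1, (n_hidden, n_in))  # encoder weight
        self.b_enc = np.zeros(n_hidden)
        self.b_dec = np.zeros(n_in)
        self.l2, self.lr = l2, lr

    def encode(self, x):
        return sigmoid(self.W @ x + self.b_enc)

    def decode(self, h):
        return sigmoid(self.W.T @ h + self.b_dec)    # tied: decoder uses W^T

    def loss(self, X):
        return np.mean([np.sum((self.decode(self.encode(x)) - x) ** 2)
                        for x in X])

    def train_step(self, x):
        h = self.encode(x)
        y = self.decode(h)
        delta_dec = (y - x) * y * (1.0 - y)                # decoder backprop
        delta_enc = (self.W @ delta_dec) * h * (1.0 - h)   # encoder backprop
        # Tied weights accumulate gradients from both paths, plus L2.
        gW = np.outer(delta_enc, x) + np.outer(h, delta_dec) + self.l2 * self.W
        self.W -= self.lr * gW
        self.b_enc -= self.lr * delta_enc
        self.b_dec -= self.lr * delta_dec

# Synthetic data driven by three latent factors, analogous to compressing
# 34-dimensional movement data into three features.
M = rng.normal(0.0, 1.0, (3, 34))
X = sigmoid(rng.random((200, 3)) @ M)
ae = TiedAutoencoder()
before = ae.loss(X)
for _ in range(30):
    for x in X:
        ae.train_step(x)
after = ae.loss(X)
```

The reconstruction loss falls over training, and the three hidden units play the role of the hidden-layer-3 features analyzed later in the paper.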

Fig. 6

Structure of the autoencoder for extracting feature quantities from movement data. All of the dynamical and mechanical data are incorporated as input to the autoencoder, which reduces the dimensionality of the data in the hidden-3 layer

We select hidden-layer 3 and assess how the loss function varies with the number of neurons in that layer, as shown in Fig. 7. We choose the eventual number of neurons by assessing where the loss function increases abruptly as the number of neurons decreases. Thus, hidden-layer 3 is set to contain three neurons. Redundant networks are not preferred in this study, since the objective is to generalize the motions and decompose the movement using the autoencoder. Therefore, we seek the autoencoder with the minimum number of nodes that can extract the features of the movement data.
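The elbow criterion above (choose the smallest bottleneck before the loss rises abruptly) can be sketched as follows. The jump-ratio threshold is our own formalization of "increases abruptly", not a quantity from the paper.

```python
# Picking the bottleneck size from a loss-vs-size curve (as in Fig. 7):
# return the smallest neuron count whose loss is still close to the loss of
# the largest network. The jump_ratio threshold is an assumed heuristic.
def choose_bottleneck(losses_by_size, jump_ratio=2.0):
    """losses_by_size: {n_neurons: final training loss}."""
    sizes = sorted(losses_by_size)
    baseline = losses_by_size[sizes[-1]]   # loss of the largest bottleneck
    for n in sizes:
        if losses_by_size[n] <= jump_ratio * baseline:
            return n
    return sizes[-1]
```

For a curve where the loss jumps sharply below three neurons, this procedure selects three, matching the choice made in the text.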

Fig. 7

Relationship between loss value and number of neurons in hidden-layer 3 in walk gait pattern training

Next, we describe the detailed conditions regarding the layers and the learning. Between the input layer and hidden-layer 1, the features of each item of leg data (joint angle, joint angular velocity, joint torque) are extracted. Furthermore, because each leg has the same structure by design, we assume the same dynamic model and learn with all weights related to each leg constrained to be equal (the blue dotted lines in Fig. 6). Data related to the entire body (gyroscope, acceleration, leg pressure) are collected in a feature space separate from that of the leg data. Between hidden-layers 1 and 2, the previously collected features of the modules of each leg are related to the features of the entire body. Because the relationships within each module were extracted by the previous layer, this layer looks at the relationships between modules. Finally, between hidden-layers 2 and 3, the features of the robot’s movement are projected into a latent feature space.

The computation related to learning was carried out using standard stochastic gradient descent with backpropagation. To improve convergence, a pre-learning phase was conducted for every network weight, starting from the visible layers. Python on Anaconda was employed with a matrix-handling library but without any machine-learning libraries. Each training run took about 15 h on a laptop CPU (Core i5-7300U @ 2.60 GHz).

Experimental setup

In this research, we perform experiments in which we acquire data about the robot’s movements on the treadmill shown in Fig. 5. We use a PC to control the treadmill speed via a speed conversion table. During the robot walking experiments, the treadmill speed is controlled so that the robot performs its locomotion in a fixed location in the laboratory frame of reference. The robot is connected by means of wires to a pulley installed above the treadmill so that the robot can be lifted off the treadmill in the event of a malfunction; in normal operation, these wires are not in tension and do not affect the robot locomotion. Another wire is used to assist the walking: when the robot falls behind toward the rear of the treadmill, this wire assists the robot with a certain force in the walking direction.

For each gait pattern, we use a pre-prepared computer program to realize that pattern in the robot and perform a walking experiment for 5 s. Each experiment is repeated 13 times under the same conditions, and movement data are acquired from the sensors mounted on the robot.


A gait corresponds to reproducible patterns of intra- and inter-limb coordination in locomotion. The intra-limb pattern is always cyclic in nature. In this work, we are not concerned with the intra-limb patterns, i.e., questions regarding limb trajectory, stance–swing timing, body movement, and the like. Our focus is entirely on patterns of coordination between the limbs (hip joints). In this context, it is worth noting experimental perturbation studies [7] that suggest that each multi-joint limb of a quadruped may be treated as a single functional unit.

Thus, in this study, we focus in particular on the phase differences among the leg movements. Moreover, it has been shown that the cycle times of the stance phase and of the hip-joint angles are almost the same [26]. Given this fact, we focus on the hip movement of each leg and define the gait pattern by looking at the phase differences among the hip movements. An experiment with four neurons was also tested, but remarkable results were not observed because of redundancy in the latent features (see Additional file 1: Fig. S1). In practice, the results with four neurons were not replicable even over more than 13 trials; for example, only the trot gait was induced in one test, while all gaits were generated in another. A possible explanation is that the dimension reduction was not executed adequately, i.e., the extracted features were redundant, so that uneven decoding in the autoencoder could evoke various results. The main aim of this study is not to find a better learning method but to decompose the movement data and reproduce them on real hardware. Therefore, we focus on the results of the experiments with three neurons in hidden-layer 3.

Experiment to acquire walk movement data (on a treadmill)

Quadrupeds tend to change their gait patterns from walking to trotting and from trotting to bounding. Considering that walking is the first pattern to appear (at low speed in Fig. 1), we select walking for the experiment. In this experiment, the gait pattern is set to walking, the walking cycle time is set to 1 s (Additional file 2: Movie S1), and the experiment is conducted for 5 s. The sensor data are acquired on average at intervals of 25 ms. After acquiring the movement data, the noise in the data (torque, gyro, acceleration, load cell) is removed using a low-pass filter. We assess the joint angular data to confirm that the robot was indeed walking, and we check the phase differences by looking at the peaks in the graph. Template matching is performed to check the peak values in the graph (Fig. 8). The data are normalized so that the maximum value is equal to 1; this normalization is performed only for display, to make the phase differences easier to confirm, and is not applied to the autoencoder training data. Based on the result, the sequence RH, RF, LH, LF is confirmed, thereby confirming that the robot was indeed walking.

Fig. 8

Acquisition of movement data while walking, for hips (left) and knees (right). Blue (orange) lines indicate the target position data (the target position data with template matching). The yellow boxes, which emphasize the peaks of the positional data, confirm that the walk gait is achieved

Furthermore, we conduct this experiment 13 times using the same process and define these data as the motion data for walking. Of the 13 sets of data acquired, we use 12 as walking training data and one as walking test data. Before training the autoencoder, we normalize each dimension of the data (centering and then scaling into the range [0, 1]).

Results of autoencoder training

To extract the features of the movement data, we train the network using the movement data from this experiment. We use the autoencoder to reduce the input dimensionality from 34 to three. After the training, we input the 5 s of walking test data into the autoencoder and then confirm the waveforms in hidden-layer 3 (Fig. 9). We confirm that waves with a period of 1 s and waves with a period of 0.5 s appeared.

Fig. 9

Behaviors of neurons in hidden-layer 3 (features). In this case, the first and third features are periodic signals with a period of nearly 1 s, while the second feature generates a signal with half that period. These results imply that the hidden-layer-3 features are associated with the period of the given gait pattern (1 s) and also generate a higher-frequency mode with a period of 0.5 s

Results of feature extraction

To determine the roles played by the extracted features, we activate the neurons in hidden-layer 3 one at a time, turning off the other two. We then input the angular data from the output layer to the robot while observing its movement. To calculate the phase difference of each leg, we use template matching with the template

$$f(t,\varphi)_{\mathrm{template}} = \frac{1}{2}\left(\cos\left(\frac{2\pi}{T}\,t + \varphi\pi - \pi\right) + 1\right),$$

and find \(\varphi\) with

$$\max_{\varphi}\; f(t)_{\mathrm{data}} \cdot f(t,\varphi)_{\mathrm{template}} \quad \text{subj.\ to } -2 \le \varphi \le 0.$$

We adopt the \(\varphi\) with the highest match value as the phase of the graph. Here, T is the largest cycle time of the hidden-layer-3 neurons; we found that T is almost equal to 1.00 s. In practice, we adopt the average of the peak-to-peak intervals in each set of experimental data.
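The template matching above can be sketched as follows. The cycle time and sampling interval are taken from the experiment description; the 0.01 grid step for \(\varphi\) is our own choice, and the data are assumed to be normalized to [0, 1] as described earlier.

```python
import numpy as np

T = 1.0     # gait cycle time [s], close to the measured period
DT = 0.025  # sampling interval [s]

def template(t, phi):
    """Raised-cosine template f(t, phi); phi is the phase offset in units of pi."""
    return 0.5 * (np.cos(2.0 * np.pi * t / T + phi * np.pi - np.pi) + 1.0)

def estimate_phase(data, phis=np.arange(-2.0, 0.0, 0.01)):
    """Return phi in [-2, 0) maximizing the inner product with the template."""
    t = np.arange(len(data)) * DT
    scores = [np.dot(data, template(t, phi)) for phi in phis]
    return phis[int(np.argmax(scores))]
```

Applied to each leg's hip-angle trace, `estimate_phase` recovers that leg's phase, and differences between the returned values give the inter-leg phase differences used to identify the gait.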

We represent each phase by the angle that the hand of each circle forms with the vertical (Fig. 3), measuring negative to the left and positive to the right (Fig. 10, right panel) [26].

Fig. 10

Results of using all neurons in hidden-layer 3. The left graph presents the hip-angle values output by the autoencoder for the input given in Fig. 8, confirming that the decoder part of the network is trained. The right panel illustrates the corresponding phase differences between legs based on the output hip-angle values

Before checking the neurons one by one, we check the output obtained using all the neurons in hidden-layer 3 (Fig. 10, Additional file 3: Movie S2). From the results, the sequence RH, RF, LH, LF was reproduced, meaning that the robot was indeed walking. Thus, we reason that the training was successful because the output data match the input data of the autoencoder.

Next, we check the role of the features one by one. From the results, we find other patterns of movement among the features of the walking movement, namely pacing, trotting, and bounding movements. Each gait movement that appeared is shown in movies (see Additional file 4: Movie S3, Additional file 5: Movie S4, Additional file 6: Movie S5 online) and in Fig. 11. In the graphs on the left in Fig. 11, the peaks are colored yellow. On the right in Fig. 11, the results of calculating the phase difference of each leg are shown. In Fig. 11a, the right and left legs move with opposite phases, and this phase difference is the same as for the pacing movement; thus, a feature of the pacing movement is found in the walking movement. In Fig. 11b, the diagonal legs move with almost opposite phases, which is the same phase difference as for the trotting movement; here, a feature of the trotting movement is found. In Fig. 11c, the fore and hind legs move with almost opposite phases, and we argue that this feature contains an element of the bounding movement. These movements that appeared in the features differ entirely from the movement used in the training (walking data).

Fig. 11

Phase-difference results for the gaits of a pacing, b trotting, and c bounding (see Additional file 4: Movie S3, Additional file 5: Movie S4, Additional file 6: Movie S5 online). The motion data are decoded with only one feature turned on and the others turned off. The pacing, trotting, and bounding gaits are induced by the first, third, and second features in Fig. 9, respectively

Finally, elements of the phase differences of other gait patterns appear despite learning with only the walking data. However, it should be noted that the robot cannot walk by itself when only one neuron in hidden-layer 3 is used.

Experiment to acquire trot movement data (on a treadmill)

In addition to the experiment on walking gait patterns on a treadmill, we investigated gait patterns when trotting on a treadmill under the same conditions as in the former experiment. Template matching was performed to check the peak values in the graph (Fig. 12). Based on the result, the phase relations defined above for trotting were confirmed, thus confirming that the robot was indeed trotting.

Fig. 12

Acquisition of movement data while trotting, for hips (left) and knees (right). Blue (orange) lines indicate the target position data (the target position data with template matching). It is confirmed that the trot gait is achieved

After the training, we input the 5 s of trotting test data into the autoencoder and then confirm the waveforms in hidden-layer 3 (Fig. 13). As in the walking experiment, it was confirmed that waves with a period of 1 s and waves with a period of 0.5 s appeared. In the same manner, we represent each phase by the angle that the hand of each circle forms with the vertical (Fig. 3), measuring negative to the left and positive to the right (Fig. 14, right panel).

Fig. 13

Behaviors of neurons in hidden-layer 3 (features) under trot training. As with the result in Fig. 9, two features are periodic signals with a period of nearly 1 s, while the other generates a signal with half that period. The roles of these features are investigated in Fig. 15 and compared with the result in Fig. 11

Fig. 14

Results of using all neurons in hidden-layer 3 (trot). As in Fig. 10, comparison with the input data shown in Fig. 12 confirms that the decoder part of the network is trained in the trot-gait case

Next, we checked the role of the features one by one in the same manner as in the walk-gait analysis. We found that the autoencoder trained on trotting could produce the trot (Additional file 7: Movie S6) and pronk (Additional file 8: Movie S7) gaits, as shown in Fig. 15. Because the trot gait was learned in advance in this experiment, the pronk gait was generated non-obviously. Compared with the walk-training case, trot training did not generate various gaits such as walk and pace.

Fig. 15

Phase-difference results for the gaits of a trotting and b pronking. In this case, the feature with a period of 1 s (0.5 s) often induced the trotting (pronking) gait. Note that twelve experiments were performed; in each experiment, the lower- and higher-frequency features induced trotting and pronking, respectively

Experiment to acquire walk movement data (in air)

We suspend the robot in the air so that its legs do not touch the ground and its body does not move from side to side. Under these conditions, we conduct an experiment to acquire movement data in which the robot executes the walking phase in the air (Additional file 9: Movie S8). We acquire 5 s of movement data 13 times, using 12 sets as learning data and one as test data (hereinafter referred to as the “air data”). Under these conditions, the values obtained from the gyro, acceleration, and load-cell sensors do not change because the robot is fixed rigidly and its body does not move. Given this, we remove these data from the neural network and check the output obtained using one neuron in hidden-layer 3, repeating this 12 times.

Summary of results

Figure 16 shows the appearance ratio of each gait pattern, calculated by repeating the same training 12 times. The appearance ratio was calculated using the following procedure. First, two legs are defined as in-phase if the difference between their ψ values is within 0.5π; legs with a phase difference of over 0.5π are regarded as out-of-phase. Second, the gait pattern of each experimental dataset is determined from the combinations of in-phase legs. Some datasets display the characteristic gait patterns of pace, trot, bound, and pronk, while several results contain no in-phase legs. Finally, the appearance ratio is expressed as a percentage. Note that this procedure is applied to the datasets one by one; therefore, the percentages can sum to over 100% when the trained autoencoder outputs multiple gait patterns.
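The classification step can be sketched as follows. This is a hypothetical reimplementation: the 0.5π threshold and the gait labels come from the text, but the leg ordering, the pair-to-gait table, and the function names are our own assumptions.

```python
import numpy as np

# Assumed leg order: left-fore, right-fore, left-hind, right-hind.
GAITS = {
    frozenset(): "no in-phase legs",
    frozenset([(0, 1), (2, 3)]): "bound",   # fore pair and hind pair in phase
    frozenset([(0, 3), (1, 2)]): "trot",    # diagonal pairs in phase
    frozenset([(0, 2), (1, 3)]): "pace",    # lateral pairs in phase
    frozenset([(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]): "pronk",
}

def classify_gait(psi, tol=0.5 * np.pi):
    """Label a gait from the four leg phases psi (radians)."""
    pairs = frozenset(
        (i, j)
        for i in range(4) for j in range(i + 1, 4)
        # wrap the phase difference into (-pi, pi] before thresholding
        if np.abs(np.angle(np.exp(1j * (psi[i] - psi[j])))) <= tol
    )
    return GAITS.get(pairs, "other")
```

Applying this label to each of the 12 trained networks and counting occurrences per gait yields the percentages plotted in Fig. 16.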

Fig. 16

Comparing the repeatability of the results for each gait pattern. In the case of walk training, pace, trot, and bound are relatively often induced, while in the case of trot training, trot and pronk are often generated. In contrast, with the in-air data, no typical gait appeared

Furthermore, compared with the walking experiment performed on the treadmill, the air data showed markedly lower repeatability than the ground data. In summary, motion data recorded on the ground could hold versatile gaits: walk data could hold pace and bound, and trot data could often represent the pronk gait. However, no significant motion emerged from the air data.


We decomposed the movement data into three features and found elements of other movement patterns in the features of the walking movement. Despite learning only the walking movement, the movement patterns of trotting and bounding were extracted. Although these movements could not produce locomotion, we reason that they show the possibility of generating other gait patterns. Feasible locomotion cannot be realized primarily because the autoencoder produces motor-torque (current) outputs of low magnitude. This tendency is clearly observed because our tests were performed with several neurons in the hidden layer inactivated.

The proposed method could support finding hidden latent features in measurement datasets; however, such features are limited to a set of static orbits on a graph. The method thus lacks information about the absolute timescale of a movement, because the datasets used for the autoencoder comprise snapshots. Despite this limitation, the proposed method seems effective when the analysis target exhibits periodic movements such as those presented here. When the measurement data contain rhythms, our method exposes the relative information between datasets without requiring the input–output relationship to be specified in advance.

Our results are consistent with those of other theoretical works [46,47,48], which emphasize the role of main-body motion along the roll, pitch, and yaw axes in generating gait transitions. Additionally, our results indicate the essential role of sensory feedback information, as shown in Fig. 16, in which the locomotion patterns often appeared only when the main-body reactions captured by the gyro and acceleration sensors together with the load cells were available to the autoencoder. To investigate why different motions appeared after walking and trotting learning, we recorded every sensor dataset for both motions (Additional file 1: Fig. S2 for the walk experiment and Fig. S3 for the trot experiment) and compared them. We observed that in the walking experiment the pitch gyro data involve a lower-frequency rhythm (period T s) while the yaw and roll data involve a higher-frequency rhythm (period T/2 s); the trot experiment, by comparison, demonstrates only high-frequency rhythms in the gyro sensors. That is, the motions in the sagittal plane are contrasting, because the acceleration data along the x and z axes are also inter-connected with the pitch data. This implies that the main-body movement in pitch is one of the essential factors in gait-pattern variation. An extra experiment based on another paradigm could clarify the role of main-body motions in gait-pattern generation. Our method is expected to be a tool for exploring the significance of embodied systems in locomotive robots.
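The comparison of rhythm frequencies between the pitch, roll, and yaw gyro channels can be sketched with a dominant-frequency estimate (a generic FFT-based illustration under our own assumptions, not the authors' analysis code):

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Return the frequency (Hz) of the largest non-DC spectral peak.

    signal: 1-D sensor trace; fs: sampling rate in Hz.
    """
    # Subtract the mean so the DC component does not dominate the peak.
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]
```

Under this sketch, a pitch trace with period T would report roughly 1/T Hz, while yaw and roll traces with period T/2 would report roughly 2/T Hz, matching the walking-experiment observation above.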

Another study succeeded in generating an unlearned motion pattern using a CPG and a controller incorporating such an oscillator [10]. However, the controller had to be designed beforehand and the robot was realized by a simple mechanism with one degree of freedom. By contrast, in the present research, without considering the input/output relationship, we (i) developed a robot with a relatively complicated mechanism, (ii) extracted features of motion data by machine learning without using a controller, and (iii) successfully generated an element of an unlearned novel motion pattern. Specifically, the network learned the movement of walking, and elements of the unlearned movements of trotting and bounding were generated in feature space. From this, we reason that (i) the movement pattern of walking includes elements of the movement patterns of trotting and bounding and (ii) the generation and transition of the gait pattern could be realized by promoting these features with some form of stimulus. However, the motion generated in this study is in the form of motion patterns only, and the robot cannot locomote by itself.

Furthermore, when the experiment was conducted with the robot floating in the air, the reproducibility of the appearance of other movement patterns decreased remarkably. This suggests that the relationship with the environment, expressed through the state of the legs and the body and the load on the joints, may be necessary.

Notably, our work does not claim any energetic advantage for locomotion; rather, it shows that several other motions can be obtained from one specific gait motion by decomposing it to find hidden motions. From these findings, we reason that the internal elements of the system changed because of the diversity and redundancy of the neural network, and that information about the dynamically changing environment and some form of self-model were created in the network. We therefore reason that the relationship with the environment is indispensable for building a self-body model, and that this relationship created the new behavior pattern. Finally, we suggest that there could be many other unlearned gait patterns.


In this study, we derived and analyzed the feature values of robot sensor and actuator data and investigated the role played by each feature value in locomotion. We decomposed the movement data into three features and found that several different gait patterns were extracted from the walking data. Despite learning only walking movement, the movement patterns of trotting and bounding were extracted. This suggests that movement data obtained through hardware contain various gait patterns. Although the present robot could not locomote with these movements, this research suggests the possibility of generating unlearned movements.

Future work involves the reproduction of the movement controlled by the torque outputs from the autoencoder. This reproduction could then be evaluated to investigate how feature-abstraction based movements contribute to feasible locomotion via real hardware experiments.

Availability of data and materials

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.


  1. Muybridge E (1957) Animals in motion. Dover Publications (original edition: Chapman and Hall, London, 1899)

  2. Hoyt DF, Taylor CR (1981) Gait and the energetics of locomotion in horses. Nature 292:239–240

  3. Hildebrand M (1965) Symmetrical gaits of horses. Science 150:701–708

  4. Brown TG, Sherrington CS (1911) The intrinsic factor in the act of progression in the mammal. Proc R Soc Lond Ser B 84:308–319

  5. Grillner S, Zangger P (1979) On the central generation of locomotion in the low spinal cat. Exp Brain Res 34:241–261

  6. Grillner S (1975) Locomotion in vertebrates: central mechanisms and reflex interaction. Physiol Rev 55:367–371

  7. Shik ML, Orlovsky GN (1976) Neurophysiology of locomotor automatism. Physiol Rev 56:465–501

  8. Philippson M (1905) L'autonomie et la centralisation dans le système nerveux des animaux. Falk, Bruxelles 7:1–208

  9. Afelt Z, Kasicki S (1975) Limb coordinations during locomotion in cats and dogs. Acta Neurobiol Exp 35:369–376

  10. Owaki D, Ishiguro A (2017) A quadruped robot exhibiting spontaneous gait transitions from walking to trotting to galloping. Sci Rep 7(1):277

  11. Fukuoka Y et al (2013) Analysis of the gait generation principle by a simulated quadruped model with a CPG incorporating vestibular modulation. Biol Cybern 107:695–710

  12. Fukuoka Y, Habu Y, Fukui T (2015) A simple rule for quadrupedal gait generation determined by leg loading feedback: a modeling study. Sci Rep 5:8169

  13. Righetti L, Ijspeert AJ (2008) Pattern generators with sensory feedback for the control of quadruped locomotion. In: IEEE international conference on robotics and automation, pp 819–824

  14. Ijspeert AJ (2008) Central pattern generators for locomotion control in animals and robots. Neural Netw 21(4):642–653

  15. Kimura H (1999) Realization of dynamic walking and running of the quadruped using neural oscillator. Auton Robots 7(3):247–258

  16. LaValle SM (2006) Planning algorithms. Cambridge University Press, Cambridge

  17. van der Weele JP, Banning EJ (2001) Mode interaction in horses, tea, and other nonlinear oscillators: the universal role of symmetry. Am J Phys 69:953

  18. Funato T, Aoi S, Oshima H, Tsuchiya K (2010) Variant and invariant patterns embedded in human locomotion through whole body kinematic coordination. Exp Brain Res 205:497–511

  19. Mussa-Ivaldi FA, Giszter SF, Bizzi E (1994) Linear combinations of primitives in vertebrate motor control. Proc Natl Acad Sci USA 91:7534–7538

  20. Grillner S (1985) Neurobiological bases of rhythmic motor acts in vertebrates. Science 228:143–149

  21. Ivanenko YP, Poppele RE, Lacquaniti F (2004) Five basic muscle activation patterns account for muscle activity during human locomotion. J Physiol 556:267

  22. Biancardi CM, Minetti AE (2012) Biomechanical determinants of transverse and rotary gallop in cursorial mammals. J Exp Biol 215:4144–4156

  23. Biewener AA (1990) Biomechanics of mammalian terrestrial locomotion. Science 250(4984):1097–1103

  24. Cohen AH, Gans C (1975) Muscle activity in rat locomotion: movement analysis and electromyography of the flexors and extensors of the elbow. J Morphol 146:177–196

  25. Wickler SJ, Hoyt DF, Cogger EA, Myers G (2003) The energetics of the trot-gallop transition. J Exp Biol 206:1557–1564

  26. Schöner G, Jiang WY, Kelso JA (1990) A synergetic theory of quadrupedal gaits and gait transitions. J Theor Biol 142:359–391

  27. Golubitsky M, Stewart I, Buono PL, Collins JJ (1999) Symmetry in locomotor central pattern generators and animal gaits. Nature 401:693–695

  28. Bassler U (1986) On the definition of central pattern generator and its sensory control. Biol Cybern 54:65–69

  29. Aoi S, Manoonpong P, Ambe Y, Matsuno F (2017) Adaptive control strategies for interlimb coordination in legged robots: a review. Front Neurorobot 11:39

  30. Kuo AD (2002) The relative roles of feedforward and feedback in the control of rhythmic movements. Mot Control 6:129–145

  31. Willems JC, Polderman JW (1998) Introduction to mathematical systems theory: a behavioral approach. Springer, Berlin

  32. Willems JC (1991) Paradigms and puzzles in the theory of dynamical systems. IEEE Trans Autom Control 36:259–294

  33. Willems JC (1997) On interconnections, control and feedback. IEEE Trans Autom Control 42:326–339

  34. Dominici N, Ivanenko YP, Cappellini G, d'Avella A, Mondì V, Cicchese M, Fabiano A, Silei T, Di Paolo A, Giannini C, Poppele RE, Lacquaniti F (2011) Locomotor primitives in newborn babies and their development. Science 334:997

  35. d'Avella A, Saltiel P, Bizzi E (2003) Combinations of muscle synergies in the construction of a natural motor behavior. Nat Neurosci 6:300–308

  36. d'Avella A, Tresch MC (2001) Modularity in the motor system: decomposition of muscle patterns as combinations of time-varying synergies. Adv Neural Inf Process Syst 14:141–148

  37. Ijspeert A, Nakanishi J, Hoffmann H, Pastor P, Schaal S (2013) Dynamical movement primitives: learning attractor models for motor behaviors. Neural Comput 25(2):328–373

  38. Holden D, Saito J, Komura T, Joyce T (2015) Learning motion manifolds with convolutional autoencoders. In: SIGGRAPH Asia technical briefs, article 18

  39. Troje NF (2002) Decomposing biological motion: a framework for analysis and synthesis of human gait patterns. J Vis 2:371–387

  40. Chen N, Bayer J, Urban S, van der Smagt P (2015) Efficient movement representation by embedding dynamic movement primitives in deep autoencoders. In: International conference on humanoid robots

  41. Chen N (2017) Efficient movement representation and prediction with machine learning. Doctoral dissertation, Technische Universität München

  42. Motegi Y, Hijioka Y, Murakami M (2018) Human motion generative model using variational autoencoder. Int J Model Optim 8(1)

  43. Hinton GE, Salakhutdinov RR (2006) Reducing the dimensionality of data with neural networks. Science 313(5786):504–507

  44. Moore BC (1981) Principal component analysis in linear systems: controllability, observability and model reduction. IEEE Trans Autom Control 26(1):17–32

  45. Hoerl AE, Kennard RW (1970) Ridge regression: biased estimation for nonorthogonal problems. Technometrics 12(1):55–67

  46. Kurita Y, Matsumura Y, Kanda S, Kinugasa H (2008) Gait patterns of quadrupeds and natural vibration modes. J Syst Design Dyn 2(6):1316–1326

  47. Tero A, Akiyama M, Owaki D, Kano T, Ishiguro A, Kobayashi R (2013) Interlimb neural connection is not required for gait transition in quadruped locomotion. arXiv preprint arXiv:1310.7568

  48. Kano T, Owaki D, Fukuhara A, Kobayashi R, Ishiguro A (2015) New hypothesis for the mechanism of quadruped gait transition. In: The 1st international symposium on swarm behavior and bio-inspired robotics, pp 275–278



This work was partly supported by JSPS KAKENHI Grant Number 16K00361 and the Kayamori Information Science Promotion Foundation.



Author information




HY, YIshii and YIkemoto conceived and designed the experiments. HY, SK, and YIkemoto performed the hardware experiments. HY, SK, YIshii, and YIkemoto analyzed the data. HY, SK, YIshii and YIkemoto wrote the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Yusuke Ikemoto.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Additional file 1.

 The sets of data for learning.

Additional file 2.

 The experiment for walking data acquisition on the ground.

Additional file 3.

Reproduction of the movement obtained using all the neurons in hidden-layer 3.

Additional file 4.

Pacing movements in a walking gait observed by one-neuron activation in hidden-layer 3.

Additional file 5.

Trotting movements in a walking gait observed by one-neuron activation in hidden-layer 3.

Additional file 6.

Bounding movements in a walking gait observed by one-neuron activation in hidden-layer 3.

Additional file 7.

Trotting movements in a trotting gait observed by one-neuron activation in hidden-layer 3.

Additional file 8.

Pronking movements in a trotting gait observed by one-neuron activation in hidden-layer 3.

Additional file 9.

The experiment for walking data acquisition in the air.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit


About this article


Cite this article

Yamamoto, H., Kim, S., Ishii, Y. et al. Generalization of movements in quadruped robot locomotion by learning specialized motion data. Robomech J 7, 29 (2020).



  • Quadruped robot
  • Gait pattern
  • Movement decomposition
  • Machine learning
  • Autoencoder