
Demonstration of position estimation for multiple construction vehicles of different models by using 3D LiDARs installed in the field

Abstract

The construction industry faces a labor shortage, so construction vehicles need to be automated. Automation requires a position estimation method that is independent of the work environment and can accurately estimate the positions of the target vehicles. This paper develops a position estimation method for multiple construction vehicles using 3D LiDARs installed in the work environment. By focusing on the shape of the construction vehicles, the method can estimate their positions in places where conventional methods cannot be used, such as in valleys or under roofs. Because the shape of a construction vehicle changes depending on its work equipment and steering operation, each joint angle was obtained and the 3D model used for estimation was updated. Experiments verified that the positions and orientations of multiple construction vehicles can be estimated with an accuracy that satisfies the required accuracy.

Introduction

The current construction industry faces a labor shortage. To solve this problem, the automation of construction vehicles using information and communication technology (ICT) is being promoted, and research aimed at practical applications is being actively conducted [1,2,3,4]. These studies focused on individual tasks and estimated the position and orientation of construction vehicles using a global navigation satellite system (GNSS) and simultaneous localization and mapping (SLAM). However, depending on the location, it may not be possible to estimate the position using typical position estimation technologies, such as GNSS and SLAM, which are used for mobile robots [5, 6]. For example, GNSS positioning fails in places where satellite signals are difficult to receive, and SLAM tends to fail in environments without surrounding features. Moreover, these methods require sensors to be attached to the outside of the vehicle, and such sensors risk being damaged during work. Therefore, a position estimation method that observes the vehicles from the environment side is desirable.

The total station (TS) is a well-known method for estimating a position from the surroundings without attaching sensors to the vehicle. This observation device combines an optical ranging sensor that measures distance and a theodolite that measures angles [7]. However, the installation locations are limited because the estimation cannot be performed if the line of sight to the prism is lost. In addition, a method for position estimation using information obtained from radio frequency identification (RFID) tags as landmarks has been proposed [8]. A method has also been proposed to identify the relative position and orientation of a robot using beacons with an ultrasonic receiver and an infrared oscillator installed in the environment and a robot equipped with an ultrasonic transmitter and infrared receiver [9]. However, these methods require a large number of sensors to be installed in the work environment, and the estimation range is limited to a narrow work area.

As an approach for estimating positions with LiDAR installed in the environment, a method for an articulated vibrating roller has been proposed that estimates the position by weighting feature values based on reflection intensity and performing template matching with a prepared template image [10]. However, this method has several issues. Because it uses a 2D point cloud, the accuracy of template matching is adversely affected by the inclination and unevenness of the ground surface and of the vehicle body. Furthermore, because there is little feature information, similar shapes can be incorrectly detected. In addition, it does not consider the case where multiple construction vehicles are present in the work environment, so it cannot be applied to estimate multiple vehicles. A method has been proposed for estimating the positions of multiple vehicles from 3D point clouds without specific vehicle models, which separates background and object point clouds and performs object detection [11]. Another method maps 3D point clouds to 2D images and performs object tracking by matching objects detected in past frames as templates [12]. However, these methods, which extract objects from the source point cloud, may misrecognize objects when dynamic changes occur in the background or when objects are in close proximity to the background. On construction sites, sediment transport may change the topography, and close proximity between vehicles is likely to lead to object detection failures. Other methods have been proposed to estimate positions within 3D point clouds using deep learning, such as long short-term memory (LSTM) deep networks that detect features of pedestrians, or pillar-based detectors [13, 14]. However, these approaches require training data for each construction vehicle, which makes their implementation difficult. A method for position estimation using 3D point clouds and 3D vehicle models has also been proposed [15]. However, because this method assumes that the roads are flat and uses normal vectors on the flat roads as a constraint, it would be difficult to apply on the rough terrain of a construction site. Moreover, it has not been evaluated while vehicles are driving.

In previous research, methods for position estimation using background subtraction or under the constraint of a flat road have been proposed, but we consider that these methods are not suitable for environments such as construction sites with rough terrain, or for situations where multiple vehicles work simultaneously and come close to each other. Therefore, we aim to develop a position estimation system that can be used on actual construction sites, does not rely on environmental information, and uses 3D vehicle models to enable position estimation for multiple vehicles.

Our targeted construction sites include tunnels, deep valleys, and quarries, as well as sites near large buildings and high cliffs. In these environments, the use of GNSS is often not feasible, and cellular signals may not be available, preventing the acquisition of GNSS correction data. In addition, similar terrain features in such environments may cause incorrect position estimates.

Therefore, in this study, we propose a position estimation method for multiple small construction vehicles of different models using point cloud information obtained from 3D LiDARs installed in the work environment and 3D models of the construction vehicles. The method estimates the positions of multiple construction vehicles with the deployed 3D LiDARs, which can simultaneously be used to measure the work environment. This means that a large number of construction vehicles can be deployed in the field while utilizing a relatively small number of 3D LiDARs for both position estimation and environmental measurement. To handle different vehicle types, it is necessary to consider shape changes owing to the work equipment and drive units. For example, the shape of a crawler dump truck does not change while it is running, whereas the shape of a wheel loader changes because it has an articulated drive unit. To verify that the proposed method can be used to autonomously drive actual small construction vehicles in a prepared earth-moving task, an autonomous driving system for construction vehicles was constructed and a demonstration was conducted. This study demonstrates earth-moving tasks by excavating a pile of earth and sand with a wheel loader, loading the earth into the bed of a crawler dump truck, and unloading the load after the crawler dump truck has traveled to the earth removal position.

The paper is organized as follows. First, we describe the autonomous driving system for the construction vehicles that we have developed, as well as the target construction vehicles. Then, we present our proposed position estimation method. Next, we report the results of a driving experiment that simulated the transportation of soil and sand using a 3-ton wheel loader and a crawler dump truck. We then discuss the insights gained from this experiment. Finally, we present the conclusion of this paper.

Autonomous driving system

This section describes the system constructed to perform autonomous driving of small construction vehicles. The system configuration is illustrated in Fig. 1. The system uses the Robot Operating System (ROS) Noetic as middleware. Each construction vehicle shares information with a PC at the base via Wi-Fi, and each 3D LiDAR shares information via a wired connection. The user can send instructions from a mobile terminal to the small construction vehicles equipped with a retrofit device for autonomous driving.

The small construction vehicles targeted in this study are described below. The retrofit devices developed by our team for these small construction vehicles were used in this study [16, 17]. The pure pursuit method was adopted to control the actual travel of the construction vehicles [18].
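As a rough illustration of this tracking controller, the following Python sketch computes a pure pursuit steering command from an estimated pose and a reference path. The look-ahead distance, wheelbase value, and waypoint-selection rule are simplifying assumptions for illustration, not the parameters or implementation used on the actual vehicles.

import numpy as np

def pure_pursuit_steer(pose, path, lookahead=2.0, wheelbase=1.5):
    """Return a steering angle [rad] toward the first path point at least
    `lookahead` metres ahead of the current pose (simplified selection rule).

    pose : (x, y, yaw) estimated by the position estimation method
    path : (N, 2) array of waypoints in the same field coordinate system
    lookahead, wheelbase : placeholder values, not the real vehicle parameters
    """
    x, y, yaw = pose
    # Pick the first waypoint farther than the look-ahead distance.
    d = np.hypot(path[:, 0] - x, path[:, 1] - y)
    idx = np.argmax(d >= lookahead) if np.any(d >= lookahead) else len(path) - 1
    tx, ty = path[idx]
    # Angle of the target point seen from the vehicle frame.
    alpha = np.arctan2(ty - y, tx - x) - yaw
    # Classic pure pursuit curvature-to-steering relation.
    return np.arctan2(2.0 * wheelbase * np.sin(alpha), lookahead)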

Fig. 1

System configuration for autonomous driving of small construction vehicles

Crawler dump truck

A crawler dump truck is a rough-terrain transport vehicle used for earth-moving operations. An IHI IC35 was used in this study. A photograph of the IC35 is shown in Fig. 2. The dimensions of the IC35, excluding the driver’s seat, were 3.20 m in length, 1.52 m in width, and 1.68 m in height. The load capacity was 3 tons, and the machine mass was 2.2 tons. The average traveling speed was 6 km/h at low speed and 10 km/h at high speed. The left and right crawlers can move forward and backward independently, with two control levers linked to forward and backward movement.

Wheel loader

A wheel loader is a construction vehicle used for excavating and loading earth. A Komatsu WA30 was used in this study. A photograph of the wheel loader is shown in Fig. 3. The dimensions of the WA30, excluding the driver’s seat, were 4.03 m in length, 1.50 m in width, and 1.86 m in height. The bucket capacity was 0.4 \(m^3\), and the machine mass was 1.95 tons. The maximum traveling speed was 15 km/h. The machine steers with an articulation joint that folds between the body and the work equipment when the steering wheel is turned, and the articulation angle was 40\(^{\circ }\). In addition to the gas pedal and brake, the operator has a control lever that moves back and forth and left and right to operate the work equipment.

Fig. 2

Crawler dump truck (IHI IC35)

Fig. 3

Wheel loader (Komatsu WA30)

Position estimation

Fig. 4

Flowchart of our proposed position estimation method

This position estimation method extracts the shape of a construction vehicle from point clouds acquired by 3D LiDARs installed at multiple locations and then matches it with a 3D model of the construction vehicle to estimate its position, as shown in Fig. 4.

Merging point cloud

To match the 3D model, it is necessary to merge the point clouds obtained from multiple 3D LiDARs into a single point cloud. For this purpose, the local coordinate system of one LiDAR is defined as the origin, and appropriate coordinate transformations are performed for the position and orientation of each 3D LiDAR to merge them into a single point cloud. The merged point cloud can be expressed as

$$\begin{aligned} P = \{ q_jRot_i+T_i \mid q_j \in Q_i, 0 \leqq i < N \}, \end{aligned}$$
(1)

where P is the merged point cloud,

\(Q_i\) is the point cloud of the i-th 3D LiDAR,

\(q_j\) is the 3D position of the j-th point of \(Q_i\) point cloud,

N is the number of LiDARs installed in the field,

\(T_i\) is the translation vector of the transformation matrix of the i-th 3D LiDAR, and

\(Rot_i\) is the rotation matrix of the transformation of the i-th 3D LiDAR.
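As a concrete reading of Eq. (1), the following minimal Python sketch merges the per-LiDAR point clouds into one common frame; it assumes that each LiDAR's extrinsics are already known from calibration, and the variable names are ours rather than part of the implemented system.

import numpy as np

def merge_point_clouds(clouds, rotations, translations):
    """Merge per-LiDAR point clouds into one cloud in a common frame (Eq. 1).

    clouds       : list of (M_i, 3) arrays, the point cloud Q_i of each LiDAR
    rotations    : list of (3, 3) rotation matrices Rot_i
    translations : list of (3,) translation vectors T_i
    """
    merged = []
    for Q, Rot, T in zip(clouds, rotations, translations):
        # q_j Rot_i + T_i for every point q_j (row-vector convention, as in Eq. 1)
        merged.append(Q @ Rot + T)
    return np.vstack(merged)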

The positional relationships among the LiDARs are typically measured through a process called LiDAR calibration. This involves aligning the point clouds of a calibration object observed by the different LiDARs and making manual adjustments to determine \(Rot_i\) and \(T_i\). Specifically, for each sensor, a point cloud of an object of known shape placed in the area is obtained, and by matching the point cloud data with the known shape, the position and orientation of each sensor are obtained. These transformations represent the rotational and translational differences between the LiDARs in the coordinate system of one of the sensors.
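As an illustration of this calibration step, the sketch below estimates one LiDAR's pose by registering its observation of the known-shape calibration object against a reference model of that object with Open3D's ICP. It is a simplified stand-in for the manual adjustment described above; the correspondence threshold is an assumed value.

import numpy as np
import open3d as o3d

def calibrate_lidar(observed_object_points, reference_object_points, init=np.eye(4)):
    """Estimate Rot_i and T_i for one LiDAR from its view of a known-shape object.

    observed_object_points  : (N, 3) points of the calibration object seen by LiDAR i
    reference_object_points : (M, 3) points of the same object in the common frame
    init                    : rough initial guess of the sensor pose (identity by default)
    """
    source = o3d.geometry.PointCloud()
    source.points = o3d.utility.Vector3dVector(observed_object_points)
    target = o3d.geometry.PointCloud()
    target.points = o3d.utility.Vector3dVector(reference_object_points)
    result = o3d.pipelines.registration.registration_icp(
        source, target, 0.1, init,  # 0.1 m correspondence threshold (assumed value)
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    T = result.transformation        # 4x4 pose of LiDAR i in the common frame
    # Transposed so that the returned rotation follows the row-vector convention of Eq. (1).
    return T[:3, :3].T, T[:3, 3]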

To reduce the amount of processing, the merged point cloud is downsampled using a voxel grid filter (VGF), a downsampling method that divides the 3D space into grids with a sampling interval of D and approximates the points contained in each grid cell by their centroid. The merged point cloud is shown in Fig. 5. In this point cloud, the surface point clouds of the crawler dump truck and the sand pile can be observed. To ensure accurate and real-time estimation, it is crucial to set an appropriate value of D, because point cloud sparsity or density can greatly affect the results. Based on practical experience and the average speed of the target vehicles, we set D to 0.2 m. This value is derived from the distance the vehicle moves per unit time.
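The downsampling step can be reproduced with Open3D's voxel grid filter, as in the minimal sketch below; it assumes the merged cloud is a NumPy array, and the API shown is current Open3D, which may differ slightly between versions.

import numpy as np
import open3d as o3d

def downsample(merged_points, D=0.2):
    """Apply a voxel grid filter with sampling interval D [m].

    Each occupied D x D x D voxel is replaced by the centroid of the
    points it contains, which is what the VGF described above does.
    """
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(merged_points)
    down = pcd.voxel_down_sample(voxel_size=D)
    return np.asarray(down.points)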

Fig. 5

Merged point cloud. The surface point clouds of the crawler dump truck and of the earth-and-sand pile are visible

Update 3D model

For accurate matching of the obtained point cloud and 3D model, it is necessary to update the 3D model of the construction vehicle to match the moving parts of the actual vehicle. In particular, for a wheel loader, the point cloud model divided into four parts (arm, bucket, front wheel, and driver’s seat/rear wheel) must be combined based on the angle data of each joint acquired by the encoder and inertial measurement unit (IMU), and the 3D model must be updated. An example of updating the 3D model of a wheel loader is shown in Fig. 6.
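A simplified sketch of how the wheel loader model could be re-assembled from its four part clouds is shown below. The joint axes, pivot locations, and kinematic chain used here are illustrative assumptions for readability, not the actual geometry of the WA30 or the implemented update procedure.

import numpy as np

def rot_z(theta):
    """Homogeneous rotation about the vertical (z) axis by theta [rad]."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])

def rot_y(theta):
    """Homogeneous rotation about the lateral (y) axis by theta [rad]."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1]])

def transform(points, T):
    """Apply a 4x4 homogeneous transform to an (N, 3) point array."""
    homog = np.hstack([points, np.ones((len(points), 1))])
    return (homog @ T.T)[:, :3]

def update_wheel_loader_model(parts, articulation, arm_angle, bucket_angle):
    """Re-assemble the wheel loader model from joint angles (illustrative only).

    parts : dict of (N, 3) clouds for 'rear', 'front', 'arm', 'bucket',
            each expressed in its own part frame (pivot offsets omitted here)
    articulation, arm_angle, bucket_angle : joint angles [rad] from the
            encoders / IMU mounted on the vehicle
    """
    model = [parts["rear"]]                        # driver's seat / rear wheels
    T_front = rot_z(articulation)                  # articulated steering joint
    model.append(transform(parts["front"], T_front))
    T_arm = T_front @ rot_y(arm_angle)             # arm pivot on the front frame (assumed axis)
    model.append(transform(parts["arm"], T_arm))
    T_bucket = T_arm @ rot_y(bucket_angle)         # bucket pivot at the arm tip (assumed axis)
    model.append(transform(parts["bucket"], T_bucket))
    return np.vstack(model)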

Fig. 6

The 3D model of a wheel loader. It is updated using joint data obtained from the vehicle’s encoders and IMU

During the matching process, a large difference between the positions and orientations of the 3D model and the merged point cloud causes a mismatch. Therefore, the initial position and orientation of the 3D model are set using the previous estimation result, which provides a rough alignment. Finally, to prevent inaccurate matching due to missing points, the points of the 3D model outside the spheres of radius R centered on the points of the previously obtained point cloud around the vehicle are deleted, so that incorrect correspondences are not generated. This updates the 3D model so that it resembles the observed surface point cloud, which has missing points due to occlusion. This study refers to this method as the remodeling using a predictive model (RM) algorithm [19]. Figure 7 shows a conceptual scheme of the RM algorithm.
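One possible realization of the RM step is sketched below using a k-d tree nearest-neighbor query; the use of SciPy and the value of R are our assumptions for illustration.

import numpy as np
from scipy.spatial import cKDTree

def remodel_with_prediction(model_points, prev_observed_points, R=0.5):
    """RM step: keep only model points within radius R of at least one point
    of the previously observed cloud around the vehicle (R is a placeholder).

    This removes model regions that the LiDARs could not see in the previous
    frame, so ICP does not create correspondences for occluded parts.
    """
    tree = cKDTree(prev_observed_points)
    # Distance from each model point to its nearest previously observed point.
    dist, _ = tree.query(model_points, k=1)
    return model_points[dist <= R]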

Fig. 7

Conceptual scheme of the RM algorithm. Points of the 3D model outside the spheres of radius R centered on the points of the point cloud around the vehicle obtained at the previous time are removed

Matching with 3D model

Matching the 3D model with the merged point cloud to obtain the translation and rotation matrices enables position estimation. The iterative closest point (ICP) algorithm is used for matching [20]. ICP requires a rough initial alignment: for the second and later estimations, the 3D model is already roughly aligned as described in the "Update 3D model" section, whereas for the first estimation, the position and orientation are assigned to each vehicle manually. This allows multiple construction vehicles of different models to be estimated, because each vehicle has its own starting point for estimation and its surrounding point cloud continues to be captured.
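A minimal Open3D sketch of this matching step is shown below: the previous pose estimate seeds the ICP initial transform, and the resulting transformation gives the updated vehicle pose. The correspondence distance threshold is an assumed value, not one taken from the implemented system.

import numpy as np
import open3d as o3d

def estimate_pose(model_points, merged_points, prev_pose, max_corr_dist=0.5):
    """Match the (remodelled) 3D vehicle model against the merged field cloud.

    prev_pose     : 4x4 homogeneous matrix from the previous estimation,
                    used as the rough initial alignment required by ICP
    max_corr_dist : assumed correspondence threshold [m]
    """
    source = o3d.geometry.PointCloud()
    source.points = o3d.utility.Vector3dVector(model_points)
    target = o3d.geometry.PointCloud()
    target.points = o3d.utility.Vector3dVector(merged_points)
    result = o3d.pipelines.registration.registration_icp(
        source, target, max_corr_dist, prev_pose,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation  # new vehicle pose in the merged-cloud frame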

Demonstration

In this section, experiments conducted under the assumption of an environment where GNSS is not available verify that the position estimates obtained by the proposed method can be applied to automate earth-moving operations. Accordingly, real-time kinematic GNSS (RTK-GNSS) is used only for evaluation in these experiments.

Demonstration setup

The demonstration was conducted in the field shown in Fig. 8. The field has a pile of soil in the center and an area of 50 m in the X direction and 25 m in the Y direction. The small construction vehicles, a crawler dump truck (CD) and a wheel loader (WL), equipped with the retrofit devices described in the "Autonomous driving system" section, were used in the demonstration.

The 3D model of each construction vehicle was created using Open3D’s reconstruction system from depth images captured with a RealSense D435i. The Robosense RS-LiDAR-M1 was used as the 3D LiDAR. This is a solid-state LiDAR that can obtain high-resolution point clouds with horizontal and vertical resolutions of 0.2\(^{\circ }\). Two LiDARs were placed 54 m apart, facing each other.
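For reference, a minimal Open3D sketch of turning one RealSense color/depth frame into a point cloud fragment is shown below. The full reconstruction system registers and fuses many such fragments; the intrinsic parameters used here are Open3D defaults rather than the calibrated D435i values, and the file paths are hypothetical.

import open3d as o3d

def fragment_from_rgbd(color_path, depth_path):
    """Build one point cloud fragment from a saved RealSense color/depth pair.

    This shows only a single frame; the actual vehicle models were built with
    Open3D's reconstruction system from many such frames.
    """
    color = o3d.io.read_image(color_path)
    depth = o3d.io.read_image(depth_path)
    rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
        color, depth, convert_rgb_to_intensity=False)
    # Default PrimeSense intrinsics as a stand-in for the calibrated D435i intrinsics.
    intrinsic = o3d.camera.PinholeCameraIntrinsic(
        o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault)
    return o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, intrinsic)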

In this study, we evaluated the position estimation accuracy using the mean absolute error (MAE), which is the average of the absolute errors between the estimated positions and the true values. RTK-GNSS estimates were used as the true values in the position estimation evaluation. In this demonstration, the yaw angle was also calculated using two RTK-GNSS receivers.
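The evaluation metrics can be written compactly as in the sketch below, assuming time-aligned series of estimated and RTK-GNSS positions and two GNSS antennas mounted fore and aft on the vehicle (the antenna layout is our assumption).

import numpy as np

def mae(estimated, truth):
    """Mean absolute error and its standard deviation against RTK-GNSS positions.

    estimated, truth : (N, 2) arrays of time-aligned XY positions [m]
    """
    errors = np.linalg.norm(estimated - truth, axis=1)
    return errors.mean(), errors.std()

def yaw_from_two_gnss(front_antenna, rear_antenna):
    """Reference yaw angle [rad] from two RTK-GNSS antennas (assumed fore/aft layout).

    front_antenna, rear_antenna : (N, 2) arrays of antenna XY positions [m]
    """
    delta = front_antenna - rear_antenna
    return np.arctan2(delta[:, 1], delta[:, 0])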

The laptop used for position estimation in this experiment had an AMD Ryzen 9 4900HS CPU and 16 GB of memory, and all programs ran on ROS Noetic.

Fig. 8

Demonstration field. The 3D LiDARs are placed facing each other across the work area where the construction vehicles travel

Case 1: results of estimation of position at standstill

In this case, we verified the accuracy of the proposed position estimation method when the vehicles were stopped. The wheel loader and crawler dump truck, which are small construction vehicles, were stopped at specified positions, and the estimation was performed for approximately 20 s. The merged point cloud is shown in Fig. 9.

The position error results for the crawler dump truck and the wheel loader relative to RTK-GNSS are shown in Figs. 10 and 11, respectively. The yaw angle error results relative to RTK-GNSS are shown in Figs. 12 and 13, respectively. The MAE\(\pm {\sigma {}}\) calculated from the estimated values obtained using this method was \(0.031\pm {0.025}\) m for the crawler dump truck and \(0.111\pm {0.024}\) m for the wheel loader. The yaw angle MAEs were \(0.014\pm {0.002}\) and \(0.006\pm {0.011}\) rad, respectively. These results indicate that the estimation method is highly accurate. In particular, as shown in Figs. 12 and 13, the yaw angle estimated by this method has a smaller variance than the angle obtained from RTK-GNSS, indicating that the estimation is accurate. These experimental results confirm that there is a steady-state error even though the vehicles are stationary. This may be caused by an error between the coordinate system of the merged point cloud and that of the GNSS. The GNSS coordinate system was adjusted to align with the origin of the merged point cloud so that the GNSS values could be used as true values, but an error may have occurred when installing the GNSS antennas on the actual vehicles. In addition, there were temporary increases in the estimation error (as shown in Figs. 10 and 11). These can be attributed to noise in the point cloud caused by sand dust raised by the movement of the crawlers, which reflects the laser from the 3D LiDAR. Furthermore, the RM algorithm may update the model incorrectly if sand dust overlaps occluded spots and a point cloud appears there temporarily; in this case, the matching accuracy is expected to worsen. These issues are considered further in the Discussion section. In addition, the positional accuracy of the wheel loader is lower than that of the crawler dump truck, although no significant difference was observed in the maximum position error. The high maximum yaw angle error of the wheel loader may be due to error in the RTK-GNSS used as the true value.

Fig. 9

Merged point cloud obtained when the vehicles are stopped

Fig. 10

Position error results of the estimated positions compared to RTK-GNSS for a crawler dump truck

Fig. 11

Position error results of the estimated positions compared to RTK-GNSS for a wheel loader

Fig. 12

Yaw angle error results of the estimated positions compared to RTK-GNSS for a crawler dump truck at case 1

Fig. 13

Yaw angle error results of the estimated positions compared to RTK-GNSS for a wheel loader at case 1

Case 2: results of the estimation of the driving position

In this case, a driving plan for an earth-moving task is executed to verify the accuracy of the position estimation while the vehicles move.

The series of sediment transport operations performed is shown in Figs. 14 and 15. From 1 to 3, the wheel loader scoops the sediment. From 4 to 5, the wheel loader moves toward the crawler dump truck at the sand-loading position. From 6 to 8, the wheel loader loads the earth onto the bed of the crawler dump truck. From 9 to 10, the wheel loader moves to the loading point. For each task, a path was given, and tracking control was performed using the pure pursuit method based on the estimated positions.

The position estimation results for the crawler dump truck and the wheel loader and the positions obtained from RTK-GNSS are shown in Figs. 16 and 17, respectively. The yaw angle estimation results and the yaw angles obtained from RTK-GNSS are shown in Figs. 18 and 19, respectively.

The MAE\(\pm {\sigma {}}\) calculated from the estimated values obtained using this method was \(0.089\pm {0.067}\) m for the crawler dump truck and \(0.115\pm {0.052}\) m for the wheel loader. The yaw angle MAEs were \(0.032\pm {0.027}\) rad and \(0.097\pm {0.065}\) rad, respectively. For both construction vehicles, the accuracy was lower while moving than while stationary. These experimental results indicate relatively large errors in the position estimation. In particular, the errors are larger around X=42, Y=12 in Fig. 16 and around X=18, Y=13 in Fig. 17 than in other areas. These areas correspond to turning operations, and we consider the matching errors to be caused by the sand dust generated during turning, for the same reason as in the "Case 1: results of estimation of position at standstill" section. In addition, the larger error for the wheel loader compared with the crawler dump truck may be attributed to the time delay of the encoder data for the relative joint posture transmitted over Wi-Fi. This time delay may create a difference between the actual shape of the vehicle and the shape of the 3D model, potentially leading to errors in the matching process. However, there was no situation in which the position estimation failed and the vehicles were lost.

Finally, the results of the experiments are summarized. The MAE and maximum error for each position and yaw angle are listed in Table 1. The results for Cases 1 and 2 show that the error is larger while moving than while stationary. However, the proposed position estimation method could still be used for tracking control, indicating that these errors are at a level sufficient for autonomous driving.

Table 1 MAE and maximum error results for position and yaw angle in Case 1 and Case 2
Fig. 14

Sequentially numbered images of point clouds during case 2 demonstration

Fig. 15

Sequentially numbered images of video during case 2 demonstration

Fig. 16

Path comparison of the crawler dump truck estimated position and GNSS true position

Fig. 17

Path comparison of the wheel loader estimated position and GNSS true position

Fig. 18

Yaw angle error results of the estimated positions compared to RTK-GNSS for a crawler dump truck at case 2

Fig. 19

Yaw angle error results of the estimated positions compared to RTK-GNSS for a wheel loader at case 2

Discussion

The proposed position estimation method estimates the positions of construction vehicles based on the shape of the vehicle. Therefore, the proposed method can estimate the positions of vehicles in plains, caves, and valleys with few features, where communication is hampered or the sky above the vehicle is blocked.

In Japan, the required accuracy for maneuvering is \(\pm {0.4}\) m for positional accuracy and \(\pm {0.1}\) rad for orientation accuracy, based on the course standards specified for practical training of rough-terrain vehicles. The experimentally verified estimation accuracy meets these accuracy requirements and is sufficient for autonomous driving.

However, this required accuracy is at a minimum level at which work can be performed in the field, and further accuracy is required for optimal work. This method still has issues to be addressed.

The demonstration results (see Table 1) show that the position estimation accuracy is lower for the wheel loader than for the crawler dump truck. This error can be attributed to the following two factors. The first is error in the point cloud model. The 3D model directly uses the point cloud created by Open3D’s reconstruction system, which contains variations in the unevenness of the surface. Because the wheel loader has a complex shape with multiple moving parts, these variations may have caused errors during the matching process. In addition, errors between the joint positions of the model parts and the joint positions of the real vehicle are also considered to have an effect. In contrast, the crawler dump truck has a simpler shape than the wheel loader, which may have resulted in smaller matching errors. The second is the downsampling effect. The arm of the wheel loader is approximately 30 cm wide at its narrowest point. Therefore, if the sampling interval is large, the number of points on the wheel loader becomes small, and there may not be enough points for accurate matching. However, a smaller sampling interval increases the computational complexity and affects the real-time performance. These factors will be addressed in future studies.

Fig. 20

A point cloud capturing the sand dust raised by a vehicle’s traveling (highlighted with white circles)

Our proposed method has the following limitations based on the results of the demonstration.

  • Influence of point cloud measurement error on estimation accuracy: In our proposed method, position estimation is performed using point cloud matching, which can be influenced by measurement errors in the obtained point cloud. In real-world environments, measurement errors can occur due to environmental factors such as sand dust and rain. For example, as shown in Fig. 20, dust can be seen rising from the area where the vehicle has traveled. When estimating under occlusion, this sand dust can cause the vehicle’s position to be lost or the vehicle to be misidentified as moving.

  • Increase in calculation cost with the number of vehicles: As the number of vehicles increases, the computation time for position estimation using point cloud matching increases monotonically because more models must be matched. Using the vehicle models employed in this experiment, the estimated position of a crawler dump truck can be obtained in approximately 8 ms, while that of a wheel loader takes about 15 ms. The number of vehicles that can be estimated simultaneously depends on the scan period and the computing power. For example, at a scan rate of 10 Hz, our 3D LiDAR, algorithm, and CPU can estimate up to 12 crawler dump trucks within one scan period (see the worked relation after this list). We believe that a higher number of vehicles could be handled by upgrading the CPU or parallelizing the system.

  • Measurement range limitation: The measurement range of each 3D LiDAR unit is limited in order to maintain a point cloud density sufficient for matching. Assuming that the distance between adjacent points must be kept within 0.2 m, the range is limited to approximately 57 m (see the worked relation after this list). We consider that more 3D LiDAR units will be needed to cover the large area of a construction site. Additionally, an algorithm to optimize their placement to reduce occlusion will be needed.
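As a rough check of the two limits quoted above, both follow from simple relations using the scan rate, the per-vehicle matching time (the crawler dump truck value is used here), the angular resolution, and the point-spacing requirement:

$$\begin{aligned} N_{\max } \approx \frac{1/f_{scan}}{t_{match}} = \frac{100\ \text {ms}}{8\ \text {ms}} \approx 12, \qquad r_{\max } \approx \frac{d_{max}}{\Delta \theta } = \frac{0.2\ \text {m}}{0.2^{\circ }\times \pi /180} \approx 57\ \text {m}. \end{aligned}$$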

Conclusion

In this paper, we proposed and implemented a position estimation method for multiple small construction vehicles using point clouds obtained from 3D LiDARs installed in the work environment and 3D models of the construction vehicles. Demonstrations on a rough sand field using a crawler dump truck and a wheel loader, even under conditions where sand dust was generated, verified that our method can estimate the positions of two different types of vehicles with an accuracy that satisfies the requirement for controlling the vehicles.

Availability of data and materials

Not applicable

References

  1. Komatsu T, Konno Y, Kiribayashi S, Nagatani K, Suzuki T, Ohno K, Suzuki T, Miyamoto N, Shibata Y, Asano K (2021) Autonomous driving of six-wheeled dump truck with a retrofitted robot. In: Field and Service Robotics. Springer, Singapore


  2. Shi J, Sun D, Qin D, Hu M, Kan Y, Ma K, Chen R (2020) Planning the trajectory of an autonomous wheel loader and tracking its trajectory via adaptive model predictive control. Robot Auton Syst 131:103570


  3. Kawabe T, Takei T, Imanishi E (2021) Path planning to expedite the complete transfer of distributed gravel piles with an automated wheel loader. Adv Robot 35(23):1418–1437


  4. Matsumoto K, Yamaguchi A, Oka T, Yasumoto M, Hara S, Iida M, Teichmann M (2020) Simulation-based reinforcement learning approach towards construction machine automation. In: Proceedings of the 37th International Symposium on Automation and Robotics in Construction (ISARC), pp. 457–464. International Association for Automation and Robotics in Construction (IAARC), Kitakyushu, Japan. https://doi.org/10.22260/ISARC2020/0064

  5. Ikai N (2021) The horizontal error of rtk-gnss positioning under a forest canopy and the evaluation of the effects of gnss receiver settings on the error. J Jpn Forest Soc 103(6):395–400. https://doi.org/10.4005/jjfs.103.395


  6. Alsayed Z, Bresson G, Verroust-Blondet A, Nashashibi F (2017) Failure detection for laser-based slam in urban and peri-urban environments. In: 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), pp. 1–7. https://doi.org/10.1109/ITSC.2017.8317824

  7. Vaidis M, Giguère P, Pomerleau F, Kubelka V (2021) Accurate outdoor ground truth based on total stations. CoRR abs/2104.14396. arXiv:2104.14396

  8. Shamsfakhr F, Motroni A, Palopoli L, Buffi A, Nepa P, Fontanelli D (2021) Robot localisation using UHF-RFID tags: a Kalman smoother approach. Sensors. https://doi.org/10.3390/s21030717


  9. Smith AD, Chang HJ, Blanchard EJ (2012) An outdoor high-accuracy local positioning system for an autonomous robotic golf greens mower. In: 2012 IEEE International Conference on Robotics and Automation, Saint Paul, MN, USA, pp 2633–2639. https://doi.org/10.1109/ICRA.2012.6224990

  10. Kikuchi K, Nagatani K, Komatsu T, Kiribayashi S, Asano K, Shibata Y, Ohno K, Suzuki T, Hirata Y (2020) Position estimation using environment-installed laser range finders and traveling control for autonomous surface compression work of vibration roller. J Robot Soc Japan 38(9):872–881. https://doi.org/10.7210/jrsj.38.872


  11. Zhang T, Jin PJ (2022) Roadside LiDAR vehicle detection and tracking using range and intensity background subtraction. J Adv Transp 2022:1–14. https://doi.org/10.1155/2022/2771085

  12. Zhang J, Xiao W, Coifman B, Mills JP (2020) Vehicle tracking and speed estimation from roadside lidar. IEEE J Sel Top Appl Earth Obs Remote Sens 13:5597–5608


  13. Yamada H, Ahn J, Mozos OM, Iwashita Y, Kurazume R (2020) Gait-based person identification using 3d lidar and long short-term memory deep networks. Adv Robot 34(18):1201–1211. https://doi.org/10.1080/01691864.2020.1793812


  14. Lang AH, Vora S, Caesar H, Zhou L, Yang J, Beijbom O (2019) PointPillars: fast encoders for object detection from point clouds. In: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, pp 12689–12697. https://doi.org/10.1109/CVPR.2019.01298

  15. Gu B, Liu J, Xiong H, Li T, Pan Y (2021) ECPC-ICP: a 6D vehicle pose estimation method by fusing the roadside LiDAR point cloud and road feature. Sensors 21(10):3489. https://doi.org/10.3390/s21103489


  16. Inagawa M, Kawabe T, Takei T (2022) Localization and path following by using installed 3dlidars for automated crawler dump. The Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) 2022:1–103

  17. Kawabe T, Inagawa M, Takei T (2022) Implementation of the remote control system for retrofitted wheel loaders. The Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) 2022:1–105

  18. Coulter RC (1992) Implementation of the pure pursuit path tracking algorithm. Technical Report CMU-RI-TR-92-01, Robotics Institute, Carnegie Mellon University, Pittsburgh, PA

  19. Inagawa M, Kawabe T, Takei T (2023) Demonstration of localization for construction vehicles using 3D LiDARs installed in the field. J Field Robot. https://doi.org/10.1002/rob.22211

  20. Rusinkiewicz S, Levoy M (2001) Efficient variants of the icp algorithm. In: Proceedings Third International Conference on 3-D Digital Imaging and Modeling, pp. 145–152. https://doi.org/10.1109/IM.2001.924423


Acknowledgements

Not applicable

Funding

This study was supported by Japan Science and Technology Agency (JST), Moonshot Research and Development Program, Grant No. JPMJMS2032.

Author information


Contributions

The first author conducted the study under the supervision of the third author. The first and third authors wrote the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Masahiro Inagawa.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

All authors consent for the publication of the manuscript in Robomech Journal.

Competing interests

The authors declare no competing financial interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Inagawa, M., Kawabe, T., Takei, T. et al. Demonstration of position estimation for multiple construction vehicles of different models by using 3D LiDARs installed in the field. Robomech J 10, 15 (2023). https://doi.org/10.1186/s40648-023-00252-0
