| Title |
Multi-sensor Fusion Vision Algorithm for Robot Autonomous Mobility Enhancement |
| DOI |
https://doi.org/10.5573/IEIESPC.2026.15.2.293 |
| Keywords |
Robotics; Sensors; SLAM; Feature fusion; Inertial measurement unit |
| Abstract |
A common approach to improving the autonomous mobility of wheeled robots is to employ simultaneous localization and mapping (SLAM) algorithms. However, this approach suffers from accuracy degradation in situations such as low-texture environments and robot steering. To address this problem and enhance the autonomous mobility of wheeled robots, an improved SLAM algorithm based on multi-feature fusion of points, lines, and planes is designed. To overcome the shortcomings of this multi-feature fusion algorithm during robot steering, a further algorithm based on multi-sensor fusion is designed. Experiments showed that the improved point-line-plane SLAM algorithm achieved maximum and minimum root mean square errors of 0.058 m and 0.015 m, respectively, and experienced no tracking loss across the different data packets, whereas the pre-improvement algorithm lost tracking three times. On the indoor_general_quad packet, the improved multi-sensor fusion algorithm's running time was 97.8 ms with loop-closure detection and 73.2 ms without it. Both improved algorithms designed in the study perform well and can provide technical support for improving the autonomous mobility of wheeled robots. |
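The root mean square errors reported in the abstract are the standard way SLAM trajectory accuracy is summarized: the RMSE of the position error between the estimated and ground-truth trajectories (absolute trajectory error). The following is a minimal illustrative sketch of that metric, not the paper's actual evaluation code; the function name and the sample trajectories are hypothetical.

```python
import math

def trajectory_rmse(estimated, ground_truth):
    """RMSE of position error between aligned estimated and
    ground-truth poses (absolute trajectory error, ATE).
    Each argument is a list of equal-length coordinate tuples."""
    assert len(estimated) == len(ground_truth)
    squared_errors = [
        sum((e - g) ** 2 for e, g in zip(est_pt, gt_pt))
        for est_pt, gt_pt in zip(estimated, ground_truth)
    ]
    return math.sqrt(sum(squared_errors) / len(squared_errors))

# Hypothetical 2D example: the estimate drifts 0.03 m in x at every pose.
est = [(0.03, 0.0), (1.03, 0.0), (2.03, 0.0)]
gt = [(0.00, 0.0), (1.00, 0.0), (2.00, 0.0)]
print(trajectory_rmse(est, gt))  # 0.03
```

In practice the estimated trajectory is first rigidly aligned to the ground truth (e.g. via a least-squares fit) before the RMSE is computed, so that a global offset does not dominate the error.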