New Algorithm Gives Autonomous Driving Devices “Eyes” and “Brains”

Point cloud registration process based on voxel plane features: (a) initial source and target point clouds; (b) point clouds divided by an octree; (c) extracted planar features; (d) an example of computing the rotation matrix from three pairs of corresponding plane features; (e) an example of transformation matrix clustering, where position represents the translation vector and color represents the rotation matrix; (f) quick validation of the transformation matrix, where gray indicates that no matching coplanar plane was found; (g) fine verification of the transformation matrix, where colored squares represent valid voxels; (h) registration results. Courtesy of Fuzhou University

(a) Before point cloud registration; (b) after point cloud registration; (c) and (d) local details after registration. Courtesy of Fuzhou University

Global localization combining plane features and enhanced descriptors: (a) extracted planar features; (b) combined plane features; (c) enhanced descriptors. Courtesy of Fuzhou University

A research team at Fuzhou University has proposed a point cloud registration and localization method that extracts planar features using voxels and adaptive-threshold region growing. The approach, in effect, gives unmanned devices "eyes" and "brains," and is one of the key technologies for achieving autonomous driving. The results were published online on May 5 in the ISPRS Journal of Photogrammetry and Remote Sensing, a leading international journal in photogrammetry and remote sensing, under the title "Point Cloud Registration and Localization Based on Voxel Plane Features." The first author is Li Jianwei, an associate researcher at Fuzhou University; the corresponding author is Associate Professor Wang Qianfeng.
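To make the core idea concrete, here is a minimal sketch of voxel-based planar feature extraction: points are binned into a voxel grid, and each voxel whose points are nearly flat (tiny smallest PCA eigenvalue) yields a plane feature. The function name, thresholds, and the simple per-voxel PCA test are illustrative assumptions, not the paper's actual implementation, which additionally uses adaptive-threshold region growing.

```python
# Illustrative sketch (not the paper's code): extract planar features
# by voxelizing a point cloud and running a PCA planarity test per voxel.
import numpy as np

def extract_voxel_planes(points, voxel_size=1.0, planarity_thresh=0.01, min_pts=10):
    """Group points into voxels; fit a plane per voxel with PCA.

    Returns a list of (centroid, normal) for voxels judged planar,
    i.e. where the variance along the smallest PCA axis is small
    relative to the total variance.
    """
    keys = np.floor(np.asarray(points) / voxel_size).astype(np.int64)
    voxels = {}
    for key, p in zip(map(tuple, keys), points):
        voxels.setdefault(key, []).append(p)

    planes = []
    for pts in voxels.values():
        pts = np.asarray(pts)
        if len(pts) < min_pts:
            continue  # too few points for a reliable fit
        centroid = pts.mean(axis=0)
        cov = np.cov((pts - centroid).T)
        evals, evecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
        # Planar voxel: almost no spread along the smallest principal axis.
        if evals[0] / (evals.sum() + 1e-12) < planarity_thresh:
            planes.append((centroid, evecs[:, 0]))  # normal = smallest eigenvector
    return planes
```

A voxel covering a wall or road surface passes the planarity test and contributes one (centroid, normal) pair; cluttered voxels are discarded, which is what makes the subsequent matching compact and fast.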

One of the core technologies of autonomous driving, as represented by unmanned vehicles, is simultaneous localization and mapping (SLAM), which remains one of the hardest problems in artificial intelligence and automation. The Fuzhou University team has made progress in this field by proposing an efficient method for extracting point cloud features, and by building on those features a coarse registration framework and a global localization method, which together allow a device to reconstruct the three-dimensional environment and determine its own pose relative to it.
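Panel (d) of the first figure refers to estimating a rotation matrix from three pairs of corresponding plane features. A standard way to solve that sub-step is SVD-based alignment of the corresponding plane normals (the Kabsch method); the sketch below shows this textbook solver as an illustration, and the paper's actual solver may differ.

```python
# Illustrative sketch: recover the rotation between two scans from
# three (or more) pairs of corresponding unit plane normals, using
# the standard SVD (Kabsch) alignment.
import numpy as np

def rotation_from_normals(src_normals, tgt_normals):
    """Return R minimizing sum_i ||R n_src_i - n_tgt_i||^2.

    Rows of src_normals/tgt_normals are corresponding unit normals;
    at least three non-parallel pairs are needed for a unique R.
    """
    S = np.asarray(src_normals, dtype=float)
    T = np.asarray(tgt_normals, dtype=float)
    H = S.T @ T                       # 3x3 cross-covariance of normals
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R                          # maps source normals onto target normals
```

Because plane normals are insensitive to where points were sampled on each plane, solving for the rotation from normals first, and the translation afterwards, is a common way to make plane-based registration robust to partial overlap.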

According to the team, the algorithm's registration success rate exceeds 96%, making it one of the best-performing registration methods in the field, and its localization success rate has also improved markedly, exceeding 91%. The algorithm lets unmanned equipment perceive and reconstruct its surroundings in real time and determine its current position and pose, and its fast computation gives devices strong adaptability, with broad application prospects in robot navigation, autonomous driving, and augmented reality. Moreover, by fusing different methods at the feature extraction level, the algorithm meets the demands of pose estimation in larger scenes, in less time, and with higher precision. (Source: China Science Daily, Wen Caifei)
