Introduction
With the continuous development of advanced driver assistance system (ADAS) technology in recent years, the demand for real-time, accurate vehicle environment perception has grown steadily (Y. L. Zhang et al., 2022; Hu et al., 2019; Miao et al., 2020). Before ADAS technology can be deployed safely, simulation and validation of the relevant environment perception algorithms are essential to ensure their safety and stability. Prescan, an autonomous driving scenario simulation package developed by TASS International, offers simple operation, complete vehicle sensor models, and controllable weather visualization; it can build customized autonomous driving scenarios on demand and occupies an important place in the field of autonomous driving scenario simulation (C. S. Wang et al., 2021). A current focus for many researchers is therefore achieving real-time, accurate vehicle image stitching in a laboratory environment, so that the range of vehicle environment perception can be maximized from existing information at low cost and the relevant environment perception algorithms can be tested.
Considering the application scenario in this paper, the vehicle cameras are not always at the same height because of ground undulations and potholes encountered while driving, so extracting feature points with strong rotation invariance is especially important. Among traditional image stitching techniques, the scale-invariant feature transform (SIFT) algorithm proposed by Lowe (2004) performs best in terms of scale and rotation invariance. Therefore, this paper studies vehicle image stitching based on the SIFT algorithm. Although the algorithm's scale and rotation invariance are superior, it is computationally complex and expensive (Y. Y. Liu et al., 2022). To address this problem, Y. L. Zhang and Xie (2021) extended the original 26 reference points in the Gaussian difference space to 32 reference points within the Manhattan distance and reconstructed the descriptor model, reducing the traditional 128-dimensional SIFT feature descriptor to 64 dimensions and effectively lowering the complexity of the algorithm.
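To make the cost argument concrete, the sketch below shows nearest-neighbour descriptor matching with Lowe's ratio test in plain NumPy. This is an illustrative sketch, not the cited authors' implementation: the function name `ratio_test_match` and the default ratio of 0.8 are assumptions, and the pairwise-distance step makes clear why halving the descriptor dimension (128-D to 64-D) roughly halves the matching cost.

```python
import numpy as np

def ratio_test_match(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour matching with Lowe's ratio test.

    desc_a: (n, d) array of query descriptors (e.g. 128-D SIFT vectors).
    desc_b: (m, d) array of candidate descriptors.
    Returns a list of (i, j) index pairs that pass the ratio test.
    """
    # Pairwise squared Euclidean distances. The cost grows linearly
    # with the descriptor dimension d, which is why reducing 128-D
    # SIFT descriptors to 64-D speeds up matching.
    d2 = np.sum((desc_a[:, None, :] - desc_b[None, :, :]) ** 2, axis=2)
    matches = []
    for i, row in enumerate(d2):
        j1, j2 = np.argsort(row)[:2]          # nearest and second nearest
        if row[j1] < (ratio ** 2) * row[j2]:  # squared-distance ratio test
            matches.append((i, j1))
    return matches
```

The ratio test rejects ambiguous matches whose nearest neighbour is not clearly closer than the second nearest, which is the standard way to filter SIFT correspondences before geometric verification.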
Y. Liu et al. (2022) used a bidirectional k-nearest neighbor matching algorithm to increase the correct matching rate and applied the random sample consensus (RANSAC) algorithm to filter outliers, obtaining a more accurate homography matrix. Yan and Ma (2022) used the singular value decomposition (SVD) algorithm to reduce the dimensionality of the SIFT feature vectors and the k-dimensional tree (k-d tree) algorithm for feature matching, obtaining an image without stitching gaps. Yang et al. (2021) adopted the features from accelerated segment test (FAST) algorithm to extract corner points, used the speeded-up robust features (SURF) operator to determine the main direction and generate descriptors, and added a pre-sampling step to the RANSAC algorithm to remove mismatched pairs, which effectively improved the matching accuracy of the algorithm.
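The RANSAC step referenced above can be sketched as follows: repeatedly fit a homography to a minimal random sample of four correspondences via the direct linear transform (DLT), keep the hypothesis with the most inliers, and refit on those inliers. This is a minimal NumPy sketch of the standard technique, not any cited author's code; the function names, iteration count, and pixel threshold are assumptions.

```python
import numpy as np

def fit_homography(src, dst):
    """Direct linear transform (DLT): estimate the 3x3 homography H
    mapping src -> dst from at least 4 point correspondences."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # H is the null vector of the stacked system, i.e. the last right
    # singular vector from the SVD.
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def ransac_homography(src, dst, n_iter=500, thresh=3.0, seed=0):
    """RANSAC outlier filtering: fit H to random minimal samples of 4
    pairs, keep the hypothesis with the most inliers (reprojection
    error below `thresh` pixels), then refit H on all its inliers."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    src_h = np.column_stack([src, np.ones(len(src))])  # homogeneous coords
    for _ in range(n_iter):
        idx = rng.choice(len(src), size=4, replace=False)
        H = fit_homography(src[idx], dst[idx])
        pts = src_h @ H.T
        with np.errstate(divide="ignore", invalid="ignore"):
            proj = pts[:, :2] / pts[:, 2:3]   # back to inhomogeneous coords
            err = np.linalg.norm(proj - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best.sum():
            best = inliers
    return fit_homography(src[best], dst[best]), best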