Abstract
Autonomous vehicles utilize various sensors, such as cameras, radars, and LiDAR, to guarantee precise and reliable perception in diverse environments. However, because no single sensor can fulfill every perception task, sensor fusion that exploits the strengths and compensates for the weaknesses of each sensor in comprehensive environment analysis has been actively researched. This paper investigated the fusion of the camera's visual information and the radar's distance data to achieve more accurate and efficient environmental perception while driving. The study focused on the fusion method in terms of both data representation and processing speed, combining radar detection points with the camera image plane to generate fused feature maps. An efficient approach was proposed that leverages the grid-based prediction structure of the YOLOv5 network for selective grid detection, and its effectiveness was explored. To evaluate the performance of the proposed method, YOLOv5-based detection models were trained and validated on camera RGB images and on images fused with radar data. Furthermore, the impact of the selective grid detection method on processing speed was assessed by comparing the average processing speed over the validation dataset. Experimental results confirmed both the effectiveness and the efficiency of the proposed approach for object detection.
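The following is a minimal sketch, not the paper's exact pipeline, of the kind of camera-radar fusion the abstract describes: radar detection points are projected onto the camera image plane with a pinhole model and rasterized into an additional distance channel stacked onto the RGB input. The intrinsics `K`, the extrinsic transform `T_cam_from_radar`, the `max_range` normalization, and the rasterization `radius` are illustrative assumptions, not values from the study.

```python
# Hedged sketch: projecting radar points onto the image plane and building a
# fused RGB + radar-distance input. All calibration values below are assumed.
import numpy as np

def project_radar_to_image(points_xyz, K, T_cam_from_radar):
    """Project Nx3 radar points (radar frame, meters) into pixel coordinates."""
    n = points_xyz.shape[0]
    homo = np.hstack([points_xyz, np.ones((n, 1))])       # Nx4 homogeneous points
    cam = (T_cam_from_radar @ homo.T).T[:, :3]             # radar frame -> camera frame
    cam = cam[cam[:, 2] > 0]                               # keep points in front of the camera
    uv = (K @ cam.T).T                                     # pinhole projection
    uv = uv[:, :2] / uv[:, 2:3]
    return uv, cam[:, 2]                                   # pixel coordinates, depth (m)

def build_fused_image(rgb, uv, depth, max_range=100.0, radius=3):
    """Rasterize radar depths into one channel and append it to the RGB image."""
    h, w, _ = rgb.shape
    radar_channel = np.zeros((h, w), dtype=np.float32)
    for (u, v), d in zip(uv.astype(int), depth):
        if 0 <= u < w and 0 <= v < h:
            val = 1.0 - min(d, max_range) / max_range      # nearer targets -> brighter
            radar_channel[max(0, v - radius):v + radius + 1,
                          max(0, u - radius):u + radius + 1] = val
    return np.dstack([rgb.astype(np.float32) / 255.0, radar_channel])  # HxWx4 input

if __name__ == "__main__":
    K = np.array([[1000.0, 0.0, 640.0],
                  [0.0, 1000.0, 360.0],
                  [0.0, 0.0, 1.0]])                         # assumed camera intrinsics
    T = np.eye(4)                                           # assumed radar-to-camera extrinsics
    points = np.array([[2.0, 0.0, 20.0], [-1.5, 0.2, 35.0]])  # toy radar detections
    rgb = np.zeros((720, 1280, 3), dtype=np.uint8)
    uv, depth = project_radar_to_image(points, K, T)
    print(build_fused_image(rgb, uv, depth).shape)          # (720, 1280, 4)
```

A fused tensor of this form can then serve as the detector input; the selective grid detection described above would additionally use the radar channel to decide which YOLOv5 grid cells need to be evaluated.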