Abstract
In an automated driving system, recognizing range information is essential to understanding the surrounding environment. We therefore propose a depth completion method that fills empty regions by combining the depth information of a point cloud projected onto the image plane with high-resolution color data from the image. The projected point cloud is fed into a shift-convolution network, which expands the sparse LiDAR data to the pixel level; the result is then passed, together with the image, into a convolutional neural network (CNN). A fully completed ground truth is formed by applying max and median filters sequentially, and it guides the shift-convolution network to produce an expanded point cloud that focuses more on filling empty areas than on delineating contours. Finally, the CNN uses the point cloud to obtain accurate depth information and the image to separate objects along their outlines. The system that uses the expanded point cloud improves accuracy by almost 9% over the system that does not.
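The sequential max-and-median filtering used to densify the ground truth can be sketched as follows. This is a minimal illustration, not the paper's implementation: the kernel sizes (`k_max`, `k_med`), the zero-as-missing encoding, and the helper `window_stack` are all assumptions made for the example.

```python
import numpy as np

def window_stack(depth, k):
    """Stack all k x k shifted views of `depth` (zero-padded) along axis 0."""
    r = k // 2
    padded = np.pad(depth, r, mode="constant")
    h, w = depth.shape
    views = [padded[i:i + h, j:j + w] for i in range(k) for j in range(k)]
    return np.stack(views)

def densify(sparse, k_max=5, k_med=3):
    """Fill empty pixels of a sparse depth map with a max then a median filter.

    Kernel sizes are illustrative assumptions, not values from the paper.
    """
    # Max filter: 0 encodes "no measurement", so taking the neighborhood
    # maximum spreads valid depths into empty pixels.
    dense = window_stack(sparse, k_max).max(axis=0)
    # Median filter: smooths the blocky artifacts the max filter introduces
    # while keeping depth discontinuities comparatively sharp.
    dense = np.median(window_stack(dense, k_med), axis=0)
    # Restore the original LiDAR measurements where they exist.
    mask = sparse > 0
    dense[mask] = sparse[mask]
    return dense
```

The max filter fills holes aggressively, while the subsequent median pass regularizes the result, which matches the stated goal of completing empty areas rather than sharply sectioning contours.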