Jurado Rodríguez, D.; Jurado, J.M.; Pádua, L.; Neto, A.; Muñoz Salinas, R.; Sousa, J.J.
COMPUTERS & GRAPHICS-UK
Environment understanding in real-world scenarios has gained increasing interest in research and industry. Advances in data capture and processing allow a highly detailed reconstruction from a set of multi-view images by generating meshes and point clouds. Likewise, deep learning architectures, together with the broad availability of image datasets, bring new opportunities for segmenting 3D models into several classes. Among the areas that can benefit from 3D semantic segmentation is the automotive industry. However, there is a lack of labeled 3D models that can be used for training and as ground truth in deep learning-based methods. In this work, we propose an automatic procedure for the generation and semantic segmentation of 3D car models obtained from the photogrammetric processing of UAV-based imagery. Sixteen car parts are identified in the point cloud. To this end, a convolutional neural network based on the U-Net architecture combined with an Inception V3 encoder was trained on a publicly available dataset of car parts. The trained model is then applied to the UAV-based images, and the resulting segmentations are mapped onto the photogrammetric point cloud. Starting from this preliminary image-based segmentation, an optimization method produces a fully labeled point cloud by exploiting the geometric and spatial features of the 3D model. The results demonstrate the method's capability for the semantic segmentation of car models. Moreover, the proposed methodology has the potential to be extended or adapted to other applications that benefit from 3D segmented models.
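The core 2D-to-3D mapping step described above (projecting point-cloud points into each segmented UAV image and aggregating per-view labels) can be sketched in a simplified form. The snippet below is a minimal illustration, not the paper's implementation: it assumes a pinhole camera model with known intrinsics `K` and world-to-camera pose `(R, t)` per view, and resolves conflicting labels across views by majority vote; the function names and the vote-based aggregation are illustrative assumptions.

```python
import numpy as np

def project_points(points, K, R, t):
    """Project Nx3 world points into pixel coordinates with a pinhole camera.
    K: 3x3 intrinsics; R: 3x3 rotation, t: 3-vector (world -> camera frame)."""
    cam = points @ R.T + t                     # world -> camera coordinates
    uv = cam @ K.T                             # apply intrinsics
    return uv[:, :2] / uv[:, 2:3], cam[:, 2]   # pixel coords, depth

def vote_point_labels(points, views, n_classes):
    """Accumulate per-point class votes from several segmented views.
    Each view is a dict with 'K', 'R', 't' and 'mask' (HxW integer label image)."""
    votes = np.zeros((len(points), n_classes), dtype=np.int64)
    for view in views:
        px, depth = project_points(points, view["K"], view["R"], view["t"])
        h, w = view["mask"].shape
        u = np.round(px[:, 0]).astype(int)
        v = np.round(px[:, 1]).astype(int)
        # Keep only points in front of the camera and inside the image.
        ok = (depth > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        labels = view["mask"][v[ok], u[ok]]    # sample the segmentation mask
        votes[np.flatnonzero(ok), labels] += 1
    return votes.argmax(axis=1)                # majority label per point
```

A real pipeline would additionally handle occlusion (e.g. via depth buffering) before voting, and the paper's optimization step further refines labels using the model's geometric and spatial features.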