Details
Name
Artur José Cordeiro
Since
01 October 2025
Nationality
Portugal
Centre
Robótica Industrial e Sistemas Inteligentes
Contacts
+351 222 094 171
artur.j.cordeiro@inesctec.pt
2026
Authors
Cordeiro, A; Rocha, LF; Boaventura-Cunha, J; Figueiredo, D; Souza, JP
Publication
ROBOTICS AND AUTONOMOUS SYSTEMS
Abstract
Robotic bin-picking is a critical operation in modern industry, characterised by the detection, selection, and placement of items from a disordered and cluttered environment, which may or may not be boundary-limited, e.g. bins, boxes, or containers. In this context, perception systems are employed to localise and detect objects and to estimate grasping points. Despite the considerable progress made, from analytical approaches to recent deep learning methods, challenges remain, as evidenced by the growing number of works proposing distinct solutions. This paper reviews perception methodologies developed since 2009, providing detailed descriptions and discussions of their implementation. Additionally, it presents an extensive study detailing each work, along with a comprehensive overview of the advancements in bin-picking perception.
2025
Authors
Cordeiro, A; Rocha, LF; Boaventura-Cunha, J; Pires, EJS; Souza, JP
Publication
COMPUTERS & INDUSTRIAL ENGINEERING
Abstract
The implementation of deep learning approaches based on instance segmentation data remains a challenge for customized scenarios, owing to the time-consuming nature of acquiring and annotating real-world instance segmentation data, which demands a significant investment of semi-professional user labour. Obtaining high-quality labelled data requires expertise and meticulous attention to detail, which can significantly impact the overall implementation process and add to the complexity and resource requirements of customized scenarios with diverse objects. This work addresses the challenge of generating labelled data for large-scale robotic bin-picking datasets by proposing an easy-to-use automated framework designed to create customized data with accurate labels from CAD models. The framework leverages a photorealistic rendering engine integrated with physics simulation, minimizing the gap between synthetic and real-world data. Models trained on the synthetic data generated by this framework achieved an Average Precision of 86.95%, comparable to the performance of models trained on real-world datasets. Furthermore, the paper provides a comprehensive multi-metric analysis across diverse objects representing distinct industrial applications, including the naval, logistics, and aerospace domains. The evaluation also covers three distinct instance segmentation networks, alongside a comparative analysis of the proposed approach against two generative model techniques.
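The Average Precision figure reported above summarises a precision-recall curve over confidence-ranked detections. As a minimal illustrative sketch only (the paper's exact evaluation protocol is not reproduced here; the function name and input layout are assumptions), a continuous-form AP can be computed as:

```python
def average_precision(detections, num_gt):
    """Approximate area under the precision-recall curve for one class.

    detections: list of (confidence, is_true_positive) pairs, one per detection.
    num_gt: total number of ground-truth instances for this class.
    """
    # Rank detections by descending confidence, as in standard AP evaluation.
    ranked = sorted(detections, key=lambda d: d[0], reverse=True)
    tp = 0
    ap = 0.0
    for rank, (_, is_tp) in enumerate(ranked, start=1):
        if is_tp:
            tp += 1
            ap += tp / rank  # precision at this recall point
    return ap / num_gt if num_gt else 0.0
```

For example, two true positives ranked 1st and 3rd among three detections with two ground-truth instances yield AP = (1.0 + 2/3) / 2. Benchmark suites such as COCO additionally average over IoU thresholds, which this sketch omits.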
2024
Authors
Cordeiro, A; Rocha, LF; Boaventura Cunha, J; de Souza, JPC
Publication
2024 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS, ICARSC
Abstract
Numerous pose estimation methodologies exhibit a drop in accuracy or efficiency when subjected to highly cluttered scenarios. Currently, companies expect high-efficiency robotic systems to close the gap between humans and machines, especially in logistics operations, which require the execution of tasks such as navigation, perception, and picking. To mitigate this issue, most strategies increase the quantity of detected and matched features. This paper, however, proposes a system that adopts the inverse strategy: it reduces the types of features detected in order to enhance efficiency. Upon detecting 2D polygons, the solution perceives objects, identifies their corners and edges, and establishes a relationship between the features extracted from the perceived object and the known object model. This relationship is then used to devise a weighting system capable of predicting an optimal final pose estimate. Moreover, the solution has been demonstrated on different objects in real scenarios, such as intralogistics and industrial settings, provided there is prior knowledge of the object's shape and measurements. Lastly, the proposed method was evaluated and achieved an average overlap rate of 89.77% and an average processing time of 0.0398 seconds per object pose estimation.
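The overlap rate reported above measures agreement between the estimated pose's footprint and the ground truth. A minimal sketch, assuming both regions are rasterised as sets of pixel coordinates (the paper's exact metric definition is not given here; the function name and representation are illustrative):

```python
def overlap_rate(mask_est, mask_gt):
    """Intersection-over-union of two binary masks, each given as a
    set of (u, v) pixel coordinates covered by the region."""
    union = len(mask_est | mask_gt)
    if union == 0:
        return 0.0  # both masks empty: define overlap as zero
    return len(mask_est & mask_gt) / union
```

Two masks sharing one of three total pixels would score 1/3; identical masks score 1.0.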
2023
Authors
Cordeiro, A; Rocha, LF; Costa, C; Silva, MF
Publication
ROBOT2022: FIFTH IBERIAN ROBOTICS CONFERENCE: ADVANCES IN ROBOTICS, VOL 2
Abstract
Bin picking based on deep learning techniques is a promising approach that can solve several problems of analytical methods. These systems can provide accurate solutions to bin picking in cluttered environments, where the scenario is always changing. This article proposes a robust and accurate system for segmenting bin-picking objects, employing an easy configuration procedure to adjust the framework to a specific object. The framework is implemented in the Robot Operating System (ROS) and is divided into a detection system and a segmentation system. The detection system employs the Mask R-CNN instance segmentation network to identify several objects in two-dimensional (2D) grayscale images. The segmentation system relies on the Point Cloud Library (PCL), manipulating 3D point cloud data according to the detection results to select particular points of the original point cloud, generating a partial point cloud result. Furthermore, to complete the bin-picking system, a pose estimation approach based on matching algorithms, such as the Iterative Closest Point (ICP), is employed. The system was evaluated on two types of objects, a knee tube and a triangular wall support, in cluttered environments. It displayed an average precision of 79% for both models, an average recall of 92%, and an average IoU of 89%. As exhibited throughout the article, the system demonstrates high accuracy in cluttered environments with several occlusions for different types of objects.
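The core of the segmentation step described in this abstract is selecting, from an organized point cloud, the 3D points whose pixel coordinates fall inside a 2D instance mask. A minimal sketch of that idea follows; this is an illustrative helper under assumed data layouts, not the paper's PCL implementation:

```python
def crop_cloud_by_mask(cloud, mask_pixels, width):
    """Extract a partial point cloud for one detected instance.

    cloud: organized point cloud stored as a flat, row-major list of
           (x, y, z) tuples, one per image pixel.
    mask_pixels: iterable of (u, v) pixel coordinates inside the 2D
                 instance mask produced by the detector.
    width: image width, used to map (u, v) to a flat cloud index.
    """
    # Each mask pixel (u, v) indexes point v * width + u in the cloud.
    return [cloud[v * width + u] for (u, v) in mask_pixels]
```

In a real pipeline, invalid (NaN) points would also be filtered out before passing the partial cloud to a matcher such as ICP.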
2023
Authors
Cordeiro, A; Souza, JP; Costa, CM; Filipe, V; Rocha, LF; Silva, MF
Publication
ROBOTICS
Abstract
Bin picking is a challenging task involving many research domains within the perception and grasping fields, for which no perfect and reliable solutions are available that are applicable to the wide range of unstructured and cluttered environments present in industrial factories and logistics centers. This paper contributes with research on the topic of object segmentation in cluttered scenarios, independent of previous object shape knowledge, for textured and textureless objects. In addition, it addresses the demand for extended datasets in deep learning tasks with realistic data. We propose a solution using a Mask R-CNN for 2D object segmentation, trained with real data acquired from an RGB-D sensor and synthetic data generated in Blender, combined with 3D point-cloud segmentation to extract a segmented point cloud belonging to a single object from the bin. Next, a re-configurable pipeline for 6-DoF object pose estimation is employed, followed by a grasp planner to select a feasible grasp pose. The experimental results show that the object segmentation approach is efficient and accurate in cluttered scenarios with several occlusions. The neural network model was trained with both real and simulated data, improving the success rate over the previous classical segmentation and achieving an overall grasping success rate of 87.5%.