2021
Authors
Arrais, R; Costa, CM; Ribeiro, P; Rocha, LF; Silva, M; Veiga, G;
Publication
INTERNATIONAL JOURNAL OF ADVANCED MANUFACTURING TECHNOLOGY
Abstract
To remain competitive in current industrial manufacturing markets, coating companies need to implement flexible production systems capable of handling both mass customization and mass production workflows. The introduction of robotic manipulators capable of accurately mimicking the motions executed by highly skilled technicians is an important factor in enabling coating companies to cope with high customization. However, there are limitations associated with the use of a fully automated system for coating applications, especially when considering customized products of large dimensions and complex geometry. This paper addresses the development of a collaborative coating cell to increase the flexibility and efficiency of coating processes. The robot trajectory is taught with an intuitive programming by demonstration system, in which an icosahedron marker with multicoloured LEDs is attached to the coating tool so that its trajectories can be tracked by a stereoscopic vision system. To avoid the construction of fixtures and allow the operator to freely place products within the coating work cell, a modular 3D perception system was developed, relying on principal component analysis for the initial point cloud alignment and on the iterative closest point algorithm for 6 DoF pose estimation. Furthermore, to enable safe and intuitive human-robot collaboration, a non-intrusive zone-monitoring safety system was employed to track the position of the operator in the cell.
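The pose-estimation pipeline described in the abstract (principal component analysis for a coarse initial alignment, iterative closest point for the 6 DoF refinement) can be illustrated with a short sketch. This is not the authors' implementation: the function names, tolerances, and NumPy/SciPy-only approach below are assumptions made purely for illustration.

```python
# Minimal sketch (illustrative, not the paper's code): PCA-based coarse alignment
# followed by point-to-point ICP refinement of a 6 DoF pose.
import numpy as np
from scipy.spatial import cKDTree

def pca_initial_alignment(source, target):
    """Coarse rotation/translation estimate from the principal axes of each cloud."""
    mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)
    _, axes_s = np.linalg.eigh(np.cov((source - mu_s).T))   # columns = principal axes
    _, axes_t = np.linalg.eigh(np.cov((target - mu_t).T))
    R = axes_t @ axes_s.T
    if np.linalg.det(R) < 0:            # enforce a proper rotation (no reflection)
        axes_s[:, 0] *= -1
        R = axes_t @ axes_s.T
    t = mu_t - R @ mu_s
    return R, t

def icp_refine(source, target, R, t, iters=50, tol=1e-6):
    """Refine the coarse pose with point-to-point ICP (one Kabsch step per iteration)."""
    tree = cKDTree(target)
    prev_err = np.inf
    for _ in range(iters):
        src = source @ R.T + t
        dists, idx = tree.query(src)                 # nearest-neighbour correspondences
        corr = target[idx]
        mu_s, mu_c = src.mean(axis=0), corr.mean(axis=0)
        U, _, Vt = np.linalg.svd((src - mu_s).T @ (corr - mu_c))
        dR = Vt.T @ U.T
        if np.linalg.det(dR) < 0:                    # reflection guard
            Vt[-1] *= -1
            dR = Vt.T @ U.T
        R, t = dR @ R, dR @ (t - mu_s) + mu_c        # compose the incremental update
        err = dists.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R, t
```

Because the PCA axes are sign-ambiguous, the first step only yields a rough guess; the ICP loop is what brings the pose to usable accuracy.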
2022
Authors
Tinoco, V; Silva, MF; Santos, FN; Valente, A; Rocha, LF; Magalhaes, SA; Santos, LC;
Publication
INDUSTRIAL ROBOT-THE INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH AND APPLICATION
Abstract
Purpose Interest in robotics research in the agricultural field has been sparked by the increasing world population and the decreasing availability of agricultural labor. This paper aims to analyze the state of the art of pruning and harvesting manipulators used in agriculture. Design/methodology/approach A search was performed for papers matching specific keywords. Ten papers were selected based on a set of attributes that made them adequate for review. Findings The pruning manipulators were used in two different scenarios: grapevines and apple trees. These manipulators showed that a light-controlled environment can reduce visual errors and that prismatic joints on the manipulator are advantageous for obtaining a higher reach. The harvesting manipulators were used for three types of fruits: strawberries, tomatoes and apples. These manipulators revealed that different kinematic configurations are required for different kinds of end-effectors, as some of these tools only require movement along the horizontal axis while others must reach the target with a broad range of orientations. Originality/value This work serves to reduce the gap in the literature regarding agricultural manipulators and will support the development of novel solutions for agricultural robotic grasping and manipulation.
2022
Authors
Cordeiro, A; Rocha, LF; Costa, C; Costa, P; Silva, MF;
Publication
2022 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC)
Abstract
Bin picking is a highly researched topic due to the need for automated procedures in industrial environments. A general bin picking system requires a highly structured process, starting with data acquisition and ending with pose estimation and grasping. A large number of bin picking problems are currently being solved through deep learning networks combined with distinct procedures. This study provides a comprehensive review of deep learning approaches implemented in bin picking problems. The review describes several approaches and learning methods organized by domain, such as gripper-oriented and object-oriented, and summarizes several methodologies for solving bin picking issues. Furthermore, it introduces current strategies used to simplify particular cases and, finally, presents specific means of detecting object poses.
2022
Authors
de Souza, JPC; Amorim, AM; Rocha, LF; Pinto, VH; Moreira, AP;
Publication
INDUSTRIAL ROBOT-THE INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH AND APPLICATION
Abstract
Purpose The purpose of this paper is to present a programming by demonstration (PbD) system based on 3D stereoscopic vision and inertial sensing that provides a cost-effective pose tracking system, even during error-prone situations such as camera occlusions. Design/methodology/approach The proposed PbD system is based on the 6D Mimic innovative solution, whose six-degrees-of-freedom marker hardware had to be revised and restructured to accommodate an IMU sensor. Additionally, a new software pipeline was designed to include this new sensing device, seeking to improve the overall system's robustness in stereoscopic vision occlusion situations. Findings The IMU component and the new software pipeline allow the 6D Mimic system to successfully maintain pose tracking when the main tracking tool, i.e. the stereoscopic vision, fails. Therefore, the system improves in terms of reliability, robustness, and accuracy, as verified by real experiments. Practical implications Based on this proposal, the 6D Mimic system reaches a reliable and low-cost PbD methodology. Therefore, the robot can accurately replicate, on an industrial scale, the artisan-level performance of highly skilled shop-floor operators. Originality/value To the best of the authors' knowledge, the sensor fusion between stereoscopic images and an IMU applied to robot PbD is a novel approach. The system is entirely designed to reduce costs, taking advantage of an offline processing step for data analysis, filtering and fusion, enhancing the reliability of the PbD system.
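As a rough illustration of the idea of bridging stereoscopic-vision dropouts with inertial data, the sketch below dead-reckons the tool pose from gyroscope and accelerometer samples while the marker is occluded and snaps back to the camera estimate when tracking resumes. It is not the 6D Mimic pipeline: the class, its interface, and the omission of gravity compensation and bias estimation are all simplifying assumptions.

```python
# Minimal sketch (assumed, not the authors' pipeline): IMU dead reckoning as a
# fallback for a stereoscopic pose tracker during marker occlusions.
import numpy as np

def quat_mul(q, r):
    """Hamilton product of two quaternions [w, x, y, z]."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

class PoseTracker:
    def __init__(self, q0, p0):
        self.q = np.asarray(q0, float)   # orientation quaternion
        self.p = np.asarray(p0, float)   # position
        self.v = np.zeros(3)             # velocity

    def imu_step(self, gyro, accel_world, dt):
        """Propagate pose from IMU rates while the stereoscopic marker is occluded."""
        dq = np.concatenate(([1.0], 0.5 * gyro * dt))   # small-angle quaternion
        self.q = quat_mul(self.q, dq)
        self.q /= np.linalg.norm(self.q)
        self.v += accel_world * dt                      # gravity assumed already removed
        self.p += self.v * dt

    def vision_update(self, q_cam, p_cam):
        """Reset to the stereoscopic estimate whenever the marker is visible again."""
        self.q = np.asarray(q_cam, float)
        self.p = np.asarray(p_cam, float)
        self.v = np.zeros(3)
```

Dead reckoning of this kind drifts quickly, which is why it only serves as a short-term bridge between vision updates rather than a standalone tracker.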
2022
Authors
Baptista, TS; Rito, M; Chamadoira, C; Rocha, LF; Evans, G; Cunha, JPS;
Publication
Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS
Abstract
The iHandU system is a wearable device that quantitatively evaluates changes in wrist rigidity during Deep Brain Stimulation (DBS) surgery, allowing clinicians to find optimal stimulation settings that reduce patient symptoms. Robotic accuracy is also especially relevant in DBS surgery, as accurate electrode placement is required to increase effectiveness and reduce side effects. The main goal of this work is to integrate the advantages of each system in a closed-loop system between an industrial robot and the iHandU system. For this purpose, a comparative analysis of the accuracies of a Leksell stereotactic frame and a neuro-robotic system was performed using a lab-made phantom. The neuro-robotic system reached 90% of the trajectories, while the stereotactic frame reached all trajectories. There are significant differences in accuracy errors between these trajectories (p < 0.0001), which can be explained by the high correlation between the neuro-robotic system errors and the distance from the trajectory to the origin of the Leksell coordinate system (ρ = 0.72). Overall accuracy is comparable to existing neuro-robotic systems, achieving a deviation of (1.0 ± 0.5) mm at the target point. The accuracy of DBS electrode positioning and the choice of stimulation parameters lead to better long-term clinical outcomes in Parkinson's disease patients. Our neuro-robotic system combines real-time feedback assessment of the patient's symptomatic response with automatic positioning of the DBS electrode in a specific brain area.
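The statistics quoted in the abstract (a Pearson correlation between placement error and distance to the Leksell origin, and a mean ± standard deviation of the target-point error) can be reproduced on any set of phantom measurements with a few lines of Python. The sketch below uses hypothetical placeholder numbers, not the study's data.

```python
# Minimal sketch (illustrative, synthetic numbers): the kind of accuracy analysis
# reported in the abstract, using SciPy's Pearson correlation.
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-trajectory measurements from a phantom study.
distance_to_origin_mm = np.array([40.0, 55.0, 62.0, 70.0, 85.0, 90.0])
target_error_mm = np.array([0.6, 0.8, 0.9, 1.1, 1.4, 1.6])

rho, p_value = pearsonr(distance_to_origin_mm, target_error_mm)
print(f"rho = {rho:.2f}, p = {p_value:.4g}")
print(f"target error = {target_error_mm.mean():.1f} +/- {target_error_mm.std(ddof=1):.1f} mm")
```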
2025
Authors
Nascimento, R; Gonzalez, DG; Pires, EJS; Filipe, V; Silva, MF; Rocha, LF;
Publication
IEEE ACCESS
Abstract
The increasing demand for automated quality inspection in modern industry, particularly for transparent and reflective parts, has driven significant interest in vision-based technologies. These components pose unique challenges due to their optical properties, which often hinder conventional inspection techniques. This systematic review analyzes 24 peer-reviewed studies published between 2015 and 2025, aiming to assess the current state of the art in computer vision-based inspection systems tailored to such materials. The review synthesizes recent advancements in imaging setups, illumination strategies, and deep learning-based defect detection methods. It also identifies key limitations in current approaches, particularly regarding robustness under variable industrial conditions and the lack of standardized benchmarks. By highlighting technological trends and research gaps, this work offers valuable insights and directions for future research, emphasizing the need for adaptive, scalable, and industry-ready solutions to enhance the reliability and effectiveness of inspection systems for transparent and reflective parts.