
About

I'm a fast-learning software engineer, always looking to expand my knowledge of new technologies, with a keen interest in science (computer science and engineering, robotics, biotechnology, space exploration, among others).

My main research areas are augmented reality, 3D perception, computer vision, safety-critical systems, assembly automation, and localization and mapping for autonomous vehicles, among many others within the industrial and mobile robotics fields.

Publications

2020

Detecting and Solving Tube Entanglement in Bin Picking Operations

Authors
Leão, G; Costa, CM; Sousa, A; Veiga, G;

Publication
Applied Sciences

Abstract
Manufacturing and production industries are increasingly turning to robots to carry out repetitive picking operations in an efficient manner. This paper focuses on tackling the novel challenge of automating the bin picking process for entangled objects, for which there is very little research. The chosen case studies are sets of freely curved tubes, which are prone to occlusions and entanglement. The proposed algorithm builds a representation of the tubes as an ordered list of cylinders and joints using a point cloud acquired by a 3D scanner. This representation enables the detection of occlusions in the tubes. The solution also performs grasp planning and motion planning by evaluating post-grasp trajectories via simulation using Gazebo and the ODE physics engine. A force/torque sensor is used to determine how many items were picked by a robot gripper and in which direction it should rotate to solve cases of entanglement. Real-life experiments with sets of PVC tubes and rubber radiator hoses showed that the robot was able to pick a single tube on the first try with success rates of 99% and 93%, respectively. This study indicates that using simulation for motion planning is a promising solution to deal with entangled objects.
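
The force/torque check described in this abstract could look roughly like the sketch below (illustrative Python, not the authors' implementation; the data structure, field names and thresholds are assumptions): the measured vertical load is rounded to a multiple of one tube's weight to count grasped tubes, and the torque sign suggests a corrective wrist rotation.

# A minimal sketch, under stated assumptions, of the force/torque-based
# entanglement check: count hanging tubes and pick a rotation direction.
from dataclasses import dataclass

@dataclass
class FTSample:
    fz: float      # vertical force [N], negative = load pulling down
    tx: float      # torque about the gripper x-axis [N*m]

def picked_tube_count(sample: FTSample, tube_weight_n: float) -> int:
    """Round the measured vertical load to a multiple of one tube's weight."""
    load = max(0.0, -sample.fz)             # weight hanging from the gripper
    return round(load / tube_weight_n)

def untangle_rotation(sample: FTSample, deadband: float = 0.05) -> str:
    """Choose a wrist rotation direction that lets the extra tube slide off."""
    if abs(sample.tx) < deadband:
        return "none"                       # torque too small to be informative
    return "clockwise" if sample.tx > 0 else "counterclockwise"

if __name__ == "__main__":
    reading = FTSample(fz=-3.1, tx=0.4)     # hypothetical sensor reading
    n = picked_tube_count(reading, tube_weight_n=1.5)
    if n > 1:
        print(f"{n} tubes grasped, rotate {untangle_rotation(reading)}")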

2020

Perception of Entangled Tubes for Automated Bin Picking

Authors
Leão, G; Costa, CM; Sousa, A; Veiga, G;

Publication
Advances in Intelligent Systems and Computing

Abstract
Bin picking is a challenging problem common to many industries, whose automation will lead to great economic benefits. This paper presents a method for estimating the pose of a set of randomly arranged bent tubes, highly subject to occlusions and entanglement. The approach involves using a depth sensor to obtain a point cloud of the bin. The algorithm begins by filtering the point cloud to remove noise and segmenting it using the surface normals. Tube sections are then modeled as cylinders that are fitted into each segment using RANSAC. Finally, the sections are combined into complete tubes by adopting a greedy heuristic based on the distance between their endpoints. Experimental results with a dataset created with a Zivid sensor show that this method is able to provide estimates with high accuracy for bins with up to ten tubes. Therefore, this solution has the potential of being integrated into fully automated bin picking systems. © 2020, Springer Nature Switzerland AG.
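
The greedy endpoint-matching heuristic mentioned in this abstract could be sketched as follows (illustrative Python only; the section representation and the max_gap threshold are assumptions, not the paper's code): fitted cylinder sections, each with two 3D endpoints, are merged into tubes by repeatedly joining the closest pair of free endpoints.

# A minimal sketch of greedily linking cylinder sections into complete tubes.
import numpy as np

def closest_free_endpoints(sections, used):
    """Return (i, ei, j, ej, dist) for the closest pair of unused endpoints
    belonging to different sections, or None if no pair remains."""
    best = None
    for i, si in enumerate(sections):
        for ei in (0, 1):
            if (i, ei) in used:
                continue
            for j, sj in enumerate(sections):
                if j <= i:
                    continue
                for ej in (0, 1):
                    if (j, ej) in used:
                        continue
                    d = np.linalg.norm(si[ei] - sj[ej])
                    if best is None or d < best[4]:
                        best = (i, ei, j, ej, d)
    return best

def merge_sections(sections, max_gap=0.03):
    """Greedily link sections whose endpoints are closer than max_gap [m]."""
    used, links = set(), []
    while True:
        best = closest_free_endpoints(sections, used)
        if best is None or best[4] > max_gap:
            break
        i, ei, j, ej, _ = best
        used.update({(i, ei), (j, ej)})
        links.append((i, j))
    return links                             # pairs of section indices to join

if __name__ == "__main__":
    # Two sections whose endpoints nearly touch, plus one isolated section.
    secs = [np.array([[0.0, 0, 0], [0.10, 0, 0]]),
            np.array([[0.12, 0, 0], [0.20, 0, 0]]),
            np.array([[1.0, 1, 1], [1.10, 1, 1]])]
    print(merge_sections(secs))              # -> [(0, 1)]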

2019

Map-Matching Algorithms for Robot Self-Localization: A Comparison Between Perfect Match, Iterative Closest Point and Normal Distributions Transform

Authors
Sobreira, H; Costa, CM; Sousa, I; Rocha, L; Lima, J; Farias, PCMA; Costa, P; Paulo Moreira, AP;

Publication
Journal of Intelligent and Robotic Systems: Theory and Applications

Abstract
The self-localization of mobile robots in the environment is one of the most fundamental problems in the robotics navigation field. It is a complex and challenging problem due to the high requirements of autonomous mobile vehicles, particularly with regard to the algorithms' accuracy, robustness and computational efficiency. In this paper, we present a comparison of three of the most used map-matching algorithms applied in localization based on natural landmarks: our implementation of the Perfect Match (PM) and the Point Cloud Library (PCL) implementation of the Iterative Closest Point (ICP) and the Normal Distributions Transform (NDT). For the purpose of this comparison we have considered a set of representative metrics, such as pose estimation accuracy, computational efficiency, convergence speed, maximum admissible initialization error and robustness to the presence of outliers in the robot's sensor data. The test results were retrieved using our ROS natural landmark public dataset, containing several tests with simulated and real sensor data. The performance and robustness of the Perfect Match are highlighted throughout this article and are of paramount importance for real-time embedded systems with limited computing power that require accurate pose estimation and fast reaction times for high speed navigation. Moreover, we added to PCL a new algorithm for performing correspondence estimation using lookup tables that was inspired by the PM approach to solve this problem. This new method for computing the closest map point to a given sensor reading proved to be 40 to 60 times faster than the existing k-d tree approach in PCL and allowed the Iterative Closest Point algorithm to perform point cloud registration 5 to 9 times faster. © 2018 Springer Science+Business Media B.V., part of Springer Nature
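
The lookup-table idea behind the correspondence-estimation speed-up can be illustrated with a small 2D sketch (plain Python/NumPy, not the PCL contribution itself; the grid resolution, map and brute-force fill are assumptions): the nearest map point for every grid cell is precomputed once, so matching a sensor reading becomes a single array lookup instead of a k-d tree query.

# A minimal 2D sketch of nearest-neighbour correspondence via a lookup table.
import numpy as np

class NearestPointLUT:
    def __init__(self, map_pts: np.ndarray, resolution: float, margin: float = 1.0):
        self.res = resolution
        self.origin = map_pts.min(axis=0) - margin
        size = np.ceil((map_pts.max(axis=0) + margin - self.origin) / resolution).astype(int)
        # Brute-force fill: for each cell centre, store the index of the closest map point.
        xs = self.origin[0] + (np.arange(size[0]) + 0.5) * resolution
        ys = self.origin[1] + (np.arange(size[1]) + 0.5) * resolution
        gx, gy = np.meshgrid(xs, ys, indexing="ij")
        cells = np.stack([gx.ravel(), gy.ravel()], axis=1)
        dists = np.linalg.norm(cells[:, None, :] - map_pts[None, :, :], axis=2)
        self.table = dists.argmin(axis=1).reshape(tuple(size))
        self.map_pts = map_pts

    def closest(self, query: np.ndarray) -> np.ndarray:
        """O(1) lookup of the map point closest to a sensor reading."""
        idx = np.clip(((query - self.origin) / self.res).astype(int),
                      0, np.array(self.table.shape) - 1)
        return self.map_pts[self.table[idx[0], idx[1]]]

if __name__ == "__main__":
    wall = np.stack([np.linspace(0, 5, 50), np.zeros(50)], axis=1)  # toy map
    lut = NearestPointLUT(wall, resolution=0.05)
    print(lut.closest(np.array([2.03, 0.4])))   # ~[2.04, 0.0]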

2019

Collaborative Welding System using BIM for Robotic Reprogramming and Spatial Augmented Reality

Authors
Tavares, P; Costa, CM; Rocha, L; Malaca, P; Costa, P; Moreira, AP; Sousa, A; Veiga, G;

Publication
Automation in Construction

Abstract
The optimization of the information flow from the initial design and through the several production stages plays a critical role in ensuring product quality while also reducing the manufacturing costs. As such, in this article we present a cooperative welding cell for structural steel fabrication that is capable of leveraging the Building Information Modeling (BIM) standards to automatically orchestrate the necessary tasks to be allocated to a human operator and a welding robot moving on a linear track. We propose a spatial augmented reality system that projects alignment information into the environment for helping the operator tack weld the beam attachments that will later be seam welded by the industrial robot. This way we ensure maximum flexibility during the beam assembly stage while also improving the overall productivity and product quality, since the operator no longer needs to rely on error-prone measurement procedures and receives his tasks through an immersive interface, relieving him from the burden of analyzing complex manufacturing design specifications. Moreover, no expert robotics knowledge is required to operate our welding cell because all the necessary information is extracted from the Industry Foundation Classes (IFC), namely the CAD models and welding sections, allowing our 3D beam perception systems to correct placement errors or beam bending, which, coupled with our motion planning and welding pose optimization system, ensures that the robot performs its tasks without collisions and as efficiently as possible while maximizing the welding quality. © 2019 Elsevier B.V.
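
Extracting the building elements that drive the task allocation from an IFC model could look roughly like the hedged sketch below, assuming the ifcopenshell library and a hypothetical file path; it only illustrates pulling beams and plate attachments out of the model so welding tasks can be derived from them, and is not the cell's actual orchestration code.

# A hedged sketch of reading welding-relevant elements from an IFC file.
import ifcopenshell

def list_welding_candidates(ifc_path: str):
    model = ifcopenshell.open(ifc_path)
    beams = model.by_type("IfcBeam")
    plates = model.by_type("IfcPlate")      # attachments to be tack/seam welded
    tasks = []
    for beam in beams:
        tasks.append({
            "beam_id": beam.GlobalId,
            "beam_name": beam.Name,
            "attachments": [p.GlobalId for p in plates],  # naive pairing, for illustration only
        })
    return tasks

if __name__ == "__main__":
    for task in list_welding_candidates("structure.ifc"):   # hypothetical file
        print(task["beam_name"], len(task["attachments"]), "attachments")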

2019

Modeling of video projectors in OpenGL for implementing a spatial augmented reality teaching system for assembly operations

Authors
Costa, CM; Veiga, G; Sousa, A; Rocha, L; Augusto Sousa, AA; Rodrigues, R; Thomas, U;

Publication
19th IEEE International Conference on Autonomous Robot Systems and Competitions, ICARSC 2019

Abstract
Teaching complex assembly and maintenance skills to human operators usually requires extensive reading and the help of tutors. In order to reduce the training period and avoid the need for human supervision, an immersive teaching system using spatial augmented reality was developed for guiding inexperienced operators. The system provides textual and video instructions for each task while also allowing the operator to navigate between the teaching steps and control the video playback using a bare-hands natural interaction interface that is projected into the workspace. Moreover, for helping the operator during the final validation and inspection phase, the system projects the expected 3D outline of the final product. The proposed teaching system was tested with the assembly of a starter motor and proved to be more intuitive than reading the traditional user manuals. This proof of concept use case served to validate the fundamental technologies and approaches that were proposed to achieve an intuitive and accurate augmented reality teaching application. Among the main challenges were the proper modeling and calibration of the sensing and projection hardware along with the 6 DoF pose estimation of objects for achieving precise overlap between the 3D rendered content and the physical world. On the other hand, the conceptualization of the information flow and how it can be conveyed on-demand to the operator was also of critical importance for ensuring a smooth and intuitive experience for the operator. © 2019 IEEE.
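
Modeling a video projector as an inverse pinhole camera in OpenGL typically boils down to turning its calibrated intrinsics into an off-axis projection matrix; the sketch below shows one standard way to do this via glFrustum-style bounds (the intrinsic values are hypothetical, and the conventions assumed are a top-left pixel origin for the intrinsics and an OpenGL camera looking down -Z).

# A minimal sketch: OpenGL projection matrix from projector intrinsics.
import numpy as np

def projection_from_intrinsics(fx, fy, cx, cy, width, height, near, far):
    # Frustum bounds on the near plane implied by the pinhole model.
    left   = -cx * near / fx
    right  = (width - cx) * near / fx
    bottom = -(height - cy) * near / fy     # image y-down mapped to GL y-up
    top    = cy * near / fy
    # Standard glFrustum matrix for those bounds.
    return np.array([
        [2 * near / (right - left), 0, (right + left) / (right - left), 0],
        [0, 2 * near / (top - bottom), (top + bottom) / (top - bottom), 0],
        [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0, 0, -1, 0],
    ])

if __name__ == "__main__":
    # Hypothetical 1080p projector calibration.
    P = projection_from_intrinsics(fx=2200.0, fy=2200.0, cx=960.0, cy=980.0,
                                   width=1920, height=1080, near=0.1, far=10.0)
    print(np.round(P, 3))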