About

Luis Rocha received his Ph.D. in Electrical and Computer Engineering from the Faculty of Engineering of the University of Porto in 2014. He has been a researcher at INESC TEC since 2010 and is presently responsible for the industrial manipulators research area in the Center for Robotics in Industry and Intelligent Systems (CRIIS). He has published more than 40 papers in international scientific journals and conference proceedings. His main research interests are in the development of agile and human-centered industrial robotic systems, namely through novel human-robot interaction solutions, robot programming procedures, and advanced perception systems. He coordinated the INESC TEC team in the following projects: H2020 MARI4_YARD, Xweld (H2020 Trinity Cascade Funding), AI4R.WELD (H2020 ZDMP Cascade Funding), Interreg POCTEC 2014-2020 Manufactur4.0, P2020 PRODUTECH4S&C, and PRODUTECH-SIF.

Details

33
Publications

2023

Object Segmentation for Bin Picking Using Deep Learning

Authors
Cordeiro, A; Rocha, LF; Costa, C; Silva, MF;

Publication
ROBOT2022: FIFTH IBERIAN ROBOTICS CONFERENCE: ADVANCES IN ROBOTICS, VOL 2

Abstract

2023

Bin Picking for Ship-Building Logistics Using Perception and Grasping Systems

Authors
Cordeiro, A; Souza, JP; Costa, CM; Filipe, V; Rocha, LF; Silva, MF;

Publication
ROBOTICS

Abstract
Bin picking is a challenging task involving many research domains within the perception and grasping fields, for which there are no perfect and reliable solutions applicable to the wide range of unstructured and cluttered environments found in industrial factories and logistics centers. This paper contributes research on object segmentation in cluttered scenarios, independent of prior object shape knowledge, for both textured and textureless objects. In addition, it addresses the demand for extended datasets in deep learning tasks with realistic data. We propose a solution that uses a Mask R-CNN for 2D object segmentation, trained with real data acquired from an RGB-D sensor and synthetic data generated in Blender, combined with 3D point-cloud segmentation to extract the segmented point cloud belonging to a single object in the bin. Next, a reconfigurable pipeline for 6-DoF object pose estimation is employed, followed by a grasp planner that selects a feasible grasp pose. The experimental results show that the object segmentation approach is efficient and accurate in cluttered scenarios with several occlusions. Training the neural network with both real and simulated data improved the success rate over the previous classical segmentation, yielding an overall grasping success rate of 87.5%.
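
As a hedged illustration of the step where a 2D instance mask is combined with depth data to obtain a per-object point cloud (a minimal sketch, not the paper's implementation), the snippet below back-projects the depth pixels selected by one mask into 3D using pinhole intrinsics. The function name, intrinsic values, and synthetic inputs are assumptions made for the example.

```python
import numpy as np

def mask_to_point_cloud(depth_m, mask, fx, fy, cx, cy):
    """Back-project the depth pixels selected by a binary instance mask
    into a 3D point cloud expressed in the camera frame.

    depth_m : (H, W) float array, depth in metres (0 where invalid)
    mask    : (H, W) bool array, one object instance from a 2D segmenter
    fx, fy, cx, cy : pinhole camera intrinsics (illustrative values below)
    """
    v, u = np.nonzero(mask & (depth_m > 0))  # pixel rows/cols inside the mask
    z = depth_m[v, u]
    x = (u - cx) * z / fx                    # standard pinhole back-projection
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)       # (N, 3) points for one object

# Toy example: a flat 80x80 pixel patch 0.8 m from the camera
depth = np.full((480, 640), 0.8)
mask = np.zeros((480, 640), dtype=bool)
mask[200:280, 300:380] = True
points = mask_to_point_cloud(depth, mask, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(points.shape)  # (6400, 3)
```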

2023

Deep learning-based human action recognition to leverage context awareness in collaborative assembly

Authors
Moutinho, D; Rocha, LF; Costa, CM; Teixeira, LF; Veiga, G;

Publication
ROBOTICS AND COMPUTER-INTEGRATED MANUFACTURING

Abstract

2023

Comparison of 3D Sensors for Automating Bolt-Tightening Operations in the Automotive Industry

Authors
Dias, J; Simões, P; Soares, N; Costa, CM; Petry, MR; Veiga, G; Rocha, LF;

Publication
Sensors

Abstract
Machine vision systems are widely used in assembly lines to provide robots with the sensing abilities needed to handle dynamic environments. This paper presents a comparison of 3D sensors to evaluate which is best suited for use in a machine vision system for robotic fastening operations within an automotive assembly line. The perception system is necessary to account for the position uncertainty that arises from the vehicles being transported on an aerial conveyor. Three sensors with different working principles were compared, namely laser triangulation (SICK TriSpector1030), structured light with sequential stripe patterns (Photoneo PhoXi S), and structured light with an infrared speckle pattern (Asus Xtion Pro Live). The accuracy of the sensors was measured by computing the root mean square error (RMSE) of the point cloud registrations between their scans and two types of reference point clouds, namely CAD files and 3D sensor scans. Overall, the RMSE was lower when using sensor scans, with the SICK TriSpector1030 achieving the best results (0.25 mm ± 0.03 mm), the Photoneo PhoXi S showing intermediate performance (0.49 mm ± 0.14 mm), and the Asus Xtion Pro Live obtaining the highest RMSE (1.01 mm ± 0.11 mm). Considering the use case requirements, the final machine vision system relied on the SICK TriSpector1030 sensor and was integrated with a collaborative robot, which was successfully deployed in a vehicle assembly line, achieving 94% success in 53,400 screwing operations.
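
The accuracy figures quoted above are RMSE values of point-cloud registrations against a reference. As a minimal sketch of how such a metric can be computed (assuming the scan is already registered to the reference and using SciPy's nearest-neighbour search; not the paper's actual evaluation code, and the array names are assumptions):

```python
import numpy as np
from scipy.spatial import cKDTree

def registration_rmse(scan_points, reference_points):
    """Root mean square error of nearest-neighbour distances from an
    already-registered scan to a reference point cloud (e.g. a CAD sampling
    or another sensor scan), both given as (N, 3) arrays in the same frame."""
    tree = cKDTree(reference_points)
    distances, _ = tree.query(scan_points, k=1)  # closest reference point per scan point
    return float(np.sqrt(np.mean(distances ** 2)))

# Toy example: a scan offset by 0.25 mm along z from its reference
reference = np.random.rand(10_000, 3)            # coordinates in metres
scan = reference + np.array([0.0, 0.0, 0.00025])
print(f"RMSE = {registration_rmse(scan, reference) * 1000:.2f} mm")  # ~0.25 mm
```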

2023

Quality Control of Casting Aluminum Parts: A Comparison of Deep Learning Models for Filings Detection

Authors
Nascimento, R; Ferreira, T; Rocha, C; Filipe, V; Silva, MF; Veiga, G; Rocha, L;

Publication
2023 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC)

Abstract

Supervised theses

2018

Composite Kinematics of Mobile Manipulators (Cinemática Composta de Manipuladores Móveis)

Author
Gonçalo Daniel Ribeiro da Silva

Institution
UP-FEUP

2018

Smart Collision Avoidance System for a Dual-Arm Manipulator

Author
Inês Pinto Frutuoso

Institution
UP-FEUP

2018

Development of robotic manipulators for scalable production lines

Author
Paulo Diogo Carvalho Ribeiro

Institution
UP-FEUP