About

Luis Rocha received his PhD in Electrical and Computer Engineering from the Faculty of Engineering of the University of Porto in 2014. He has been a researcher at INESC TEC since 2010 and is currently responsible for the industrial manipulators research area at the Centre for Robotics in Industry and Intelligent Systems (CRIIS). He has published more than 40 papers in international scientific journals and conference proceedings. His main research interests focus on the development of more agile, human-centered industrial robotic systems, namely through research on new human-robot interaction mechanisms, new simplified robot programming methodologies, and advanced perception systems. He coordinated the INESC TEC team in the following projects: H2020 MARI4_YARD, Xweld (H2020 Trinity Cascade Funding), AI4R.WELD (H2020 ZDMP Cascade Funding), Interreg POCTEC 2014-2020 Manufactur4.0, P2020 PRODUTECH4S&C, and PRODUTECH-SIF.

Details

Publications (33)

2023

Object Segmentation for Bin Picking Using Deep Learning

Authors
Cordeiro, A; Rocha, LF; Costa, C; Silva, MF;

Publication
ROBOT2022: FIFTH IBERIAN ROBOTICS CONFERENCE: ADVANCES IN ROBOTICS, VOL 2

Abstract

2023

Bin Picking for Ship-Building Logistics Using Perception and Grasping Systems

Authors
Cordeiro, A; Souza, JP; Costa, CM; Filipe, V; Rocha, LF; Silva, MF;

Publication
ROBOTICS

Abstract
Bin picking is a challenging task spanning several research domains within the perception and grasping fields, for which no perfectly reliable solutions are available that apply to the wide range of unstructured and cluttered environments found in industrial factories and logistics centers. This paper contributes research on object segmentation in cluttered scenarios, independent of prior knowledge of object shape, for both textured and textureless objects. In addition, it addresses the demand for extended, realistic datasets in deep learning tasks. We propose a solution that uses a Mask R-CNN for 2D object segmentation, trained with real data acquired from an RGB-D sensor and synthetic data generated in Blender, combined with 3D point-cloud segmentation to extract the segmented point cloud belonging to a single object in the bin. A reconfigurable pipeline for 6-DoF object pose estimation is then employed, followed by a grasp planner that selects a feasible grasp pose. The experimental results show that the object segmentation approach is efficient and accurate in cluttered scenarios with several occlusions. Training the neural network model with both real and simulated data improved the success rate over the previous classical segmentation, yielding an overall grasping success rate of 87.5%.
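To make the segmentation-to-point-cloud step concrete, below is a minimal Python sketch (not the paper's implementation) of the general idea: an off-the-shelf Mask R-CNN from torchvision produces instance masks, and one mask is used to back-project the corresponding depth pixels into a single-object point cloud. The pretrained COCO weights, file names, and camera intrinsics are placeholder assumptions; the published work fine-tunes the network on real and synthetic bin-picking data and feeds the segmented cloud to pose estimation and grasp planning.

import numpy as np
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Off-the-shelf Mask R-CNN with COCO weights stands in for a model
# fine-tuned on real + synthetic (Blender) bin-picking images.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

rgb = Image.open("bin_scene_rgb.png").convert("RGB")   # hypothetical RGB image
depth = np.load("bin_scene_depth.npy")                 # aligned depth map in metres (hypothetical)

with torch.no_grad():
    pred = model([to_tensor(rgb)])[0]                  # dict with boxes, labels, scores, masks

# Keep the most confident detection and binarise its soft mask.
best = int(torch.argmax(pred["scores"]))
mask = pred["masks"][best, 0].numpy() > 0.5            # (H, W) boolean instance mask

# Back-project the masked depth pixels into a single-object point cloud
# using a pinhole camera model (placeholder intrinsics).
fx, fy, cx, cy = 615.0, 615.0, 320.0, 240.0
v, u = np.nonzero(mask & (depth > 0))
z = depth[v, u]
points = np.stack([(u - cx) * z / fx, (v - cy) * z / fy, z], axis=1)
print(points.shape)   # segmented cloud passed on to 6-DoF pose estimation and grasp planning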

2023

Deep learning-based human action recognition to leverage context awareness in collaborative assembly

Authors
Moutinho, D; Rocha, LF; Costa, CM; Teixeira, LF; Veiga, G;

Publication
ROBOTICS AND COMPUTER-INTEGRATED MANUFACTURING

Abstract

2023

Comparison of 3D Sensors for Automating Bolt-Tightening Operations in the Automotive Industry

Authors
Dias, J; Simões, P; Soares, N; Costa, CM; Petry, MR; Veiga, G; Rocha, LF;

Publication
Sensors

Abstract
Machine vision systems are widely used in assembly lines to give robots the sensing abilities they need to handle dynamic environments. This paper presents a comparison of 3D sensors to evaluate which is best suited for a machine vision system supporting robotic fastening operations in an automotive assembly line. The perception system is necessary to account for the position uncertainty that arises from the vehicles being transported on an aerial conveyor. Three sensors with different working principles were compared, namely laser triangulation (SICK TriSpector1030), structured light with sequential stripe patterns (Photoneo PhoXi S), and structured light with an infrared speckle pattern (Asus Xtion Pro Live). The accuracy of the sensors was measured by computing the root mean square error (RMSE) of the point cloud registrations between their scans and two types of reference point clouds: CAD files and 3D sensor scans. Overall, the RMSE was lower when using sensor scans, with the SICK TriSpector1030 achieving the best results (0.25 mm ± 0.03 mm), the Photoneo PhoXi S showing intermediate performance (0.49 mm ± 0.14 mm), and the Asus Xtion Pro Live obtaining the highest RMSE (1.01 mm ± 0.11 mm). Considering the use-case requirements, the final machine vision system relied on the SICK TriSpector1030 sensor and was integrated with a collaborative robot, which was successfully deployed in a vehicle assembly line, achieving 94% success over 53,400 screwing operations.
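As an illustration of the RMSE-based comparison described above, the following short Python sketch uses Open3D to register a sensor scan to a reference point cloud (e.g. points sampled from a CAD model, or another sensor's scan) with ICP and report the inlier RMSE of the registration. The file names, correspondence threshold, and identity initial alignment are placeholder assumptions, not the paper's evaluation code.

import numpy as np
import open3d as o3d

# Hypothetical input files: one sensor scan and one reference cloud.
scan = o3d.io.read_point_cloud("sensor_scan.ply")
reference = o3d.io.read_point_cloud("reference_model.ply")

# Point-to-point ICP refines the alignment; a 5 mm correspondence
# threshold and identity initialisation are assumed here.
threshold = 0.005
result = o3d.pipelines.registration.registration_icp(
    scan, reference, threshold, np.eye(4),
    o3d.pipelines.registration.TransformationEstimationPointToPoint())

# inlier_rmse is the root mean square distance of matched point pairs,
# in the same units as the clouds (metres here).
print(f"fitness = {result.fitness:.3f}, RMSE = {result.inlier_rmse * 1000:.2f} mm")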

2023

Quality Control of Casting Aluminum Parts: A Comparison of Deep Learning Models for Filings Detection

Authors
Nascimento, R; Ferreira, T; Rocha, C; Filipe, V; Silva, MF; Veiga, G; Rocha, L;

Publication
2023 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC)

Abstract

Supervised theses

2018

Composite Kinematics of Mobile Manipulators (Cinemática Composta de Manipuladores Móveis)

Author
Gonçalo Daniel Ribeiro da Silva

Institution
UP-FEUP

2018

Smart Collision Avoidance System for a Dual-Arm Manipulator

Author
Inês Pinto Frutuoso

Institution
UP-FEUP

2018

Development of robotic manipulators for scalable production lines

Author
Paulo Diogo Carvalho Ribeiro

Institution
UP-FEUP