About

Ricardo B. Sousa (born in 1997 in Vila Nova de Gaia, Portugal) is a researcher at CRIIS - Centre for Robotics in Industry and Intelligent Systems of INESC TEC - Institute for Systems and Computer Engineering, Technology and Science. He holds an MSc in Electrical and Computer Engineering (ECE) from the Faculty of Engineering of the University of Porto (FEUP), where he developed his dissertation on extrinsic sensor calibration and odometry in mobile robots. He is currently a PhD candidate in ECE at FEUP, focusing his research on long-term localisation and mapping in dynamic environments. He is also an active member of FEUP's 5dpo robotics team, which competes in national and international robotics competitions, where he contributes to the development and study of solutions in perception, robot design, and systems integration. His main research interests include perception, sensor fusion, simultaneous localisation and mapping (SLAM), sensor calibration, control systems, and mobile robots.

Details

  • Name

    Ricardo Barbosa Sousa
  • Position

    Researcher
  • Since

    15 November 2019
5 Publications

2025

Indoor Benchmark of 3-D LiDAR SLAM at iilab - Industry and Innovation Laboratory

Authors
Ribeiro, JD; Sousa, RB; Martins, JG; Aguiar, AS; Santos, FN; Sobreira, HM;

Publication
IEEE ACCESS

Abstract
This paper presents an indoor benchmarking study of state-of-the-art 3D LiDAR-based Simultaneous Localization and Mapping (SLAM) algorithms using the newly developed IILABS 3D - iilab Indoor LiDAR-based SLAM 3D dataset. Existing SLAM datasets often focus on outdoor environments, rely on a single type of LiDAR sensor, or lack additional sensor data such as wheel odometry in ground-based robotic platforms. Consequently, existing datasets lack the data diversity required to comprehensively evaluate performance under diverse indoor conditions. The IILABS 3D dataset fills this gap by providing a sensor-rich, indoor-exclusive dataset recorded in a controlled laboratory environment using a wheeled mobile robot platform. It includes four heterogeneous 3D LiDAR sensors - Velodyne VLP-16, Ouster OS1-64, RoboSense RS-Helios-5515, and Livox Mid-360 - featuring both mechanical spinning and non-repetitive scanning patterns, as well as an IMU and wheel odometry for sensor fusion. The dataset also contains calibration sequences, challenging benchmark trajectories, and high-precision ground-truth poses captured with a motion capture system. Using this dataset, we benchmark nine representative LiDAR-based SLAM algorithms across multiple sequences, analyzing their performance in terms of accuracy and consistency under varying sensor configurations. The results provide a comprehensive performance comparison and valuable insights into the strengths and limitations of current SLAM algorithms in indoor environments. The dataset, benchmark results, and related tools are publicly available at https://jorgedfr.github.io/3d_lidar_slam_benchmark_at_iilab/
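The abstract does not reproduce the benchmark's exact metrics, but a standard accuracy measure for trajectory evaluation against motion-capture ground truth is the Absolute Trajectory Error (ATE). A minimal sketch, assuming time-synchronised position arrays and a rigid (no-scale) Horn/Umeyama alignment; the function name is illustrative, not from the paper:

```python
import numpy as np

def absolute_trajectory_error(gt, est):
    """RMSE of translational error after rigidly aligning the estimated
    trajectory to ground truth (Kabsch/Horn closed-form, no scale).
    gt, est: (N, 3) arrays of time-synchronised positions."""
    mu_g, mu_e = gt.mean(axis=0), est.mean(axis=0)
    # Cross-covariance between centred estimated and ground-truth points
    H = (est - mu_e).T @ (gt - mu_g)
    U, _, Vt = np.linalg.svd(H)
    # Sign correction guarantees a proper rotation (det = +1)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = mu_g - R @ mu_e
    aligned = est @ R.T + t
    return float(np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1))))
```

Per-sequence ATE values computed this way allow a like-for-like comparison across sensors and algorithms.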

2025

Integrated RFID System for Intralogistics Operations with Industrial Mobile Robots

Authors
Pacheco, FD; Rebelo, PM; Sousa, RB; Silva, MF; Mendonça, HS;

Publication
2025 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS, ICARSC

Abstract
Radio-Frequency IDentification (RFID) technologies automate the identification of objects and persons, having several applications in the retail, manufacturing, and intralogistics sectors. Several works explore the application of RFID systems in robotics and intralogistics, focusing on robot localisation, tag localisation, and inventory management. This paper addresses the challenge of intralogistics cargo trolleys communicating their characteristics to an autonomous mobile robot through an RFID system. The robot must know the trolley's relative pose to avoid collisions with the surroundings. As a result, the passive tag on the cargo communicates information to the robot, including the base footprint of the trolley. The proposed RFID system includes the development of a controller board to interact with the frontend integrated circuit of an external antenna onboard the industrial mobile robot. Experimental results assess the system's readability distance in two distinct environments and with two different antenna modules. All the code and documentation are available in a public repository.
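The abstract does not specify how the trolley's footprint is encoded on the tag; purely as an illustration, a fixed-layout binary payload could be decoded as below. The field layout, sizes, and checksum scheme are hypothetical, not from the paper:

```python
import struct

# Hypothetical 12-byte payload: trolley ID, footprint length and width
# in millimetres (unsigned), tag offset from the trolley centre in
# millimetres (signed), and a big-endian XOR-of-words checksum.
TAG_FORMAT = ">HHHhhH"  # id, length_mm, width_mm, dx_mm, dy_mm, checksum

def parse_trolley_tag(payload: bytes) -> dict:
    if len(payload) != struct.calcsize(TAG_FORMAT):
        raise ValueError("unexpected payload size")
    tid, length, width, dx, dy, chk = struct.unpack(TAG_FORMAT, payload)
    # Validate the checksum over the first five 16-bit words
    calc = 0
    for word in struct.unpack(">5H", payload[:10]):
        calc ^= word
    if calc != chk:
        raise ValueError("checksum mismatch")
    return {"id": tid,
            "footprint_mm": (length, width),
            "tag_offset_mm": (dx, dy)}
```

A parsed footprint and tag offset are enough for the robot to reconstruct the trolley outline relative to the antenna read point.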

2025

Integrating Multimodal Perception into Ground Mobile Robots

Authors
Sousa, RB; Sobreira, HM; Martins, JG; Costa, PG; Silva, MF; Moreira, AP;

Publication
2025 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS, ICARSC

Abstract
Multimodal perception systems enhance the robustness and adaptability of autonomous mobile robots by integrating heterogeneous sensor modalities, improving long-term localisation and mapping in dynamic environments and human-robot interaction. Current mobile platforms often focus on specific sensor configurations and prioritise cost-effectiveness, possibly limiting the flexibility of the user to extend the original robots further. This paper presents a methodology to integrate multimodal perception into a ground mobile platform, incorporating wheel odometry, 2D laser scanners, 3D Light Detection and Ranging (LiDAR), and RGBD cameras. The methodology describes the electronics design to power devices, firmware, computation and networking architecture aspects, and mechanical mounting for the sensory system based on 3D printing, laser cutting, and bending metal sheet processes. Experiments demonstrate the usage of the revised platform in 2D and 3D localisation and mapping and pallet pocket estimation applications. All the documentation and designs are accessible in a public repository.
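As a small, hedged illustration of the fusion problem behind integrating heterogeneous sensors: measurements from independently clocked devices (LiDAR, cameras, wheel odometry) must first be associated by timestamp. A minimal nearest-timestamp pairing sketch; the function name and tolerance are illustrative, not from the paper:

```python
import bisect

def pair_nearest(ts_a, ts_b, max_dt=0.05):
    """Pair each timestamp in ts_a (sorted, seconds) with the nearest
    timestamp in ts_b (sorted), dropping pairs further apart than
    max_dt. Returns a list of (index_in_a, index_in_b) tuples."""
    pairs = []
    for i, t in enumerate(ts_a):
        j = bisect.bisect_left(ts_b, t)
        # The nearest neighbour is either just before or at/after t
        candidates = [k for k in (j - 1, j) if 0 <= k < len(ts_b)]
        if not candidates:
            continue
        k = min(candidates, key=lambda k: abs(ts_b[k] - t))
        if abs(ts_b[k] - t) <= max_dt:
            pairs.append((i, k))
    return pairs
```

Message-synchronisation utilities in middleware such as ROS follow the same idea with queues instead of full sorted arrays.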

2025

From Competition to Classroom: A Hands-on Approach to Robotics Learning

Authors
Lopes, MS; Ribeiro, JD; Moreira, AP; Rocha, CD; Martins, JG; Sarmento, JM; Carvalho, JP; Costa, PG; Sousa, RB;

Publication
2025 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS, ICARSC

Abstract
Robotics education plays a crucial role in developing STEM skills. However, university-level courses often emphasize theoretical learning, which can lead to decreased student engagement and motivation. In this paper, we tackle the challenge of providing hands-on robotics experience in higher education by adapting a mobile robot originally designed for competitions to be used in laboratory classes. Our approach integrates real-world robot operation into coursework, bridging the gap between simulation and physical implementation while maintaining accessibility. The robot's software is developed using ROS, and its effectiveness is assessed through student surveys. The results indicate that the platform increases student engagement and interest in robotics topics. Furthermore, feedback collected from teachers confirms that the platform boosts students' confidence and understanding of robotics.

2024

Pallet and Pocket Detection Based on Deep Learning Techniques

Authors
Caldana, D; Cordeiro, A; Sousa, JP; Sousa, RB; Rebello, PM; Silva, AJ; Silva, MF;

Publication
2024 7TH IBERIAN ROBOTICS CONFERENCE, ROBOT 2024

Abstract
The high level of precision and consistency required for pallet detection in industrial environments and logistics tasks is a critical challenge that has been the subject of extensive research. This paper proposes a system for detecting pallets and their pockets using the You Only Look Once (YOLO) v8 Open Neural Network Exchange (ONNX) model, followed by segmentation of the pallet surface. Based on this system, a pipeline is built on the ROS Action Server whose structure promotes modularity and ease of implementing heuristics. Additionally, a comparison is presented between the YOLOv5 and YOLOv8 models in the detection task, trained with a customised dataset from a factory environment. The results demonstrate that the pipeline can consistently perform pallet and pocket detection, even when tested in the laboratory and with successive 3D pallet segmentation. When comparing the models, YOLOv8 achieved higher average metric values, with YOLOv8m providing better detection performance in the laboratory setting.
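The abstract does not detail the pipeline's heuristics; one plausible post-detection step in this kind of system is associating each detected pocket with the pallet box that contains most of its area. A minimal sketch with axis-aligned boxes; function names and the overlap threshold are hypothetical, not from the paper:

```python
def box_intersection(a, b):
    """Intersection area of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    return max(0.0, x2 - x1) * max(0.0, y2 - y1)

def assign_pockets(pallets, pockets, min_overlap=0.5):
    """Assign each pocket detection to the pallet box containing the
    largest fraction of its area; pockets whose best fraction falls
    below min_overlap are returned as orphans."""
    assigned = {i: [] for i in range(len(pallets))}
    orphans = []
    for j, pk in enumerate(pockets):
        area = (pk[2] - pk[0]) * (pk[3] - pk[1])
        best, best_frac = None, 0.0
        for i, pl in enumerate(pallets):
            frac = box_intersection(pl, pk) / area if area > 0 else 0.0
            if frac > best_frac:
                best, best_frac = i, frac
        if best is not None and best_frac >= min_overlap:
            assigned[best].append(j)
        else:
            orphans.append(j)
    return assigned, orphans
```

Grouping pockets per pallet this way lets later stages (e.g. 3D surface segmentation) reason about each pallet and its pockets as one unit.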