
Publications by CRIIS

2025

Dual-Arm Manipulation of a T-Shirt from a Hanger for Feeding a Hem Sewing Machine

Authors
Almeida, F; Leão, G; Costa, M; Rocha, D; Sousa, A; da Silva, LG; Rocha, F; Veiga, G;

Publication
Proceedings of the International Conference on Informatics in Control, Automation and Robotics

Abstract
The textile industry is experiencing rapid advancement, reflected in the adoption of innovative and efficient manufacturing techniques. The automation of clothing sewing systems has the potential to reduce the allocation of repetitive tasks to operators, freeing them for more value-added operations. There are several machines on the market that automatically sew the bottom hem of T-shirts, a key component of the garment that fulfills both functional and aesthetic purposes. However, most of them require the fabric to be positioned manually by an operator. To address this issue, this work presents a solution to automate the process of feeding a T-shirt into a SiRUBA sewing machine using a YuMi dual-arm robot. In this scenario, the T-shirt arrives at the workstation with the main front and back pieces of cloth sewn together, seams facing out, and with no sleeves yet. This setup starts by turning the garment inside out with the aid of an automated hanger, ensuring that the seams are facing inward (as the machine requires), and then using the dual-arm robot to feed the garment into the sewing machine. With our approach, the feeding and hemming process took less than 35 seconds, with a feeding success rate of 98%. Therefore, this work can serve as a stepping stone towards more efficient automated sewing systems within the garment production industry.

2025

Friday: The Versatile Mobile Manipulator Robot

Authors
de Souza, JPC; Cordeiro, AJ; Dias, PA; Rocha, LF;

Publication
EUROPEAN ROBOTICS FORUM 2025

Abstract
This article introduces Friday, a Mobile Manipulator (MoMa) solution designed at iiLab - INESC TEC. Friday is versatile and applicable in various contexts, including warehouses, naval shipyards, aerospace industries, and production lines. The robot features an omnidirectional platform, multiple grippers, and sensors for localisation, safety, and object detection. Its modular hardware and software system enhances functionality across different industrial scenarios. The system provides a stable platform supporting scientific advancements and meeting modern industry demands, with results verified in the aerospace, automotive, naval, and logistics sectors.

2025

Object segmentation dataset generation framework for robotic bin-picking: Multi-metric analysis between results trained with real and synthetic data

Authors
Cordeiro, A; Rocha, LF; Boaventura-Cunha, J; Pires, EJS; Souza, JP;

Publication
COMPUTERS & INDUSTRIAL ENGINEERING

Abstract
The implementation of deep learning approaches based on instance segmentation data remains a challenge for customized scenarios, owing to the time-consuming nature of acquiring and annotating real-world instance segmentation data, which requires a significant investment of semi-professional user labour. Obtaining high-quality labelled data demands expertise and meticulous attention to detail. This requirement can significantly impact the overall implementation process, adding to the complexity and resource requirements of customized scenarios with diverse objects. The proposed work addresses the challenge of generating labelled data for large-scale robotic bin-picking datasets by introducing an easy-to-use automated framework designed to create customized data with accurate labels from CAD models. The framework leverages a photorealistic rendering engine integrated with physics simulation, minimizing the gap between synthetic and real-world data. Models trained using the synthetic data generated by this framework achieved an Average Precision of 86.95%, comparable to the performance of models trained on real-world datasets. Furthermore, this paper provides a comprehensive multi-metric analysis across diverse objects representing distinct industrial applications, including naval, logistics, and aerospace domains. The evaluation also includes the use of three distinct instance segmentation networks, alongside a comparative analysis of the proposed approach against two generative model techniques.
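The labour-saving idea behind such frameworks is that labels come for free from the renderer: once each CAD model is rendered with a unique instance ID, per-object masks and bounding boxes can be derived mechanically instead of annotated by hand. As an illustration only (the paper's actual pipeline and annotation format are not specified here), a minimal sketch of deriving annotations from a rendered instance-ID map:

```python
def masks_to_annotations(id_map):
    """Derive per-instance bounding boxes and pixel areas from a rendered
    instance-ID map (a list of rows of integers, where 0 = background).
    These are exactly the labels that would otherwise require manual work."""
    anns = {}
    for y, row in enumerate(id_map):
        for x, inst in enumerate(row):
            if inst == 0:
                continue  # background pixel, not part of any object
            a = anns.setdefault(inst, {"xmin": x, "ymin": y,
                                       "xmax": x, "ymax": y, "area": 0})
            a["xmin"] = min(a["xmin"], x)
            a["xmax"] = max(a["xmax"], x)
            a["ymin"] = min(a["ymin"], y)
            a["ymax"] = max(a["ymax"], y)
            a["area"] += 1
    return anns
```

A real pipeline would additionally serialize the per-pixel masks (e.g. in a COCO-style format), but the principle is the same: the simulator knows which object every pixel belongs to, so annotation becomes a lookup rather than a labelling task.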

2025

Quality Inspection in Casting Aluminum Parts: A Machine Vision System for Filings Detection and Hole Inspection

Authors
Nascimento, R; Ferreira, T; Rocha, CD; Filipe, V; Silva, MF; Veiga, G; Rocha, L;

Publication
JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS

Abstract
Quality inspection systems are critical for maintaining product integrity. When performed solely by operators, this repetitive task can be slow and error-prone. This paper introduces an automated inspection system for quality assessment in casting aluminum parts resorting to a robotic system. The method comprises two processes: filing detection and hole inspection. For filing detection, five deep learning models were trained: an object detector, YOLOv8, and four instance segmentation models, YOLOv8n-seg, YOLOv8s-seg, YOLOv8m-seg, and Mask R-CNN. Among these, YOLOv8s-seg exhibited the best overall performance, achieving a recall rate of 98.10%, which is critical for minimizing false negatives. Alongside, the system inspects holes, utilizing image processing techniques like template matching and blob detection, achieving a 97.30% accuracy and a 2.67% Percentage of Wrong Classifications. The system improves inspection precision and efficiency while supporting sustainability and ergonomic standards, reducing material waste and operator fatigue.
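The hole-inspection step combines template matching with blob detection, i.e. locating connected dark regions in the image and checking them against the expected hole geometry. The paper does not publish its implementation, so the following is a hedged sketch of the blob-detection half only, assuming a simple grayscale threshold and a single expected hole per region of interest:

```python
from collections import deque

def find_blobs(img, thresh=128):
    """Return the sizes of 4-connected regions of pixels darker than thresh.
    img is a list of rows of grayscale values (0-255)."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if img[y][x] < thresh and not seen[y][x]:
                # BFS flood fill over one dark region
                size, q = 0, deque([(y, x)])
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    size += 1
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and not seen[ny][nx] and img[ny][nx] < thresh):
                            seen[ny][nx] = True
                            q.append((ny, nx))
                blobs.append(size)
    return blobs

def hole_present(img, expected_area, tol=0.3):
    """A hole passes inspection if exactly one dark blob exists and its
    area is within a relative tolerance of the expected area."""
    blobs = find_blobs(img)
    return len(blobs) == 1 and abs(blobs[0] - expected_area) <= tol * expected_area
```

In practice a library detector (e.g. OpenCV's blob detection) would add filtering by circularity and convexity, which is what distinguishes a clean drilled hole from a stray filing.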

2025

Automated optical system for quality inspection on reflective parts

Authors
Nascimento, R; Rocha, CD; Gonzalez, DG; Silva, T; Moreira, R; Silva, MF; Filipe, V; Rocha, LF;

Publication
INTERNATIONAL JOURNAL OF ADVANCED MANUFACTURING TECHNOLOGY

Abstract
The growing demand for high-quality components in various industries, particularly in the automotive sector, requires advanced and reliable inspection methods to maintain competitive standards and support innovation. Manual quality inspection tasks are often inefficient and prone to errors due to their repetitive nature and subjectivity, which can lead to attention lapses and operator fatigue. The inspection of reflective aluminum parts presents additional challenges, as uncontrolled reflections and glare can obscure defects and reduce the reliability of conventional vision-based methods. Addressing these challenges requires optimized illumination strategies and robust image processing techniques to enhance defect visibility. This work presents the development of an automated optical inspection system for reflective parts, focusing on components made of high-pressure die-cast aluminum used in the automotive industry. The reflective nature of these parts introduces challenges for defect detection, requiring optimized illumination and imaging methods. The system applies deep learning algorithms and uses a dome light to achieve uniform illumination, enabling the detection of small defects on reflective surfaces. A collaborative robotic manipulator equipped with a gripper handles the parts during inspection, ensuring precise positioning and repeatability, which improves both the efficiency and effectiveness of the inspection process. A flow execution-based software platform integrates all system components, enabling seamless operation. The system was evaluated with Schmidt Light Metal Group using three custom datasets to detect surface porosities and inner wall defects post-machining. For surface porosity detection, YOLOv8-Mosaic, trained with cropped images to reduce background noise, achieved a recall value of 84.71% and was selected for implementation. Additionally, an endoscopic camera was used in a preliminary study to detect defects within the inner walls of holes. The industrial trials produced promising results, demonstrating the feasibility of implementing a vision-based automated inspection system in various industries. The system improves inspection accuracy and efficiency while reducing material waste and operator fatigue.

2025

Indoor Benchmark of 3-D LiDAR SLAM at iiLab - Industry and Innovation Laboratory

Authors
Ribeiro, JD; Sousa, RB; Martins, JG; Aguiar, AS; Santos, FN; Sobreira, HM;

Publication
IEEE ACCESS

Abstract
This paper presents an indoor benchmarking study of state-of-the-art 3D LiDAR-based Simultaneous Localization and Mapping (SLAM) algorithms using the newly developed IILABS 3D - iilab Indoor LiDAR-based SLAM 3D dataset. Existing SLAM datasets often focus on outdoor environments, rely on a single type of LiDAR sensor, or lack additional sensor data such as wheel odometry in ground-based robotic platforms. Consequently, they lack the data diversity required to comprehensively evaluate performance under diverse indoor conditions. The IILABS 3D dataset fills this gap by providing a sensor-rich, indoor-exclusive dataset recorded in a controlled laboratory environment using a wheeled mobile robot platform. It includes four heterogeneous 3D LiDAR sensors - Velodyne VLP-16, Ouster OS1-64, RoboSense RS-Helios-5515, and Livox Mid-360 - featuring both mechanical spinning and non-repetitive scanning patterns, as well as an IMU and wheel odometry for sensor fusion. The dataset also contains calibration sequences, challenging benchmark trajectories, and high-precision ground-truth poses captured with a motion capture system. Using this dataset, we benchmark nine representative LiDAR-based SLAM algorithms across multiple sequences, analyzing their performance in terms of accuracy and consistency under varying sensor configurations. The results provide a comprehensive performance comparison and valuable insights into the strengths and limitations of current SLAM algorithms in indoor environments. The dataset, benchmark results, and related tools are publicly available at https://jorgedfr.github.io/3d_lidar_slam_benchmark_at_iilab/
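Benchmarks of this kind typically report accuracy as the Absolute Trajectory Error (ATE) of each SLAM estimate against the motion-capture ground truth. The paper's exact metrics and tooling are not reproduced here; as a minimal sketch, the translational ATE is the RMSE over time-aligned positions (a full evaluation would first apply an SE(3) alignment between the two trajectories, which this sketch omits):

```python
import math

def absolute_trajectory_error(gt, est):
    """RMSE of translational error between time-aligned ground-truth and
    estimated positions, each a list of (x, y, z) tuples."""
    assert len(gt) == len(est), "trajectories must be time-aligned"
    se = sum((gx - ex) ** 2 + (gy - ey) ** 2 + (gz - ez) ** 2
             for (gx, gy, gz), (ex, ey, ez) in zip(gt, est))
    return math.sqrt(se / len(gt))
```

For example, an estimate that drifts 1 m vertically on the second of two poses yields an ATE of sqrt((0 + 1) / 2) ≈ 0.707 m, which is why a single large drift event dominates this metric.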
