Details
Name
Daniel Filipe Lopes
Role
Research Assistant
Since
6th March 2023
Nationality
Portugal
Centre
Robotics in Industry and Intelligent Systems
Contacts
+351222094171
daniel.f.lopes@inesctec.pt
2025
Authors
Lopes, D; Silva, MF; Rocha, F; Filipe, V
Publication
IEEE International Conference on Emerging Technologies and Factory Automation, ETFA
Abstract
The textile industry faces economic and environmental challenges due to low recycling rates and contamination from fasteners like buttons, rivets, and zippers. This paper proposes a Red, Green, Blue (RGB) vision system using You Only Look Once version 11 (YOLOv11) with a sliding window technique for automated fastener detection. The system addresses small object detection, occlusion, and fabric variability, incorporating Grounding DINO for garment localization and U2-Net for segmentation. Experiments show the sliding window method outperforms full-image detection for buttons and rivets (precision 0.874, recall 0.923), while zipper detection is less effective due to dataset limitations. This work advances scalable AI-driven solutions for textile recycling, supporting circular economy goals. Future work will target hidden fasteners, dataset expansion, and fastener removal. © 2025 IEEE.
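The gains reported above come from tiling the image so that small fasteners occupy more pixels per detection pass. A minimal Python sketch of that sliding-window idea follows, assuming the ultralytics YOLO API; the checkpoint name, tile size, and overlap are illustrative placeholders, not the paper's actual settings.

import torch
from torchvision.ops import nms
from ultralytics import YOLO

model = YOLO("fasteners_yolo11.pt")  # hypothetical fine-tuned fastener weights

def detect_fasteners(image, tile=640, overlap=0.25, conf=0.25, iou=0.5):
    """Detect fasteners on overlapping tiles, then fuse boxes with global NMS."""
    step = int(tile * (1 - overlap))
    h, w = image.shape[:2]
    dets = []  # [x1, y1, x2, y2, score, cls] in full-image coordinates
    # (for brevity, the last row/column of tiles is not snapped to the image border)
    for y0 in range(0, max(h - tile, 0) + 1, step):
        for x0 in range(0, max(w - tile, 0) + 1, step):
            crop = image[y0:y0 + tile, x0:x0 + tile]
            res = model(crop, conf=conf, verbose=False)[0]
            for b in res.boxes:
                x1, y1, x2, y2 = b.xyxy[0].tolist()
                # Shift tile-local boxes back to full-image coordinates
                dets.append([x1 + x0, y1 + y0, x2 + x0, y2 + y0,
                             float(b.conf[0]), int(b.cls[0])])
    if not dets:
        return []
    t = torch.tensor(dets)
    # Overlapping tiles detect the same fastener twice; global NMS keeps one box
    keep = nms(t[:, :4], t[:, 4], iou)
    return t[keep].tolist()

Running detection per tile trades inference time for effective resolution: each button or rivet covers far more of a single tile than of the full garment image, which is what lifts recall on small objects.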
2023
Authors
Lopes, D; Coelho, L; Silva, MF
Publication
Applied Sciences (Basel)
Abstract
Listening to internal body sounds, or auscultation, is one of the most popular diagnostic techniques in medicine. In addition to being simple, non-invasive, and low-cost, the information it offers in real time is essential for clinical decision-making. This process, usually performed by a doctor in the presence of the patient, currently presents three challenges: procedure duration, participants' safety, and the patient's privacy. In this article, we tackle these challenges by proposing a new autonomous robotic auscultation system. With the patient prepared for the examination, a 3D computer vision sub-system identifies the auscultation points and translates them into spatial coordinates. The robotic arm is then responsible for bringing the stethoscope into contact with the patient's skin at the various auscultation points. The proposed solution was evaluated in a simulated pulmonary auscultation of six patients (with distinct heights, weights, and skin colors). The results showed that the vision subsystem correctly identified 100% of the auscultation points under uncontrolled lighting conditions, and the positioning subsystem accurately placed the gripper at the corresponding positions on the human body. Patients reported no discomfort during auscultation with the described automated procedure.
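The "translates them into spatial coordinates" step maps each detected auscultation point from pixel space to a 3D point the arm can reach. A short illustrative sketch, assuming a pinhole camera model with a depth reading per pixel; the intrinsics and the camera-to-robot transform below are placeholders, not the paper's calibration values.

import numpy as np

fx, fy, cx, cy = 615.0, 615.0, 320.0, 240.0  # placeholder camera intrinsics
T_base_cam = np.eye(4)  # placeholder camera-to-robot-base (hand-eye) transform

def auscultation_point_to_base(u, v, depth_m):
    """Back-project pixel (u, v) with depth in metres to the robot base frame."""
    # Pinhole back-projection into camera coordinates
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    p_cam = np.array([x, y, depth_m, 1.0])  # homogeneous point in camera frame
    # Express the point in the robot base frame for the arm's motion planner
    return (T_base_cam @ p_cam)[:3]

In practice the arm would also need an approach orientation (for example, along the local surface normal) so the stethoscope sits flat on the skin; this sketch covers only the position.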