
Details

  • Name

    Daniel Filipe Lopes
  • Role

    Research Assistant
  • Since

    6th March 2023
Publications

2025

Towards an Artificial Intelligence System for Automated Accessory Removal in Textile Recycling: Detecting Textile Fasteners

Authors
Lopes, D; Silva, MF; Rocha, F; Filipe, V;

Publication
IEEE International Conference on Emerging Technologies and Factory Automation, ETFA

Abstract
The textile industry faces economic and environmental challenges due to low recycling rates and contamination from fasteners like buttons, rivets, and zippers. This paper proposes a Red, Green, Blue (RGB) vision system using You Only Look Once version 11 (YOLOv11) with a sliding window technique for automated fastener detection. The system addresses small object detection, occlusion, and fabric variability, incorporating Grounding DINO for garment localization and U2-Net for segmentation. Experiments show the sliding window method outperforms full-image detection for buttons and rivets (precision 0.874, recall 0.923), while zipper detection is less effective due to dataset limitations. This work advances scalable AI-driven solutions for textile recycling, supporting circular economy goals. Future work will target hidden fasteners, dataset expansion, and fastener removal. © 2025 IEEE.
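The sliding-window idea behind the small-object detection described above can be sketched in a few lines: tile the full image into overlapping windows, run the detector on each tile, and map tile-local boxes back to full-image coordinates. This is a minimal, generic sketch; the window size, stride, and the detector itself are illustrative placeholders, not values or code from the paper.

```python
def sliding_windows(img_w, img_h, win, stride):
    """Yield (x0, y0, x1, y1) tiles covering the image with overlap."""
    xs = list(range(0, max(img_w - win, 0) + 1, stride))
    ys = list(range(0, max(img_h - win, 0) + 1, stride))
    # Ensure the right and bottom edges are always covered by a final tile.
    if xs[-1] + win < img_w:
        xs.append(img_w - win)
    if ys[-1] + win < img_h:
        ys.append(img_h - win)
    for y in ys:
        for x in xs:
            yield (x, y, min(x + win, img_w), min(y + win, img_h))

def to_image_coords(tile, box):
    """Map a tile-local detection box back to full-image coordinates."""
    tx, ty, _, _ = tile
    bx0, by0, bx1, by1 = box
    return (tx + bx0, ty + by0, tx + bx1, ty + by1)
```

In a full pipeline, each tile would be passed to the detector and the remapped boxes merged with non-maximum suppression, which is what lets a small fastener occupy a usable fraction of the detector's input resolution.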

2023

Development of a Collaborative Robotic Platform for Autonomous Auscultation

Authors
Lopes, D; Coelho, L; Silva, MF;

Publication
APPLIED SCIENCES-BASEL

Abstract
Listening to internal body sounds, or auscultation, is one of the most popular diagnostic techniques in medicine. In addition to being simple, non-invasive, and low-cost, the information it offers, in real time, is essential for clinical decision-making. This process, usually performed by a doctor in the presence of the patient, currently presents three challenges: procedure duration, participants' safety, and the patient's privacy. In this article, we tackle these challenges by proposing a new autonomous robotic auscultation system. With the patient prepared for the examination, a 3D computer vision sub-system identifies the auscultation points and translates them into spatial coordinates. The robotic arm is then responsible for bringing the stethoscope into contact with the patient's skin at the various auscultation points. The proposed solution was evaluated in a simulated pulmonary auscultation on six patients (with distinct height, weight, and skin color). The obtained results showed that the vision subsystem correctly identified 100% of the auscultation points under uncontrolled lighting conditions, and the positioning subsystem accurately positioned the gripper at the corresponding locations on the human body. Patients reported no discomfort during auscultation using the described automated procedure.
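The step of translating a detected auscultation point into a spatial coordinate for the robot arm typically relies on the pinhole camera model: a pixel plus a depth reading is back-projected into camera-frame 3D coordinates. The sketch below illustrates that conversion under stated assumptions; the intrinsic parameters (`fx`, `fy`, `cx`, `cy`) are illustrative placeholders, not values from the paper, and a real system would further transform the result into the robot's base frame.

```python
def deproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with a depth reading (same units as the
    returned coordinates) into camera-frame (x, y, z) via the pinhole model.

    fx, fy: focal lengths in pixels; cx, cy: principal point in pixels.
    """
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)
```

For example, a pixel at the principal point maps straight onto the optical axis, so its x and y coordinates are zero regardless of depth.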