2025
Authors
André Filipe Pinto; Nuno Alexandre Cruz; Bruno M. Ferreira; Salviano P. Soares; Vítor M. Filipe
Publication
OCEANS 2025 - Great Lakes
2026
Authors
de Azambuja, RX; Morais, AJ; Filipe, V
Publication
Lecture Notes in Networks and Systems
Abstract
Deep learning and large language models (LLMs) have recently enabled state-of-the-art techniques that enhance recommender systems. This research addresses the next-item recommendation problem in Web applications using these technologies, with a case study in the wine domain. The paper characterizes the framework developed for this purpose: adaptive recommendation based on a new modeling of the initial data that captures the user's dynamic taste profile. Following the design science research methodology, three contributions are presented: (i) X-Wines, a novel wine dataset; (ii) X-Model4Rec (eXtensible Model for Recommendation), an updated recommender model built on the attention and transformer mechanisms at the core of LLMs; and (iii) a collaborative Web platform supporting adaptive wine recommendation to users in an online environment. The results indicate that the proposed solutions can improve recommendations in online environments and motivate further scientific work on specific topics.
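As a rough illustration of the attention-based modeling the abstract refers to, the sketch below runs a generic transformer encoder over a user's interaction history and scores the next item, in the style of sequential recommenders. It is not the X-Model4Rec architecture, whose details are in the paper; the class name, layer sizes, and toy history are illustrative assumptions.

```python
# Minimal sketch of a transformer-based next-item recommender.
# All hyperparameters below are illustrative, not X-Model4Rec's.
import torch
import torch.nn as nn

class NextItemTransformer(nn.Module):
    def __init__(self, num_items: int, d_model: int = 64, max_len: int = 50):
        super().__init__()
        self.item_emb = nn.Embedding(num_items + 1, d_model, padding_idx=0)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.out = nn.Linear(d_model, num_items + 1)

    def forward(self, seq: torch.Tensor) -> torch.Tensor:
        # seq: (batch, seq_len) of item ids; 0 is padding
        positions = torch.arange(seq.size(1), device=seq.device)
        h = self.item_emb(seq) + self.pos_emb(positions)
        # causal mask: each position attends only to earlier interactions
        mask = nn.Transformer.generate_square_subsequent_mask(seq.size(1)).to(seq.device)
        h = self.encoder(h, mask=mask)
        return self.out(h[:, -1])  # scores over the catalogue for the next item

# Toy usage: score the next wine for one user's history of item ids.
model = NextItemTransformer(num_items=1000)
history = torch.tensor([[12, 87, 430, 5]])
top5 = model(history).topk(5).indices
```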
2025
Authors
Lopes, D; Silva, MF; Rocha, F; Filipe, V
Publication
IEEE International Conference on Emerging Technologies and Factory Automation, ETFA
Abstract
The textile industry faces economic and environmental challenges due to low recycling rates and contamination from fasteners such as buttons, rivets, and zippers. This paper proposes a Red, Green, Blue (RGB) vision system using You Only Look Once version 11 (YOLOv11) with a sliding-window technique for automated fastener detection. The system addresses small-object detection, occlusion, and fabric variability, incorporating Grounding DINO for garment localization and U2-Net for segmentation. Experiments show the sliding-window method outperforms full-image detection for buttons and rivets (precision 0.874, recall 0.923), while zipper detection is less effective due to dataset limitations. This work advances scalable AI-driven solutions for textile recycling, supporting circular-economy goals. Future work will target hidden fasteners, dataset expansion, and fastener removal.
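The sliding-window step can be pictured as follows: the garment image is tiled into overlapping crops, the detector runs on each crop, and the resulting boxes are offset back into full-image coordinates. The sketch below assumes the ultralytics YOLO API with an arbitrary YOLOv11 checkpoint; the window size, stride, and file names are illustrative, not the paper's configuration.

```python
# Sliding-window detection sketch, assuming the ultralytics YOLO API.
from ultralytics import YOLO
import cv2

model = YOLO("yolo11n.pt")  # any YOLOv11 detection checkpoint

def sliding_window_detect(image, window=640, stride=320):
    """Detect on overlapping crops, mapping boxes back to full-image coords."""
    h, w = image.shape[:2]
    detections = []
    for y in range(0, max(h - window, 0) + 1, stride):
        for x in range(0, max(w - window, 0) + 1, stride):
            crop = image[y:y + window, x:x + window]
            for r in model(crop, verbose=False):
                for box in r.boxes:
                    x1, y1, x2, y2 = box.xyxy[0].tolist()
                    detections.append((x1 + x, y1 + y, x2 + x, y2 + y,
                                       float(box.conf), int(box.cls)))
    # overlapping windows produce duplicates; a final NMS pass would merge them
    return detections

garment = cv2.imread("garment.jpg")
boxes = sliding_window_detect(garment)
```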
2025
Autores
Venancio, R; Filipe, V; Cerveira, A; Gonçalves, L;
Publicação
FRONTIERS IN ARTIFICIAL INTELLIGENCE
Abstract
Riding a motorcycle involves risks that can be minimized through advanced sensing and response systems that assist the rider. Camera-collected images for monitoring road conditions can aid in the development of tools designed to enhance rider safety and prevent accidents. This paper proposes a method for developing deep learning models that operate efficiently on embedded systems such as the Raspberry Pi, enabling real-time decisions that account for road conditions. Our research tests and compares several state-of-the-art convolutional neural network architectures, including EfficientNet and Inception, to determine which offers the best balance between inference time and accuracy. Specifically, we measured top-1 accuracy and inference time on a Raspberry Pi, identifying EfficientNetV2 as the most suitable model due to its optimal trade-off between performance and computational demand. It significantly outperformed the other models in top-1 accuracy while maintaining competitive inference speeds, making it well suited for real-time applications in traffic-dense urban settings.
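A minimal version of the latency measurement described above might look like the sketch below, which times repeated forward passes of torchvision's EfficientNetV2-S on a fixed-size input after a few warm-up runs. The model variant, input resolution, and run counts are assumptions, not the study's protocol.

```python
# Latency-benchmark sketch using torchvision's EfficientNetV2-S.
import time
import torch
from torchvision.models import efficientnet_v2_s, EfficientNet_V2_S_Weights

model = efficientnet_v2_s(weights=EfficientNet_V2_S_Weights.DEFAULT).eval()
dummy = torch.randn(1, 3, 384, 384)  # one road-scene-sized input

with torch.inference_mode():
    for _ in range(5):            # warm-up runs, excluded from timing
        model(dummy)
    runs = 20
    start = time.perf_counter()
    for _ in range(runs):
        model(dummy)
    latency_ms = (time.perf_counter() - start) / runs * 1000

print(f"mean inference time: {latency_ms:.1f} ms")
```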
2024
Autores
Silva, T; Carvalho, T; Filipe, V; Gonçlves, L; Sousa, A;
Publicação
2024 INTERNATIONAL CONFERENCE ON GRAPHICS AND INTERACTION, ICGI
Abstract
In the modern world, making healthy food choices is increasingly important due to the rise in food-related illnesses. Existing tools, such as Nutri-Score and comprehensive food labels, often pose challenges for many consumers. This paper proposes an application that uses Optical Character Recognition (OCR) to read and interpret food labels, upgrading current solutions that rely mainly on reading product barcodes. Using advanced OCR and machine learning techniques, the system aims to accurately extract and analyze nutritional information directly from food packaging, without relying on a database of pre-registered products. This approach not only increases consumer awareness but also supports personalized diet management for diseases such as diabetes and hypertension, while promoting healthier eating habits and better health outcomes. Two minimalist functional prototypes were developed as a result of this work: a desktop application and a mobile application.
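To make the pipeline concrete, a bare-bones version of the label-reading step could pair Tesseract OCR with simple pattern matching over the recognized text, as sketched below. The regular expression and field names are illustrative assumptions, not the prototypes' actual parsing rules.

```python
# OCR nutrition-extraction sketch, assuming Tesseract via pytesseract.
import re
import pytesseract
from PIL import Image

def read_nutrition(label_path: str) -> dict:
    # recognize raw text from a photo of the nutrition label
    text = pytesseract.image_to_string(Image.open(label_path))
    facts = {}
    # match lines such as "Sugars 4.2 g" or "Salt: 0,5g" (illustrative pattern)
    pattern = re.compile(
        r"(energy|fat|carbohydrate|sugars|protein|salt)\s*:?\s*([\d.,]+)\s*(k?cal|kj|g|mg)",
        re.IGNORECASE,
    )
    for name, value, unit in pattern.findall(text):
        facts[name.lower()] = (value.replace(",", "."), unit.lower())
    return facts

print(read_nutrition("label.jpg"))
```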
2025
Autores
Nascimento, R; Gonzalez, DG; Pires, EJS; Filipe, V; Silva, MF; Rocha, LF;
Publicação
IEEE ACCESS
Abstract
The increasing demand for automated quality inspection in modern industry, particularly for transparent and reflective parts, has driven significant interest in vision-based technologies. These components pose unique challenges due to their optical properties, which often hinder conventional inspection techniques. This systematic review analyzes 24 peer-reviewed studies published between 2015 and 2025, aiming to assess the current state of the art in computer vision-based inspection systems tailored to such materials. The review synthesizes recent advancements in imaging setups, illumination strategies, and deep learning-based defect detection methods. It also identifies key limitations in current approaches, particularly regarding robustness under variable industrial conditions and the lack of standardized benchmarks. By highlighting technological trends and research gaps, this work offers valuable insights and directions for future research, emphasizing the need for adaptive, scalable, and industry-ready solutions to enhance the reliability and effectiveness of inspection systems for transparent and reflective parts.