2025
Authors
Patrício, C; Teixeira, LF; Neves, J;
Publication
CoRR
Abstract
2025
Authors
Ribeiro, AG; Vilaça, L; Costa, C; da Costa, TS; Carvalho, PM;
Publication
JOURNAL OF IMAGING
Abstract
Quality control is a critical function in industrial environments, ensuring that manufactured products meet strict standards and remain free from defects. In highly regulated sectors such as the pharmaceutical industry, traditional manual inspection methods remain widely used. However, these are time-consuming, prone to human error, and lack the reliability required for large-scale operations, highlighting the urgent need for automated solutions. Adaptability is crucial in industrial applications, where environments evolve and new defect types can arise unpredictably. This work proposes an automated visual defect detection system designed specifically for pharmaceutical bottles, with potential applicability to other manufacturing domains. Various methods were integrated to create robust tools capable of real-world deployment. A key strategy is incremental learning, which enables machine learning models to incorporate new, unseen data without full retraining; models can thus adapt to new defects as they appear and handle rare cases while maintaining stability and performance. The proposed solution incorporates a multi-view inspection setup that captures images from multiple angles, enhancing accuracy and robustness. Evaluations under real-world industrial conditions demonstrated high defect detection rates, confirming the effectiveness of the proposed approach.
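The incremental-learning strategy described in this abstract can be illustrated with a toy online classifier that is updated batch by batch, without retraining on all past data. The perceptron below, its feature vectors, and its labels are illustrative assumptions for exposition only, not the system the paper describes.

```python
# Toy sketch of incremental (online) learning for defect classification:
# the model is updated on each new batch of labelled examples without
# revisiting the full training history. Illustration only, not the
# authors' implementation.

def perceptron_update(weights, bias, batch, lr=0.1):
    """Update (weights, bias) on one batch of (features, label) pairs,
    where label is +1 (defect) or -1 (ok)."""
    for features, label in batch:
        score = sum(w * x for w, x in zip(weights, features)) + bias
        if label * score <= 0:  # misclassified: nudge the boundary
            for i, x in enumerate(features):
                weights[i] += lr * label * x
            bias += lr * label
    return weights, bias

def predict(weights, bias, features):
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if score > 0 else -1

# Initial training batch (2-D toy features standing in for image features).
w, b = [0.0, 0.0], 0.0
batch1 = [([2.0, 1.0], 1), ([-1.5, -0.5], -1), ([1.0, 2.0], 1)]
for _ in range(10):
    w, b = perceptron_update(w, b, batch1)

# A new, previously unseen defect type arrives later: update incrementally.
batch2 = [([-2.0, 3.0], 1), ([-1.0, -2.0], -1)]
for _ in range(10):
    w, b = perceptron_update(w, b, batch2)

print(predict(w, b, [2.0, 1.5]))  # prints 1: the old defect is still recognised
```

The key property mirrored here is that the second update pass only touches the new batch, so adaptation cost does not grow with the history, while earlier decision behaviour is largely preserved.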
2025
Authors
Gomes, C; Mastralexi, C; Carvalho, P;
Publication
IEEE ACCESS
Abstract
In football, where minor differences can significantly affect outcomes and performance, automatic video analysis has become a critical tool for analyzing and optimizing team strategies. However, many existing solutions require expensive and complex hardware comprising multiple cameras, sensors, or GPS devices, limiting accessibility for many clubs, particularly those with limited resources. Using images and video from a moving camera can let a wider audience benefit from video analysis, but it introduces new challenges related to motion. To address this, we explore an alternative approach to homography estimation in moving-camera scenarios. Homography plays a crucial role in video analysis, but it is difficult to estimate when keypoints are sparse, especially in dynamic environments. Existing techniques predominantly rely on visible keypoints and apply homography transformations on a frame-by-frame basis, often lacking temporal consistency and struggling in areas with sparse keypoints. This paper explores the use of estimated motion information for homography computation. Our experimental results reveal that integrating motion data directly into homography estimation reduces errors in keypoint-sparse frames, surpassing state-of-the-art methods and filling a current gap in moving-camera scenarios.
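The core idea of carrying homography information across frames via motion can be sketched as a matrix composition: if few keypoints are visible in the current frame, the previous pitch-to-image homography is propagated through the estimated inter-frame motion. The 3x3 motion matrix and the example values below are assumptions for illustration; the paper's actual estimation pipeline is not reproduced here.

```python
# Sketch of temporally-consistent homography propagation: when a frame
# has too few visible keypoints for a direct fit, compose the previous
# frame's pitch-to-image homography with the estimated frame-to-frame
# motion. M_t (e.g. derived from optical flow) is an assumed input.
import numpy as np

def propagate_homography(H_prev, M_t):
    """Compose last frame's homography with inter-frame motion:
    pitch -> image_{t-1} -> image_t."""
    H_t = M_t @ H_prev
    return H_t / H_t[2, 2]  # normalise so the bottom-right entry is 1

# Previous pitch-to-image homography (illustrative values).
H_prev = np.array([[1.0, 0.0, 10.0],
                   [0.0, 1.0, 5.0],
                   [0.0, 0.0, 1.0]])

# Camera pans 3 px right between frames (pure image translation).
M_t = np.array([[1.0, 0.0, 3.0],
                [0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0]])

H_t = propagate_homography(H_prev, M_t)

# A pitch point previously mapped to (10, 5) now maps to (13, 5).
point = H_t @ np.array([0.0, 0.0, 1.0])
print(point[:2] / point[2])  # [13.  5.]
```

In practice such a propagated estimate would be blended with, or corrected by, whatever keypoints are visible; the composition above only conveys why motion information restores temporal consistency when keypoints are sparse.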
2025
Authors
Abdellatif, AA; Fontes, H; Coelho, A; Pessoa, LM; Campos, R;
Publication
CoRR
Abstract
2025
Authors
Ribeiro, P; Coelho, A; Campos, R;
Publication
2025 13th Wireless Days Conference (WD)
Abstract
2025
Authors
Nunes, D; Amorim, R; Ribeiro, P; Coelho, A; Campos, R;
Publication
2025 IEEE INTERNATIONAL MEDITERRANEAN CONFERENCE ON COMMUNICATIONS AND NETWORKING, MEDITCOM
Abstract
This paper proposes FLUC, a modular framework that integrates open-source Large Language Models (LLMs) with Unmanned Aerial Vehicle (UAV) autopilot systems to enable autonomous control in Flying Networks (FNs). FLUC translates high-level natural language commands into executable UAV mission code, bridging the gap between operator intent and UAV behaviour. FLUC is evaluated using three open-source LLMs - Qwen 2.5, Gemma 2, and LLaMA 3.2 - across scenarios involving code generation and mission planning. Results show that Qwen 2.5 excels in multi-step reasoning, Gemma 2 balances accuracy and latency, and LLaMA 3.2 offers faster responses with lower logical coherence. A case study on energy-aware UAV positioning confirms FLUC's ability to interpret structured prompts and autonomously execute domain-specific logic, showing its effectiveness in real-time, mission-driven control.
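The translation step that FLUC performs, from operator intent to executable mission code, can be sketched as a prompt-build / generate / validate pipeline. The LLM call is stubbed below, and the JSON schema, action names, and coordinates are illustrative assumptions, not FLUC's actual interface.

```python
# Minimal sketch of a natural-language-to-mission pipeline: wrap the
# operator command in a structured prompt, have an LLM return a JSON
# mission plan, and validate the plan before handing it to an autopilot.
# The LLM is stubbed; schema and action names are assumptions.
import json

ALLOWED_ACTIONS = {"takeoff", "goto", "land"}

def build_prompt(command: str) -> str:
    return (
        "Translate the operator command into a JSON list of steps, "
        'each {"action": ..., "params": {...}}. '
        f"Command: {command}"
    )

def stub_llm(prompt: str) -> str:
    # Stand-in for a call to an open-source LLM (e.g. Qwen 2.5).
    return json.dumps([
        {"action": "takeoff", "params": {"alt_m": 30}},
        {"action": "goto", "params": {"lat": 41.17, "lon": -8.60}},
        {"action": "land", "params": {}},
    ])

def parse_mission(raw: str) -> list:
    """Validate LLM output before execution: only whitelisted actions
    with dict params are accepted, so malformed generations are rejected
    instead of reaching the autopilot."""
    steps = json.loads(raw)
    for step in steps:
        if step["action"] not in ALLOWED_ACTIONS:
            raise ValueError(f"unsupported action: {step['action']}")
        if not isinstance(step["params"], dict):
            raise ValueError("params must be an object")
    return steps

mission = parse_mission(stub_llm(build_prompt("Survey the field at 30 m")))
print([s["action"] for s in mission])  # ['takeoff', 'goto', 'land']
```

The validation layer is the design point worth noting: constraining generated plans to a fixed action vocabulary is one common way to keep free-form LLM output safe to execute on a real vehicle.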