2025
Authors
Jorge Diogo Ribeiro; Ricardo B. Sousa; João G. Martins; André S. Aguiar; Filipe N. Santos; Héber Miguel Sobreira;
Publication
IEEE Access
Abstract
2025
Authors
Pereira, J; Baltazar, AR; Pinheiro, I; da Silva, DQ; Frazao, ML; Neves Dos Santos, FN;
Publication
IEEE International Conference on Emerging Technologies and Factory Automation, ETFA
Abstract
Automated fruit harvesting systems rely heavily on accurate visual perception, particularly for crops such as the Arbutus tree (Arbutus unedo), which holds both ecological and economic significance. However, this species poses considerable challenges for computer vision due to its dense foliage and the morphological variability of its berries across different ripening stages. Despite its importance, the Arbutus tree remains under-explored in the context of precision agriculture and robotic harvesting. This study addresses that gap by evaluating a computer vision-based approach to detect and classify Arbutus berries into three ripeness categories: green, yellow-orange, and red. A significant contribution of this work is the release of two fully annotated open-access datasets, Arbutus Berry Detection Dataset and Arbutus Berry Ripeness Level Detection Dataset, developed through a structured manual labeling process. Additionally, we benchmarked four YOLO architectures - YOLOv8n, YOLOv9t, YOLOv10n, and YOLO11n - as well as the RT-DETR models, using these datasets. Among these, RT-DETR-L demonstrated the most consistent performance in terms of precision, recall, and generalization, outperforming the lighter YOLO models in both speed and accuracy. This highlights RT-DETR's strong potential for deployment in real-time automated harvesting systems, where robust detection and efficient inference are critical. © 2025 IEEE.
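A minimal sketch of this kind of detector benchmark, assuming the Ultralytics API; the dataset YAML and the listed weight files are illustrative placeholders, not the released Arbutus datasets or the authors' training configuration.

# Hypothetical benchmarking sketch: train and validate a few of the compared
# detectors on an annotated berry dataset via the Ultralytics API.
from ultralytics import YOLO, RTDETR

DATA = "arbutus_ripeness.yaml"  # assumed dataset config (train/val paths, 3 ripeness classes)

models = {
    "YOLOv8n": YOLO("yolov8n.pt"),
    "YOLO11n": YOLO("yolo11n.pt"),
    "RT-DETR-L": RTDETR("rtdetr-l.pt"),
}

for name, model in models.items():
    # Fine-tune briefly, then validate; epochs/imgsz are placeholder settings.
    model.train(data=DATA, epochs=50, imgsz=640)
    metrics = model.val(data=DATA)
    # Mean precision, mean recall, mAP@0.5 and mAP@0.5:0.95 for comparison.
    print(name, metrics.box.mp, metrics.box.mr, metrics.box.map50, metrics.box.map)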
2025
Authors
Carneiro, GA; Aubry, TJ; Cunha, A; Radeva, P; Sousa, JJ;
Publication
COMPUTERS AND ELECTRONICS IN AGRICULTURE
Abstract
Precision Agriculture (PA) has emerged as an approach to optimize production, comprising different technologies and principles focused on improving agricultural production. Currently, one of the main foundations of PA is the use of artificial intelligence through deep learning (DL) algorithms. By processing large volumes of complex data, DL enhances decision-making and boosts farming efficiency. However, these methods require large amounts of annotated data, which contrasts with the scarce availability of annotated agricultural data and the cost of annotation. Self-supervised learning (SSL) has emerged as a solution to tackle the lack of annotated agricultural data. This study presents a review of the application of SSL methods to computer vision tasks in the agricultural context. The aim is to create a starting point for professionals and scientists who intend to apply these methods to agricultural data. The results of 33 studies found in the literature are discussed, highlighting their pros and cons. In most of the studies, SSL outperformed its supervised counterpart, using datasets of 4,000 to 60,000 samples. Potential directions for improving future research are suggested.
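As an illustration of the kind of SSL objective such studies apply, below is a minimal SimCLR-style contrastive (NT-Xent) loss sketch in PyTorch; it is a generic example, not drawn from any of the 33 reviewed studies, and the embeddings are random placeholders.

# Minimal NT-Xent (contrastive) loss over two augmented views of the same images.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    # z1, z2: embeddings of two augmented views, shape (N, D)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit norm
    sim = torch.mm(z, z.t()) / temperature                # cosine similarity logits
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float("-inf"))                 # exclude self-pairs
    # The positive for sample i is its other augmented view, offset by N.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# Placeholder embeddings, e.g. from an encoder applied to unlabeled crop images.
z1, z2 = torch.randn(32, 128), torch.randn(32, 128)
print(nt_xent_loss(z1, z2).item())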
2025
Authors
Lopes, D; F Silva, MF; Rocha, F; Filipe, V;
Publication
IEEE International Conference on Emerging Technologies and Factory Automation, ETFA
Abstract
The textile industry faces economic and environmental challenges due to low recycling rates and contamination from fasteners such as buttons, rivets, and zippers. This paper proposes a Red, Green, Blue (RGB) vision system using You Only Look Once version 11 (YOLOv11) with a sliding window technique for automated fastener detection. The system addresses small object detection, occlusion, and fabric variability, incorporating Grounding DINO for garment localization and U2-Net for segmentation. Experiments show the sliding window method outperforms full-image detection for buttons and rivets (precision 0.874, recall 0.923), while zipper detection is less effective due to dataset limitations. This work advances scalable AI-driven solutions for textile recycling, supporting circular economy goals. Future work will target hidden fasteners, dataset expansion, and fastener removal. © 2025 IEEE.
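A hedged sketch of the sliding-window idea only (the Grounding DINO localization and U2-Net segmentation stages are omitted), assuming the Ultralytics API; the window size, stride, weight file, and image path are illustrative assumptions, not the paper's settings.

# Run a detector on overlapping crops and map boxes back to full-image coordinates.
from ultralytics import YOLO
import cv2

model = YOLO("yolo11n.pt")      # placeholder weights, not the trained fastener model
WIN, STRIDE = 640, 480          # overlapping windows (assumed values)

def sliding_window_detect(image_path):
    img = cv2.imread(image_path)
    h, w = img.shape[:2]
    detections = []
    for y in range(0, max(h - WIN, 0) + 1, STRIDE):
        for x in range(0, max(w - WIN, 0) + 1, STRIDE):
            crop = img[y:y + WIN, x:x + WIN]
            for box in model(crop, verbose=False)[0].boxes:
                x1, y1, x2, y2 = box.xyxy[0].tolist()
                detections.append((x + x1, y + y1, x + x2, y + y2,
                                   float(box.conf), int(box.cls)))
    return detections   # duplicates across overlapping windows still need NMS merging

print(len(sliding_window_detect("garment.jpg")))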
2024
Authors
Deguchi, T; Baltazar, AR; dos Santos, FN; Mendonça, H;
Publication
ROBOT 2023: SIXTH IBERIAN ROBOTICS CONFERENCE, VOL 2
Abstract
Since the advent of agriculture, humans have used phytopharmaceutical products to control pests and reduce losses in farming. However, some of these products, such as pesticides, can harm soil life. There is evidence in the literature that AI and image processing can contribute to reducing phytopharmaceutical losses when used in variable-rate sprayers. Nevertheless, existing sprayer systems can still be improved in precision, accuracy, and mechanical design. This work proposes a spraying solution called GraDeS (Grape Detection Sprayer). GraDeS is a sprayer with two degrees of freedom, controlled by an AI-based algorithm to precisely treat grape bunch diseases. Experiments with the designed sprayer showed two key points. First, the deep learning algorithm recognized and tracked grape bunches; even with structure movement and bunch occlusion, the algorithm employs several strategies to keep track of the detected objects. Second, the robotic sprayer can improve precision in specified areas, such as spraying exclusively the grape bunches. Because of the structure's reduced size, the system can be used on medium and small robots.
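A minimal detect-and-track sketch in the spirit of this pipeline, using the Ultralytics tracking API; the weight file, video source, and the aim_sprayer hook are hypothetical placeholders, not the GraDeS implementation.

# Track grape bunches across frames and expose their positions to a sprayer controller.
from ultralytics import YOLO

model = YOLO("grape_bunch.pt")   # assumed custom detector trained on grape bunches

# persist=True keeps track IDs across frames, so a bunch briefly occluded by
# leaves can keep the same ID when it reappears.
for result in model.track(source="vineyard_row.mp4", persist=True, stream=True):
    for box in result.boxes:
        if box.id is None:
            continue
        track_id = int(box.id)
        x_center, y_center, _, _ = box.xywh[0].tolist()
        # A 2-DoF sprayer controller could consume these targets, e.g.:
        # aim_sprayer(track_id, x_center, y_center)   # hypothetical hook
        print(track_id, x_center, y_center)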
2024
Authors
Levin, TB; Oliveira, JM; Sousa, RB; Silva, MF; Parreira, BS; Sobreira, HM; Mendonça, HS;
Publication
2024 7TH IBERIAN ROBOTICS CONFERENCE, ROBOT 2024
Abstract
Human oversight can benefit scenarios with complex tasks, such as pallet docking and container loading and unloading, that remain beyond the ability of autonomous systems to complete without failures. Furthermore, teleoperation systems allow remote control of mobile ground robots, especially with the surge of 5G technology, which promises reliable and low-latency communication. Current research explores the latest features of the 5G standard, including ultra-Reliable Low-Latency Communication (uRLLC) and network slicing. However, these features may not be available depending on the Internet Service Provider (ISP) and communication devices. Thus, this work proposes a network architecture for the teleoperation of ground mobile robots in industrial environments using commercially available devices over the 5G Non-Standalone (NSA) standard. Experimental results include an evaluation of the network and End-to-End (E2E) latency of the proposed system. The results show that the proposed architecture enables teleoperation, achieving an average E2E latency of 347.19 ms.
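A rough sketch of one way to estimate such latency, assuming a simple UDP echo between the operator station and the robot; the address, port, and sample count are placeholders, and this captures only the network round trip, not the paper's full end-to-end control and video pipeline.

# Operator-side latency probe: timestamp each packet and measure the echo round trip.
import socket
import time
import statistics

ROBOT_ADDR = ("192.168.1.50", 9000)   # assumed robot endpoint reachable over the 5G link

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(1.0)

samples = []
for seq in range(100):
    t_send = time.time()
    sock.sendto(f"{seq}:{t_send}".encode(), ROBOT_ADDR)
    try:
        data, _ = sock.recvfrom(1024)              # assumes the robot echoes the packet back
    except socket.timeout:
        continue                                   # treat as a lost sample
    samples.append((time.time() - t_send) * 1000.0)  # round-trip time in milliseconds
    time.sleep(0.05)

if samples:
    print(f"avg RTT: {statistics.mean(samples):.2f} ms over {len(samples)} samples")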