
Publications by Daniel Queirós Silva

2025

Arbutus Berry Detection and Classification for Harvesting

Authors
Pereira, J; Baltazar, AR; Pinheiro, I; da Silva, DQ; Frazao, ML; Neves Dos Santos, FN;

Publication
IEEE International Conference on Emerging Technologies and Factory Automation, ETFA

Abstract
Automated fruit harvesting systems rely heavily on accurate visual perception, particularly for crops such as the Arbutus tree (Arbutus unedo), which holds both ecological and economic significance. However, this species poses considerable challenges for computer vision due to its dense foliage and the morphological variability of its berries across different ripening stages. Despite its importance, the Arbutus tree remains under-explored in the context of precision agriculture and robotic harvesting. This study addresses that gap by evaluating a computer vision-based approach to detect and classify Arbutus berries into three ripeness categories: green, yellow-orange, and red. A significant contribution of this work is the release of two fully annotated open-access datasets, Arbutus Berry Detection Dataset and Arbutus Berry Ripeness Level Detection Dataset, developed through a structured manual labeling process. Additionally, we benchmarked four YOLO architectures - YOLOv8n, YOLOv9t, YOLOv10n, and YOLO11n - as well as the RT-DETR models, using these datasets. Among these, RT-DETR-L demonstrated the most consistent performance in terms of precision, recall, and generalization, outperforming the lighter YOLO models in both speed and accuracy. This highlights RT-DETR's strong potential for deployment in real-time automated harvesting systems, where robust detection and efficient inference are critical. © 2025 IEEE.
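Benchmarks like the one described above typically score detections against ground-truth boxes using Intersection-over-Union (IoU). As a minimal illustrative sketch (not code from the paper), with boxes given as (x1, y1, x2, y2) tuples:

```python
def iou(a, b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

A predicted berry box is usually counted as a true positive when its IoU with a ground-truth box exceeds a threshold such as 0.5.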

2024

Enhancing Grapevine Node Detection to Support Pruning Automation: Leveraging State-of-the-Art YOLO Detection Models for 2D Image Analysis

Authors
Oliveira, F; da Silva, DQ; Filipe, V; Pinho, TM; Cunha, M; Cunha, JB; dos Santos, FN;

Publication
SENSORS

Abstract
Automating pruning tasks entails overcoming several challenges, encompassing not only robotic manipulation but also environment perception and detection. To achieve efficient pruning, robotic systems must accurately identify the correct cutting points. A possible method to define these points is to choose the cutting location based on the number of nodes present on the targeted cane. For this purpose, in grapevine pruning, it is required to correctly identify the nodes present on the primary canes of the grapevines. In this paper, a novel method of node detection in grapevines is proposed with four distinct state-of-the-art versions of the YOLO detection model: YOLOv7, YOLOv8, YOLOv9 and YOLOv10. These models were trained on a public dataset with images containing artificial backgrounds and afterwards validated on different cultivars of grapevines from two distinct Portuguese viticulture regions with cluttered backgrounds. This allowed us to evaluate the robustness of the algorithms on the detection of nodes in diverse environments, compare the performance of the YOLO models used, as well as create a publicly available dataset of grapevines obtained in Portuguese vineyards for node detection. Overall, all used models were capable of achieving correct node detection in images of grapevines from the three distinct datasets. Considering the trade-off between accuracy and inference speed, the YOLOv7 model proved to be the most robust in detecting nodes in 2D images of grapevines, achieving F1-Score values between 70% and 86.5% with inference times of around 89 ms for an input size of 1280 x 1280 px. Considering these results, this work contributes an efficient approach for real-time node detection for further implementation on an autonomous robotic pruning system.
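The F1-Score values quoted above are the harmonic mean of precision and recall; a minimal illustration (not the paper's own code):

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall; 0.0 when both are zero."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Because it is a harmonic mean, F1 is dragged toward the weaker of the two metrics, which makes it a stricter summary than a simple average for detection tasks.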

2024

YOLO-Based Tree Trunk Types Multispectral Perception: A Two-Genus Study at Stand-Level for Forestry Inventory Management Purposes

Authors
da Silva, DQ; Dos Santos, FN; Filipe, V; Sousa, AJ; Pires, EJS;

Publication
IEEE ACCESS

Abstract
Stand-level forest tree species perception and identification are needed for monitoring-related operations, being crucial for better biodiversity and inventory management in forested areas. This paper contributes to this knowledge domain by researching tree trunk types multispectral perception at stand-level. YOLOv5 and YOLOv8 - Convolutional Neural Networks specialized in object detection and segmentation - were trained to detect and segment two tree trunk genera (pine and eucalyptus) using datasets collected in a forest region in Portugal. The dataset comprises only two categories, which correspond to the two tree genera. The datasets were manually annotated for object detection and segmentation with RGB and RGB-NIR images, and are publicly available. The Small variant of YOLOv8 was the best model at detection and segmentation tasks, achieving an F1 measure above 87% and 62%, respectively. The findings of this study suggest that the use of extended spectra, including Visible and Near Infrared, produces superior results. The trained models can be integrated into forest tractors and robots to monitor forest genera across different spectra. This can assist forest managers in controlling their forest stands.

2024

Assessing Soil Ripping Depth for Precision Forestry with a Cost-Effective Contactless Sensing System

Authors
da Silva, DQ; Louro, F; dos Santos, FN; Filipe, V; Sousa, AJ; Cunha, M; Carvalho, JL;

Publication
ROBOT 2023: SIXTH IBERIAN ROBOTICS CONFERENCE, VOL 2

Abstract
Forest soil ripping is a practice that involves revolving the soil in a forest area to prepare it for planting or sowing operations. Advanced sensing systems may help in this kind of forestry operation to assure ideal ripping depth and intensity, as these are important aspects that have potential to minimise the environmental impact of forest soil ripping. In this work, a cost-effective contactless system - capable of detecting and mapping soil ripping depth in real-time - was developed and tested in the laboratory and in a realistic forest scenario. The proposed system integrates two single-point LiDARs and a GNSS sensor. To evaluate the system, ground-truth data was manually collected in the field during the operation of the machine with a ripping implement. The proposed solution was tested in real conditions, and the results showed that the ripping depth was estimated with minimal error. The accuracy and ripping-depth mapping ability of the low-cost sensors justify their use to support improved soil preparation with machines or robots toward a sustainable forest industry.
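The abstract does not give the estimation formula, but one plausible sketch of how two downward-facing single-point LiDARs plus GNSS could yield a ripping-depth map is below. It assumes (hypothetically) that one sensor ranges to the undisturbed ground, the other ranges into the furrow behind the implement, both at the same mounting height, and that each depth sample is tagged with a GNSS fix:

```python
from dataclasses import dataclass

@dataclass
class DepthSample:
    lat: float
    lon: float
    depth_m: float

def estimate_depth(front_range_m, rear_range_m):
    # Assumed geometry: both single-point LiDARs point straight down from the
    # same height, so the rear sensor, looking into the furrow, reads a longer
    # range; the difference is the ripping depth (clamped at zero).
    return max(0.0, rear_range_m - front_range_m)

def map_depths(gnss_fixes, front_ranges, rear_ranges):
    """Pair each GNSS fix (lat, lon) with the depth estimated at that moment."""
    return [DepthSample(lat, lon, estimate_depth(f, r))
            for (lat, lon), f, r in zip(gnss_fixes, front_ranges, rear_ranges)]
```

All names and the sensor geometry here are assumptions for illustration, not the authors' implementation.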

2023

Deep Learning-Based Tree Stem Segmentation for Robotic Eucalyptus Selective Thinning Operations

Authors
da Silva, DQ; Rodrigues, TF; Sousa, AJ; dos Santos, FN; Filipe, V;

Publication
PROGRESS IN ARTIFICIAL INTELLIGENCE, EPIA 2023, PT II

Abstract
Selective thinning is a crucial operation to reduce forest ignitable material, to control the eucalyptus species and maximise its profitability. The selection and removal of less vigorous stems allows the remaining stems to grow healthier and without competition for water, sunlight and nutrients. This operation is traditionally performed by a human operator and is time-intensive. This work simplifies selective thinning by shifting stem selection from the human operator to a computer vision algorithm. For this, two distinct datasets of eucalyptus stems (with and without foliage) were built and manually annotated, and three Deep Learning object detectors (YOLOv5, YOLOv7 and YOLOv8) were tested on real context images to perform instance segmentation. YOLOv8 was the best at this task, achieving an Average Precision of 74% and 66% on non-leafy and leafy test datasets, respectively. A computer vision algorithm for automatic stem selection was developed based on the YOLOv8 segmentation output. The algorithm achieved a Precision above 97% and an 81% Recall. The findings of this work can have a positive impact on future developments for automating selective thinning in forested contexts.
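The Precision and Recall figures quoted for the stem-selection algorithm derive from true-positive, false-positive and false-negative counts; a minimal sketch with illustrative counts (not the paper's data):

```python
def precision_recall(tp, fp, fn):
    """Precision = TP/(TP+FP), Recall = TP/(TP+FN), with zero-safe fallbacks."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

For example, hypothetical counts of 81 true positives, 2 false positives and 19 false negatives give a precision near 97.6% and a recall of 81%, in the same range as the figures reported above.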

2023

Deep Learning YOLO-Based Solution for Grape Bunch Detection and Assessment of Biophysical Lesions

Authors
Pinheiro, I; Moreira, G; da Silva, DQ; Magalhaes, S; Valente, A; Oliveira, PM; Cunha, M; Santos, F;

Publication
AGRONOMY-BASEL

Abstract
The world wine sector is a multi-billion dollar industry with a wide range of economic activities. Therefore, it becomes crucial to monitor the grapevine because it allows a more accurate estimation of the yield and ensures a high-quality end product. The most common way of monitoring the grapevine is through the leaves (a preventive approach) since the leaves first manifest biophysical lesions. However, this does not exclude the possibility of biophysical lesions manifesting in the grape berries. Thus, this work presents three pre-trained YOLO models (YOLOv5x6, YOLOv7-E6E, and YOLOR-CSP-X) to detect and classify grape bunches as healthy or damaged by the number of berries with biophysical lesions. Two datasets were created and made publicly available with original images and manual annotations to identify the complexity between detection (bunches) and classification (healthy or damaged) tasks. The datasets use the same 10,010 images with different classes. The Grapevine Bunch Detection Dataset uses the Bunch class, and the Grapevine Bunch Condition Detection Dataset uses the OptimalBunch and DamagedBunch classes. Regarding the three models trained for grape bunch detection, they obtained promising results, highlighting YOLOv7 with an mAP of 77% and an F1-score of 94%. For the task of detecting and identifying the condition of grape bunches, the three models obtained similar results, with YOLOv5 achieving the best ones with an mAP of 72% and an F1-score of 92%.
