Details

  • Name

    Sandro Augusto Magalhães
  • Role

    Researcher
  • Since

    1st September 2018
  • Nationality

    Portugal
  • Contacts

    +351220413317
    sandro.a.magalhaes@inesctec.pt
Publications

2023

Benchmarking edge computing devices for grape bunches and trunks detection using accelerated object detection single shot multibox deep learning models

Authors
Magalhaes, SC; dos Santos, FN; Machado, P; Moreira, AP; Dias, J;

Publication
ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE

Abstract
Purpose: Visual perception enables robots to perceive the environment. Visual data is processed using computer vision algorithms that are usually time-expensive and require powerful devices to run in real time, which is unfeasible for open-field robots with limited energy. This work benchmarks the performance of different heterogeneous platforms for real-time object detection across three architectures: embedded GPU - Graphical Processing Units (such as NVIDIA Jetson Nano 2 GB and 4 GB, and NVIDIA Jetson TX2), TPU - Tensor Processing Unit (such as Coral Dev Board TPU), and DPU - Deep Learning Processor Unit (such as in the AMD-Xilinx ZCU104 Development Board and AMD-Xilinx Kria KV260 Starter Kit). Methods: The authors used RetinaNet ResNet-50 fine-tuned on the natural VineSet dataset. Afterwards, the trained model was converted and compiled into target-specific hardware formats to improve execution efficiency. Conclusions and Results: The platforms were assessed in terms of the evaluation metrics and efficiency (inference time). The Graphical Processing Units (GPUs) were the slowest devices, running at 3 FPS to 5 FPS, and the Field Programmable Gate Arrays (FPGAs) were the fastest, running at 14 FPS to 25 FPS. The Tensor Processing Unit (TPU) offered no relevant efficiency gain, performing similarly to the NVIDIA Jetson TX2. The TPU and GPU are the most power-efficient, consuming about 5 W. The differences in the evaluation metrics across devices are negligible: all achieve an F1 score of about 70% and a mean Average Precision (mAP) of about 60%.
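
As a rough illustration of the benchmarking step described above (timing a model that has already been converted and compiled for a specific accelerator), the following Python sketch measures average inference time and FPS of an Edge TPU-compiled TFLite model. The model file name and the dummy input are placeholders for illustration, not the paper's actual artefacts.

import time
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

MODEL_PATH = "retinanet_resnet50_vineset_edgetpu.tflite"  # hypothetical file name

# Load the compiled model with the Coral Edge TPU delegate.
interpreter = Interpreter(
    model_path=MODEL_PATH,
    experimental_delegates=[load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()
input_detail = interpreter.get_input_details()[0]

# Dummy uint8 frame matching the model's expected input shape.
frame = np.random.randint(0, 256, size=input_detail["shape"], dtype=np.uint8)

n_runs = 100
start = time.perf_counter()
for _ in range(n_runs):
    interpreter.set_tensor(input_detail["index"], frame)
    interpreter.invoke()
elapsed = time.perf_counter() - start
print(f"Average inference: {elapsed / n_runs * 1000:.1f} ms ({n_runs / elapsed:.1f} FPS)")

The same timing loop can be repeated on each platform (Jetson, TPU, DPU) with its own compiled model format to compare frames per second.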

2023

Design and Control Architecture of a Triple 3 DoF SCARA Manipulator for Tomato Harvesting

Authors
Tinoco, V; Silva, MF; Santos, FN; Magalhaes, S; Morais, R;

Publication
2023 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS, ICARSC

Abstract
The increasing world population, the growing need for agricultural products, and labour shortages have driven the growth of robotics in agriculture. Tasks such as fruit harvesting require extensive hours of work during harvest periods and can be physically exhausting. Autonomous robots bring more efficiency to agricultural tasks, with the possibility of working continuously. This paper proposes a stackable 3 DoF SCARA manipulator for tomato harvesting. The manipulator uses a custom electronic circuit to control DC motors with an endless gear at each joint, and uses a camera and a Tensor Processing Unit (TPU) for fruit detection. Cascaded PID controllers control the joints, with magnetic encoders for rotational feedback and a time-of-flight sensor for prismatic movement feedback. Tomatoes are detected by an algorithm that finds regions of interest where red is present and sends those regions to an image classifier that evaluates whether a tomato is present (sketched below). With this, the system calculates the position of the tomato using stereo vision obtained from a monocular camera combined with the prismatic movement of the manipulator. As a result, the manipulator was able to position itself very close to the target in less than 3 seconds, from where an end-effector could adjust its position for picking.
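
The red-region detection step mentioned above can be sketched in a few lines of OpenCV. The HSV thresholds and minimum area below are illustrative assumptions, not the values used in the paper, and the follow-up image classifier is omitted.

import cv2
import numpy as np

def find_red_rois(bgr_image, min_area=400):
    """Return bounding boxes (x, y, w, h) of red-dominant regions of interest."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so two ranges are combined.
    mask = cv2.inRange(hsv, (0, 80, 60), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 80, 60), (180, 255, 255))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]

# Each returned region would then be passed to an image classifier that confirms
# whether a tomato is present before the manipulator is commanded towards it.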

2023

Automated Infield Grapevine Inflorescence Segmentation Based on Deep Learning Models

Authors
Moreira, G; Magalhães, SA; dos Santos, FN; Cunha, M;

Publication
IECAG 2023

Abstract

2023

3D tomatoes' localisation with monocular cameras using histogram filters

Authors
Magalhães, SC; dos Santos, FN; Moreira, AP; Dias, J;

Publication
CoRR

Abstract

2022

Benchmark of Deep Learning and a Proposed HSV Colour Space Models for the Detection and Classification of Greenhouse Tomato

Authors
Moreira, G; Magalhaes, SA; Pinho, T; dos Santos, FN; Cunha, M;

Publication
AGRONOMY-BASEL

Abstract
The harvesting operation is a recurring task in the production of any crop, making it an excellent candidate for automation. In protected horticulture, tomato is one of the crops with high added value, but its robotic harvesting is still far from maturity. That said, the development of an accurate fruit detection system is a crucial step towards fully automated robotic harvesting. Deep Learning (DL) and detection frameworks like the Single Shot MultiBox Detector (SSD) or You Only Look Once (YOLO) are more robust and accurate alternatives, with a better response to highly complex scenarios. DL can easily be used to detect tomatoes, but when their classification is also intended the task becomes harder, demanding a large amount of data. Therefore, this paper proposes the use of DL models (SSD MobileNet v2 and YOLOv4) to efficiently detect tomatoes and compares those systems with a proposed histogram-based HSV colour space model that classifies each tomato and determines its ripening stage, evaluated on two acquired image datasets. Regarding detection, both models obtained promising results, with YOLOv4 standing out with an F1-Score of 85.81%. For the classification task, YOLOv4 was again the best model, with a Macro F1-Score of 74.16%. The HSV colour space model outperformed the SSD MobileNet v2 model, obtaining results similar to YOLOv4, with a Balanced Accuracy of 68.10%.
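
To make the idea of a histogram-based HSV colour space classifier concrete, here is a minimal sketch that assigns a ripening stage from the hue histogram of a detected tomato crop. The hue ranges and decision thresholds are assumptions for illustration only and are not the parameters fitted in the paper.

import cv2
import numpy as np

def ripeness_from_hue(bgr_roi):
    """Classify a tomato crop as 'green', 'turning', or 'ripe' from its hue histogram."""
    hsv = cv2.cvtColor(bgr_roi, cv2.COLOR_BGR2HSV)
    hist, _ = np.histogram(hsv[:, :, 0].ravel(), bins=180, range=(0, 180))
    hist = hist / hist.sum()
    red_fraction = hist[:10].sum() + hist[170:].sum()   # red hues wrap around 0
    green_fraction = hist[35:85].sum()                  # roughly green hues
    if red_fraction > 0.6:
        return "ripe"
    if green_fraction > 0.6:
        return "green"
    return "turning"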