Publications

Publications by Sandro Augusto Magalhães

2021

Evaluating the Single-Shot MultiBox Detector and YOLO Deep Learning Models for the Detection of Tomatoes in a Greenhouse

Authors
Magalhaes, SA; Castro, L; Moreira, G; dos Santos, FN; Cunha, M; Dias, J; Moreira, AP;

Publication
SENSORS

Abstract
The development of robotic solutions for agriculture requires advanced perception capabilities that can work reliably at any crop stage. For example, to automatise the tomato harvesting process in greenhouses, the visual perception system needs to detect the tomato at any stage of its life cycle (from flower to ripe tomato). The state of the art for visual tomato detection focuses mainly on ripe tomatoes, which have a colour distinct from the background. This paper contributes an annotated visual dataset of green and reddish tomatoes. Such datasets are uncommon and not available for research purposes. This will enable further developments in edge artificial intelligence for the in situ, real-time visual tomato detection required for the development of harvesting robots. Using this dataset, five deep learning models were selected, trained and benchmarked to detect green and reddish tomatoes grown in greenhouses. Given our robotic platform's specifications, only the Single-Shot MultiBox Detector (SSD) and YOLO architectures were considered. The results proved that the system can detect green and reddish tomatoes, even those occluded by leaves. SSD MobileNet v2 had the best performance when compared against SSD Inception v2, SSD ResNet 50, SSD ResNet 101 and YOLOv4 Tiny, reaching an F1-score of 66.15%, an mAP of 51.46% and an inference time of 16.44 ms on an NVIDIA Turing architecture platform, an NVIDIA Tesla T4 with 12 GB. YOLOv4 Tiny also achieved impressive results, particularly inference times of about 5 ms.
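For readers unfamiliar with the reported metric: the F1-score quoted above is the harmonic mean of detection precision and recall. A minimal sketch of how such a score is derived from detection counts (the counts below are hypothetical, not taken from the paper):

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 = harmonic mean of precision and recall over detection counts.

    tp: correct detections, fp: spurious detections, fn: missed objects.
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts: 50 correct detections, 30 false alarms, 20 misses.
print(round(f1_score(50, 30, 20), 4))  # 0.6667
```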

2021

Cost-Effective 4DoF Manipulator for General Applications

Authors
Magalhães, SA; Moreira, AP; dos Santos, FN; Dias, J; Santos, L;

Publication
Intelligent Systems and Applications - Proceedings of the 2021 Intelligent Systems Conference, IntelliSys 2021, Amsterdam, The Netherlands, 2-3 September, 2021, Volume 3

Abstract
Nowadays, robotic manipulators are used well beyond industrial needs: they perform agricultural tasks, consumer services and medical surgeries, among others. The development of new cost-effective robotic arms assumes a prominent position in enabling their widespread adoption in these application areas. Bearing these ideas in mind, the objective of this paper is twofold. First, to introduce the hardware and software architecture and position-control design for a four Degrees of Freedom (DoF) manipulator built from high-resolution stepper motors and incremental encoders at a cost-effective price. Secondly, to describe the mitigation strategies adopted to handle the manipulator's position using incremental encoders during startup and operating modes. The described solution has a maximum circular workspace of 0.7 m and a maximum payload of 3 kg. The developed architecture was tested by inducing the manipulator to perform a square path. Tests show a cumulative error of 12.4 mm. All the developed code for firmware and ROS drivers was made publicly available. © 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.

2021

Tomato Detection Using Deep Learning for Robotics Application

Authors
Padilha, TC; Moreira, G; Magalhaes, SA; dos Santos, FN; Cunha, M; Oliveira, M;

Publication
PROGRESS IN ARTIFICIAL INTELLIGENCE (EPIA 2021)

Abstract
The importance of agriculture and the production of fruits and vegetables has stood out mainly over the past few years, especially for their benefits to our health. In 2021, the international year of fruit and vegetables, it is important to encourage innovation and evolution in this area, given the needs surrounding the different processes of the different cultures. This paper compares the performance of two datasets for robotic fruit harvesting using four deep learning object detection models: YOLOv4, SSD ResNet 50, SSD Inception v2 and SSD MobileNet v2. This work aims to benchmark the Open Images Dataset v6 (OIDv6) against a dataset acquired inside a tomato greenhouse for tomato detection in agricultural environments, using a test dataset of acquired, non-augmented images. The results highlight the benefit of using self-acquired datasets for the detection of tomatoes, because state-of-the-art datasets such as OIDv6 lack some relevant characteristics of the fruits in the agricultural environment, such as their shape and colour. Detections in greenhouse environments differ greatly from the data inside OIDv6, which has fewer annotations per image and whose tomatoes are generally ripe (reddish). On our tomato dataset, YOLOv4 stood out with a precision of 91%. The tomato dataset was augmented and is publicly available (see https://rdm.inesctec.pt/ and https://rdm.inesctec.pt/dataset/ii-2021-001).

2021

Grape Bunch Detection at Different Growth Stages Using Deep Learning Quantized Models

Authors
Aguiar, AS; Magalhaes, SA; dos Santos, FN; Castro, L; Pinho, T; Valente, J; Martins, R; Boaventura Cunha, J;

Publication
AGRONOMY-BASEL

Abstract
The agricultural sector plays a fundamental role in our society, where it is increasingly important to automate processes, which can have beneficial impacts on the productivity and quality of products. Perception and computer vision approaches can be fundamental to the implementation of robotics in agriculture. In particular, deep learning can be used for image classification or object detection, endowing machines with the capability to perform operations in the agricultural context. In this work, deep learning was used for the detection of grape bunches in vineyards at different growth stages: the early stage, just after the bloom, and the medium stage, where the grape bunches present an intermediate development. Two state-of-the-art single-shot multibox models were trained, quantized, and deployed on a low-cost and low-power hardware device, a Tensor Processing Unit. The training input was a novel and publicly available dataset proposed in this work. This dataset contains 1929 images and respective annotations of grape bunches at two different growth stages, captured by different cameras under several illumination conditions. The models were benchmarked and characterized considering the variation of two different parameters: the confidence score and the intersection over union threshold. The results showed that the deployed models could detect grape bunches in images with a mean average precision of up to 66.96%. Given that this approach uses low resources, a low-cost and low-power hardware device that requires simplified models with 8-bit quantization, the obtained performance was satisfactory. Experiments also demonstrated that the models performed better at identifying grape bunches at the medium growth stage than those present in the vineyard just after the bloom, since the latter class represents smaller grape bunches, with a colour and texture more similar to the surrounding foliage, which complicates their detection.
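For reference, the intersection over union (IoU) threshold varied in the benchmark above is computed from a pair of bounding boxes as follows. This is a generic sketch of the standard metric, not the authors' code:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as
    (x1, y1, x2, y2) corner coordinates."""
    # Corners of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Two partially overlapping boxes: intersection 25, union 175.
print(round(iou((0, 0, 10, 10), (5, 5, 15, 15)), 4))  # 0.1429
```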

2021

PixelCropRobot, a cartesian multitask platform for microfarms automation

Authors
Terra F.; Rodrigues L.; Magalhaes S.; Santos F.; Moura P.; Cunha M.;

Publication
2021 International Symposium of Asian Control Association on Intelligent Robotics and Industrial Automation, IRIA 2021

Abstract
World society needs to produce more food to the highest quality standards to feed the world population with the same level of nutrition. Microfarms and local food production enable growing vegetables near the population, reducing the operational logistics costs related to post-harvest food handling. However, it is neither economically viable nor efficient to have one person devoted to these microfarm tasks. To overcome this issue, we propose an open-source robotic solution capable of performing multiple tasks in small polyculture farms. This robot is equipped with optical sensors, manipulators and other mechatronic technology to monitor and process both biotic and abiotic agronomic data. This information supports the consequent activation of manipulators that perform several agricultural tasks: crop and weed detection, sowing and watering. The development of the robot meets low-cost requirements so that it can become a potential commercial solution. The platform is also designed to serve as a test bed for assembling new sensors, developing new cognitive solutions and raising awareness of topics related to Precision Agriculture. We aim for a rational use of resources and several other aspects of an advanced, economically efficient and ecologically sustainable agriculture.

2022

Benchmark of Deep Learning and a Proposed HSV Colour Space Models for the Detection and Classification of Greenhouse Tomato

Authors
Moreira, G; Magalhaes, SA; Pinho, T; dos Santos, FN; Cunha, M;

Publication
AGRONOMY-BASEL

Abstract
The harvesting operation is a recurring task in the production of any crop, making it an excellent candidate for automation. In protected horticulture, one of the crops with high added value is the tomato. However, its robotic harvesting is still far from maturity. That said, the development of an accurate fruit detection system is a crucial step towards achieving fully automated robotic harvesting. Deep Learning (DL) detection frameworks like the Single Shot MultiBox Detector (SSD) or You Only Look Once (YOLO) are robust and accurate alternatives that respond better to highly complex scenarios. DL can easily be used to detect tomatoes, but when their classification is also intended, the task becomes harder, demanding a huge amount of data. Therefore, this paper proposes the use of DL models (SSD MobileNet v2 and YOLOv4) to efficiently detect the tomatoes, and compares those systems with a proposed histogram-based HSV colour space model that classifies each tomato and determines its ripening stage, using two acquired image datasets. Regarding detection, both models obtained promising results, with the YOLOv4 model standing out with an F1-Score of 85.81%. For the classification task, YOLOv4 was again the best model, with a Macro F1-Score of 74.16%. The HSV colour space model outperformed the SSD MobileNet v2 model, obtaining results similar to the YOLOv4 model, with a Balanced Accuracy of 68.10%.
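To illustrate the idea behind a histogram-based HSV model: once a tomato's dominant hue has been extracted from its colour histogram, the ripening stage can be assigned by thresholding that hue. The threshold values below are hypothetical illustrations, not the values used in the paper:

```python
def classify_ripeness(hue_deg: float) -> str:
    """Map a tomato's dominant hue (degrees, 0-360) to a ripening stage.

    Hypothetical thresholds: red hues -> ripe, orange/yellow hues ->
    turning, everything else (greens) -> green.
    """
    if hue_deg < 20 or hue_deg >= 340:
        return "ripe"      # reddish hues wrap around 0 degrees
    if hue_deg < 50:
        return "turning"   # orange/yellow hues
    return "green"         # green and remaining hues

print(classify_ripeness(10), classify_ripeness(35), classify_ripeness(100))
# ripe turning green
```

In practice the dominant hue would come from the peak of a hue histogram computed over the detected bounding box, which is the histogram-based part of the approach the abstract describes.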
