About

Filipe Neves dos Santos was born in São Paio de Oleiros, Portugal, in 1979. He holds a Licenciatura (5-year degree) in Electrical and Computer Engineering from the Instituto Superior de Engenharia do Porto (ISEP), completed in 2003, an M.Sc. in Electrical and Computer Engineering from the Instituto Superior Técnico (IST) of the Universidade Técnica de Lisboa, completed in 2007, and a Ph.D. in Electrical and Computer Engineering from the Faculdade de Engenharia da Universidade do Porto (FEUP), Portugal, completed in 2014.

His professional passion is to develop autonomous robots and machinery that address the real problems, desires, and needs of our society, and to contribute to the self-sustainability and fairness of the global economy. He is currently focused on researching and developing robotic solutions for the agriculture and forestry sectors, where greater efficiency is required for the world's self-sustainability. Considering his regional context, he has set himself the goal of promoting agricultural robotics projects and developing robots that can operate fully autonomously and safely in steep-slope scenarios, a common reality in the north of Portugal and in a large number of other regions of the world. He is therefore interested in exploring and developing robots for specific agricultural and forestry tasks such as monitoring (by ground), spraying, logistics, pruning, and selective harvesting. The successful execution of these tasks depends largely on the robustness of specific robotic systems:

- visual perception;
- navigation (localization, mapping, and path planning); and
- manipulation and end tools.

For that reason, visual perception and navigation are his main research fields within robotics. His background in Electronics and Computer Engineering (Licenciatura of the former 5-year format, M.Sc. on sensor fusion, Ph.D. on semantic mapping), together with 4 years of experience as an entrepreneur (in a technological startup), 8 years as a robotics researcher, 5 years as a manager (in supporting tasks in a family enterprise), and 6 years as an electronics technician, will help him contribute successfully to the future of agricultural and forestry robotics.


Publications

2023

Nano Aerial Vehicles for Tree Pollination

Authors
Pinheiro, I; Aguiar, A; Figueiredo, A; Pinho, T; Valente, A; Santos, F;

Publication
APPLIED SCIENCES-BASEL

Abstract
Currently, Unmanned Aerial Vehicles (UAVs) are considered in the development of various applications in agriculture, which has led to the expansion of the agricultural UAV market. However, Nano Aerial Vehicles (NAVs) are still underutilised in agriculture. NAVs are characterised by a maximum wing length of 15 centimetres and a weight of less than 50 g. Due to their physical characteristics, NAVs have the advantage of being able to approach and perform tasks with more precision than conventional UAVs, making them suitable for precision agriculture. This work aims to contribute an open-source solution known as the Nano Aerial Bee (NAB) to enable further research and development on the use of NAVs in an agricultural context. The purpose of the NAB is to mimic and assist bees in the context of pollination. We designed this open-source solution by taking into account existing state-of-the-art solutions and the requirements of pollination activities. This paper presents the relevant background and work carried out in this area by analysing papers on the topic of NAVs. The development of this prototype is rather complex given the interactions between the different hardware components and the need to achieve autonomous flight capable of pollination. We describe and discuss these challenges in this work. Besides the open-source NAB solution, we train three different versions of YOLO (YOLOv5, YOLOv7, and YOLOR) on an original dataset (Flower Detection Dataset) containing 206 images of a group of eight flowers, and on a public dataset (TensorFlow Flower Dataset), which had to be annotated (yielding the TensorFlow Flower Detection Dataset). The models trained on the Flower Detection Dataset achieve satisfactory results, with YOLOv7 and YOLOR performing best at 98% precision, 99% recall, and 98% F1 score. The performance of these models is evaluated on the TensorFlow Flower Detection Dataset to test their robustness. The three YOLO models are also trained on the TensorFlow Flower Detection Dataset to better understand the results; in this case, YOLOR obtains the most promising results, with 84% precision, 80% recall, and 82% F1 score. The results obtained with the Flower Detection Dataset are used for NAB guidance: the relative position of the detected flower in the image defines the command the NAB executes.
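The closing sentence describes flower detections driving the NAB's guidance. As a minimal illustration of that idea (a sketch, not the authors' code), the snippet below loads a YOLOv5 model via torch.hub and maps the detected flower's horizontal offset in the image to a coarse flight command; the weights file name, tolerance, and command names are hypothetical placeholders.

```python
# Hypothetical sketch (not the authors' code): a YOLOv5 model loaded via
# torch.hub detects a flower, and its position in the image is turned into
# a coarse guidance command for a nano aerial vehicle.
import torch

# 'flower_weights.pt' is a placeholder for weights fine-tuned on a flower
# detection dataset, as the paper does with its Flower Detection Dataset.
model = torch.hub.load("ultralytics/yolov5", "custom", path="flower_weights.pt")

def guidance_command(frame, x_tol=0.1):
    """Map the detected flower's horizontal offset to a flight command."""
    results = model(frame)                   # run inference on one image
    det = results.xyxyn[0]                   # normalised [x1, y1, x2, y2, conf, cls]
    if det.shape[0] == 0:
        return "search"                      # no flower detected
    x1, y1, x2, y2 = det[0, :4].tolist()     # highest-confidence detection
    x_center = (x1 + x2) / 2                 # flower centre in 0..1
    offset = x_center - 0.5                  # offset from image centre
    if abs(offset) <= x_tol:
        return "forward"                     # roughly centred: approach
    return "yaw_right" if offset > 0 else "yaw_left"
```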

2023

Safety Standards for Collision Avoidance Systems in Agricultural Robots - A Review

Authors
Martins, JJ; Silva, M; Santos, F;

Publication
ROBOT2022: FIFTH IBERIAN ROBOTICS CONFERENCE: ADVANCES IN ROBOTICS, VOL 1

Abstract
To produce more food and tackle labor scarcity, agriculture needs safer robots for repetitive and unsafe tasks (such as spraying). Human-robot interaction presents challenges in ensuring a certifiably safe collaboration, with a reliable system that does not damage goods and plants, in an environment that is mostly dynamic and constantly changing. A well-known solution to this problem is the implementation of real-time collision avoidance systems. This paper presents a global overview of state-of-the-art methods implemented in the agricultural environment that ensure human-robot collaboration in accordance with recognised industry standards. To complement this, it addresses the gaps and possible specifications that need to be clarified in future standards, taking into consideration the human-machine safety requirements for agricultural autonomous mobile robots.
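The review concerns standards rather than code, but the real-time collision-avoidance pattern it surveys can be sketched in a few lines. The function below is a hedged illustration only: it derives a worst-case stopping distance from speed, reaction time, and deceleration, then throttles the commanded speed accordingly; all parameter values are illustrative and not drawn from any standard.

```python
# Minimal sketch of a real-time collision-avoidance speed governor; the
# parameter values are illustrative assumptions, not taken from any standard.
def safe_speed_command(min_obstacle_dist, speed, decel=1.0, t_react=0.2, margin=0.3):
    """Return a speed command (m/s) given the closest obstacle distance (m)."""
    # Worst-case stopping distance: reaction travel + braking distance + margin.
    stop_dist = speed * t_react + speed ** 2 / (2.0 * decel) + margin
    if min_obstacle_dist <= stop_dist:
        return 0.0                 # protective stop
    if min_obstacle_dist <= 2 * stop_dist:
        return speed * 0.5         # slow down inside the warning zone
    return speed                   # keep the commanded speed
```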

2023

Toward Grapevine Digital Ampelometry Through Vision Deep Learning Models

Authors
Magalhaes, SC; Castro, L; Rodrigues, L; Padilha, TC; de Carvalho, F; dos Santos, FN; Pinho, T; Moreira, G; Cunha, J; Cunha, M; Silva, P; Moreira, AP;

Publication
IEEE SENSORS JOURNAL

Abstract
Several thousand grapevine varieties exist, with even more naming identifiers. Adequate specialized labor is not available for the proper classification or identification of grapevines, making the value of commercial vines uncertain. Traditional methods, such as genetic analysis or ampelometry, are time-consuming, expensive, and often require expert skills that are even rarer. New vision-based systems benefit from advanced and innovative technology and can be used by nonexperts in ampelometry. To this end, deep learning (DL) and machine learning (ML) approaches have been successfully applied for classification purposes. This work extends the state of the art by applying digital ampelometry techniques to a larger set of grapevine varieties. We benchmarked the MobileNet v2, ResNet-34, and VGG-11-BN DL classifiers to assess their suitability for digital ampelography. In our experiment, all the models could identify the vines' varieties from the leaf with a weighted F1 score higher than 92%.
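As a rough sketch of the kind of classifier the paper benchmarks (not the authors' training pipeline), the snippet below fine-tunes a pretrained torchvision MobileNet v2 for leaf-based variety classification; the class count and hyperparameters are placeholder assumptions.

```python
# Hedged sketch: adapting a pretrained MobileNet v2 to classify grapevine
# varieties from leaf images. Class count and learning rate are placeholders.
import torch
import torch.nn as nn
from torchvision import models

num_varieties = 12                                   # placeholder class count
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
# Replace the ImageNet head with one sized for the grapevine varieties.
model.classifier[1] = nn.Linear(model.last_channel, num_varieties)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One supervised fine-tuning step on a batch of leaf images."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```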

2023

Tree Trunks Cross-Platform Detection Using Deep Learning Strategies for Forestry Operations

Authors
da Silva, DQ; dos Santos, FN; Filipe, V; Sousa, AJ;

Publication
ROBOT2022: FIFTH IBERIAN ROBOTICS CONFERENCE: ADVANCES IN ROBOTICS, VOL 1

Abstract
To tackle wildfires and improve forest biomass management, cost-effective and reliable mowing and pruning robots are required. However, the development of visual perception systems for forestry robotics still needs to be researched and explored to achieve safe solutions. This paper presents two main contributions: an annotated dataset and a benchmark between edge-computing hardware and deep learning models. The dataset is composed of nearly 5,400 annotated images. This dataset enabled the training of nine object detectors: four SSD MobileNets, one EfficientDet, three YOLO-based detectors, and YOLOR. These detectors were deployed and tested on three edge-computing hardware platforms (TPU, CPU, and GPU) and evaluated in terms of detection precision and inference time. The results showed that YOLOR was the best trunk detector, achieving nearly 90% F1 score and an average inference time of 13.7 ms on GPU. This work will foster the development of advanced visual perception systems for robotics in forestry operations.
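A simple way to picture the inference-time half of this benchmark (a hedged sketch, not the paper's evaluation harness) is timing a detector over a set of images and averaging the per-image latency, as below; `detector` and the warm-up count are placeholder assumptions.

```python
# Illustrative latency measurement for a trained detector; `detector` stands
# in for any of the nine models, and assumes len(images) > warmup.
import time

def average_inference_ms(detector, images, warmup=5):
    """Average per-image inference time in milliseconds."""
    for img in images[:warmup]:      # warm-up runs are excluded from timing
        detector(img)
    start = time.perf_counter()
    for img in images[warmup:]:
        detector(img)
    elapsed = time.perf_counter() - start
    return 1000.0 * elapsed / max(1, len(images) - warmup)
```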

2023

Benchmarking edge computing devices for grape bunches and trunks detection using accelerated object detection single shot multibox deep learning models

Authors
Magalhaes, SC; dos Santos, FN; Machado, P; Moreira, AP; Dias, J;

Publication
ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE

Abstract
Purpose: Visual perception enables robots to perceive the environment. Visual data is processed using computer vision algorithms that are usually time-expensive and require powerful devices to process the data in real time, which is unfeasible for open-field robots with limited energy. This work benchmarks the performance of different heterogeneous platforms for object detection in real time. The research covers three architectures: embedded GPUs (Graphical Processing Units, such as the NVIDIA Jetson Nano 2 GB and 4 GB and the NVIDIA Jetson TX2), a TPU (Tensor Processing Unit, as in the Coral Dev Board TPU), and DPUs (Deep Learning Processor Units, as in the AMD-Xilinx ZCU104 Development Board and the AMD-Xilinx Kria KV260 Starter Kit). Methods: The authors used a RetinaNet ResNet-50 model fine-tuned on the natural VineSet dataset. The trained model was then converted and compiled into target-specific hardware formats to improve execution efficiency. Conclusions and Results: The platforms were assessed in terms of evaluation-metric performance and efficiency (inference time). The GPUs were the slowest devices, running at 3 FPS to 5 FPS, and the FPGAs (Field Programmable Gate Arrays) were the fastest, running at 14 FPS to 25 FPS. The efficiency of the TPU was unremarkable and similar to that of the NVIDIA Jetson TX2. The TPU and GPU were the most power-efficient, consuming about 5 W. The differences in evaluation metrics across devices were negligible, with an F1 score of about 70% and a mean Average Precision (mAP) of about 60%.
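For context on the reported numbers, the helper below shows how precision, recall, and F1 score relate to raw true-positive, false-positive, and false-negative counts; the example counts are illustrative only and not taken from the paper.

```python
# How the reported detection metrics are derived from raw counts.
def detection_metrics(tp, fp, fn):
    """Precision, recall and F1 from true/false positives and false negatives."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Illustrative counts: an F1 of about 70% from balanced precision and recall.
p, r, f1 = detection_metrics(tp=700, fp=300, fn=300)   # -> 0.70, 0.70, 0.70
```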

Supervised Theses

2022

ForestMP: Multimodal perception system for robotics in forestry applications

Author
Daniel Queirós da Silva

Institution
UP-FEUP

2022

Localization and Mapping Based on Semantic and Multi-layer Maps Concepts

Author
André Silva Pinto de Aguiar

Institution
UP-FEUP

2022

PlanterRobot4.0 - Soil Perception System Leading to Robotized Tree Plantation and Maintenance in the context of Agriculture 4.0

Author
Rui Manuel Pereira Coutinho

Institution
UP-FEUP

2020

Grasping and manipulation with active perception for open-field agricultural robotics

Author
Sandro Augusto Costa Magalhães

Institution
UP-FEUP

2020

Advanced 2.5D Path Planning for agricultural robots

Author
Luís Carlos Feliz Santos

Institution
UTAD