
Publications by Luís Carlos Santos

2020

Path Planning Aware of Robot's Center of Mass for Steep Slope Vineyards

Authors
Santos, L; Santos, F; Mendes, J; Costa, P; Lima, J; Reis, R; Shinde, P;

Publication
ROBOTICA

Abstract
Steep slope vineyards are a complex scenario for the development of ground robots. Planning a safe robot trajectory is one of the biggest challenges in this scenario, characterized by irregular surfaces and strong slopes (more than 35 degrees). Moving the robot through a pile of stones, spots with a high slope, or with the wrong robot yaw may result in an abrupt fall of the robot, damaging the equipment and centenary vines, and sometimes causing injuries to humans. This paper presents a novel approach for path planning aware of the robot's center of mass, for application in sloped terrains. Agricultural robotic path planning (AgRobPP) is a framework that extends the A* algorithm, expanding its inner functions to deal with three main inputs: a multi-layer occupation grid map, an altitude map, and the robot's center of mass. This multi-layer grid map is updated with obstacles, taking into account the terrain slope and the maximum safe robot posture. AgRobPP is also extended with algorithms for local trajectory re-planning when the execution of a trajectory is blocked by the presence of an obstacle, always assuring the safety of the re-planned path. AgRobPP includes a novel PointCloud translator algorithm, called PointCloud to grid map and digital elevation model (PC2GD), which extracts the occupation grid map and digital elevation model from a PointCloud; these can be used both in AgRobPP core algorithms and in farm management intelligent systems. AgRobPP algorithms demonstrated good performance with real data acquired from AgRob V16, a robotic platform developed for autonomous navigation in steep slope vineyards.
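The slope-aware A* expansion described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the AgRobPP implementation: it assumes a 4-connected grid with unit cell size, and a single maximum-slope threshold stands in for the full center-of-mass posture check, with the altitude change added to the traversal cost.

```python
import heapq
import math

def astar_slope_aware(grid, altitude, start, goal, max_slope_deg=35.0):
    """A* over a 2-D occupancy grid (0 = free, 1 = occupied), penalizing
    altitude changes and rejecting moves whose local slope exceeds a
    safe-posture threshold."""
    rows, cols = len(grid), len(grid[0])

    def neighbors(cell):
        r, c = cell
        for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                yield (nr, nc)

    def heuristic(a, b):  # Manhattan distance, admissible on a 4-grid
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    open_set = [(heuristic(start, goal), 0.0, start, None)]
    came_from, g_cost = {}, {start: 0.0}
    while open_set:
        _, g, cell, parent = heapq.heappop(open_set)
        if cell in came_from:          # lazy deletion of stale entries
            continue
        came_from[cell] = parent
        if cell == goal:               # reconstruct path back to start
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        for nxt in neighbors(cell):
            rise = abs(altitude[nxt[0]][nxt[1]] - altitude[cell[0]][cell[1]])
            slope = math.degrees(math.atan2(rise, 1.0))  # cell size = 1 unit
            if slope > max_slope_deg:
                continue               # unsafe posture: prune this move
            new_g = g + 1.0 + rise     # altitude change as extra cost
            if new_g < g_cost.get(nxt, float("inf")):
                g_cost[nxt] = new_g
                heapq.heappush(
                    open_set,
                    (new_g + heuristic(nxt, goal), new_g, nxt, cell))
    return None  # no safe path found
```

A flat terrain yields the usual shortest grid path, while a cell with a large altitude jump is pruned, forcing the planner around it.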

2020

Visual Trunk Detection Using Transfer Learning and a Deep Learning-Based Coprocessor

Authors
Aguiar, AS; Dos Santos, FN; Miranda De Sousa, AJM; Oliveira, PM; Santos, LC;

Publication
IEEE ACCESS

Abstract
Agricultural robotics is nowadays a complex, challenging, and exciting research topic. Some agricultural environments present harsh conditions for robot operability. In the case of steep slope vineyards, there are several challenges: terrain irregularities, illumination characteristics, and inaccuracy/unavailability of signals emitted by the Global Navigation Satellite System (GNSS). Under these conditions, robot navigation becomes a challenging task. To perform it safely and accurately, the extraction of reliable features or landmarks from the surrounding environment is crucial. This work intends to solve this issue by performing accurate, cheap, and fast landmark extraction in the steep slope vineyard context. To do so, we used a single camera and an Edge Tensor Processing Unit (TPU) provided by Google's USB Accelerator, a small, high-performance, low-power unit suitable for image classification, object detection, and semantic segmentation. The proposed approach performs object detection on this device using Deep Learning (DL)-based Neural Network (NN) models to detect vine trunks. To train the models, Transfer Learning (TL) is applied to several pre-trained versions of MobileNet V1 and MobileNet V2, and a benchmark between the two models and the different pre-trained versions is performed. The models are retrained on an in-house dataset, which is publicly available, containing 336 different images with approximately 1,600 annotated vine trunks. Two vineyards are considered: one using camera images with the conventional infrared filter and another with an infrablue filter. Results show that this configuration allows fast vine trunk detection, with MobileNet V2 being the most accurate retrained detector, achieving an overall Average Precision of 52.98%. We briefly compare the proposed approach with the state-of-the-art Tiny YOLO-V3 running on a Jetson TX2, showing that the system adopted in this work outperforms it. Additionally, we show that the proposed detectors are suitable for Localization and Mapping problems.

2020

Path Planning for ground robots in agriculture: a short review

Authors
Santos, LC; Santos, FN; Solteiro Pires, EJS; Valente, A; Costa, P; Magalhaes, S;

Publication
2020 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC 2020)

Abstract
The world's population is estimated to reach nine billion people by the year 2050, which means that agricultural productivity must increase sustainably. The mechanisation and automation of agricultural tasks is an essential step to face population growth. Ground robots have been developed over the last decade for several agricultural applications, with autonomous and safe navigation being one of the hardest challenges in this development. Moving a mobile platform autonomously involves different tasks, such as localisation, mapping, motion control, and path planning, a crucial step for autonomous operations. This article surveys path planning techniques applied to various agricultural contexts, analysing different agricultural applications and detailing the path planning methods employed. The conclusion indicates that path planning has been successfully applied to agrarian robots for field coverage and point-to-point navigation, with coverage path planning being slightly more advanced in this field.

2020

Vineyard trunk detection using deep learning - An experimental device benchmark

Authors
Pinto de Aguiar, ASP; Neves dos Santos, FBN; Feliz dos Santos, LCF; de Jesus Filipe, VMD; Miranda de Sousa, AJM;

Publication
COMPUTERS AND ELECTRONICS IN AGRICULTURE

Abstract
Research and development in mobile robotics is growing continuously. The ability of a human-made machine to navigate safely in a given environment is a challenging task. In agricultural environments, robot navigation can reach high levels of complexity due to the harsh conditions these environments present. Thus, the presence of a reliable map where the robot can localize itself is crucial, and feature extraction becomes a vital step of the navigation process. In this work, the feature extraction issue in the vineyard context is solved using Deep Learning to detect high-level features: the vine trunks. An experimental performance benchmark between two devices is performed: NVIDIA's Jetson Nano and Google's USB Accelerator. Several models were retrained and deployed on both devices using a Transfer Learning approach. Specifically, MobileNets, Inception, and a lite version of You Only Look Once (YOLO) are used to detect vine trunks in real time. The models were retrained on an in-house dataset, which is publicly available, containing approximately 1,600 annotated vine trunks in 336 different images. Results show that NVIDIA's Jetson Nano provides compatibility with a wider variety of Deep Learning architectures, while Google's USB Accelerator is limited to a single family of architectures for object detection. On the other hand, the Google device showed a higher overall Average Precision than the Jetson Nano, with better runtime performance. The best result obtained in this work was an Average Precision of 52.98% with a runtime of 23.14 ms per image, for MobileNet-V2. Recent experiments showed that the detectors are suitable for use in the Localization and Mapping context.
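The Average Precision figures reported in this benchmark follow the usual object-detection recipe: rank detections by confidence, sweep the precision-recall curve, and take the area under it. A minimal sketch of that metric (all-point interpolation), assuming each detection has already been labelled true or false positive by an IoU match against the annotated trunks:

```python
def average_precision(scored_hits, num_gt):
    """AP for one class. scored_hits: list of (confidence, is_tp) for
    every detection; num_gt: number of ground-truth objects (trunks)."""
    hits = sorted(scored_hits, key=lambda x: -x[0])  # rank by confidence
    tp = fp = 0
    recalls, precisions = [0.0], [1.0]
    for _, is_tp in hits:
        if is_tp:
            tp += 1
        else:
            fp += 1
        recalls.append(tp / num_gt)
        precisions.append(tp / (tp + fp))
    # make precision monotonically non-increasing from the right
    for i in range(len(precisions) - 2, -1, -1):
        precisions[i] = max(precisions[i], precisions[i + 1])
    # area under the precision-recall curve
    return sum((recalls[i + 1] - recalls[i]) * precisions[i + 1]
               for i in range(len(recalls) - 1))
```

With all detections correct the AP is 1.0; a false positive after a true positive at recall 0.5 leaves AP at 0.5, matching the intuition that half the trunks were found with perfect precision.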

2020

Occupancy Grid and Topological Maps Extraction from Satellite Images for Path Planning in Agricultural Robots

Authors
Santos, LC; Aguiar, AS; Santos, FN; Valente, A; Petry, M;

Publication
ROBOTICS

Abstract
Robotics will significantly impact large sectors of the economy with relatively low productivity, such as Agri-Food production. Deploying agricultural robots on the farm is still a challenging task. When it comes to localising the robot, a preliminary map is needed, obtained from a first robot visit to the farm. Mapping is a semi-autonomous task that requires a human operator to drive the robot throughout the environment using a control pad. Visual and geometric features are used by Simultaneous Localisation and Mapping (SLAM) algorithms to model and recognise places and to track the robot's motion. In agricultural fields, this represents a time-consuming operation. This work proposes a novel solution, called AgRoBPP-bridge, to autonomously extract Occupancy Grid and Topological maps from satellite images. These preliminary maps are used by the robot on its first visit, reducing the need for human intervention and making the path planning algorithms more efficient. AgRoBPP-bridge consists of two stages: vineyard row detection and topological map extraction. For vineyard row detection, we explored two approaches: one based on a conventional machine learning technique, a Support Vector Machine (SVM) with Local Binary Pattern-based features, and another based on deep learning techniques (ResNet and DenseNet). From the vineyard row detection, we extracted an occupation grid map and, using advanced image processing techniques and the Voronoi diagram concept, obtained a topological map. Our results demonstrate an overall accuracy higher than 85% for detecting vineyards and free paths for robot navigation. The SVM-based approach demonstrated the best performance in terms of precision and computational resource consumption. AgRoBPP-bridge proves to be a relevant contribution to simplifying the deployment of robots in agriculture.
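The Voronoi-based topological extraction can be illustrated on a toy occupancy grid. This is a simplified sketch, not the AgRoBPP-bridge pipeline: it uses a multi-source BFS distance transform and keeps free cells that are horizontal local maxima of the obstacle distance, approximating the Voronoi ridge that runs down the corridor between two vineyard rows (assumed here to be vertical columns of occupied cells).

```python
from collections import deque

def distance_transform(grid):
    """Multi-source BFS: 4-connected distance from every free cell (0)
    to the nearest obstacle cell (1)."""
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in range(rows)]
    q = deque()
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1:
                dist[r][c] = 0
                q.append((r, c))
    while q:
        r, c = q.popleft()
        for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                q.append((nr, nc))
    return dist

def ridge_cells(grid, dist):
    """Free cells on the discrete Voronoi ridge: locally maximal obstacle
    distance along each row, i.e. the middle of a corridor between two
    vineyard rows. These cells seed the topological graph."""
    rows, cols = len(grid), len(grid[0])
    ridge = set()
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 0 and dist[r][c] > 1 and all(
                dist[r][c] >= dist[nr][nc]
                for nr, nc in [(r, c - 1), (r, c + 1)]
                if 0 <= nc < cols
            ):
                ridge.add((r, c))
    return ridge
```

On a grid with obstacle columns at the left and right edges, the ridge is the center column of the free corridor, exactly the path a robot should follow between rows.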

2020

Navigation Stack for Robots Working in Steep Slope Vineyard

Authors
Santos, LC; de Aguiar, ASP; Santos, FN; Valente, A; Ventura, JB; Sousa, AJ;

Publication
Intelligent Systems and Applications - Proceedings of the 2020 Intelligent Systems Conference, IntelliSys 2020, London, UK, September 3-4, 2020, Volume 1

Abstract
Agricultural robotics is nowadays a complex, challenging, and relevant research topic for the sustainability of our society. Some agricultural environments present harsh conditions for robot operability. In the case of steep-slope vineyards, there are several robotic challenges: terrain irregularities, illumination characteristics, and inaccuracy/unavailability of the Global Navigation Satellite System (GNSS). Under these conditions, robot navigation, mapping, and localization become challenging tasks. To perform these tasks safely and accurately, a reliable and advanced navigation stack for robots working in steep slope vineyards is required. This paper presents the integration of several robotic components for steep-slope robots: path planning aware of the robot's centre of gravity and the terrain slope, occupation grid map extraction from satellite images, and a localization and mapping procedure based on high-level visual features that remains reliable under GNSS signal blockage or loss.
