
Publications by André Silva Aguiar

2021

A Camera to LiDAR calibration approach through the optimization of atomic transformations

Authors
de Aguiar, ASP; de Oliveira, MAR; Pedrosa, EF; dos Santos, FBN;

Publication
EXPERT SYSTEMS WITH APPLICATIONS

Abstract
This paper proposes a camera-to-3D Light Detection And Ranging (LiDAR) calibration framework through the optimization of atomic transformations. The system is able to simultaneously calibrate multiple cameras with LiDAR sensors, solving the problem of Bundle Adjustment. In comparison with the state-of-the-art, this work presents several novelties: the ability to simultaneously calibrate multiple cameras and LiDARs; the support for multiple sensor modalities; the calibration through the optimization of atomic transformations, without changing the topology of the input transformation tree; and the integration of the calibration framework within the Robot Operating System (ROS) framework. The software pipeline allows the user to interactively position the sensors to provide an initial estimate, to label and collect data, and to visualize the calibration procedure. To test this framework, an agricultural robot with a stereo camera and a 3D LiDAR sensor was used. Pairwise calibrations and a single calibration of the three sensors were tested and evaluated. Results show that the proposed approach produces accurate calibrations when compared to the state-of-the-art, and is robust to harsh conditions such as inaccurate initial guesses or a small amount of data used in calibration. Experiments have shown that our optimization process can handle an angular error of approximately 20 degrees and a translation error of 0.5 meters for each sensor. Moreover, the proposed approach is able to achieve state-of-the-art results even when calibrating the entire system simultaneously.
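As a rough illustration of the optimization idea behind such a calibration (not the paper's actual multi-sensor, transformation-tree implementation), the sketch below refines a single camera-LiDAR extrinsic by nonlinear least squares on reprojection error. The intrinsics matrix K, the correspondences lidar_pts/pixels, and the initial guess are hypothetical placeholders.

```python
"""Illustrative sketch only: pairwise camera-LiDAR extrinsic refinement by
nonlinear least squares on reprojection error. The paper optimizes atomic
transformations inside a full ROS transformation tree for several sensors at
once; this toy example is not that implementation."""
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

K = np.array([[600.0, 0.0, 320.0],      # hypothetical camera intrinsics
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])

def project(points_cam):
    """Pinhole projection of 3-D points in the camera frame to pixels."""
    uvw = (K @ points_cam.T).T
    return uvw[:, :2] / uvw[:, 2:3]

def residuals(params, lidar_pts, pixels):
    """Reprojection residuals for one camera-LiDAR pair.

    params = [rx, ry, rz, tx, ty, tz]: rotation vector and translation
    taking LiDAR-frame points into the camera frame."""
    rot, t = Rotation.from_rotvec(params[:3]), params[3:]
    points_cam = rot.apply(lidar_pts) + t
    return (project(points_cam) - pixels).ravel()

def calibrate(lidar_pts, pixels, initial_guess):
    """Refine an initial extrinsic estimate (e.g. one placed interactively
    by the user, as in the paper's pipeline)."""
    result = least_squares(residuals, initial_guess, args=(lidar_pts, pixels))
    return Rotation.from_rotvec(result.x[:3]).as_matrix(), result.x[3:]
```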

2021

Grape Bunch Detection at Different Growth Stages Using Deep Learning Quantized Models

Authors
Aguiar, AS; Magalhaes, SA; dos Santos, FN; Castro, L; Pinho, T; Valente, J; Martins, R; Boaventura Cunha, J;

Publication
AGRONOMY-BASEL

Abstract
The agricultural sector plays a fundamental role in our society, where it is increasingly important to automate processes, which can generate beneficial impacts on the productivity and quality of products. Perception and computer vision approaches can be fundamental in the implementation of robotics in agriculture. In particular, deep learning can be used for image classification or object detection, endowing machines with the capability to perform operations in the agricultural context. In this work, deep learning was used for the detection of grape bunches in vineyards considering different growth stages: the early stage just after the bloom, and the medium stage, where the grape bunches present an intermediate development. Two state-of-the-art single-shot multibox models were trained, quantized, and deployed in a low-cost and low-power hardware device, a Tensor Processing Unit. The training input was a novel and publicly available dataset proposed in this work. This dataset contains 1929 images and respective annotations of grape bunches at two different growth stages, captured by different cameras under several illumination conditions. The models were benchmarked and characterized considering the variation of two different parameters: the confidence score and the intersection over union threshold. The results showed that the deployed models could detect grape bunches in images with a mean average precision of up to 66.96%. Since this approach uses low resources, a low-cost and low-power hardware device that requires simplified models with 8-bit quantization, the obtained performance was satisfactory. Experiments also demonstrated that the models performed better in identifying grape bunches at the medium growth stage than grape bunches present in the vineyard just after the bloom, since the second class represents smaller grape bunches, with a color and texture more similar to the surrounding foliage, which complicates their detection.
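To make the benchmarking concrete, the sketch below shows one plausible way to sweep the two parameters the abstract mentions, the confidence score and the Intersection over Union (IoU) threshold, over a set of detections. It is not the authors' evaluation code; boxes, scores, and ground_truth are hypothetical [x1, y1, x2, y2] inputs.

```python
"""Illustrative sketch only: precision of detections under a sweep of the
confidence score and IoU threshold."""

def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def precision_at(boxes, scores, ground_truth, conf_thr, iou_thr):
    """Fraction of detections kept at conf_thr that match an unused
    ground-truth box at the given IoU threshold."""
    kept = [b for b, s in zip(boxes, scores) if s >= conf_thr]
    matched, used = 0, set()
    for det in kept:
        for i, gt in enumerate(ground_truth):
            if i not in used and iou(det, gt) >= iou_thr:
                matched += 1
                used.add(i)
                break
    return matched / len(kept) if kept else 0.0

# Characterization over both parameters, e.g.:
# for conf_thr in (0.3, 0.5, 0.7):
#     for iou_thr in (0.5, 0.75):
#         precision_at(boxes, scores, ground_truth, conf_thr, iou_thr)
```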

2021

Autonomous Robot Visual-Only Guidance in Agriculture Using Vanishing Point Estimation

Authors
Sarmento, J; Aguiar, AS; dos Santos, FN; Sousa, AJ;

Publication
PROGRESS IN ARTIFICIAL INTELLIGENCE (EPIA 2021)

Abstract
Autonomous navigation in agriculture is very challenging as it usually takes place outdoors, where there is rough terrain, uncontrolled natural lighting, constantly changing organic scenarios, and sometimes the absence of a Global Navigation Satellite System (GNSS). In this work, a setup with a single camera and a Google Coral Dev Board Edge Tensor Processing Unit (TPU) is proposed to navigate within a woody crop, more specifically a vineyard. The guidance is provided by estimating the vanishing point, observing its position with respect to the central frame, and correcting the steering angle accordingly. The vanishing point is estimated by object detection using Deep Learning (DL)-based Neural Networks (NNs) to obtain the position of the trunks in the image. The NNs were trained using Transfer Learning (TL), which requires a smaller dataset than conventional training methods. For this purpose, a dataset with 4221 images was created considering image collection, annotation, and augmentation procedures. Results show that our framework can detect the vanishing point with an average absolute error of 0.52° and can be considered for autonomous steering.
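A minimal geometric sketch of the guidance idea described above, not the paper's detector-based pipeline: fit a line to the trunk-base detections of each vineyard row, intersect the lines to estimate the vanishing point, and steer proportionally to its horizontal offset from the image centre. The detection arrays, image width, and gain are hypothetical.

```python
"""Illustrative sketch only: vanishing point from two rows of trunk-base
detections, and a proportional steering correction from its offset."""
import numpy as np

def vanishing_point(left_pts, right_pts):
    """Intersect least-squares lines fitted to the left and right trunk rows.

    Each input is an (N, 2) array of (x, y) trunk-base pixel coordinates.
    Fitting x as a function of y keeps near-vertical rows well conditioned."""
    m1, b1 = np.polyfit(left_pts[:, 1], left_pts[:, 0], 1)   # x = m*y + b
    m2, b2 = np.polyfit(right_pts[:, 1], right_pts[:, 0], 1)
    y = (b2 - b1) / (m1 - m2)
    return np.array([m1 * y + b1, y])

def steering_correction(vp_x, image_width, gain=0.005):
    """Proportional correction that pushes the vanishing point back toward
    the image's central column (sign convention is arbitrary here)."""
    return gain * (image_width / 2.0 - vp_x)
```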

2021

Robot navigation in vineyards based on the visual vanish point concept

Authors
Sarmento, J; Aguiar, AS; Santos, FND; Sousa, AJ;

Publication
2021 International Symposium of Asian Control Association on Intelligent Robotics and Industrial Automation, IRIA 2021

Abstract
Autonomous navigation in agriculture is very challenging as it usually takes place outdoors, where there is rough terrain, uncontrolled natural lighting, constantly changing organic scenarios, and sometimes the absence of a Global Navigation Satellite System (GNSS) signal. In this work, a monocular visual system is proposed to estimate angular orientation and navigate between woody crops, more specifically in a vineyard, using a Proportional Integral Derivative (PID)-based controller. The guidance is provided by combining two ways of finding the center of the vineyard: first, by estimating the vanishing point, and second, by averaging the position of the two closest base trunk detections. Then, the angular error is determined through monocular angle perception. To obtain the trunk positions in the image, object detection using Deep Learning (DL)-based Neural Networks (NNs) is used. To evaluate the proposed controller, a visual vineyard simulation is created using Gazebo. The proposed joint controller is able to travel along a simulated straight vineyard with an RMS error of 1.17 cm. Moreover, a simulated curved vineyard modeled after the Douro region is tested in this work, where the robot was able to steer with an RMS error of 7.28 cm. © 2021 IEEE.
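For readers unfamiliar with the control side, the following is a textbook PID controller acting on an angular error such as the one described above; it is a generic sketch, not the authors' joint controller, and the gains and time step are hypothetical.

```python
"""Illustrative sketch only: a discrete PID controller on the angular error
between the robot heading and the inferred vineyard centre."""

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_error = 0.0, 0.0

    def step(self, error):
        """Return the steering command for the current angular error [rad]."""
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Hypothetical usage:
# controller = PID(kp=1.2, ki=0.05, kd=0.1, dt=0.05)
# angular_velocity = controller.step(angular_error)
```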

2022

Localization and Mapping on Agriculture Based on Point-Feature Extraction and Semiplanes Segmentation From 3D LiDAR Data

Authors
Aguiar, AS; dos Santos, FN; Sobreira, H; Boaventura Cunha, J; Sousa, AJ;

Publication
FRONTIERS IN ROBOTICS AND AI

Abstract
Developing ground robots for agriculture is a demanding task. Robots should be capable of performing tasks like spraying, harvesting, or monitoring. However, the absence of structure in agricultural scenes challenges the implementation of localization and mapping algorithms. Thus, the research and development of localization techniques are essential to boost agricultural robotics. To address this issue, we propose an algorithm called VineSLAM suitable for localization and mapping in agriculture. This approach uses both point and semiplane features extracted from 3D LiDAR data to map the environment and localize the robot using a novel Particle Filter that considers both feature modalities. The numeric stability of the algorithm was tested using simulated data. The proposed methodology proved to be suitable to localize a robot using only three orthogonal semiplanes. Moreover, the entire VineSLAM pipeline was compared against a state-of-the-art approach considering three real-world experiments in a woody-crop vineyard. Results show that our approach can localize the robot with precision even in long and symmetric vineyard corridors, outperforming the state-of-the-art algorithm in this context.
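As a simplified sketch of how a particle filter can consider two feature modalities at once, the code below re-weights particles with the product of a point-feature likelihood and a semiplane likelihood. The Gaussian error models, the aggregated error arrays, and the noise parameters are placeholders, not the paper's formulation.

```python
"""Illustrative sketch only: particle-filter weight update combining point
and semiplane feature likelihoods, loosely inspired by the VineSLAM idea."""
import numpy as np

def update_weights(weights, point_errors, plane_errors,
                   sigma_point=0.1, sigma_plane=0.05):
    """Re-weight particles with the product of both feature likelihoods.

    point_errors[i] / plane_errors[i]: aggregated registration error of the
    observed point features / semiplane features for particle i."""
    lik_points = np.exp(-0.5 * (point_errors / sigma_point) ** 2)
    lik_planes = np.exp(-0.5 * (plane_errors / sigma_plane) ** 2)
    weights = weights * lik_points * lik_planes
    return weights / np.sum(weights)

def resample(particles, weights):
    """Multinomial resampling, returning uniform weights afterwards."""
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```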

2022

FollowMe - A Pedestrian Following Algorithm for Agricultural Logistic Robots

Authors
Sarmento, J; Dos Santos, FN; Aguiar, AS; Sobreira, H; Regueiro, CV; Valente, A;

Publication
2022 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC)

Abstract
In Industry 4.0 and Agriculture 4.0, there are logistics areas where robots can play an important role, for example by following a person at a certain distance. These robots can transport heavy tools or simply help collect certain items, such as harvested fruits. The use of Ultra Wide Band (UWB) transceivers as range sensors is becoming very common in the field of robotics, e.g., for localising goods and machines. Since UWB technology has very accurate time resolution, it is advantageous for techniques such as Time Of Arrival (TOA), which can estimate distance by measuring the time between message frames. In this work, UWB transceivers are used as range sensors to track pedestrians/operators, and we propose two algorithms for relative localization between a person and a robot. Both algorithms use a similar 2-dimensional occupancy grid but differ in filtering. The first is based on an Extended Kalman Filter (EKF) that fuses the range sensor with odometry. The second is based on a Histogram Filter that calculates the pedestrian position by discretizing the state space into well-defined regions. Finally, a controller is implemented to autonomously command the robot. Both approaches are tested and compared on a real differential drive robot. Both proposed solutions are able to follow a pedestrian at speeds of 0.1 m/s, and are promising solutions to complement other solutions based on cameras and LiDAR.
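The sketch below illustrates the two range-only ideas mentioned in the abstract: a TOA distance computed from a time of flight, and a histogram-filter update of a discretized belief over the pedestrian's relative position. Grid layout, noise value, and the one-way time-of-flight assumption are hypothetical simplifications, not the authors' implementation.

```python
"""Illustrative sketch only: UWB TOA ranging and a histogram-filter update
over a 2-D grid of candidate pedestrian positions relative to the robot."""
import numpy as np

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def toa_distance(time_of_flight_s):
    """Distance between transceivers from a (one-way) time of flight."""
    return SPEED_OF_LIGHT * time_of_flight_s

def histogram_update(belief, cell_centers, measured_range, sigma=0.15):
    """Bayes update of a gridded belief with one range reading.

    belief: (N,) prior probability per cell; cell_centers: (N, 2) positions
    of the cells relative to the robot (the UWB anchor sits at the origin)."""
    predicted = np.linalg.norm(cell_centers, axis=1)
    likelihood = np.exp(-0.5 * ((predicted - measured_range) / sigma) ** 2)
    posterior = belief * likelihood
    return posterior / np.sum(posterior)
```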
