2022
Authors
Aguiar, AS; dos Santos, FN; Sobreira, H; Boaventura Cunha, J; Sousa, AJ;
Publication
FRONTIERS IN ROBOTICS AND AI
Abstract
Developing ground robots for agriculture is a demanding task. Robots should be capable of performing tasks such as spraying, harvesting, or monitoring. However, the lack of structure in agricultural scenes challenges the implementation of localization and mapping algorithms. Thus, research and development of localization techniques are essential to boost agricultural robotics. To address this issue, we propose an algorithm called VineSLAM, suitable for localization and mapping in agriculture. This approach uses both point and semi-plane features extracted from 3D LiDAR data to map the environment and localize the robot with a novel Particle Filter that considers both feature modalities. The numerical stability of the algorithm was tested using simulated data, and the proposed methodology proved suitable for localizing a robot using only three orthogonal semi-planes. Moreover, the entire VineSLAM pipeline was compared against a state-of-the-art approach in three real-world experiments in a woody-crop vineyard. Results show that our approach can localize the robot precisely even in long and symmetric vineyard corridors, outperforming the state-of-the-art algorithm in this context.
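To illustrate how a particle filter can weigh both feature modalities, the minimal Python sketch below updates particle weights from point and semi-plane residuals under a Gaussian observation model; the function name, residual arrays, and noise parameters (sigma_point, sigma_plane) are illustrative assumptions, not the VineSLAM implementation.

    import numpy as np

    def update_weights(weights, point_residuals, plane_residuals,
                       sigma_point=0.10, sigma_plane=0.05):
        # Gaussian likelihood per particle for each feature modality
        w = weights \
            * np.exp(-0.5 * (point_residuals / sigma_point) ** 2) \
            * np.exp(-0.5 * (plane_residuals / sigma_plane) ** 2)
        w += 1e-300          # guard against numerical underflow
        return w / w.sum()   # renormalize so the weights sum to one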
2022
Authors
Leao, G; Costa, CM; Sousa, A; Reis, LP; Veiga, G;
Publication
ROBOTICS
Abstract
Bin picking is a challenging problem that involves using a robotic manipulator to remove, one by one, a set of objects randomly stacked in a container. To provide ground-truth data for evaluating heuristic or machine learning perception systems, this paper proposes using simulation to create bin picking environments in which a procedural generation method builds entangled tubes that can have curvatures throughout their length. The output of the simulation is an annotated point cloud, generated by a virtual 3D depth camera, in which the tubes are assigned unique colors. A general metric based on micro-recall is proposed to compare the accuracy of point cloud annotations against the ground truth. The synthetic data is representative of a high-quality 3D scanner, given that the performance of a tube modeling system on 640 simulated point clouds was similar to the results achieved with real sensor data. Therefore, simulation is a promising technique for the automated evaluation of solutions for bin picking tasks.
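As a concrete reading of the proposed metric, the Python sketch below computes micro-recall from per-point ground-truth and predicted tube labels by pooling true positives and false negatives over all tube instances; the array names and label convention are assumptions for illustration, not the paper's evaluation code.

    import numpy as np

    def micro_recall(gt_labels, pred_labels):
        # One integer tube label per point in the annotated point cloud
        classes = np.unique(gt_labels)
        tp = sum(np.sum((gt_labels == c) & (pred_labels == c)) for c in classes)
        fn = sum(np.sum((gt_labels == c) & (pred_labels != c)) for c in classes)
        return tp / (tp + fn)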
2022
Authors
Coutinho, RM; Sousa, A; Santos, F; Cunha, M;
Publication
APPLIED SCIENCES-BASEL
Abstract
Soil Moisture (SM) is one of the most critical factors for a crop's growth, yield, and quality. Although Ground-Penetrating RADAR (GPR) is commonly used in satellite observation to analyze soil moisture, it is not cost-effective for agricultural applications. Automotive RADAR uses the concept of Frequency-Modulated Continuous Wave (FMCW) and is more competitive in terms of price. This paper evaluates the viability of using a cost-effective RADAR as a substitute for GPR for soil moisture content estimation. The research consisted of four experiments, and the results show that the RADAR's output signal and the readings of the SEN0193 soil moisture sensor are highly correlated, with values as high as 0.93 when the SM is below 15%. Such results show that the tested sensor (and its cost-effective working principle) is able to determine soil water content (with certain limitations) in a non-intrusive, proximal-sensing manner.
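For reference, the correlation reported above corresponds to a standard Pearson coefficient between the two signals; the short Python sketch below shows the computation on placeholder arrays (illustrative values only, not the paper's measurements).

    import numpy as np

    # Placeholder values for illustration only
    radar_signal = np.array([0.35, 0.40, 0.47, 0.52, 0.58, 0.61])
    sen0193_readings = np.array([5.1, 6.8, 8.9, 10.4, 12.7, 14.2])  # SM in %

    r = np.corrcoef(radar_signal, sen0193_readings)[0, 1]
    print(f"Pearson correlation: {r:.2f}")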
2022
Authors
da Silva, DQ; dos Santos, FN; Filipe, V; Sousa, AJ; Oliveira, PM;
Publication
ROBOTICS
Abstract
Object identification, such as tree trunk detection, is fundamental for forest robotics. Intelligent vision systems are of paramount importance for improving robotic perception and thus enhancing the autonomy of forest robots. To that end, this paper presents three contributions: an open dataset of 5325 annotated forest images; an Edge AI benchmark of 13 deep learning models for tree trunk detection, evaluated on four edge devices (CPU, TPU, GPU, and VPU); and a tree trunk mapping experiment using an OAK-D as the sensing device. The results showed that YOLOR was the most reliable trunk detector, achieving a maximum F1 score of around 90% while maintaining high scores across different confidence levels; in terms of inference time, YOLOv4 Tiny was the fastest model, attaining 1.93 ms on the GPU. YOLOv7 Tiny presented the best trade-off between detection accuracy and speed, with average inference times under 4 ms on the GPU across different input resolutions while achieving an F1 score similar to YOLOR's. This work will enable the development of advanced artificial vision systems for robotics in forestry monitoring operations.
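The F1 scores cited above combine precision and recall in the usual way; the sketch below shows the computation with illustrative detection counts, not the benchmark's actual numbers.

    def f1_score(tp, fp, fn):
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        return 2 * precision * recall / (precision + recall)

    # Illustrative counts yielding an F1 score near the ~90% reported maximum
    print(round(f1_score(tp=450, fp=40, fn=60), 3))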
2022
Authors
Monteiro, F; Sousa, A;
Publication
INTED2022 Proceedings - INTED Proceedings
Abstract