2023
Authors
Martins, JJ; Silva, M; Santos, F;
Publication
ROBOT2022: FIFTH IBERIAN ROBOTICS CONFERENCE: ADVANCES IN ROBOTICS, VOL 1
Abstract
To produce more food and tackle labor scarcity, agriculture needs safer robots for repetitive and unsafe tasks (such as spraying). The interaction between humans and robots presents challenges in ensuring certifiably safe human-robot collaboration and a reliable system that does not damage goods and plants, in an environment that is largely dynamic due to constant change. A well-known solution to this problem is the implementation of real-time collision avoidance systems. This paper presents a global overview of state-of-the-art methods implemented in the agricultural environment that ensure human-robot collaboration according to recognised industry standards. To complement this overview, the paper addresses the gaps and possible specifications that need to be clarified in future standards, taking into consideration the human-machine safety requirements for agricultural autonomous mobile robots.
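A minimal illustration of the collision-avoidance principle mentioned in this abstract: a speed-and-separation style protective stop. This sketch is ours, not the paper's; the deceleration, latency and margin values are assumptions.

def stopping_distance(speed_mps: float, max_decel_mps2: float, latency_s: float) -> float:
    """Distance covered during system latency plus the braking distance."""
    return speed_mps * latency_s + speed_mps ** 2 / (2.0 * max_decel_mps2)

def must_stop(human_distance_m: float, speed_mps: float,
              max_decel_mps2: float = 1.5, latency_s: float = 0.2,
              safety_margin_m: float = 0.5) -> bool:
    """True when the robot cannot guarantee stopping before reaching the human."""
    return human_distance_m <= stopping_distance(speed_mps, max_decel_mps2, latency_s) + safety_margin_m

print(must_stop(1.0, 1.0))  # True: at 1 m/s, a person 1 m away triggers a protective stop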
2020
Authors
Neves, R; Ramos, T; Simionesei, L; Oliveira, A; Grosso, N; Santos, F; Moura, P; Stefan, V; Escorihuela, MJ; Gao, Q; Pérez-Pastor, A; Riquelme, J; Forcén, M; Biddoccu, M; Rabino, D; Bagagiolo, G; Karakaya, N;
Publication
Abstract
2023
Authors
Baltazar, AR; Dos Santos, FN; De Sousa, ML; Moreira, AP; Cunha, JB;
Publication
IEEE ACCESS
Abstract
The efficient application of phytochemical products in agriculture is a complex issue that demands optimised sprayers and variable-rate technologies, which rely on advanced sensing systems to address challenges such as overdosage and product losses. This work developed a system capable of processing different tree canopy parameters to support precision fruit farming and environmental protection using intelligent spraying methodologies. The system is based on a 2D light detection and ranging (LiDAR) sensor and a Global Navigation Satellite System (GNSS) receiver integrated into a tractor-driven sprayer. The algorithm detects the canopy boundaries, allowing spraying only in the presence of vegetation. The system's performance is evaluated by the spray volume spared compared with a Tree Row Volume (TRV) methodology. The results showed a 28% reduction in the overdosage of spraying product. The second step in this work was calculating and adjusting the amount of liquid to apply based on the tree volume. Considering this parameter, the savings obtained averaged 78% across the right and left rows. The volume of the trees was also monitored in a georeferenced manner through the creation of an occupation grid map. This map recorded the trajectory of the sprayer and the detected trees according to their volume.
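The spray-gating and variable-rate logic described above can be summarised in a short sketch. This is our simplified illustration, not the authors' implementation; the function names, the left/right angle convention, and all thresholds are assumptions.

def canopy_present(ranges, angles, side, max_range_m=2.0, min_hits=5):
    """Open the nozzle on one side only if enough LiDAR returns fall within
    the canopy band (angles in radians, positive on the sprayer's left)."""
    hits = sum(1 for r, a in zip(ranges, angles)
               if r < max_range_m and ((a > 0) == (side == "left")))
    return hits >= min_hits

def spray_rate_lps(canopy_area_m2, dose_l_per_m3, speed_mps):
    """Variable-rate dose in litres/second, proportional to the canopy volume
    swept per second (scan cross-section area x forward speed)."""
    return dose_l_per_m3 * canopy_area_m2 * speed_mps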
2023
Authors
da Silva, DQ; dos Santos, FN; Filipe, V; Sousa, AJ;
Publication
ROBOT2022: FIFTH IBERIAN ROBOTICS CONFERENCE: ADVANCES IN ROBOTICS, VOL 1
Abstract
To tackle wildfires and improve forest biomass management, cost-effective and reliable mowing and pruning robots are required. However, visual perception systems for forestry robotics still need to be researched and explored to achieve safe solutions. This paper presents two main contributions: an annotated dataset and a benchmark between edge-computing hardware and deep learning models. The dataset is composed of nearly 5,400 annotated images. It enabled the training of nine object detectors: four SSD MobileNets, one EfficientDet, three YOLO-based detectors, and YOLOR. These detectors were deployed and tested on three edge-computing hardware platforms (TPU, CPU and GPU), and evaluated in terms of detection precision and inference time. The results showed that YOLOR was the best trunk detector, achieving nearly 90% F1 score and an average inference time of 13.7 ms on GPU. This work will favour the development of advanced vision perception systems for robotics in forestry operations.
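For context, the two figures used to rank the detectors (F1 score and mean inference time) can be computed as below. This is a generic sketch with assumed names, not code from the paper.

import time

def f1_score(tp: int, fp: int, fn: int) -> float:
    """Harmonic mean of detection precision and recall."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def mean_inference_ms(model, images, warmup=5):
    """Average per-image inference time in milliseconds."""
    for img in images[:warmup]:      # discard warm-up runs
        model(img)
    start = time.perf_counter()
    for img in images:
        model(img)
    return (time.perf_counter() - start) * 1000.0 / len(images)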
2023
Authors
Aguiar, AS; dos Santos, FN; Santos, LC; Sousa, AJ; Boaventura Cunha, J;
Publication
JOURNAL OF FIELD ROBOTICS
Abstract
Robotics in agriculture faces several challenges, such as the unstructured characteristics of the environments, the variability of luminosity conditions for perception systems, and vast field extensions. To implement autonomous navigation systems in these conditions, robots should be able to operate during large periods and travel long trajectories. For this reason, it is essential that simultaneous localization and mapping algorithms can perform in large-scale and long-term operating conditions. One of the main challenges for these methods is maintaining low memory usage while mapping extensive environments. This work tackles this issue, proposing a localization and mapping approach called VineSLAM that uses a topological mapping architecture to manage the memory resources required by the algorithm. This topological map is a graph-based structure where each node is agnostic to the type of data stored, enabling the creation of a multilayer mapping procedure. Also, a localization algorithm is implemented, which interacts with the topological map to perform access and search operations. Results show that our approach is aligned with the state of the art regarding localization precision, being able to compute the robot pose in long and challenging trajectories in agriculture. In addition, we show that the topological approach advances the state of the art in memory management: the proposed algorithm requires less memory than the other benchmarked algorithms and can maintain a constant memory allocation during the entire operation. This is a significant innovation, since our approach opens the possibility of deploying complex 3D SLAM algorithms in real-world applications without scale restrictions.
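The data-agnostic topological map can be pictured with a minimal sketch. This is our own illustration of the idea with hypothetical class and method names; it only shows how nodes that are agnostic to the stored data enable multilayer maps and cheap access and search operations.

from typing import Any, Dict, List

class TopoNode:
    """A graph node holding arbitrary map layers keyed by name."""
    def __init__(self, node_id: int, center_xy: tuple):
        self.node_id = node_id
        self.center_xy = center_xy
        self.layers: Dict[str, Any] = {}  # e.g., "landmarks", "occupancy"

class TopoMap:
    def __init__(self):
        self.nodes: Dict[int, TopoNode] = {}
        self.edges: Dict[int, List[int]] = {}

    def add_node(self, node: TopoNode):
        self.nodes[node.node_id] = node
        self.edges.setdefault(node.node_id, [])

    def connect(self, a: int, b: int):
        self.edges[a].append(b)
        self.edges[b].append(a)

    def nearest_node(self, robot_xy: tuple) -> TopoNode:
        """Search operation: only the node closest to the robot needs its
        layers in memory; distant nodes can be offloaded, keeping memory flat."""
        return min(self.nodes.values(),
                   key=lambda n: (n.center_xy[0] - robot_xy[0]) ** 2
                               + (n.center_xy[1] - robot_xy[1]) ** 2)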
2023
Authors
Magalhaes, SC; dos Santos, FN; Machado, P; Moreira, AP; Dias, J;
Publication
ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE
Abstract
Purpose: Visual perception enables robots to perceive the environment. Visual data are processed using computer vision algorithms that are usually time-expensive and require powerful devices for real-time processing, which is unfeasible for open-field robots with limited energy. This work benchmarks the performance of different heterogeneous platforms for real-time object detection, covering three architectures: embedded Graphical Processing Units (GPUs, such as the NVIDIA Jetson Nano 2 GB and 4 GB, and the NVIDIA Jetson TX2), Tensor Processing Units (TPUs, such as the Coral Dev Board), and Deep Learning Processor Units (DPUs, such as those in the AMD-Xilinx ZCU104 Development Board and the AMD-Xilinx Kria KV260 Starter Kit). Methods: The authors used RetinaNet ResNet-50 fine-tuned on the natural VineSet dataset. Afterwards, the trained model was converted and compiled into target-specific hardware formats to improve execution efficiency. Results and Conclusions: The platforms were assessed in terms of evaluation-metric performance and efficiency (inference time). GPUs were the slowest devices, running at 3 FPS to 5 FPS, and Field Programmable Gate Arrays (FPGAs) were the fastest, running at 14 FPS to 25 FPS. The TPU offered no clear efficiency advantage, performing similarly to the NVIDIA Jetson TX2. The TPU and GPU are the most power-efficient, consuming about 5 W. The differences in the evaluation metrics across devices are negligible: all platforms achieved an F1 score of about 70% and a mean Average Precision (mAP) of about 60%.
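A quick sanity check of the reported throughput-versus-power trade-off, using the numbers in the abstract; the helper function itself is our sketch, not part of the paper.

def frames_per_joule(fps: float, power_w: float) -> float:
    """Throughput normalised by power draw: higher means more power-efficient."""
    return fps / power_w

# ~5 FPS at ~5 W (TPU/GPU class) gives ~1 frame per joule; an FPGA at 25 FPS
# would need to stay under 25 W to match that power efficiency.
print(frames_per_joule(5.0, 5.0))  # 1.0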