2019
Authors
Aguiar, A; Sousa, A; dos Santos, FN; Oliveira, M;
Publication
2019 19TH IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC 2019)
Abstract
Developing ground robots for crop monitoring and harvesting in steep slope vineyards is a complex challenge due to two main reasons: the harsh conditions of the terrain and the unstable localization accuracy obtained with the Global Navigation Satellite System (GNSS). In this context, a reliable localization system requires accurate information that is redundant with respect to GNSS and wheel-odometry-based systems. To pursue this goal, we benchmark three well-known Visual Odometry methods on two datasets. Two of them are feature-based Visual Odometry algorithms, Libviso2 and SVO 2.0, while the third is an appearance-based Visual Odometry algorithm, DSO. In monocular Visual Odometry, two main problems arise: pure rotations and scale estimation. In this paper, we focus on the first issue. To do so, we propose a Kalman Filter that fuses a single gyroscope with the output pose of monocular Visual Odometry while continuously estimating the gyroscope's bias. In this approach, we propose a non-linear noise variation that ensures that the bias estimation is not affected by the rotations reported by Visual Odometry. We compare and discuss the three unmodified methods and the same three methods extended with the proposed Kalman Filter. For the tests, two public datasets are used: the KITTI dataset and another one built in-house. Results show that the additional Kalman Filter greatly improves Visual Odometry performance in rotation movements.
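The sketch below illustrates the kind of filter the abstract describes, under strong assumptions: a single-axis (yaw) Kalman filter whose state holds the heading and the gyroscope bias, whose prediction integrates the bias-corrected gyro rate, and whose measurement noise for the VO heading grows non-linearly with the rotation rate. The class name, tuning constants, and the specific noise function are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Minimal single-axis (yaw) Kalman filter sketch: state x = [yaw, gyro_bias].
# The prediction step integrates the bias-corrected gyroscope rate; the update
# step uses the yaw reported by monocular VO. The measurement noise R grows
# non-linearly with the rotation rate, so fast turns do not corrupt the bias
# estimate. All tuning values below are illustrative assumptions.

class GyroVoYawFilter:
    def __init__(self, q_yaw=1e-4, q_bias=1e-6, r_base=1e-3, r_gain=5.0):
        self.x = np.zeros(2)               # [yaw (rad), gyro bias (rad/s)]
        self.P = np.eye(2) * 1e-2          # state covariance
        self.Q = np.diag([q_yaw, q_bias])  # process noise
        self.r_base = r_base               # VO yaw noise when not rotating
        self.r_gain = r_gain               # how fast R grows with rotation rate

    def predict(self, gyro_rate, dt):
        yaw, bias = self.x
        self.x[0] = yaw + (gyro_rate - bias) * dt
        F = np.array([[1.0, -dt],
                      [0.0, 1.0]])
        self.P = F @ self.P @ F.T + self.Q * dt

    def update(self, vo_yaw, gyro_rate):
        # Assumed non-linear noise variation: trust VO less while turning fast.
        R = self.r_base * (1.0 + self.r_gain * gyro_rate**2)
        H = np.array([[1.0, 0.0]])
        # Wrap the innovation to [-pi, pi] before applying the correction.
        innov = np.arctan2(np.sin(vo_yaw - self.x[0]), np.cos(vo_yaw - self.x[0]))
        y = np.array([innov])
        S = H @ self.P @ H.T + R
        K = self.P @ H.T / S               # Kalman gain (2x1)
        self.x = self.x + (K * y).ravel()
        self.P = (np.eye(2) - K @ H) @ self.P
```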
2019
Authors
Abreu, M; Lau, N; Sousa, A; Reis, LP;
Publication
2019 19TH IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC 2019)
Abstract
Reinforcement learning algorithms are now more appealing than ever. Recent approaches bring power and tuning simplicity to everyday workstations. The possibilities are endless, and the idea of automating learning without domain knowledge is quite tempting for many researchers. However, in competitive environments such as the RoboCup 3D Soccer Simulation League, there is still a lot to be done regarding human-like behaviors. Current teams use many mechanical movements to perform basic skills, such as running and dribbling the ball. This paper aims to use the PPO algorithm to optimize those skills, achieving natural gaits without sacrificing performance. We use SimSpark to simulate a NAO humanoid robot, using visual and body sensors to control its actuators. Based on our results, we propose an indirect control approach and detailed parameter setups to obtain natural running and dribbling behaviors. The obtained performance is in some cases comparable to or better than that of the top RoboCup teams. However, some skills are not yet ready to be applied in competitive environments due to instability. This work contributes towards the improvement of RoboCup and some related technical challenges.
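For readers unfamiliar with the optimizer named in the abstract, the following is a minimal sketch of the PPO clipped surrogate objective that such skill optimization minimizes. Function and variable names and the clip value are illustrative assumptions, not the team's actual training setup.

```python
import numpy as np

# PPO clipped surrogate loss: the probability ratio between the new and old
# policies is clipped so that a single update cannot move the policy too far.

def ppo_clip_loss(ratio, advantage, clip_eps=0.2):
    """ratio = pi_new(a|s) / pi_old(a|s); advantage = estimated A(s, a)."""
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantage
    # PPO maximizes the minimum of the two terms, i.e. minimizes its negative.
    return -np.mean(np.minimum(unclipped, clipped))

# Example: the first sample stays unclipped, the second is clipped at 1 - 0.2.
loss = ppo_clip_loss(np.array([1.1, 0.7]), np.array([2.0, -1.0]))
```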
2019
Authors
Aguiar, A; dos Santos, FN; Santos, L; Sousa, A;
Publication
Progress in Artificial Intelligence, 19th EPIA Conference on Artificial Intelligence, EPIA 2019, Vila Real, Portugal, September 3-6, 2019, Proceedings, Part II.
Abstract
Developing ground robots for crop monitoring and harvesting in steep slope vineyards is a complex challenge due to two main reasons: the harsh conditions of the terrain and the unstable localization accuracy obtained with the Global Navigation Satellite System (GNSS). In this context, a reliable localization system requires accurate information that is redundant with respect to GNSS and wheel-odometry-based systems. To pursue this goal and obtain a reliable localization system for our robotic platform, we aim to extract the best possible performance from a monocular Visual Odometry (VO) method. To do so, we present a benchmark of Libviso2 using both perspective and fisheye lens cameras, studying the motion-estimation performance of the method with both topologies in an outdoor environment. We also analyze the quality of the method's feature extraction with the two camera systems, studying the impact of the field of view and of omnidirectional image rectification on VO. We propose a general methodology to incorporate a fisheye lens camera system into a VO method. Finally, we briefly describe the robot setup that was used to generate the presented results. © 2019, Springer Nature Switzerland AG.
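As a concrete illustration of the kind of rectification step mentioned above, here is a minimal sketch of one common way to feed a fisheye camera to a perspective-only VO front end such as Libviso2, using OpenCV's fisheye model. The calibration matrix, distortion coefficients, and image size are placeholders, and this is not necessarily the exact methodology proposed in the paper.

```python
import cv2
import numpy as np

# Rectify a fisheye image into a virtual pinhole view so a perspective-only VO
# front end can consume it. K and D are placeholder calibration values.

K = np.array([[350.0, 0.0, 640.0],
              [0.0, 350.0, 480.0],
              [0.0, 0.0, 1.0]])                       # fisheye intrinsics (placeholder)
D = np.array([[0.05], [-0.01], [0.002], [0.0]])       # distortion coefficients (placeholder)
size = (1280, 960)                                    # image width, height

# Choose the virtual pinhole camera; `balance` trades field of view for stretching.
new_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(
    K, D, size, np.eye(3), balance=0.5)
map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), new_K, size, cv2.CV_16SC2)

def rectify(frame):
    # Remap each incoming fisheye frame before passing it to the VO method.
    return cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR)
```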
2019
Authors
Aguiar, A; Santos, F; Sousa, AJ; Santos, L;
Publication
APPLIED SCIENCES-BASEL
Abstract
The main task while developing a mobile robot is to achieve accurate and robust navigation in a given environment. To achieve such a goal, the ability of the robot to localize itself is crucial. In outdoor environments, namely agricultural ones, this task becomes a real challenge because odometry is not always usable and global navigation satellite system (GNSS) signals are blocked or significantly degraded. To answer this challenge, this work presents a solution for outdoor localization based on an omnidirectional visual odometry technique fused with a gyroscope and a low-cost planar light detection and ranging (LIDAR) sensor, optimized to run on a low-cost graphics processing unit (GPU). This solution, named FAST-FUSION, offers three core contributions to the scientific community. The first is an extension of a state-of-the-art monocular visual odometry method (Libviso2) to work with omnidirectional cameras and a single-axis gyroscope in order to increase the system accuracy. The second is an algorithm that uses low-cost LIDAR data to estimate the motion scale and overcome the limitations of monocular visual odometry systems. Finally, we propose a heterogeneous computing optimization that uses the Raspberry Pi GPU to improve the visual odometry runtime performance on low-cost platforms. To test and evaluate FAST-FUSION, we created three open-source datasets in an outdoor environment. Results show that FAST-FUSION runs in real time on low-cost hardware and outperforms the original Libviso2 approach in terms of time performance and motion estimation accuracy.
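The following is a hedged sketch of one simple way to recover the metric scale of a monocular VO translation from a planar LIDAR, as the second contribution describes at a high level: align consecutive scans (e.g., by scan matching) to obtain a metric displacement and use its norm to scale the scale-ambiguous VO translation. The function name and logic are illustrative assumptions; FAST-FUSION's actual scale estimator may differ.

```python
import numpy as np

# Scale recovery sketch: the metric displacement from LIDAR scan matching fixes
# the unknown scale of the monocular VO translation.

def estimate_scale(vo_translation, lidar_displacement, eps=1e-6):
    """vo_translation: 3-vector from monocular VO (scale-ambiguous).
    lidar_displacement: 2-vector metric displacement from planar scan matching."""
    vo_norm = np.linalg.norm(vo_translation[:2])  # planar component of VO motion
    if vo_norm < eps:
        return 1.0                                # no measurable motion, keep scale
    return np.linalg.norm(lidar_displacement) / vo_norm

# Usage: t_metric = estimate_scale(t_vo, d_lidar) * t_vo
```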
2019
Authors
Pinto, VH; Monteiro, JM; Gonçalves, J; Costa, P;
Publication
Advances in Intelligent Systems and Computing
Abstract
NaSSIE - Navigation and Sensoring Skills in Engineering is a platform developed with the intent of helping Engineering students acquire skills that are a core part of the process of controlling a mobile robot. In this paper, the chosen hardware and the resulting physical construction of the prototype, as well as the vehicle's associated software, are presented. As a use case, this platform was tested during the Robotic Day 2017 in the Czech Republic. Preliminary results of this year's preparation for the Micromouse competition are also presented. © 2019, Springer Nature Switzerland AG.
2019
Authors
Sobreira, H; Costa, CM; Sousa, I; Rocha, L; Lima, J; Farias, PCMA; Costa, P; Paulo Moreira, AP;
Publication
JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS
Abstract
The self-localization of mobile robots in the environment is one of the most fundamental problems in the robotics navigation field. It is a complex and challenging problem due to the high requirements of autonomous mobile vehicles, particularly with regard to the algorithms' accuracy, robustness and computational efficiency. In this paper, we present a comparison of three of the most widely used map-matching algorithms applied to localization based on natural landmarks: our implementation of the Perfect Match (PM) and the Point Cloud Library (PCL) implementations of the Iterative Closest Point (ICP) and the Normal Distribution Transform (NDT). For the purpose of this comparison, we considered a set of representative metrics, such as pose estimation accuracy, computational efficiency, convergence speed, maximum admissible initialization error and robustness to the presence of outliers in the robot's sensor data. The test results were obtained using our ROS natural landmark public dataset, containing several tests with simulated and real sensor data. The performance and robustness of the Perfect Match are highlighted throughout this article and are of paramount importance for real-time embedded systems with limited computing power that require accurate pose estimation and fast reaction times for high-speed navigation. Moreover, we added to PCL a new algorithm for performing correspondence estimation using lookup tables, inspired by the PM approach to this problem. This new method for computing the closest map point to a given sensor reading proved to be 40 to 60 times faster than the existing k-d tree approach in PCL and allowed the Iterative Closest Point algorithm to perform point cloud registration 5 to 9 times faster.
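To make the lookup-table idea concrete, here is a minimal sketch in the spirit of the Perfect-Match-inspired correspondence step described above: the nearest map point for every cell of a discretized grid is precomputed once, so each sensor reading is matched with a constant-time table lookup instead of a per-query k-d tree search. The class name, grid resolution, and bounds are illustrative assumptions, not the PCL code contributed by the authors.

```python
import numpy as np
from scipy.spatial import cKDTree

# Precompute, for every cell of a 2D grid covering the map, the index of the
# closest map point; at query time, discretize the reading and read the table.

class NearestMapLookup:
    def __init__(self, map_points, resolution=0.05, margin=1.0):
        self.res = resolution
        self.origin = map_points.min(axis=0) - margin
        extent = map_points.max(axis=0) + margin - self.origin
        nx, ny = np.ceil(extent / resolution).astype(int)
        xs = self.origin[0] + (np.arange(nx) + 0.5) * resolution
        ys = self.origin[1] + (np.arange(ny) + 0.5) * resolution
        grid = np.stack(np.meshgrid(xs, ys, indexing="ij"), axis=-1).reshape(-1, 2)
        # The k-d tree is only used offline, once, to build the lookup table.
        _, idx = cKDTree(map_points).query(grid)
        self.table = idx.reshape(nx, ny)
        self.map_points = map_points

    def closest(self, reading):
        # O(1) correspondence: discretize the 2D reading and index the table.
        i, j = np.clip(((reading - self.origin) / self.res).astype(int),
                       0, np.array(self.table.shape) - 1)
        return self.map_points[self.table[i, j]]
```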