Publications

Publications by Luís Carlos Santos

2019

Vineyard Segmentation from Satellite Imagery Using Machine Learning

Authors
Santos, L; Santos, FN; Filipe, V; Shinde, P;

Publication
PROGRESS IN ARTIFICIAL INTELLIGENCE, EPIA 2019, PT I

Abstract
Steep slope vineyards are a complex scenario for the development of ground robots due to the harsh terrain conditions and unstable localization systems. Automating vineyard tasks (such as monitoring, pruning, spraying, and harvesting) requires advanced robotic path planning approaches. These approaches usually resort to Simultaneous Localization and Mapping (SLAM) techniques to acquire environment information, which requires the robot to first navigate through the entire vineyard. The analysis of satellite or aerial images could represent an alternative to SLAM techniques for building the first version of the occupancy grid map needed by the robots. The state of the art for aerial vineyard image analysis is limited to flat vineyards with straight vine rows. This work considers a machine learning-based approach (an SVM classifier with a Local Binary Pattern (LBP)-based descriptor) to perform vineyard segmentation from public satellite imagery. In experiments with a dataset of satellite images from vineyards of the Douro region, the proposed method achieved an accuracy of over 90%. © Springer Nature Switzerland AG 2019.
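
The segmentation pipeline described above lends itself to a short sketch. The following Python snippet is a minimal illustration rather than the paper's implementation: it shows how an LBP histogram descriptor computed over image patches can feed an SVM classifier. The patch size, LBP parameters, and synthetic training data are assumptions.

```python
# Minimal sketch of patch-wise vineyard segmentation with an LBP descriptor
# and an SVM. Patch size, LBP parameters, and the synthetic training data
# below are illustrative assumptions, not values from the paper.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

P, R = 8, 1               # LBP neighbours and radius (assumed)
N_BINS = P + 2            # uniform LBP yields P + 2 codes

def lbp_descriptor(patch):
    """Normalized histogram of uniform LBP codes for a grayscale patch."""
    codes = local_binary_pattern(patch, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=N_BINS, range=(0, N_BINS), density=True)
    return hist

# Illustrative training set: grayscale patches with binary labels
# (1 = vineyard, 0 = non-vineyard). In practice these would come from
# annotated satellite imagery.
rng = np.random.default_rng(0)
patches = rng.integers(0, 256, size=(40, 32, 32)).astype(np.uint8)
labels = rng.integers(0, 2, size=40)

X = np.stack([lbp_descriptor(p) for p in patches])
clf = SVC(kernel="rbf").fit(X, labels)

def segment(image, patch=32):
    """Classify each non-overlapping patch of a grayscale image."""
    h, w = image.shape
    mask = np.zeros((h // patch, w // patch), dtype=np.uint8)
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            block = image[i*patch:(i+1)*patch, j*patch:(j+1)*patch]
            mask[i, j] = clf.predict([lbp_descriptor(block)])[0]
    return mask

print(segment(rng.integers(0, 256, size=(128, 128)).astype(np.uint8)))
```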

2020

A Version of Libviso2 for Central Dioptric Omnidirectional Cameras with a Laser-Based Scale Calculation

Authors
Aguiar, A; Santos, F; Santos, L; Sousa, A;

Publication
FOURTH IBERIAN ROBOTICS CONFERENCE: ADVANCES IN ROBOTICS, ROBOT 2019, VOL 1

Abstract
Monocular Visual Odometry techniques represent a challenging and appealing research area in the field of robot navigation. The use of a single camera to track robot motion is a hardware-cheap solution. In this context, there are few Visual Odometry methods in the literature that estimate the robot pose accurately using a single camera without any other source of information. The use of omnidirectional cameras in this field is still not consensual. Many works show that, for outdoor environments, their use represents an improvement over conventional perspective cameras. Building on this, in this work we propose an open-source monocular omnidirectional version of the state-of-the-art method Libviso2 that outperforms the original one even in outdoor scenes. This approach is suitable for central dioptric omnidirectional cameras and takes advantage of their wider field of view to calculate the robot motion with very positive performance in the context of monocular Visual Odometry. We also propose a novel approach to calculate the scale factor, which uses matches between laser measurements and 3-D triangulated feature points. The novelty of this work consists in the association of the laser ranges with the features in the omnidirectional image. Results were generated using three open-source datasets built in-house, showing that our unified system largely outperforms the original monocular version of Libviso2.
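
The laser-based scale idea can be illustrated with a short sketch: monocular triangulation recovers 3-D feature points only up to an unknown scale, and associating them with metric laser ranges lets that scale be estimated. The Python snippet below is a simplified illustration under the assumption that the laser-to-feature association has already been done; the median estimator is an assumption, not necessarily the estimator used in the paper.

```python
# Minimal sketch of recovering the monocular scale factor from laser ranges
# matched to triangulated feature points. Association is assumed done.
import numpy as np

def estimate_scale(triangulated_points, laser_ranges):
    """
    triangulated_points: (N, 3) feature points in the camera frame, up to scale.
    laser_ranges: (N,) metric distances of the laser returns associated with
                  those same features.
    Returns a single scale factor mapping VO units to metres.
    """
    vo_dist = np.linalg.norm(triangulated_points, axis=1)
    ratios = laser_ranges / np.maximum(vo_dist, 1e-9)
    return float(np.median(ratios))     # median for robustness to mismatches

# Illustrative usage with synthetic data: true scale 0.5 plus noise.
rng = np.random.default_rng(1)
pts = rng.uniform(1.0, 10.0, size=(50, 3))
ranges = 0.5 * np.linalg.norm(pts, axis=1) + rng.normal(0, 0.02, size=50)
print(estimate_scale(pts, ranges))      # close to 0.5
```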

2019

Monocular Visual Odometry Using Fisheye Lens Cameras

Authors
Aguiar, A; dos Santos, FN; Santos, L; Sousa, A;

Publication
Progress in Artificial Intelligence, 19th EPIA Conference on Artificial Intelligence, EPIA 2019, Vila Real, Portugal, September 3-6, 2019, Proceedings, Part II.

Abstract
Developing ground robots for crop monitoring and harvesting in steep slope vineyards is a complex challenge for two main reasons: the harsh terrain conditions and the unstable localization accuracy obtained with the Global Navigation Satellite System (GNSS). In this context, a reliable localization system requires accurate information that is redundant with the GNSS and wheel-odometry-based systems. To pursue this goal and obtain a reliable localization system on our robotic platform, we aim to extract the best possible performance from a monocular Visual Odometry (VO) method. To do so, we present a benchmark of Libviso2 using both perspective and fisheye lens cameras, studying the behavior of the method with both topologies in terms of motion performance in an outdoor environment. We also analyze the quality of the method's feature extraction with the two camera systems, studying the impact of the field of view and of omnidirectional image rectification on VO. We propose a general methodology to incorporate a fisheye lens camera system into a VO method. Finally, we briefly describe the robot setup used to generate the presented results. © 2019, Springer Nature Switzerland AG.
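
One common way to incorporate a fisheye camera into a perspective VO pipeline is to rectify each frame to a virtual pinhole camera before feature extraction. The sketch below illustrates this with OpenCV's fisheye model; the intrinsics and distortion coefficients are placeholder values, not the calibration used in the paper.

```python
# Minimal sketch of rectifying fisheye imagery to a virtual pinhole camera
# so it can be fed to a perspective VO pipeline. K and D are placeholders.
import numpy as np
import cv2

K = np.array([[350.0, 0.0, 320.0],          # assumed fisheye intrinsics
              [0.0, 350.0, 240.0],
              [0.0,   0.0,   1.0]])
D = np.array([[0.05], [-0.01], [0.002], [-0.001]])   # assumed distortion

def rectify_fisheye(image, balance=0.0):
    """Undistort a fisheye frame to a pinhole image usable by standard VO."""
    h, w = image.shape[:2]
    new_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(
        K, D, (w, h), np.eye(3), balance=balance)
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), new_K, (w, h), cv2.CV_16SC2)
    rectified = cv2.remap(image, map1, map2, interpolation=cv2.INTER_LINEAR)
    return rectified, new_K

# Illustrative usage: each rectified frame (and its pinhole matrix new_K)
# can then be passed to a perspective VO method such as Libviso2.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
rectified, new_K = rectify_fisheye(frame)
print(rectified.shape, new_K[0, 0])
```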

2019

FAST-FUSION: An Improved Accuracy Omnidirectional Visual Odometry System with Sensor Fusion and GPU Optimization for Embedded Low Cost Hardware

Authors
Aguiar, A; Santos, F; Sousa, AJ; Santos, L;

Publication
APPLIED SCIENCES-BASEL

Abstract
The main task while developing a mobile robot is to achieve accurate and robust navigation in a given environment. To achieve such a goal, the ability of the robot to localize itself is crucial. In outdoor environments, namely agricultural ones, this task becomes a real challenge because odometry is not always usable and global navigation satellite system (GNSS) signals are blocked or significantly degraded. To answer this challenge, this work presents a solution for outdoor localization based on an omnidirectional visual odometry technique fused with a gyroscope and a low-cost planar light detection and ranging (LIDAR) sensor, optimized to run on a low-cost graphics processing unit (GPU). This solution, named FAST-FUSION, offers the scientific community three core contributions. The first contribution is an extension of the state-of-the-art monocular visual odometry method Libviso2 to work with omnidirectional cameras and a single-axis gyroscope, increasing the system accuracy. The second contribution is an algorithm that uses low-cost LIDAR data to estimate the motion scale and overcome the limitations of monocular visual odometry systems. Finally, we propose a heterogeneous computing optimization that uses the Raspberry Pi GPU to improve the visual odometry runtime performance on low-cost platforms. To test and evaluate FAST-FUSION, we created three open-source datasets in an outdoor environment. Results show that FAST-FUSION runs in real time on low-cost hardware and outperforms the original Libviso2 approach in terms of time performance and motion estimation accuracy.
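
The gyro-aided part of the approach can be illustrated with a very simple sketch: blending the yaw increment integrated from a single-axis gyroscope with the yaw increment estimated by visual odometry. The complementary filter and its gain below are illustrative assumptions; the actual FAST-FUSION fusion scheme may differ.

```python
# Minimal sketch of fusing a single-axis gyroscope with visual odometry yaw.
# The complementary filter and gain ALPHA are assumptions for illustration.
import math

ALPHA = 0.9   # assumed weight given to the gyroscope increment

def fuse_yaw(prev_yaw, vo_yaw_delta, gyro_rate, dt, alpha=ALPHA):
    """Blend the gyro-integrated yaw increment with the VO yaw increment."""
    gyro_delta = gyro_rate * dt                          # integrate rate (rad)
    fused_delta = alpha * gyro_delta + (1.0 - alpha) * vo_yaw_delta
    return prev_yaw + fused_delta

# Illustrative usage: noisy VO increment, cleaner gyro rate, 0.1 s time step.
yaw = 0.0
yaw = fuse_yaw(yaw, vo_yaw_delta=0.012, gyro_rate=0.10, dt=0.1)
print(math.degrees(yaw))
```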

2020

Deep Learning Applications in Agriculture: A Short Review

Authors
Santos, L; Santos, FN; Oliveira, PM; Shinde, P;

Publication
FOURTH IBERIAN ROBOTICS CONFERENCE: ADVANCES IN ROBOTICS, ROBOT 2019, VOL 1

Abstract
Deep learning (DL) is a modern technique for image processing and big data analysis with great potential. Deep learning is a recent tool in the agricultural domain, having already been applied successfully in other domains. This article surveys different deep learning techniques applied to various agricultural problems, such as disease detection/identification, fruit/plant classification, and fruit counting, among others. The paper analyses the specific models employed, the source of the data, the performance of each study, the hardware used, and the possibility of real-time application, in order to study eventual integration with autonomous robotic platforms. The conclusions indicate that deep learning provides high-accuracy results, surpassing, with occasional exceptions, traditional image processing techniques in terms of accuracy.
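
As a rough illustration of the kind of model the surveyed works apply to tasks such as disease or fruit classification, the sketch below defines a tiny convolutional classifier in PyTorch; the architecture, input size, and class count are arbitrary assumptions and not taken from any of the reviewed studies.

```python
# Minimal sketch of an image classifier for agricultural tasks such as
# disease or fruit classification. Architecture and sizes are assumptions.
import torch
import torch.nn as nn

NUM_CLASSES = 4   # e.g. healthy leaf plus three disease classes (assumed)

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, NUM_CLASSES),   # assumes 64x64 RGB input patches
)

# Illustrative forward pass on a random batch of 64x64 crop images.
batch = torch.randn(8, 3, 64, 64)
logits = model(batch)
print(logits.shape)   # torch.Size([8, 4])
```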

2020

Forest Robot and Datasets for Biomass Collection

Authors
Reis, R; dos Santos, FN; Santos, L;

Publication
FOURTH IBERIAN ROBOTICS CONFERENCE: ADVANCES IN ROBOTICS, ROBOT 2019, VOL 1

Abstract
Portugal has witnessed some of its largest wildfires in the last decade, due to the lack of forestry management and valuation strategies. A cost-effective biomass collection tool/approach can increase the value of the forest, serving as a tool to reduce fire risk. However, cost-effective forestry machinery/solutions are needed to harvest this biomass. Most of the bigger forest operations are already highly mechanized, but the smaller ones are not. Mobile robotics know-how, combined with new virtual reality and remote sensing techniques, has paved the way for a new robotics perspective regarding work machines in the forest. Navigation is still a challenge in a forest: there is a lot of information, trees are obstacles, lower vegetation may hide dangers along the robot's trajectory, and the terrain in our region is mostly steep. Accurate information about the environment is crucial for the navigation process and for biomass inventory. This paper presents a prototype forest robot for biomass collection. In addition, it provides a dataset of different forest environments, containing data from different sensors such as 3D laser scanners, a thermal camera, inertial units, GNSS, and an RGB camera. These datasets are meant to provide information for the study of the forest terrain, allowing further development and research on navigation planning, biomass analysis, task planning, and other information that professionals in this field may require.
