Publications

2020

Digital Reconstitution of Road Traffic Accidents: A Flexible Methodology Relying on UAV Surveying and Complementary Strategies to Support Multiple Scenarios

Authors
Padua, L; Sousa, J; Vanko, J; Hruska, J; Adao, T; Peres, E; Sousa, A; Sousa, JJ;

Publication
International Journal of Environmental Research and Public Health

Abstract
The reconstitution of road traffic accident scenes is a contemporary and important issue, addressed by both private and public entities in different countries around the world. However, the task of collecting data on site does not generally receive the same focus and attention. Addressing this type of accident scenario requires a balance between two fundamental yet competing concerns: (1) information collection, which is a thorough and lengthy process, and (2) the need to allow traffic to flow again as quickly as possible. This technical note proposes a novel methodology that aims to support road traffic authorities/professionals in collecting data/evidence from motor vehicle collision scenarios by exploring the potential of low-cost, small-sized and lightweight unmanned aerial vehicles (UAVs). A large number of experimental tests and evaluations were conducted under various working conditions and in cooperation with the Portuguese law enforcement authorities responsible for investigating road traffic accidents. The tests allowed us to conclude that the proposed method meets all the conditions to be adopted in the near future as an approach for reconstituting road traffic accidents, and proved to be faster, more rigorous and safer than the current manual methodologies used not only in Portugal but also in many countries worldwide.
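
The practicality of the UAV-based workflow described above depends on the spatial resolution attainable from a low-altitude flight. As a purely illustrative sketch (the camera parameters below are hypothetical and not taken from the paper), the ground sampling distance (GSD) of a nadir survey can be estimated in Python from the flight altitude and camera geometry:

def ground_sampling_distance(altitude_m, sensor_width_mm, focal_length_mm, image_width_px):
    """Ground sampling distance in cm/pixel for a nadir-looking camera.

    GSD = (sensor width * flight altitude) / (focal length * image width).
    """
    gsd_m = (sensor_width_mm / 1000.0) * altitude_m / ((focal_length_mm / 1000.0) * image_width_px)
    return gsd_m * 100.0  # metres/pixel -> centimetres/pixel

# Hypothetical small-UAV camera: 6.3 mm sensor width, 4.5 mm focal length,
# 4000 px image width, flown 30 m above the road surface.
print(f"{ground_sampling_distance(30, 6.3, 4.5, 4000):.2f} cm/px")  # ~1.05 cm/px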

2020

Individual Grapevine Analysis in a Multi-Temporal Context Using UAV-Based Multi-Sensor Imagery

Authors
Padua, L; Adao, T; Sousa, A; Peres, E; Sousa, JJ;

Publication
Remote Sensing

Abstract
The use of unmanned aerial vehicles (UAVs) for remote sensing applications in precision viticulture has increased significantly in recent years. UAVs' capability to acquire high spatiotemporal resolution and georeferenced imagery from different sensors makes them a powerful tool for a better understanding of vineyard spatial and multi-temporal heterogeneity, allowing the estimation of parameters that directly impact plants' health status. In this way, the decision support process in precision viticulture can be greatly improved. However, despite the proliferation of these innovative technologies in viticulture, most published studies rely only on data from a single sensor to achieve a specific goal and/or on a single or short period of vineyard development. To address these limitations and fully exploit the advantages offered by UAVs, this study explores the multi-temporal analysis of vineyard plots at the grapevine scale using different imagery sensors. Individual grapevine detection enables the estimation of biophysical and geometrical parameters, as well as the identification of missing grapevine plants. A validation procedure was carried out in six vineyard plots focusing on the detected number of grapevines and missing grapevines. A high overall agreement was obtained for the number of grapevines present in each row (99.8%), as well as for individual grapevine identification (mean overall accuracy of 97.5%). Aerial surveys were conducted in two vineyard plots at different growth stages, acquiring RGB, multispectral and thermal imagery. Moreover, the extracted individual grapevine parameters enabled us to assess vineyard variability in a given epoch and to monitor its multi-temporal evolution. This type of analysis is critical for precision viticulture, constituting a tool that significantly supports the decision-making process.
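
As an illustration of the per-grapevine parameter extraction described above, the following minimal Python sketch computes a mean NDVI for each detected grapevine and tracks it across survey epochs. It assumes co-registered NDVI rasters and per-vine canopy masks; the function names are illustrative and not taken from the paper.

import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from multispectral bands."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / np.clip(nir + red, 1e-6, None)

def grapevine_mean_ndvi(ndvi_map, canopy_mask):
    """Mean NDVI over the pixels belonging to one detected grapevine."""
    return float(ndvi_map[canopy_mask].mean())

def ndvi_time_series(epochs):
    """Per-vine NDVI evolution; 'epochs' is a list of (ndvi_map, canopy_mask)
    pairs for the same grapevine across surveys."""
    return [grapevine_mean_ndvi(ndvi_map, mask) for ndvi_map, mask in epochs]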

2020

Effectiveness of Sentinel-2 in Multi-Temporal Post-Fire Monitoring When Compared with UAV Imagery

Authors
Padua, L; Guimaraes, N; Adao, T; Sousa, A; Peres, E; Sousa, JJ;

Publication
ISPRS International Journal of Geo-Information

Abstract
Unmanned aerial vehicles (UAVs) have become popular in recent years and are now used in a wide variety of applications. This is the logical result of technological developments over the last two decades that allow UAVs to be equipped with different types of sensors providing high-resolution data at relatively low prices. However, despite the success and extraordinary results achieved with UAVs, traditional remote sensing platforms such as satellites continue to develop as well. Nowadays, satellites use sophisticated sensors providing data with ever-improving spatial, temporal and radiometric resolutions. This is the case of the Sentinel-2 observation mission from the Copernicus Programme, which systematically acquires optical imagery at high spatial resolution with a revisit period of five days. It therefore makes sense to think that, in some applications, satellite data may be used instead of UAV data, with all the associated benefits (extended coverage without the need to visit the area). In this study, the performance of Sentinel-2 time series data was evaluated against high-resolution UAV-based data in an area affected by a fire in 2017. Given the 10 m resolution of Sentinel-2 images, different spatial resolutions of the UAV-based data (0.25, 5 and 10 m) were used and compared to determine their similarities. The achieved results demonstrate the effectiveness of satellite data for post-fire monitoring, even at a local scale, as a more cost-effective alternative to UAV data. The Sentinel-2 results show behavior similar to the UAV-based data for assessing burned areas.
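
A minimal sketch of the core comparison described above: aggregating a fine-resolution UAV NDVI raster to the 10 m Sentinel-2 grid by block averaging and correlating the two rasters. The co-registration assumption and the helper names are illustrative; this is not the authors' implementation.

import numpy as np

def block_average(arr, factor):
    """Downsample a raster by averaging non-overlapping factor x factor blocks."""
    h, w = arr.shape
    h, w = h - h % factor, w - w % factor  # crop to a multiple of the factor
    blocks = arr[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

def compare_ndvi(uav_ndvi_025m, s2_ndvi_10m):
    """Pearson correlation between UAV NDVI aggregated to 10 m and Sentinel-2 NDVI."""
    uav_10m = block_average(uav_ndvi_025m, factor=40)  # 0.25 m -> 10 m pixels
    assert uav_10m.shape == s2_ndvi_10m.shape, "rasters must be co-registered"
    return float(np.corrcoef(uav_10m.ravel(), s2_ndvi_10m.ravel())[0, 1])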

2019

UAV-Based Automatic Detection and Monitoring of Chestnut Trees

Authors
Marques, P; Padua, L; Adao, T; Hruska, J; Peres, E; Sousa, A; Sousa, JJ;

Publication
Remote Sensing

Abstract
Unmanned aerial vehicles have become a popular remote sensing platform for agricultural applications, with an emphasis on crop monitoring. Although there are several methods to detect vegetation through aerial imagery, these remain dependent on manual extraction of vegetation parameters. This article presents an automatic method for individual tree detection and multi-temporal analysis, which is crucial for detecting missing and new trees and for monitoring their health condition over time. The proposed method is based on the computation of vegetation indices (VIs) from visible (RGB) and near-infrared (NIR) band combinations, combined with the canopy height model. An overall segmentation accuracy above 95% was reached, even when RGB-based VIs were used. The proposed method is divided into three major steps: (1) segmentation and first clustering; (2) cluster isolation; and (3) feature extraction. This approach was applied to several chestnut plantations and some parameters, such as the number of trees present in a plantation (accuracy above 97%), the canopy coverage (93% to 99% accuracy), the tree height (RMSE of 0.33 m and R2 = 0.86) and the crown diameter (RMSE of 0.44 m and R2 = 0.96), were automatically extracted. Therefore, by replacing time-consuming and costly field campaigns, the proposed method represents a valuable contribution to managing chestnut plantations in a quicker and more sustainable way.
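
To make the first step (segmentation and clustering) concrete, the sketch below combines an RGB-based vegetation index (Excess Green) with a canopy height model threshold and labels connected regions as candidate trees. The thresholds and function names are illustrative assumptions, not the values or code used in the paper.

import numpy as np
from scipy import ndimage

def excess_green(rgb):
    """Excess Green index (ExG = 2g - r - b) from chromaticity-normalized RGB."""
    rgb = rgb.astype(np.float64)
    total = np.clip(rgb.sum(axis=-1), 1e-6, None)
    r, g, b = (rgb[..., i] / total for i in range(3))
    return 2.0 * g - r - b

def detect_tree_candidates(rgb, chm, exg_threshold=0.10, height_threshold=1.0):
    """Keep pixels that are both green enough (ExG) and tall enough (CHM),
    then label each connected region as a candidate tree crown."""
    mask = (excess_green(rgb) > exg_threshold) & (chm > height_threshold)
    labels, n_trees = ndimage.label(mask)
    return labels, n_trees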

2019

Using Virtual Scenarios to Produce Machine Learnable Environments for Wildfire Detection and Segmentation

Authors
Adão, T; Pinho, TM; Pádua, L; Santos, N; Sousa, A; Sousa, JJ; Peres, E;

Publication
ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences

Abstract
Today's climatic proneness to extreme conditions, together with human activity, has been triggering a series of wildfire-related events that put ecosystems at risk, as well as animal and plant heritage, while threatening people living near rural or urban areas. By the time intervention teams (firefighters, civil protection, police) become aware of these events, they have usually already escalated to proportions that are hard to control, mainly due to wind gusts, fuel-like soil conditions and other conditions that favor fire spreading.

Currently, there is a wide range of camera-capable sensing systems that can be complemented with useful location data (for example, unmanned aerial systems (UAS) with integrated cameras and IMU/GPS sensors, or stationary surveillance systems) and processing components capable of supporting the detection and monitoring of wildfire events, thus providing accurate and faithful data for decision support. Regarding detection and monitoring in particular, Deep Learning (DL) has been successfully applied to classification and/or segmentation of objects of interest in several fields, such as Agriculture, Forestry and similar areas. Usually, for an effective imagery-based DL application, datasets must rely on heavy and burdensome logistics to gather a representative formulation of the problem. What if assembling a dataset could be supported by customizable virtual environments that represent faithful situations for training machines, as already happens for human training in particular tasks (rescue operations, surgery, industrial assembly, etc.)?

This work proposes a system to produce faithful virtual environments that complement and/or even replace the logistics of dataset gathering, including hypothetical scenarios motivated by climate change, as well as tools for synthesizing wildfire environments for DL applications. It will therefore enable existing fire datasets to be extended with new data generated through human interaction and supervision, suitable for training a computational model. To that end, a study is presented to assess to what extent virtually generated data can contribute to an effective DL system for identifying and segmenting fire, bearing in mind future developments of active monitoring systems that detect fire events in a timely manner and provide decision support to operational teams.
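
As a minimal sketch of how virtually generated imagery could be mixed with real imagery when training a fire segmentation model, the PyTorch fragment below concatenates the two sources into one training set. The class and variable names are hypothetical; the paper does not prescribe this particular implementation.

import torch
from torch.utils.data import Dataset, ConcatDataset, DataLoader

class FireSegmentationDataset(Dataset):
    """Pairs of images and binary fire masks; 'source' tags real vs. synthetic data."""
    def __init__(self, images, masks, source):
        self.images, self.masks, self.source = images, masks, source

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        image = torch.as_tensor(self.images[idx], dtype=torch.float32)
        mask = torch.as_tensor(self.masks[idx], dtype=torch.float32)
        return image, mask

# Hypothetical usage: blend real and virtually generated scenes in one loader.
# real_ds = FireSegmentationDataset(real_images, real_masks, source="real")
# synth_ds = FireSegmentationDataset(synth_images, synth_masks, source="synthetic")
# train_loader = DataLoader(ConcatDataset([real_ds, synth_ds]), batch_size=8, shuffle=True)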

Supervised Theses

2019

CdM Internship Report: Meals Module

Author
Bruno Manuel Rodrigues Lameira

Institution
UTAD

2019

viStaMPS: A Software Application for Processing, Manipulation and Visualization of SAR Image Time Series

Author
Pedro Manuel Sousa Guimarães

Institution
UTAD

2018

Automatic Detection and Monitoring of Chestnut Trees Based on High-Resolution Aerial Imagery

Author
Pedro Miguel Mota Marques

Institution
UTAD

2018

Agroforestry management system for UAV-based time series data analysis focused on vineyards and chestnut trees

Author
Luís Pádua

Institution
UTAD
