
Publications by António Cunha

2025

A systematic review on soil moisture estimation using remote sensing data for agricultural applications

Authors
Teixeira, AC; Bakon, M; Lopes, D; Cunha, A; Sousa, JJ;

Publication
SCIENCE OF REMOTE SENSING

Abstract
Soil moisture plays a central role in agricultural sustainability and water-resource management under climate change and increasing water scarcity. Remote-sensing technologies have transformed soil-moisture estimation by enabling large-scale, high-resolution, and continuous monitoring. Following the PRISMA framework, this systematic review analyzes 64 studies published between 2016 and 2024, selected from 379 screened articles, focusing on agricultural applications. Remote-sensing data span optical, thermal, and microwave observations from satellites and unmanned aerial vehicles (UAVs), with estimation approaches classified as empirical, semi-empirical, physical, or learning-based. Satellite observations dominate the literature (73% of studies), while UAVs are increasingly used for high-resolution, site-specific assessments. Multi-sensor fusion, combining optical, thermal, and microwave data, is a growing strategy to overcome the limitations of individual sensors. Active SAR systems provide weather-independent measurements with high spatial resolution, whereas optical and thermal sensors offer valuable spectral indices but are limited by cloud cover and shallow penetration depth. Learning-based methods are the most frequent approach (54% of studies), using machine and deep learning to model complex relationships between soil moisture and remote-sensing variables. Principal challenges include vegetation interference, surface roughness, and limited in-situ calibration data. Mitigation strategies involve longer-wavelength SAR (L- and P-bands), multi-sensor fusion, downscaling, and integration of auxiliary datasets (soil texture, elevation, meteorology). By synthesizing recent advances and emerging trends, this review provides practical guidance for accurate, scalable, and operational soil-moisture monitoring in precision agriculture and environmental management.

2025

Automated Crack Detection in Micro-CT Scanning for Fiber-Reinforced Concrete Using Super-Resolution and Deep Learning

Authors
Souza, JPGD; Silva, AC; Congro, M; Roehl, D; Paiva, ACD; Pereira, S; Cunha, A;

Publication
ELECTRONICS

Abstract
Fiber-reinforced concrete is a crucial material for civil construction, and monitoring its health is important for preserving structures and preventing accidents and financial losses. Among non-destructive monitoring methods, Micro Computed Tomography (Micro-CT) imaging stands out as an inexpensive method that is free from noise and external interference. However, manual inspection of these images is subjective and requires significant human effort. In recent years, several studies have successfully utilized Deep Learning models for the automatic detection of cracks in concrete. However, according to the literature, a gap remains in the context of detecting cracks using Micro-CT images of fiber-reinforced concrete. Therefore, this work proposes a framework for automatic crack detection that combines the following: (a) a super-resolution-based preprocessing step to generate, for each image, versions with double and quadruple the original resolution, (b) a classification step using EfficientNetB0 to classify the type of concrete matrix, (c) specific training of Detection Transformer (DETR) models for each type of matrix and resolution, and (d) a voting-committee-based post-processing step among the models trained for each resolution to reduce false positives. The model was trained on a new publicly available dataset, the FIRECON dataset, which consists of 4064 images annotated by an expert, achieving metrics of 86.10% Intersection over Union, 89.37% Precision, 83.26% Recall, 84.99% F1-Score, and 44.69% Average Precision. The framework, therefore, significantly reduces analysis time and improves consistency compared to the manual methods used in previous studies. The results demonstrate the potential of Deep Learning to aid image analysis in damage assessments, providing valuable insights into the damage mechanisms of fiber-reinforced concrete and contributing to the development of durable, high-performance engineering materials.
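The voting-committee post-processing described in step (d) can be illustrated with a minimal sketch: a detection is kept only if models trained at other resolutions agree on it. The box format, IoU threshold, and vote count below are illustrative assumptions, not the paper's exact matching rule.

```python
def iou(a, b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def committee_vote(per_model, iou_thr=0.5, min_votes=2):
    """Keep a detection only if at least `min_votes` models (including the
    one that produced it) report an overlapping box; drop the rest as
    likely false positives, deduplicating overlapping survivors."""
    kept = []
    for i, boxes in enumerate(per_model):
        for box in boxes:
            votes = 1 + sum(
                any(iou(box, other) >= iou_thr for other in per_model[j])
                for j in range(len(per_model)) if j != i
            )
            if votes >= min_votes and not any(iou(box, k) >= iou_thr for k in kept):
                kept.append(box)
    return kept
```

With three resolution-specific models, a crack box reported (approximately) by all three survives, while a box seen by only one model is discarded.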

2025

Deep Learning Meets InSAR for Infrastructure Monitoring: A Systematic Review of Models, Applications, and Challenges

Authors
Fontes, M; Bakon, M; Cunha, A; Sousa, JJ;

Publication
SENSORS

Abstract
Monitoring civil infrastructure is increasingly critical due to aging assets, urban expansion, and the need for early detection of structural instabilities. Interferometric Synthetic Aperture Radar (InSAR) offers high-resolution, all-weather surface deformation monitoring capabilities, which are being enhanced by recent advances in Deep Learning (DL). Despite growing interest, the existing literature lacks a comprehensive synthesis of how DL models are applied specifically to infrastructure monitoring using InSAR data. This review addresses this gap by systematically analyzing 67 peer-reviewed articles published between 2020 and February 2025. We examine the DL architectures employed, ranging from LSTMs and CNNs to Transformer-based and hybrid models, and assess their integration within various stages of the InSAR monitoring pipeline, including pre-processing, temporal analysis, segmentation, prediction, and risk classification. Our findings reveal a predominance of LSTM and CNN-based approaches, limited exploration of pre-processing tasks, and a focus on urban and linear infrastructures. We identify methodological challenges such as data sparsity, low coherence, and lack of standard benchmarks, and we highlight emerging trends including hybrid architectures, attention mechanisms, end-to-end pipelines, and data fusion with exogenous sources. The review concludes by outlining key research opportunities, such as enhancing model explainability, expanding applications to underexplored infrastructure types, and integrating DL-InSAR workflows into operational structural health monitoring systems.

2025

Comparative Analysis of Transformer Architectures and Ensemble Methods for Automated Glaucoma Screening in Fundus Images from Portable Ophthalmoscopes

Authors
Costa, ROC; França, PAF; Pessoa, ACP; Júnior, GB; de Almeida, JDS; Cunha, A;

Publication
VISION

Abstract
Deep learning for glaucoma screening often relies on high-resolution clinical images and convolutional neural networks (CNNs). However, these methods face significant performance drops when applied to noisy, low-resolution images from portable devices. To address this, our work investigates ensemble methods using multiple Transformer architectures for automated glaucoma detection in challenging scenarios. We use the Brazil Glaucoma (BrG) dataset and a private D-Eye dataset to assess model robustness. These datasets include images typical of smartphone-coupled ophthalmoscopes, which are often noisy and variable in quality. Four Transformer models (Swin-Tiny, ViT-Base, MobileViT-Small, and DeiT-Base) were trained and evaluated both individually and in ensembles. We evaluated the results at both image and patient levels to reflect clinical practice. The results show that, although performance drops on lower-quality images, ensemble combinations and patient-level aggregation significantly improve accuracy and sensitivity. We achieved up to 85% accuracy and an 84.2% F1-score on the D-Eye dataset, with a notable reduction in false negatives. Grad-CAM attention maps confirmed that Transformers identify anatomical regions relevant to diagnosis. These findings reinforce the potential of Transformer ensembles as an accessible solution for early glaucoma detection in populations with limited access to specialized equipment.
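The patient-level aggregation mentioned above can be sketched as follows: per-image predictions for the same patient are pooled into a single screening decision. The mean-probability rule and threshold here are simple illustrative choices; the paper's exact aggregation rule may differ.

```python
from collections import defaultdict

def patient_level_predictions(image_preds, threshold=0.5):
    """Aggregate per-image glaucoma probabilities into one decision per
    patient by averaging, then thresholding the mean probability."""
    by_patient = defaultdict(list)
    for patient_id, prob in image_preds:
        by_patient[patient_id].append(prob)
    return {
        pid: (sum(probs) / len(probs)) >= threshold
        for pid, probs in by_patient.items()
    }
```

A single noisy image then cannot flip a patient's result on its own, which is one way such aggregation can reduce false negatives relative to per-image decisions.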

2024

Application of vision transformers in the early detection of excavation in the BRSET base

Authors
Ferreira, JS; Fernandes, MM; Leite, DDL; Gonzalez, D; da Camara, JCJCR; Rodrigues, JJR; Cunha, AAC;

Publication
PROCEEDINGS OF THE 11TH INTERNATIONAL CONFERENCE ON SOFTWARE DEVELOPMENT AND TECHNOLOGIES FOR ENHANCING ACCESSIBILITY AND FIGHTING INFO-EXCLUSION, DSAI 2024

Abstract
Enlarged excavation of the optic papilla, caused by the loss of fibres that originate in the retina and transmit electrical stimuli to the visual cortex, is a critical indicator in the early detection of glaucoma, a disease that can lead to irreversible blindness. As the optic papilla shows morphological variations in the population, its identification can be a challenge. Methods based on deep learning have shown promise in helping doctors analyse fundus images more accurately. Recently, models such as Vision Transformers (ViT) have shown significant results in various medical applications, including glaucoma detection. However, the scarcity of quality data remains a major obstacle to training these models. This study evaluated the performance of the Swin Transformer, DeiT and Linformer models in detecting optic papilla excavation, using the new Brazilian Multilabel Ophthalmological Dataset (BRSET). The results showed that the DeiT model obtained the best accuracy, with 0.94, followed by the Swin Transformer, with 0.88, and the Linformer, with 0.85. The findings of this study suggest that ViT models can not only significantly improve the detection of glaucomatous papillary excavation, but also strengthen Human-Machine Collaboration, promoting more effective interaction between doctors and automated systems in medical diagnosis.

2025

Optimising Active Learning with a Decreasing-Budget-Based Strategy: A Medical Application Case Study

Authors
Gonzalez, DG; Leite, MI; Magalhaes, L; Cunha, A;

Publication
APPLIED SCIENCES-BASEL

Abstract
The collection and annotation of data for supervised machine learning remain challenging and costly tasks, particularly in domains that demand expert knowledge. Depending on the application, labelling may require highly specialised professionals, significantly increasing the overall effort and expense. Active learning techniques offer a promising solution by reducing the number of annotations needed, thereby lowering costs without compromising model performance. This work proposes an active learning strategy based on a decreasing budget to reduce the effort required to annotate medical images. The strategy concentrates the annotators' effort in the initial iterations, optimises budget allocation, and ensures that the trained model achieves maximum performance with reduced effort in subsequent iterations. It also helps deep learning models reach strong performance with fewer annotated images, reducing the specialists' workload. This work also presents three experiments that contribute to understanding the impact of the strategy on the annotation process.
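The decreasing-budget idea can be sketched as a schedule that assigns most of a fixed annotation budget to early active-learning rounds, combined with uncertainty-based sample selection per round. The geometric decay and the uncertainty ranking below are illustrative assumptions, not the paper's exact formulation.

```python
def decreasing_budgets(total, n_rounds, decay=0.5):
    """Split a total annotation budget across rounds so that early rounds
    receive more labels (geometric decay), correcting rounding drift so
    the schedule sums exactly to `total`."""
    weights = [decay ** i for i in range(n_rounds)]
    scale = total / sum(weights)
    budgets = [round(w * scale) for w in weights]
    budgets[-1] += total - sum(budgets)  # absorb rounding drift
    return budgets

def active_learning_round(unlabelled, uncertainty, budget):
    """Select the `budget` most uncertain unlabelled samples to send for
    expert annotation in the current round."""
    ranked = sorted(unlabelled, key=uncertainty, reverse=True)
    return ranked[:budget]
```

For example, a budget of 100 labels over 4 rounds with decay 0.5 yields roughly 53, 27, 13, and 7 annotations per round, front-loading the annotators' effort as the abstract describes.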
