Publications

Publications by CRIIS

2022

Preliminary Study of Deep Learning Algorithms for Metaplasia Detection in Upper Gastrointestinal Endoscopy

Authors
Neto, A; Ferreira, S; Libânio, D; Ribeiro, MD; Coimbra, MT; Cunha, A;

Publication
Wireless Mobile Communication and Healthcare - 11th EAI International Conference, MobiHealth 2022, Virtual Event, November 30 - December 2, 2022, Proceedings

Abstract
Precancerous conditions such as intestinal metaplasia (IM) play a key role in gastric cancer development and can be detected during endoscopy. During upper gastrointestinal endoscopy (UGIE), misdiagnosis can occur due to technical and human factors or the nature of the lesions, leading to a wrong diagnosis that can result in no surveillance/treatment and impair the prevention of gastric cancer. Deep learning systems show great potential in detecting precancerous gastric conditions and lesions from endoscopic images, improving and aiding physicians in this task and resulting in higher detection rates and fewer operation errors. This study aims to develop deep learning algorithms capable of detecting IM in UGIE images, with a focus on model explainability and interpretability. In this work, white light and narrow-band imaging UGIE images collected at the Portuguese Institute of Oncology of Porto were used to train deep learning models for IM classification. Standard models such as ResNet50, VGG16 and InceptionV3 were compared to more recent algorithms that rely on attention mechanisms, namely the Vision Transformer (ViT), trained on 818 UGIE images (409 normal and 409 IM). All the models were trained using 5-fold cross-validation and, for validation, an external dataset of 100 UGIE images (50 normal and 50 IM) will be tested. Finally, explainability methods (Grad-CAM and attention rollout) were used to produce clearer and more interpretable results. The best-performing model was ResNet50, with a sensitivity of 0.75 (±0.05), an accuracy of 0.79 (±0.01), and a specificity of 0.82 (±0.04). This model obtained an AUC of 0.83 (±0.01); this low standard deviation means that the iterations of the 5-fold cross-validation agree more closely in classifying the samples than those of the other models. The ViT model showed promising performance, reaching results similar to those of the remaining models.
© 2023, ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering.
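The stratified 5-fold cross-validation protocol described in the abstract can be sketched as follows. This is a minimal illustration with scikit-learn; the feature vectors are random placeholders standing in for the 818 UGIE images, and the model-fitting step is omitted — it is not the authors' implementation.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# Placeholder dataset mirroring the class split reported in the abstract:
# 818 samples, 409 normal (0) and 409 intestinal metaplasia (1).
rng = np.random.default_rng(0)
X = rng.random((818, 16))            # stand-in feature vectors, not real UGIE images
y = np.array([0] * 409 + [1] * 409)  # 0 = normal, 1 = IM

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
fold_sizes = []
for train_idx, val_idx in skf.split(X, y):
    # Stratification keeps the 50/50 class balance in every partition.
    fold_sizes.append((len(train_idx), len(val_idx)))
    # model.fit(X[train_idx], y[train_idx]); evaluate on X[val_idx] ...

print(fold_sizes)
```

Each of the five iterations holds out roughly a fifth of the data (163–164 images) for validation and trains on the rest, so every image is validated exactly once.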

2022

Detecting Earthquakes in SAR Interferogram with Vision Transformer

Authors
Silva, B; Sousa, JJ; Cunha, A;

Publication
2022 IEEE INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM (IGARSS 2022)

Abstract
SAR Interferometry (InSAR) techniques are used for detecting and monitoring ground deformation all over the planet. Deformation caused by natural disasters such as volcanoes and earthquakes is among the main applications, and the great developments witnessed in recent years suggest that near real-time monitoring will soon be possible. InSAR is developing fast: space agencies are launching more satellites, leading to exponential data growth. Consequently, conventional techniques cannot process all the acquired data. Modern deep learning methods can be a solution, since they reach high accuracy in automatically detecting patterns in images and are fast to operate. In this work, we explore the contribution of deep learning vision transformer models to automatically detecting seismic deformation in SAR interferograms. A VGG19 model is trained as a baseline, and the ViT model uses both 256x256-pixel patches and the full interferogram. The ViT model outperforms the state of the art for both the patch and full-interferogram approaches, achieving F1-scores of 0.88 and 0.92, respectively.
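The patch-based input strategy (256x256-pixel tiles cut from a full interferogram) can be sketched as below. The array shapes are illustrative, and dropping partial border tiles is one simple choice of border handling; the paper does not specify its tiling details.

```python
import numpy as np

def extract_patches(interferogram, patch=256):
    """Tile a 2-D interferogram into non-overlapping patch x patch blocks,
    discarding any partial tiles at the right/bottom borders."""
    h, w = interferogram.shape
    tiles = []
    for r in range(0, h - patch + 1, patch):
        for c in range(0, w - patch + 1, patch):
            tiles.append(interferogram[r:r + patch, c:c + patch])
    return np.stack(tiles)

# Synthetic 1024x768 wrapped-phase image: yields 4 x 3 = 12 tiles,
# each of which would be classified independently by the ViT.
phase = np.random.default_rng(1).uniform(-np.pi, np.pi, (1024, 768))
patches = extract_patches(phase)
print(patches.shape)  # (12, 256, 256)
```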

2022

Using deep learning for automatic detection of insects in traps

Authors
Teixeira, AC; Morais, R; Sousa, JJ; Peres, E; Cunha, A;

Publication
CENTERIS 2022 - International Conference on ENTERprise Information Systems / ProjMAN - International Conference on Project MANagement / HCist - International Conference on Health and Social Care Information Systems and Technologies 2022, Hybrid Event / Lisbon, Portugal, November 9-11, 2022.

Abstract

2022

A deep learning approach for automatic counting of bedbugs and grape moth

Authors
Teixeira, AC; Morais, R; Sousa, JJ; Peres, E; Cunha, A;

Publication
CENTERIS 2022 - International Conference on ENTERprise Information Systems / ProjMAN - International Conference on Project MANagement / HCist - International Conference on Health and Social Care Information Systems and Technologies 2022, Hybrid Event / Lisbon, Portugal, November 9-11, 2022.

Abstract

2022

Vineyard classification using OBIA on UAV-based RGB and multispectral data: A case study in different wine regions

Authors
Padua, L; Matese, A; Di Gennaro, SF; Morais, R; Peres, E; Sousa, JJ;

Publication
COMPUTERS AND ELECTRONICS IN AGRICULTURE

Abstract
Vineyard classification is an important process within viticulture-related decision-support systems. Indeed, it improves grapevine vegetation detection, enabling both the assessment of vineyard vegetative properties and the optimization of in-field management tasks. Aerial data acquired by sensors coupled to unmanned aerial vehicles (UAVs) may be used to achieve it. Flight campaigns were conducted to acquire both RGB and multispectral data from three vineyards located in Portugal and in Italy. Red, green, blue and near infrared orthorectified mosaics resulted from the photogrammetric processing of the acquired data. They were then used to calculate RGB and multispectral vegetation indices, as well as a crop surface model (CSM). Three different supervised machine learning (ML) approaches (support vector machine (SVM), random forest (RF) and artificial neural network (ANN)) were trained to classify elements present within each vineyard into one of four classes: grapevine, shadow, soil and other vegetation. The trained models were then used to classify vineyard objects, generated from an object-based image analysis (OBIA) approach, into the four classes. Classification outcomes were compared with an automatic point-cloud classification approach and threshold-based approaches. Results showed that ANN provided a better overall classification performance, regardless of the type of features used. Features based on RGB data showed better performance than those based only on multispectral data. However, a higher performance was achieved when using features from both sensors. The methods presented in this study that resort to data acquired from different sensors are suitable for use in the vineyard classification process. Furthermore, they may also be applied in other land use classification scenarios.
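The RGB and multispectral vegetation indices mentioned above can be illustrated with two common examples. Excess Green (ExG) and NDVI are widely used indices of the kind computed from such orthomosaics, though the abstract does not name the specific indices the study used; the pixel values below are synthetic.

```python
import numpy as np

def excess_green(r, g, b):
    """Excess Green (ExG = 2g - r - b on chromatic coordinates),
    a common RGB vegetation index that highlights green vegetation."""
    total = r + g + b
    total = np.where(total == 0, 1.0, total)  # avoid division by zero on dark pixels
    rn, gn, bn = r / total, g / total, b / total
    return 2 * gn - rn - bn

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from near-infrared and red bands."""
    return (nir - red) / (nir + red + 1e-9)

# Synthetic 2x2 scene: column 0 is a green "grapevine" pixel,
# column 1 is greyish bare soil. ExG separates the two clearly.
r = np.array([[0.2, 0.5], [0.2, 0.5]])
g = np.array([[0.6, 0.45], [0.6, 0.45]])
b = np.array([[0.2, 0.4], [0.2, 0.4]])
print(excess_green(r, g, b))
```

Per-pixel index maps like these, together with a crop surface model, form the feature stack on which classifiers such as SVM, RF and ANN are trained.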

2022

Water Hyacinth (Eichhornia crassipes) Detection Using Coarse and High Resolution Multispectral Data

Authors
Padua, L; Antao Geraldes, AM; Sousa, JJ; Rodrigues, MA; Oliveira, V; Santos, D; Miguens, MFP; Castro, JP;

Publication
DRONES

Abstract
Efficient detection and monitoring procedures for invasive plant species are required. It is of crucial importance to deal with such plants in aquatic ecosystems, since they can affect biodiversity and, ultimately, ecosystem function and services. This study intends to detect water hyacinth (Eichhornia crassipes) using multispectral data with different spatial resolutions. For this purpose, high-resolution data (<0.1 m) acquired from an unmanned aerial vehicle (UAV) and coarse-resolution data (10 m) from Sentinel-2 MSI were used. Three areas with a high incidence of water hyacinth located in the Lower Mondego region (Portugal) were surveyed. Different classifiers were used to perform a pixel-based detection of this invasive species in both datasets. Among the classifiers used, random forest stood out with the best results (overall accuracy (OA): 0.94). On the other hand, support vector machine performed worst (OA: 0.87), followed by Gaussian naive Bayes (OA: 0.88), k-nearest neighbours (OA: 0.90), and artificial neural networks (OA: 0.91). The higher spatial resolution of the UAV-based data enabled us to detect small amounts of water hyacinth, which could not be detected in Sentinel-2 data. However, and despite the coarser resolution, satellite data analysis enabled us to identify water hyacinth coverage, which compared well with the UAV-based survey. By combining both datasets, and even considering the different resolutions, it was possible to observe the temporal and spatial evolution of water hyacinth. This approach proved to be an effective way to assess the effects of the mitigation/control measures taken in the study areas. Thus, it can be applied to detect invasive species in aquatic environments and to monitor their changes over time.
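The pixel-based classification described above can be sketched with scikit-learn's random forest, the classifier that performed best in the study. The band values and class distributions below are synthetic stand-ins, not the study's UAV or Sentinel-2 reflectances.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic labelled pixels: four band reflectances per pixel, two classes
# (0 = open water, 1 = water hyacinth). Vegetation is given a high NIR value,
# a rough approximation of real spectral behaviour.
rng = np.random.default_rng(42)
water = rng.normal([0.05, 0.06, 0.04, 0.02], 0.01, (200, 4))
hyacinth = rng.normal([0.04, 0.10, 0.05, 0.40], 0.02, (200, 4))
X = np.vstack([water, hyacinth])
y = np.array([0] * 200 + [1] * 200)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Classify every pixel of a small synthetic scene: flatten the (H, W, bands)
# cube to (H*W, bands), predict per pixel, and reshape back to a label map.
scene = rng.normal([0.04, 0.10, 0.05, 0.40], 0.02, (8, 8, 4))
labels = clf.predict(scene.reshape(-1, 4)).reshape(8, 8)
print(labels.mean())  # fraction of pixels mapped as water hyacinth
```

The same flatten-predict-reshape pattern applies at either resolution; only the pixel footprint (<0.1 m for the UAV, 10 m for Sentinel-2) changes.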
