2022
Authors
Esengönül, M; de Paiva, AC; Rodrigues, JMF; Cunha, A;
Publication
Wireless Mobile Communication and Healthcare - 11th EAI International Conference, MobiHealth 2022, Virtual Event, November 30 - December 2, 2022, Proceedings
Abstract
Diabetes has significant effects on the human body, one of which is increased blood pressure; when not diagnosed early, it can cause severe vision complications and even lead to blindness. Early screening is the key to overcoming such issues, which can have a significant impact in rural areas and overcrowded regions. Mobile systems can help bring the technology to those in need. Transfer-learning-based deep learning (DL) algorithms combined with mobile retinal imaging systems can significantly reduce screening time and lower the burden on healthcare workers. In this paper, several efficiency factors of Diabetic Retinopathy (DR) detection systems based on Convolutional Neural Networks are tested and evaluated for mobile applications. Two main techniques are used to measure the efficiency of DL-based DR detection systems. The first evaluates the effect of dataset change, where the base architecture of the DL model remains the same. The second measures the effect of base architecture variation, where the dataset remains unchanged. The results suggest that the inclusivity and size of the datasets significantly impact DR detection accuracy and sensitivity. Amongst the five chosen lightweight architectures, EfficientNet-based DR detection algorithms outperformed the other transfer learning models on the APTOS Blindness Detection dataset. © 2023, ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering.
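The two evaluation protocols in this abstract can be sketched as a simple experiment grid: hold the architecture fixed while varying the dataset, then hold the dataset fixed while varying the architecture. This is an illustrative outline only, not the authors' code; only EfficientNet and APTOS are named in the abstract, so the other architecture and dataset names below are placeholders.

```python
# Protocol 1: fix the architecture, vary the dataset (dataset-change effect).
# Protocol 2: fix the dataset, vary the architecture (architecture-change effect).

ARCHITECTURES = ["EfficientNetB0", "MobileNetV2", "NASNetMobile"]  # illustrative subset
DATASETS = ["APTOS", "OtherDRDataset"]  # placeholder names

def experiment_grid(architectures, datasets):
    """Enumerate the (architecture, dataset) runs required by both protocols."""
    fixed_arch = architectures[0]
    fixed_dataset = datasets[0]
    protocol1 = [(fixed_arch, d) for d in datasets]          # vary dataset only
    protocol2 = [(a, fixed_dataset) for a in architectures]  # vary architecture only
    return protocol1, protocol2

p1, p2 = experiment_grid(ARCHITECTURES, DATASETS)
```

Each pair in the two lists would then be trained and scored with the same accuracy and sensitivity metrics, so that any performance difference is attributable to the single factor that changed.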
2022
Authors
Neto, A; Ferreira, S; Libânio, D; Ribeiro, MD; Coimbra, MT; Cunha, A;
Publication
MobiHealth
Abstract
Precancerous conditions such as intestinal metaplasia (IM) play a key role in gastric cancer development and can be detected during endoscopy. During upper gastrointestinal endoscopy (UGIE), misdiagnosis can occur due to technical and human factors or the nature of the lesions, leading to a wrong diagnosis that can result in no surveillance/treatment and impair the prevention of gastric cancer. Deep learning systems show great potential in detecting precancerous gastric conditions and lesions from endoscopic images, aiding physicians in this task and resulting in higher detection rates and fewer operation errors. This study aims to develop deep learning algorithms capable of detecting IM in UGIE images, with a focus on model explainability and interpretability. In this work, white light and narrow-band imaging UGIE images collected at the Portuguese Institute of Oncology of Porto were used to train deep learning models for IM classification. Standard models such as ResNet50, VGG16 and InceptionV3 were compared to more recent algorithms that rely on attention mechanisms, namely the Vision Transformer (ViT), trained on 818 UGIE images (409 normal and 409 IM). All the models were trained using 5-fold cross-validation, and an external dataset of 100 UGIE images (50 normal and 50 IM) was used for validation. Finally, explainability methods (Grad-CAM and attention rollout) were used to obtain clearer and more interpretable results. The best-performing model was ResNet50, with a sensitivity of 0.75 (±0.05), an accuracy of 0.79 (±0.01), and a specificity of 0.82 (±0.04). This model obtained an AUC of 0.83 (±0.01); the standard deviation of 0.01 means the iterations of the 5-fold cross-validation agree more closely in classifying the samples than those of the other models. The ViT model showed promising performance, reaching results similar to the remaining models.
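The 5-fold cross-validation used to train all models can be sketched as an index-splitting routine: the 818 images are partitioned into five roughly equal folds, and each iteration holds one fold out for validation. A minimal stdlib sketch (the authors likely used a library utility such as scikit-learn's KFold; this is only an illustration of the scheme):

```python
def kfold_indices(n_samples, k=5):
    """Split sample indices into k roughly equal folds for cross-validation."""
    indices = list(range(n_samples))
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(indices[start:start + size])
        start += size
    # Each iteration holds out one fold for validation and trains on the rest.
    splits = []
    for i in range(k):
        val = folds[i]
        train = [idx for j, fold in enumerate(folds) if j != i for idx in fold]
        splits.append((train, val))
    return splits

splits = kfold_indices(818, k=5)  # 818 UGIE images, as in the study
```

The per-fold metrics from the five iterations are what yield the mean ± standard deviation figures reported above.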
2022
Authors
Silva, B; Sousa, JJ; Cunha, A;
Publication
2022 IEEE INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM (IGARSS 2022)
Abstract
SAR Interferometry (InSAR) techniques are used for detecting and monitoring ground deformation all over the planet. Deformation caused by natural disasters such as volcanoes and earthquakes is among the main applications, and the great developments witnessed in recent years suggest that near real-time monitoring will soon be possible. InSAR is developing fast: space agencies are launching more satellites, leading to exponential data growth. Consequently, conventional techniques cannot process all the acquired data. Modern deep learning methods can be a solution, since they reach high accuracy in automatically detecting patterns in images and are fast to operate. In this work, we explore the contribution of deep learning vision transformer models to automatically detect seismic deformation in SAR interferograms. A VGG19 model is trained as a baseline, and a ViT model is applied both to 256x256-pixel patches and to the full interferogram. The ViT model outperforms the state of the art for both the patch and full-interferogram approaches, achieving F1-scores of 0.88 and 0.92, respectively.
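Classifying 256x256-pixel patches implies tiling each interferogram into fixed-size windows. A minimal sketch of such a tiling, assuming non-overlapping patches with edge patches shifted inward so every window is exactly patch-sized; this is an illustration, not the authors' preprocessing code.

```python
def patch_grid(height, width, patch=256):
    """Top-left (y, x) coordinates of patch-sized windows covering an interferogram.

    Windows are non-overlapping except at the right/bottom edges, where the
    last window is shifted inward so every window is exactly patch x patch.
    """
    ys = list(range(0, height - patch + 1, patch))
    xs = list(range(0, width - patch + 1, patch))
    if ys[-1] + patch < height:
        ys.append(height - patch)
    if xs[-1] + patch < width:
        xs.append(width - patch)
    return [(y, x) for y in ys for x in xs]

coords = patch_grid(1000, 1000, patch=256)  # hypothetical interferogram size
```

Each coordinate pair would then index a crop fed to the ViT, while the full-interferogram variant resizes the whole scene to the model's input resolution instead.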
2022
Authors
Teixeira, AC; Morais, R; Sousa, JJ; Peres, E; Cunha, A;
Publication
CENTERIS/ProjMAN/HCist
Abstract
Insect pests cause significant damage to agricultural production. Smart pest monitoring enables the automatic detection and identification of pests using artificial intelligence techniques. The automatic detection of pests is an important tool to help the farmer decide on the application of pesticides. Several studies have been carried out to develop deep learning methods for detecting insect pests. However, it is still an open problem, as data scarcity and dataset characteristics still prevent deep learning methods from performing well. Pest24 is a public dataset with great diversity and variability of insects, but it has a low detection rate. To improve detection performance on Pest24, this work proposes a method for the automatic detection of insects using deep learning. Two experiments were carried out, applying YOLOv5 with standard hyperparameters and with hyperparameters tuned by an evolutionary algorithm. As a result, we obtained a performance superior to that reported in the state of the art with the YOLOv5 method with standard hyperparameters, reaching an mAP of 72.1%.
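The mAP figure reported above rests on matching predicted boxes to ground-truth boxes via intersection-over-union (IoU). A minimal sketch of the standard IoU computation, independent of the YOLOv5 codebase:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (empty if the boxes do not intersect).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

A prediction typically counts as a true positive when its IoU with a ground-truth box exceeds a threshold (0.5 for the common mAP@0.5 metric); precision-recall curves over these matches are averaged per class to obtain mAP.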
2022
Authors
Teixeira, AC; Morais, R; Sousa, JJ; Peres, E; Cunha, A;
Publication
CENTERIS/ProjMAN/HCist
Abstract
The bedbug and the grape moth are the most significant pests affecting rice and vineyards, causing great damage. However, these pests are only two examples of the many insect pests with great potential to cause significant crop damage. Insect traps are among the most appropriate solutions for monitoring and counting, influencing the selection and dosage of the pesticide to be applied for pest control. However, counting and monitoring operations rely on frequent visits of technicians to the site and are supported by inefficient counting methods, which makes them challenging and time-consuming tasks. This study proposes the automatic counting of bedbugs and grape moths in traps using deep learning algorithms. We use three different datasets: Pest24, Bedbug and Grape moth. Pest24 is a public dataset with a great diversity of insects. The Bedbug and Grape moth datasets are private datasets provided by mySense, a precision agriculture platform developed and managed by researchers from the University of Trás-os-Montes e Alto Douro (UTAD). First, we trained YOLOv5 on the Pest24 dataset and obtained an mAP of 69.3%. Then, using the weights obtained from the Pest24 dataset, we trained on the Bedbug and Grape moth datasets. The best results for the Bedbug dataset were obtained with YOLOv5 with transfer learning, with an AP of 96.5% and a counting error of 63.3%. For the Grape moth dataset, the best result was obtained with YOLOv5 without Pest24 transfer learning, with an AP of 90.9% and a counting error of 6.7.
2022
Authors
Pádua, L; Matese, A; Di Gennaro, SF; Morais, R; Peres, E; Sousa, JJ;
Publication
COMPUTERS AND ELECTRONICS IN AGRICULTURE
Abstract
Vineyard classification is an important process within viticulture-related decision-support systems. Indeed, it improves grapevine vegetation detection, enabling both the assessment of vineyard vegetative properties and the optimization of in-field management tasks. Aerial data acquired by sensors coupled to unmanned aerial vehicles (UAVs) may be used to achieve it. Flight campaigns were conducted to acquire both RGB and multispectral data from three vineyards located in Portugal and Italy. Red, green, blue and near-infrared orthorectified mosaics resulted from the photogrammetric processing of the acquired data. They were then used to calculate RGB and multispectral vegetation indices, as well as a crop surface model (CSM). Three different supervised machine learning (ML) approaches, namely support vector machine (SVM), random forest (RF) and artificial neural network (ANN), were trained to classify elements present within each vineyard into one of four classes: grapevine, shadow, soil and other vegetation. The trained models were then used to classify vineyard objects, generated from an object-based image analysis (OBIA) approach, into the four classes. Classification outcomes were compared with an automatic point-cloud classification approach and threshold-based approaches. Results showed that ANN provided better overall classification performance, regardless of the type of features used. Features based on RGB data performed better than those based only on multispectral data. However, a higher performance was achieved when using features from both sensors. The methods presented in this study, which resort to data acquired from different sensors, are suitable for use in the vineyard classification process. Furthermore, they may also be applied in other land use classification scenarios.
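The multispectral vegetation indices mentioned above are simple band combinations computed per pixel from the orthorectified mosaics. A common example is the Normalised Difference Vegetation Index (NDVI), built from the near-infrared and red bands; the abstract does not list the exact indices used, so this is a representative illustration only.

```python
def ndvi(nir, red):
    """Normalised Difference Vegetation Index for one pixel's reflectance values."""
    denom = nir + red
    return (nir - red) / denom if denom else 0.0

# Hypothetical reflectance values for a few pixels (vigorous vine vs. bare soil).
nir_band = [0.50, 0.30]
red_band = [0.10, 0.25]
ndvi_map = [ndvi(n, r) for n, r in zip(nir_band, red_band)]
```

Per-pixel index values like these, aggregated over OBIA objects together with CSM height, are the kind of features on which the SVM, RF and ANN classifiers would be trained.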