2024
Authors
Carneiro, GA; Cunha, A; Sousa, J;
Publication
Abstract
2024
Authors
Antunes, C; Rodrigues, JMF; Cunha, A;
Publication
Universal Access in Human-Computer Interaction, Pt. III, UAHCI 2024
Abstract
Pneumonia and COVID-19 are respiratory illnesses, the latter caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Traditional detection processes can be slow, error-prone, and laborious, leading to potential human mistakes and a limited ability to keep up with the speed of pathogen development. A web diagnosis application to aid physicians in the diagnosis process is presented, based on a modified deep neural network (AlexNet) that detects COVID-19 in X-rays and computed tomography (CT) scans, as well as pneumonia in X-rays. The system reached accuracies well above 90% on seven well-known and documented datasets for the detection of COVID-19 and pneumonia in X-rays and of COVID-19 in CT scans.
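To illustrate the kind of model described above, the following is a minimal sketch, not the authors' exact modification: it adapts an ImageNet-pretrained AlexNet by replacing its final classifier layer. The three-class head, input size, and grayscale-to-RGB preprocessing are assumptions.

# Minimal sketch of a modified AlexNet for chest-image classification.
# NUM_CLASSES and the preprocessing pipeline are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models, transforms

NUM_CLASSES = 3  # assumed: normal, pneumonia, COVID-19

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(4096, NUM_CLASSES)  # swap the 1000-class head

# X-ray/CT frames are single-channel; replicate to 3 channels for the
# ImageNet-pretrained backbone (applies to PIL images before batching).
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.Grayscale(num_output_channels=3),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

model.eval()
with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))  # placeholder batch
    probs = torch.softmax(logits, dim=1)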
2024
Authors
Laroca, H; Rocio, V; Cunha, A;
Publication
Procedia Computer Science
Abstract
Fake news spreads rapidly, creating issues and making detection harder. The purpose of this study is to determine whether fake news carries sentiment polarity (positive or negative), to identify the polarity present in its textual content, and to assess whether sentiment polarity is a reliable indicator of fake news. For this, we use a deep learning model called BERT (Bidirectional Encoder Representations from Transformers), trained on a sentiment polarity dataset, to classify the polarity of sentiment in a dataset of true and fake news. The findings show that sentiment polarity alone is not a reliable feature for correctly recognizing fake news and must be combined with other features to improve classification accuracy.
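A hedged sketch of the pipeline this abstract describes: a BERT sequence classifier, assumed fine-tuned on a sentiment polarity dataset, applied to news text. The checkpoint name and the two-label scheme are illustrative, not the authors' model.

# Sketch: BERT sentiment polarity classification of news text.
# "bert-base-uncased" stands in for a checkpoint fine-tuned on sentiment data.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "bert-base-uncased"  # assumed fine-tuned for polarity in the study
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

def sentiment_polarity(text: str) -> str:
    """Classify a news item as negative or positive sentiment."""
    inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return ["negative", "positive"][logits.argmax(dim=1).item()]

# The study then compares polarity distributions across true and fake items,
# rather than using polarity as a standalone fake-news detector.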
2024
Authors
Abay, SG; Lima, F; Geurts, L; Camara, J; Pedrosa, J; Cunha, A;
Publication
Procedia Computer Science
Abstract
Low-cost, smartphone-compatible portable ophthalmoscopes can capture images of the patient's retina to screen for several ophthalmological diseases such as glaucoma. The captured images have lower quality and resolution than those from standard retinography devices, but are sufficient for glaucoma screening. Short videos are captured to improve the chance of inspecting the eye properly; however, those videos may not always have enough quality for glaucoma screening, and the patient then needs to repeat the inspection later. In this paper, a method for automatic assessment of the quality of videos captured with the D-Eye lens is proposed and evaluated on a private dataset of 539 videos. Two methods were developed for retina localization in the images/frames: a Circle Hough Transform approach with a precision of 78.12% and a YOLOv7 approach with a precision of 99.78%. Building on these, the quality assessment method automatically decides on the quality of a video by measuring the number of good-quality frames it contains, according to a chosen threshold.
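A simplified sketch of the quality rule described, using the Circle Hough Transform variant: locate the retina per frame and accept the video when the share of good frames exceeds a chosen threshold. All parameter values (blur kernel, Hough parameters, radius bounds, 0.5 threshold) are assumptions, not the paper's settings.

# Sketch: per-frame retina localization via Circle Hough Transform,
# then a frame-count threshold decides overall video quality.
import cv2

def frame_has_retina(frame) -> bool:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)  # suppress noise before circle detection
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=200,
                               param1=100, param2=40, minRadius=40, maxRadius=200)
    return circles is not None

def video_is_usable(path: str, threshold: float = 0.5) -> bool:
    """Accept the video if the fraction of good-quality frames meets the threshold."""
    cap = cv2.VideoCapture(path)
    good = total = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        total += 1
        good += frame_has_retina(frame)
    cap.release()
    return total > 0 and good / total >= threshold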
2024
Authors
Couto, D; Davies, S; Sousa, J; Cunha, A;
Publication
Procedia Computer Science
Abstract
Interferometric Synthetic Aperture Radar (InSAR) revolutionizes surface study by measuring precise ground surface changes. Phase unwrapping, a key challenge in InSAR, involves removing the ambiguity in the measured phase. Deep learning algorithms such as Generative Adversarial Networks (GANs) offer a potential way to simplify the unwrapping process. This work evaluates GANs for InSAR phase unwrapping as a replacement for SNAPHU. GANs achieve significantly faster processing times (2.38 interferograms per minute compared to SNAPHU's 0.78 interferograms per minute) with minimal quality degradation. A comparison of SBAS results shows that approximately 84% of the GAN-derived points are within 3 millimeters of the SNAPHU results. These results represent a significant advancement in phase unwrapping methods. While this experiment does not declare a definitive winner, it demonstrates that GANs are a viable alternative in certain scenarios and may replace SNAPHU as the preferred unwrapping method.
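A conceptual sketch of the learning setup, not the authors' network: the GAN's generator maps a wrapped interferogram (phase confined to [-pi, pi)) to a continuous unwrapped phase map. The toy encoder-decoder below stands in for whatever generator architecture the paper used.

# Sketch: an image-to-image generator for phase unwrapping.
# Architecture, depth, and channel counts are illustrative assumptions.
import torch
import torch.nn as nn

class UnwrapGenerator(nn.Module):
    """Toy encoder-decoder standing in for the GAN generator."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1),
        )

    def forward(self, wrapped):
        return self.decoder(self.encoder(wrapped))

wrapped = torch.rand(1, 1, 256, 256) * 2 * torch.pi - torch.pi  # wrapped phase
unwrapped = UnwrapGenerator()(wrapped)  # continuous phase estimate

In training, a discriminator would push these estimates toward reference unwrappings (e.g., SNAPHU outputs), which is what makes the single forward pass at inference so much faster than iterative unwrapping.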
2024
Authors
Teixeira, I; Sousa, J; Cunha, A;
Publication
Procedia Computer Science
Abstract
Port wine plays a crucial role in the Douro region of Portugal, providing significant economic support and international recognition. The efficient and sustainable management of the wine sector is of utmost importance, which includes the verification of abandoned vineyard plots in the region, covering an area of approximately 250,000 hectares. The manual analysis of aerial images for this purpose is a laborious and resource-intensive task; however, several artificial intelligence (AI) methods are available to assist in this process. This paper presents the development of AI models, specifically deep learning models, for the automatic detection of abandoned vineyards in aerial images. A private image database was expanded into a larger collection of images of both abandoned and non-abandoned vineyards. Multiple AI algorithms, including Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs), were explored for classification. The ViT approach in particular demonstrated the effectiveness of automatic detection, reaching an accuracy of 99.37% and an F1-score of 98.92%. The proposed AI models provide valuable tools for monitoring and decision-making related to vineyard abandonment.
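A hedged sketch of the ViT classification setup: fine-tuning a pretrained Vision Transformer with a binary head for abandoned vs. non-abandoned plots. The ViT-B/16 backbone, label encoding, and hyperparameters are assumptions, not the authors' exact configuration.

# Sketch: fine-tuning a pretrained ViT for binary vineyard classification.
import torch
import torch.nn as nn
from torchvision import models

model = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)
model.heads.head = nn.Linear(model.heads.head.in_features, 2)  # abandoned / not

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

images = torch.randn(4, 3, 224, 224)  # placeholder aerial image patches
labels = torch.tensor([0, 1, 0, 1])   # 1 = abandoned (assumed encoding)

model.train()
optimizer.zero_grad()
loss = criterion(model(images), labels)  # one fine-tuning step
loss.backward()
optimizer.step()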