2021
Authors
Pedrosa, J; Aresta, G; Ferreira, C; Mendonca, A; Campilho, A;
Publication
PROCEEDINGS OF THE 15TH INTERNATIONAL JOINT CONFERENCE ON BIOMEDICAL ENGINEERING SYSTEMS AND TECHNOLOGIES (BIOIMAGING), VOL 2
Abstract
Chest radiography is one of the most ubiquitous medical imaging exams used for the diagnosis and follow-up of a wide array of pathologies. However, chest radiography analysis is time-consuming and often challenging, even for experts. This has led to the development of numerous automatic solutions for multipathology detection in chest radiography, particularly after the advent of deep learning. However, the black-box nature of deep learning solutions, together with the inherent class imbalance of medical imaging problems, often leads to weak generalization capabilities, with models learning features based on spurious correlations such as the aspect and position of laterality, patient-position, equipment and hospital markers. In this study, an automatic method based on the YOLOv3 framework was therefore developed for the detection of markers and written labels in chest radiography images. It is shown that this model successfully detects a large proportion of markers in chest radiography, even in datasets different from the training source, with a low rate of false positives per image. As such, this method could be used to automatically obscure markers in large datasets, so that more generic and meaningful features can be learned, thus improving classification performance and robustness.
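A minimal sketch of the obscuration step described above, assuming bounding boxes have already been produced by a YOLO-style marker detector (the detector itself is not reproduced here); the obscure_markers helper and the box format are illustrative assumptions, not the authors' implementation.

import numpy as np

def obscure_markers(image: np.ndarray, boxes) -> np.ndarray:
    """Black out each (x_min, y_min, x_max, y_max) marker region in a grayscale chest X-ray."""
    cleaned = image.copy()
    for x_min, y_min, x_max, y_max in boxes:
        # Replace the detected marker/label region with a constant background value.
        cleaned[y_min:y_max, x_min:x_max] = 0
    return cleaned

# Example usage with a dummy image and one hypothetical detection box.
xray = np.random.randint(0, 255, (1024, 1024), dtype=np.uint8)
print(obscure_markers(xray, [(50, 40, 180, 90)]).shape)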
2022
Authors
Rocha, J; Pereira, SC; Pedrosa, J; Campilho, A; Mendonca, AM;
Publication
2022 IEEE 35TH INTERNATIONAL SYMPOSIUM ON COMPUTER-BASED MEDICAL SYSTEMS (CBMS)
Abstract
Backed by more powerful computational resources and optimized training routines, deep learning models have attained unprecedented performance in extracting information from chest X-ray data. Preceding other tasks, an automated abnormality detection stage can be useful to prioritize certain exams and enable a more efficient clinical workflow. However, the presence of image artifacts such as lettering often generates a harmful bias in the classifier, leading to an increase in false-positive results. Consequently, healthcare would benefit from a system that selects the thoracic region of interest prior to deciding whether an image is possibly pathologic. The current work tackles this binary classification exercise using an attention-driven and spatially unsupervised Spatial Transformer Network (STN). The results indicate that the STN achieves results similar to those obtained with YOLO-cropped images, at a lower computational cost and without the need for localization labels. More specifically, the system is able to distinguish between normal and abnormal CheXpert images with a mean AUC of 84.22%.
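A minimal PyTorch sketch of the spatial transformer idea, assuming a single-channel input and illustrative layer sizes; this is not the authors' exact architecture, only the standard STN mechanism (a localization network regressing an affine transform, applied with affine_grid and grid_sample) that selects the thoracic region before classification.

import torch
import torch.nn as nn
import torch.nn.functional as F

class STNCropper(nn.Module):
    def __init__(self):
        super().__init__()
        # Localization network: regresses 6 affine parameters from the input image.
        self.localization = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=7), nn.MaxPool2d(2), nn.ReLU(),
            nn.Conv2d(8, 10, kernel_size=5), nn.MaxPool2d(2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(10 * 4 * 4, 32), nn.ReLU(), nn.Linear(32, 6),
        )
        # Initialize to the identity transform so training starts from the full image.
        self.localization[-1].weight.data.zero_()
        self.localization[-1].bias.data.copy_(torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))

    def forward(self, x):
        theta = self.localization(x).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)  # warped/cropped region of interest

# Example: warp a batch of 224x224 single-channel chest X-rays; the output would feed the classifier.
print(STNCropper()(torch.randn(2, 1, 224, 224)).shape)  # torch.Size([2, 1, 224, 224])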
2021
Authors
Remeseiro, B; Mendonca, AM; Campilho, A;
Publication
VISUAL COMPUTER
Abstract
Several systemic diseases affect the retinal blood vessels, and thus their assessment allows an accurate clinical diagnosis. This assessment entails the estimation of the arteriolar-to-venular ratio (AVR), a predictive biomarker of cerebral atrophy and cardiovascular events in adults. In this context, different automatic and semiautomatic image-based approaches for artery/vein (A/V) classification and AVR estimation have been proposed in the literature, to the point of having become a hot research topic in the last decades. Most of these approaches use a wide variety of image properties, often redundant and/or irrelevant, requiring a training process that limits their generalization ability when applied to other datasets. This paper presents a new automatic method for A/V classification that uses only the local contrast between blood vessels and their surrounding background, computes a graph that represents the vascular structure, and applies multilevel thresholding to obtain a preliminary classification. A novel graph propagation approach is then applied to obtain the final A/V classification and to compute the AVR. Our approach has been tested on two public datasets (INSPIRE and DRIVE), obtaining high classification accuracy rates, especially in the main vessels, and AVR ratios very similar to those provided by human experts. Therefore, our fully automatic method provides reliable results without any training step, which makes it suitable for use with different retinal image datasets and as part of any clinical routine.
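A minimal sketch of the multilevel-thresholding step only, assuming a local contrast value is already available for each vessel segment; the synthetic contrast values and the mapping of the resulting groups to preliminary artery/uncertain/vein labels are illustrative assumptions, and the graph construction and propagation stages are not reproduced here.

import numpy as np
from skimage.filters import threshold_multiotsu

rng = np.random.default_rng(0)
# Hypothetical local contrast measured for each vessel segment against its background.
segment_contrast = np.concatenate([rng.normal(0.2, 0.05, 40),
                                   rng.normal(0.5, 0.05, 20),
                                   rng.normal(0.8, 0.05, 40)])

# Two multi-Otsu thresholds split the contrast values into three preliminary groups.
thresholds = threshold_multiotsu(segment_contrast.reshape(1, -1), classes=3)
preliminary_labels = np.digitize(segment_contrast, bins=thresholds)  # 0, 1, 2 per segment
print(thresholds, np.bincount(preliminary_labels))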
2022
Authors
Penas, S; Araujo, T; Mendonca, AM; Faria, S; Silva, J; Campilho, A; Martins, ML; Sousa, V; Rocha Sousa, A; Carneiro, A; Falcao Reis, F;
Publication
GRAEFES ARCHIVE FOR CLINICAL AND EXPERIMENTAL OPHTHALMOLOGY
Abstract
Purpose This study aims to investigate retinal and choroidal vascular reactivity to carbogen in central serous chorioretinopathy (CSC) patients. Methods An experimental pilot study including 68 eyes from 20 CSC patients and 14 age- and sex-matched controls was performed. The participants inhaled carbogen (5% CO2 + 95% O2) for 2 min through a high-concentration disposable mask. 30° disc-centered fundus imaging using infra-red (IR) and macular spectral-domain optical coherence tomography (SD-OCT) with the enhanced depth imaging (EDI) technique were performed, both at baseline and after the 2-min gas exposure. A parametric model fitting-based approach for automatic retinal blood vessel caliber estimation was used to assess the mean variation in both the arterial and venous vasculature. Choroidal thickness was measured in two different ways: the subfoveal choroidal thickness (SFCT) was calculated using a manual caliper, and the mean central choroidal thickness (MCCT) was assessed using automatic software. Results No significant differences were detected in baseline hemodynamic parameters between the two groups. A significant positive correlation was found between the participants' age and arterial diameter variation (p < 0.001, r = 0.447), meaning that younger participants presented a more vasoconstrictive response (negative variation) than older ones. No significant differences were detected in the vasoreactive response between CSC patients and controls for both arterial and venous vessels (p = 0.63 and p = 0.85, respectively). Although the vascular reactivity was not related to the activity of CSC, it was related to the time of disease, for both the arterial (p = 0.02, r = 0.381) and venous (p = 0.001, r = 0.530) beds. SFCT and MCCT were highly correlated (r = 0.830, p < 0.001). Both SFCT and MCCT significantly increased in CSC patients (p < 0.001 and p < 0.001) but not in controls (p = 0.059 and p = 0.247). A significant negative correlation between CSC patients' age and MCCT variation (r = -0.340, p = 0.049) was detected. In CSC patients, the choroidal thickness variation was not related to the activity state, time of disease, or previous photodynamic treatment. Conclusion Vasoreactivity to carbogen was similar in the retinal vessels but significantly higher in the choroidal vessels of CSC patients when compared to controls, strengthening the hypothesis of a choroidal regulation dysfunction in this pathology.
2026
Authors
Melo, M; Carneiro, A; Campilho, A; Mendonça, AM;
Publication
PATTERN RECOGNITION AND IMAGE ANALYSIS, IBPRIA 2025, PT II
Abstract
The segmentation of the foveal avascular zone (FAZ) in optical coherence tomography angiography (OCTA) images plays a crucial role in diagnosing and monitoring ocular diseases such as diabetic retinopathy (DR) and age-related macular degeneration (AMD). However, accurate FAZ segmentation remains challenging due to image quality and variability. This paper provides a comprehensive review of FAZ segmentation techniques, including traditional image processing methods and recent deep learning-based approaches. We propose two novel deep learning methodologies: a multitask learning framework that integrates vessel and FAZ segmentation, and a conditionally trained network that employs vessel-aware loss functions. The performance of the proposed methods was evaluated on the OCTA-500 dataset using the Dice coefficient, Jaccard index, 95% Hausdorff distance, and average symmetric surface distance. Experimental results demonstrate that the multitask segmentation framework outperforms existing state-of-the-art methods, achieving superior FAZ boundary delineation and segmentation accuracy. The conditionally trained network also improves upon standard U-Net-based approaches but exhibits limitations in refining the FAZ contours.
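A minimal PyTorch sketch of the multitask objective, assuming a network with two output heads (FAZ and vessel probability maps); the soft Dice formulation and the auxiliary-task weight are illustrative assumptions rather than the paper's exact loss.

import torch

def dice_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Soft Dice loss for probability maps of shape (B, 1, H, W)."""
    inter = (pred * target).sum(dim=(1, 2, 3))
    union = pred.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return (1 - (2 * inter + eps) / (union + eps)).mean()

def multitask_loss(faz_pred, vessel_pred, faz_gt, vessel_gt, vessel_weight: float = 0.5):
    # Joint objective: FAZ segmentation is the main task, while vessel segmentation acts
    # as an auxiliary task that encourages vessel-aware features around the FAZ boundary.
    return dice_loss(faz_pred, faz_gt) + vessel_weight * dice_loss(vessel_pred, vessel_gt)

# Example with random predictions/targets for a 2-image batch of 304x304 OCTA crops.
shape = (2, 1, 304, 304)
print(multitask_loss(torch.rand(shape), torch.rand(shape),
                     torch.randint(0, 2, shape).float(), torch.randint(0, 2, shape).float()))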
2025
Authors
Santos, R; Pedrosa, J; Mendonça, AM; Campilho, A;
Publication
COMPUTER VISION AND IMAGE UNDERSTANDING
Abstract
The increasing complexity of deep learning models demands explanations that can be obtained with methods like Grad-CAM. This method computes an importance map for the last convolutional layer relative to a specific class, which is then upsampled to match the size of the input. However, this final step assumes that there is a spatial correspondence between the last feature map and the input, which may not be the case. We hypothesize that, for models with large receptive fields, the feature spatial organization is not kept during the forward pass, which may render the explanations devoid of meaning. To test this hypothesis, common architectures were applied to a medical scenario on the public VinDr-CXR dataset, to a subset of ImageNet and to datasets derived from MNIST. The results show a significant dispersion of the spatial information, which goes against the assumption of Grad-CAM, and that explainability maps are affected by this dispersion. Furthermore, we discuss several other caveats regarding Grad-CAM, such as feature map rectification, empty maps and the impact of global average pooling or flatten layers. Altogether, this work addresses some key limitations of Grad-CAM that may go unnoticed by common users, taking one step further in the pursuit of more reliable explainability methods.
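A minimal PyTorch sketch of the standard Grad-CAM computation (not the paper's evaluation code), written to make the questioned step explicit: the class-specific importance map is computed at the resolution of the last convolutional layer and only then bilinearly upsampled to the input size, which presumes a spatial correspondence between that feature map and the input; the choice of resnet18 and of the target class is an illustrative assumption.

import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
acts, grads = {}, {}
model.layer4.register_forward_hook(lambda m, i, o: acts.update(a=o))
model.layer4.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))

x = torch.randn(1, 3, 224, 224)
score = model(x)[0, 0]           # logit of an arbitrary target class
score.backward()

weights = grads["g"].mean(dim=(2, 3), keepdim=True)           # channel weights: GAP of the gradients
cam = F.relu((weights * acts["a"]).sum(dim=1, keepdim=True))  # 7x7 importance map at layer4 resolution
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
print(cam.shape)  # torch.Size([1, 1, 224, 224]) -- the upsampled explanation map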