Details
Name
Aurélio Campilho
Position
Affiliated Researcher
Since
01 January 2014
Nationality
Portugal
Contacts
+351222094106
aurelio.campilho@inesctec.pt
2023
Authors
Pereira, SC; Rocha, J; Campilho, A; Sousa, P; Mendonca, AM;
Publication
COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE
Abstract
Background and Objective: Convolutional neural networks are widely used to detect radiological findings in chest radiographs. Standard architectures are optimized for images of relatively small size (for example, 224 x 224 pixels), which suffices for most application domains. However, in medical imaging, larger inputs are often necessary to analyze disease patterns. A single scan can display multiple types of radiological findings varying greatly in size, and most models do not explicitly account for this. For a given network, whose layers have fixed-size receptive fields, smaller input images result in coarser features, which better characterize larger objects in an image. In contrast, larger inputs result in finer-grained features, beneficial for the analysis of smaller objects. By compromising to a single resolution, existing frameworks fail to acknowledge that the ideal input size will not necessarily be the same for classifying every pathology of a scan. The goal of our work is to address this shortcoming by proposing a lightweight framework for multi-scale classification of chest radiographs, where finer and coarser features are combined in a parameter-efficient fashion. Methods: We experiment on CheXpert, a large chest X-ray database. A lightweight multi-resolution (224 x 224, 448 x 448 and 896 x 896 pixels) network is developed based on a Densenet-121 model where batch normalization layers are replaced with the proposed size-specific batch normalization. Each input size undergoes batch normalization with dedicated scale and shift parameters, while the remaining parameters are shared across sizes. Additional external validation of the proposed approach is performed on the VinDr-CXR data set. Results: The proposed approach (AUC 83.27 +/- 0.17, 7.1M parameters) outperforms standard single-scale models (AUC 81.76 +/- 0.18, 82.62 +/- 0.11 and 82.39 +/- 0.13 for input sizes 224 x 224, 448 x 448 and 896 x 896, respectively, 6.9M parameters). It also achieves a performance similar to an ensemble of one individual model per scale (AUC 83.27 +/- 0.11, 20.9M parameters), while relying on significantly fewer parameters. The model leverages features of different granularities, resulting in a more accurate classification of all findings, regardless of their size, highlighting the advantages of this approach. Conclusions: Different chest X-ray findings are better classified at different scales. Our study shows that multi-scale features can be obtained with nearly no additional parameters, boosting performance. (c) 2023 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)
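For illustration, the size-specific batch normalization described above can be sketched as a small PyTorch module that keeps one set of normalization parameters per input resolution while every other weight in the network stays shared. This is a minimal sketch under that assumption; class and argument names are illustrative and do not come from the paper's code.

import torch
import torch.nn as nn

class SizeSpecificBatchNorm2d(nn.Module):
    """BatchNorm2d with dedicated scale/shift (and running statistics)
    per input resolution; all other network weights remain shared."""

    def __init__(self, num_features, sizes=(224, 448, 896)):
        super().__init__()
        self.bns = nn.ModuleDict(
            {str(s): nn.BatchNorm2d(num_features) for s in sizes}
        )

    def forward(self, x, input_size):
        # Route activations through the norm layer matching the
        # resolution the image entered the network with.
        return self.bns[str(input_size)](x)

# Usage sketch: the same (shared) convolution is applied at every scale,
# only the normalization parameters differ per resolution.
conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
norm = SizeSpecificBatchNorm2d(16)
for size in (224, 448, 896):
    x = torch.randn(2, 3, size, size)
    y = norm(conv(x), input_size=size)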
2023
Authors
Melo, T; Carneiro, A; Campilho, A; Mendonca, AM;
Publication
JOURNAL OF MEDICAL IMAGING
Abstract
Purpose: The development of accurate methods for retinal layer and fluid segmentation in optical coherence tomography images can help ophthalmologists in the diagnosis and follow-up of retinal diseases. Recent works based on joint segmentation presented good results for the segmentation of most retinal layers, but the fluid segmentation results are still not satisfactory. We report a hierarchical framework that starts by distinguishing the retinal zone from the background, then separates the fluid-filled regions from the rest, and finally discriminates the several retinal layers. Approach: Three fully convolutional networks were trained sequentially. The weighting scheme used for computing the loss function during training is derived from the outputs of the networks trained previously. To reinforce the relative position between retinal layers, the mutex Dice loss (included for optimizing the last network) was further modified so that errors between more distant layers are more penalized. The method's performance was evaluated using a public dataset. Results: The proposed hierarchical approach outperforms previous works in the segmentation of the inner segment ellipsoid layer and fluid (Dice coefficient = 0.95 and 0.82, respectively). The results achieved for the remaining layers are at a state-of-the-art level. Conclusions: The proposed framework led to significant improvements in fluid segmentation, without compromising the results in the retinal layers. Thus, its output can be used by ophthalmologists as a second opinion or as input for automatic extraction of relevant quantitative biomarkers.
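As a rough illustration of the distance-aware mutual-exclusion idea mentioned above, the sketch below penalizes overlap between the predicted probabilities of mutually exclusive classes, with a weight that grows with the distance between layers in the anatomical ordering. The weighting scheme and function name are assumptions for illustration only, not the exact formulation used in the paper.

import torch

def distance_weighted_mutex_loss(probs):
    """probs: (B, C, H, W) softmax output, with classes ordered by
    anatomical depth (layer 0 on top, layer C-1 at the bottom).
    Overlap between two classes is penalized proportionally to how
    far apart those classes are in that ordering."""
    _, num_classes, _, _ = probs.shape
    loss = probs.new_zeros(())
    for i in range(num_classes):
        for j in range(i + 1, num_classes):
            weight = j - i  # more distant layer pairs -> heavier penalty
            # the product is large only where both classes are predicted
            loss = loss + weight * (probs[:, i] * probs[:, j]).mean()
    return loss

# Usage sketch: added to a standard segmentation term, e.g.
# total_loss = dice_loss + lambda_mutex * distance_weighted_mutex_loss(probs)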
2023
Authors
Belo, RM; Rocha, J; Mendonca, AM; Campilho, A;
Publication
FIFTEENTH INTERNATIONAL CONFERENCE ON MACHINE VISION, ICMV 2022
Abstract
Deep Learning (DL) algorithms allow fast results with high accuracy in medical imaging analysis solutions. However, to achieve a desirable performance, they require large amounts of high-quality data. Active Learning (AL) is a subfield of DL that aims for more efficient models ideally requiring fewer data, by selecting the most relevant information for training. CheXpert is a Chest X-Ray (CXR) dataset containing labels for different pathologic findings, alongside a Support Devices (SD) label. The latter contains several misannotations, which may impact the performance of a pathology detection model. The aim of this work is the detection of SDs in CheXpert CXR images and the comparison of the resulting predictions with the original CheXpert SD annotations, using AL approaches. A subset of 10,220 images was selected, manually annotated for SDs and used in the experiments. In the first experiment, an initial model was trained on the seed dataset (6,200 images from this subset). The second and third approaches consisted of AL random sampling and least-confidence techniques. In both of these, the seed dataset was used initially, and more images were iteratively employed. Finally, in the fourth experiment, a model was trained on the full annotated set. The AL least-confidence experiment outperformed the remaining approaches, presenting an AUC of 71.10% and showing that training a model with representative information is preferable to training with all labeled data. This model was used to obtain predictions, which can be useful to limit the use of SD-mislabelled images in future models.
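A minimal sketch of the least-confidence query step used in the third experiment, assuming a binary support-device classifier and a data loader that yields (image batch, image id) pairs; the function name and loader interface are hypothetical.

import torch

def least_confidence_selection(model, pool_loader, k, device="cpu"):
    """Pick the k unlabeled pool images the (binary) model is least sure about."""
    model.eval()
    confidences, image_ids = [], []
    with torch.no_grad():
        for images, ids in pool_loader:
            p = torch.sigmoid(model(images.to(device))).squeeze(1)
            # for a binary output, confidence is the distance from 0.5
            conf = torch.max(p, 1.0 - p)
            confidences.append(conf.cpu())
            image_ids.append(ids)
    confidences = torch.cat(confidences)
    image_ids = torch.cat(image_ids)
    # lowest-confidence images are queued for annotation first
    return image_ids[torch.argsort(confidences)[:k]]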
2023
Authors
Costa, M; Pereira, SC; Pedrosa, J; Mendonca, AM; Campilho, A;
Publication
2023 IEEE 7TH PORTUGUESE MEETING ON BIOENGINEERING, ENBENG
Abstract
Chest radiography is one of the most common imaging exams, but its interpretation is often challenging and time-consuming, which has motivated the development of automated tools for pathology/abnormality detection. Deep learning models trained on large-scale chest X-ray datasets have shown promising results but are highly dependent on the quality of the data. However, these datasets often contain incorrect metadata and non-compliant or corrupted images. These inconsistencies are ultimately incorporated in the training process, impairing the validity of the results. In this study, a novel approach to detect non-compliant images, based on deep features extracted from a patient position classification model and from a pre-trained VGG16 model, is proposed. This method is applied to CheXpert, a widely used public dataset. From a pool of 100 images, it is shown that the deep feature-based methods based on a patient position classification model are able to retrieve a larger number of non-compliant images (up to 81% of non-compliant images), when compared to the same methods based on a pre-trained VGG16 (up to 73%) and the state-of-the-art uncertainty-based method (50%).
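One plausible sketch of the deep-feature variant based on a pre-trained VGG16: pooled features are extracted from the backbone and candidate images are ranked by their cosine distance to the mean feature of a small set of known-compliant scans. The reference-set construction and the distance measure are assumptions, not the paper's exact procedure.

import torch
import torch.nn.functional as F
import torchvision.models as models

# Pre-trained VGG16 convolutional stages used as a fixed feature extractor
backbone = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()

def deep_features(images):
    """images: (B, 3, 224, 224) -> pooled feature vectors (B, 512)."""
    with torch.no_grad():
        fmap = backbone(images)                      # (B, 512, 7, 7)
        return F.adaptive_avg_pool2d(fmap, 1).flatten(1)

def rank_by_noncompliance(candidate_imgs, compliant_imgs):
    """Higher score = further from the compliant reference = more suspect."""
    reference = deep_features(compliant_imgs).mean(dim=0, keepdim=True)
    feats = deep_features(candidate_imgs)
    scores = 1.0 - F.cosine_similarity(feats, reference)
    return torch.argsort(scores, descending=True)    # most suspect first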
2023
Authors
Brioso, RC; Pedrosa, J; Mendonca, AM; Campilho, A;
Publication
Proceedings - IEEE Symposium on Computer-Based Medical Systems
Abstract
The importance of X-ray imaging analysis is paramount for health care institutions, since it is the main imaging modality for patient diagnosis, and deep learning can be used to aid clinicians in image diagnosis or structure segmentation. In recent years, several articles have demonstrated the capability of deep learning models to classify and segment chest X-ray images when trained on an annotated dataset. Unfortunately, for segmentation tasks, only a few relatively small datasets have annotations, which poses a problem for the training of robust deep learning strategies. In this work, a semi-supervised approach is developed which consists of using available information regarding other anatomical structures to guide the segmentation when the ground-truth segmentation for a given structure is not available. This semi-supervised approach is compared with a fully supervised approach for the tasks of lung segmentation and multi-structure segmentation (lungs, heart and clavicles) in chest X-ray images. The semi-supervised lung predictions are evaluated visually and show relevant improvements, therefore this approach could be used to improve performance on external datasets with missing ground truth. The multi-structure predictions show an improvement in mean absolute and Hausdorff distances when compared to a fully supervised approach, and visual analysis of the segmentations shows that false-positive predictions are removed. In conclusion, the developed method results in a new strategy that can help solve the problem of missing annotations and increase the quality of predictions on new datasets. © 2023 IEEE.
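A minimal sketch of one way missing annotations can be handled in such a semi-supervised setting: the loss for a structure is simply masked out for images where that structure has no ground truth, so the remaining annotated structures still drive the training. The masking scheme below is an assumed illustration, not the implementation described in the paper.

import torch
import torch.nn.functional as F

def masked_multistructure_loss(logits, targets, available):
    """logits, targets: (B, C, H, W); available: (B, C) boolean mask,
    True where the ground truth for that structure exists.
    Structures without annotations contribute nothing to the loss."""
    per_pixel = F.binary_cross_entropy_with_logits(
        logits, targets.float(), reduction="none"
    )                                         # (B, C, H, W)
    per_class = per_pixel.mean(dim=(2, 3))    # (B, C)
    mask = available.float()
    return (per_class * mask).sum() / mask.sum().clamp(min=1.0)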
Supervised theses
2022
Author
Joana Maria Neves da Rocha
Institution
UP-FEUP
2022
Author
José Ricardo Ferreira de Castro Ramos
Institution
UP-FEUP
2022
Author
Tânia Filipa Fernandes de Melo
Institution
UP-FEUP
2022
Author
Sofia Perestrelo de Vasconcelos Cardoso Pereira
Institution
UP-FCUP
2022
Author
Carlos Alexandre Nunes Ferreira
Institution
UP-FEUP