2023
Authors
Mendes, J; Pereira, T; Silva, F; Frade, J; Morgado, J; Freitas, C; Negrao, E; de Lima, BF; da Silva, MC; Madureira, AJ; Ramos, I; Costa, JL; Hespanhol, V; Cunha, A; Oliveira, HP;
Publication
EXPERT SYSTEMS WITH APPLICATIONS
Abstract
Biomedical engineering has been targeted as a potential research candidate for machine learning applications, with the purpose of detecting or diagnosing pathologies. However, acquiring relevant, high-quality, and heterogeneous medical datasets is challenging due to privacy and security issues and the effort required to annotate the data. Generative models have recently gained growing interest in the computer vision field due to their ability to increase dataset size by generating new high-quality samples from the initial set, which can be used as data augmentation of a training dataset. This study aimed to synthesize artificial lung images from corresponding positional and semantic annotations using two generative adversarial networks and databases of real computed tomography (CT) scans: the Pix2Pix approach, which generates lung images from lung segmentation maps; and the conditional generative adversarial network (cGAN) approach, which was implemented with additional semantic labels in the generation process. To evaluate the quality of the generated images, two quantitative measures were used: the domain-specific Fréchet Inception Distance and the Structural Similarity Index. Additionally, an expert assessment was performed to measure the capability to distinguish between real and generated images. The assessment shows the high quality of the synthesized images, which was confirmed by the expert evaluation. This work represents an innovative application of GAN approaches to medical imaging, taking into consideration the pathological findings in the CT images and using clinical evaluation to assess the realism of these features in the generated images.
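Since the abstract describes conditioning image generation on segmentation maps, a minimal Pix2Pix-style sketch is given below. The network depths, channel widths, and 2D single-channel inputs are assumptions for illustration only, not the authors' implementation.

```python
# Minimal sketch of a Pix2Pix-style generator/discriminator pair, assuming
# 1-channel segmentation maps as input and 1-channel CT slices as output.
# Layer counts and channel widths are illustrative, not the paper's configuration.
import torch
import torch.nn as nn

class UNetGenerator(nn.Module):
    """Maps a lung segmentation map to a synthetic CT slice."""
    def __init__(self, in_ch=1, out_ch=1, base=64):
        super().__init__()
        self.down1 = nn.Sequential(nn.Conv2d(in_ch, base, 4, 2, 1), nn.LeakyReLU(0.2))
        self.down2 = nn.Sequential(nn.Conv2d(base, base * 2, 4, 2, 1),
                                   nn.BatchNorm2d(base * 2), nn.LeakyReLU(0.2))
        self.up1 = nn.Sequential(nn.ConvTranspose2d(base * 2, base, 4, 2, 1),
                                 nn.BatchNorm2d(base), nn.ReLU())
        # Skip connection: encoder features are concatenated before the final upsampling.
        self.up2 = nn.Sequential(nn.ConvTranspose2d(base * 2, out_ch, 4, 2, 1), nn.Tanh())

    def forward(self, seg):
        d1 = self.down1(seg)
        d2 = self.down2(d1)
        u1 = self.up1(d2)
        return self.up2(torch.cat([u1, d1], dim=1))

class PatchDiscriminator(nn.Module):
    """Judges (segmentation map, CT slice) pairs patch by patch."""
    def __init__(self, in_ch=2, base=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(base, 1, 4, 1, 1))  # one real/fake score per patch

    def forward(self, seg, img):
        return self.net(torch.cat([seg, img], dim=1))

if __name__ == "__main__":
    seg = torch.randn(2, 1, 256, 256)      # toy segmentation maps
    gen, disc = UNetGenerator(), PatchDiscriminator()
    fake_ct = gen(seg)
    print(fake_ct.shape, disc(seg, fake_ct).shape)
```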
2023
Authors
Lopes, C; Vilaca, A; Rocha, C; Mendes, J;
Publication
PHYSICAL AND ENGINEERING SCIENCES IN MEDICINE
Abstract
The knee is one of the most stressed joints of the human body, being susceptible to ligament injuries and degenerative diseases. Due to the rising incidence of knee pathologies, the number of knee X-rays acquired is also increasing. Such X-rays are obtained for the diagnosis of knee injuries, the evaluation of the knee before and after surgery, and the monitoring of the knee joint's stability. These types of diagnosis and monitoring of the knee usually involve radiography under physical stress. This widely used medical tool provides a more objective measurement of knee laxity than a physical examination based on knee stress tests, such as the valgus, varus, and Lachman tests. Despite being an improvement over physical examination with respect to the physician's bias, stress radiography is still performed manually in many healthcare facilities. To avoid exposing the physician to radiation and to decrease the number of X-ray images rejected due to inadequate positioning of the patient or the presence of artefacts, positioning systems for stress radiography of the knee have been developed. This review analyses knee positioning systems for the X-ray environment, concluding that they have improved objectivity and reproducibility during stress radiographs, but have failed to be either radiolucent or versatile with a simple ergonomic set-up.
2023
Authors
Sousa, JV; Matos, P; Silva, F; Freitas, P; Oliveira, HP; Pereira, T;
Publication
SENSORS
Abstract
In a clinical context, physicians usually take into account information from more than one data modality when making decisions regarding cancer diagnosis and treatment planning. Artificial intelligence-based methods should mimic the clinical method and take into consideration different sources of data that allow a more comprehensive analysis of the patient and, as a consequence, a more accurate diagnosis. Lung cancer evaluation, in particular, can benefit from this approach, since this pathology presents high mortality rates due to its late diagnosis. However, many related works make use of a single data source, namely imaging data. Therefore, this work aims to study the prediction of lung cancer using more than one data modality. The National Lung Screening Trial dataset, which contains data from different sources, specifically computed tomography (CT) scans and clinical data, was used for the development and comparison of single-modality and multimodality models that explore the predictive capability of these two types of data to their full potential. A ResNet18 network was trained to classify 3D CT nodule regions of interest (ROIs), whereas a random forest algorithm was used to classify the clinical data, with the former achieving an area under the ROC curve (AUC) of 0.7897 and the latter 0.5241. Regarding the multimodality approaches, three strategies, based on intermediate and late fusion, were implemented to combine the information from the 3D CT nodule ROIs and the clinical data. Of those, the best model, a fully connected layer that receives as input a combination of clinical data and deep imaging features given by a ResNet18 inference model, presented an AUC of 0.8021. Lung cancer is a complex disease, characterized by a multitude of biological and physiological phenomena and influenced by multiple factors, and it is thus imperative that the models are capable of responding to that complexity. The results obtained showed that the combination of different types of data may have the potential to produce more comprehensive analyses of the disease.
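As a concrete illustration of the intermediate-fusion strategy described above (deep imaging features concatenated with clinical variables and fed to a fully connected classifier), a minimal sketch follows. The feature dimensions, hidden layer, and toy inputs are assumptions, not the paper's exact configuration.

```python
# Sketch of intermediate fusion: pooled CNN features from a 3D nodule ROI are
# concatenated with tabular clinical variables and classified by a small
# fully connected head. Dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, img_feat_dim=512, clinical_dim=10, hidden=64):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(img_feat_dim + clinical_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1))  # single malignancy logit

    def forward(self, img_features, clinical):
        fused = torch.cat([img_features, clinical], dim=1)
        return self.head(fused)

if __name__ == "__main__":
    img_features = torch.randn(4, 512)   # e.g. pooled deep features of the nodule ROI
    clinical = torch.randn(4, 10)        # e.g. age, smoking history, nodule size...
    logits = FusionClassifier()(img_features, clinical)
    print(torch.sigmoid(logits).shape)   # one probability per patient
```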
2023
Authors
Pereira, SC; Rocha, J; Campilho, A; Sousa, P; Mendonca, AM;
Publication
COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE
Abstract
Background and Objective: Convolutional neural networks are widely used to detect radiological findings in chest radiographs. Standard architectures are optimized for images of relatively small size (for example, 224 x 224 pixels), which suffices for most application domains. However, in medical imaging, larger inputs are often necessary to analyze disease patterns. A single scan can display multiple types of radiological findings varying greatly in size, and most models do not explicitly account for this. For a given network, whose layers have fixed-size receptive fields, smaller input images result in coarser features, which better characterize larger objects in an image. In contrast, larger inputs result in finer-grained features, beneficial for the analysis of smaller objects. By compromising to a single resolution, existing frameworks fail to acknowledge that the ideal input size will not necessarily be the same for classifying every pathology of a scan. The goal of our work is to address this shortcoming by proposing a lightweight framework for multi-scale classification of chest radiographs, where finer and coarser features are combined in a parameter-efficient fashion. Methods: We experiment on CheXpert, a large chest X-ray database. A lightweight multi-resolution (224 x 224, 448 x 448 and 896 x 896 pixels) network is developed based on a DenseNet-121 model where batch normalization layers are replaced with the proposed size-specific batch normalization. Each input size undergoes batch normalization with dedicated scale and shift parameters, while the remaining parameters are shared across sizes. Additional external validation of the proposed approach is performed on the VinDr-CXR data set. Results: The proposed approach (AUC 83.27 +/- 0.17, 7.1M parameters) outperforms standard single-scale models (AUC 81.76 +/- 0.18, 82.62 +/- 0.11 and 82.39 +/- 0.13 for input sizes 224 x 224, 448 x 448 and 896 x 896, respectively, 6.9M parameters). It also achieves a performance similar to an ensemble of one individual model per scale (AUC 83.27 +/- 0.11, 20.9M parameters), while relying on significantly fewer parameters. The model leverages features of different granularities, resulting in a more accurate classification of all findings, regardless of their size, highlighting the advantages of this approach. Conclusions: Different chest X-ray findings are better classified at different scales. Our study shows that multi-scale features can be obtained with nearly no additional parameters, boosting performance.
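The size-specific batch normalization idea can be sketched as follows: convolution weights are shared across resolutions, while each input size indexes its own BatchNorm2d (dedicated scale/shift and running statistics). The toy block below assumes three scales and is illustrative only, not the paper's DenseNet-121 implementation.

```python
# Sketch of size-specific batch normalization: shared convolutions, one
# BatchNorm2d per input resolution. Channel widths are illustrative.
import torch
import torch.nn as nn

class SizeSpecificBatchNorm(nn.Module):
    def __init__(self, channels, num_sizes=3):
        super().__init__()
        self.norms = nn.ModuleList([nn.BatchNorm2d(channels) for _ in range(num_sizes)])

    def forward(self, x, size_idx):
        return self.norms[size_idx](x)  # pick the normalization of the current scale

class SharedConvBlock(nn.Module):
    def __init__(self, in_ch=3, out_ch=32, num_sizes=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)  # parameters shared across sizes
        self.bn = SizeSpecificBatchNorm(out_ch, num_sizes)   # per-scale parameters
        self.act = nn.ReLU()

    def forward(self, x, size_idx):
        return self.act(self.bn(self.conv(x), size_idx))

if __name__ == "__main__":
    block = SharedConvBlock()
    for idx, side in enumerate((224, 448, 896)):
        x = torch.randn(2, 3, side, side)
        print(side, block(x, idx).shape)
```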
2023
Authors
Melo, T; Carneiro, A; Campilho, A; Mendonca, AM;
Publication
JOURNAL OF MEDICAL IMAGING
Abstract
Purpose: The development of accurate methods for retinal layer and fluid segmentation in optical coherence tomography images can help ophthalmologists in the diagnosis and follow-up of retinal diseases. Recent works based on joint segmentation presented good results for the segmentation of most retinal layers, but the fluid segmentation results are still not satisfactory. We report a hierarchical framework that starts by distinguishing the retinal zone from the background, then separates the fluid-filled regions from the rest, and finally discriminates the several retinal layers. Approach: Three fully convolutional networks were trained sequentially. The weighting scheme used for computing the loss function during training is derived from the outputs of the networks trained previously. To reinforce the relative position between retinal layers, the mutex Dice loss (included for optimizing the last network) was further modified so that errors between more distant layers are more penalized. The method's performance was evaluated using a public dataset. Results: The proposed hierarchical approach outperforms previous works in the segmentation of the inner segment ellipsoid layer and fluid (Dice coefficient = 0.95 and 0.82, respectively). The results achieved for the remaining layers are at a state-of-the-art level. Conclusions: The proposed framework led to significant improvements in fluid segmentation, without compromising the results in the retinal layers. Thus, its output can be used by ophthalmologists as a second opinion or as input for the automatic extraction of relevant quantitative biomarkers.
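One way to read the distance-weighted mutual-exclusion constraint described above is as a pairwise overlap penalty whose weight grows with the separation between layers in the anatomical stacking order. The sketch below encodes that reading; it is an assumption for illustration, not the exact mutex Dice formulation used in the paper.

```python
# Hedged sketch of a distance-weighted mutual-exclusion penalty: soft overlap
# between two class probability maps is penalized more when the classes are
# further apart in the anatomical ordering of the channels.
import torch

def distance_weighted_overlap(probs, eps=1e-6):
    """probs: (B, C, H, W) softmax output, channels ordered by anatomical depth."""
    num_classes = probs.shape[1]
    loss = probs.new_zeros(())
    for i in range(num_classes):
        for j in range(i + 1, num_classes):
            weight = float(j - i)                      # more distant pairs weigh more
            inter = (probs[:, i] * probs[:, j]).sum()  # soft overlap between classes
            denom = probs[:, i].sum() + probs[:, j].sum() + eps
            loss = loss + weight * 2.0 * inter / denom
    return loss / (num_classes * (num_classes - 1) / 2)

if __name__ == "__main__":
    logits = torch.randn(2, 5, 64, 64)                 # toy: 5 retinal classes
    penalty = distance_weighted_overlap(logits.softmax(dim=1))
    print(penalty.item())
```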
2022
Authors
Sousa, J; Pereira, T; Silva, F; Silva, MC; Vilares, AT; Cunha, A; Oliveira, HP;
Publication
APPLIED SCIENCES-BASEL
Abstract
Lung cancer is one of the most common causes of cancer-related mortality, and since the majority of cases are diagnosed when the tumor is in an advanced stage, the 5-year survival rate is dismally low. Nevertheless, the chances of survival can increase if the tumor is identified early on, which can be achieved through screening with computed tomography (CT). The clinical evaluation of CT images is a very time-consuming task, and computer-aided diagnosis systems can help reduce this burden. The segmentation of the lungs is usually the first step taken in automatic image analysis models of the thorax. However, this task is very challenging, since the lungs present high variability in shape and size. Moreover, the co-occurrence of other respiratory comorbidities alongside lung cancer is frequent, and each pathology can present its own scope of CT imaging appearances. This work investigated the development of a deep learning model whose architecture combines two structures, a U-Net and a ResNet34. The proposed model was designed on a cross-cohort dataset and achieved a mean Dice similarity coefficient (DSC) higher than 0.93 for the four different cohorts tested. The segmentation masks were qualitatively evaluated by two experienced radiologists to identify the main limitations of the developed model, despite the good overall performance obtained. The performance per pathology was also assessed, and the results confirmed a small degradation for consolidation and pneumocystis pneumonia cases, with a DSC of 0.9015 +/- 0.2140 and 0.8750 +/- 0.1290, respectively. This work represents a relevant assessment of the lung segmentation model, taking into consideration the pathological cases that can be found in clinical routine, since a global assessment alone could not detail the fragilities of the model.
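A minimal sketch of a U-Net with a ResNet34 encoder follows, here assembled with the segmentation_models_pytorch library for brevity, together with a soft Dice coefficient for evaluation. The input size, channel count, and threshold are assumptions, since the abstract does not specify the implementation details.

```python
# Sketch of a U-Net with a ResNet34 encoder for lung segmentation, built with
# segmentation_models_pytorch; configuration values are illustrative assumptions.
import torch
import segmentation_models_pytorch as smp

def dice_coefficient(pred, target, eps=1e-6):
    """Soft Dice between a binary prediction and a ground-truth lung mask."""
    inter = (pred * target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

if __name__ == "__main__":
    model = smp.Unet(encoder_name="resnet34", encoder_weights=None,
                     in_channels=1, classes=1)        # 1-channel CT slice in, lung mask out
    ct_slice = torch.randn(2, 1, 256, 256)            # toy batch of CT slices
    mask = (torch.sigmoid(model(ct_slice)) > 0.5).float()
    target = torch.randint(0, 2, (2, 1, 256, 256)).float()
    print(dice_coefficient(mask, target).item())
```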