2022
Authors
Pedrosa, J; Sousa, P; Silva, J; Mendonça, AM; Campilho, A;
Publication
PATTERN RECOGNITION AND IMAGE ANALYSIS (IBPRIA 2022)
Abstract
Chest radiography is one of the most common medical imaging modalities. However, chest radiography interpretation is a complex task that requires significant expertise. As such, the development of automatic systems for pathology detection has been proposed in the literature, particularly using deep learning. However, these techniques suffer from a lack of explainability, which hinders their adoption in clinical scenarios. One technique commonly used by radiologists to support and explain decisions is to search for cases with similar findings for direct comparison. However, this process is extremely time-consuming and can be prone to confirmation bias. Automatic image retrieval methods have been proposed in the literature but typically extract features from the whole image, failing to focus on the lesion in which the radiologist is interested. To overcome these issues, LXIR, a novel framework for lesion-based image retrieval, is proposed in this study, built on a state-of-the-art object detection framework (YOLOv5) for the detection of relevant lesions as well as feature representation of those lesions. It is shown that the proposed method can successfully identify lesions and extract features which accurately describe the high-order characteristics of each lesion, allowing lesions of the same pathological class to be retrieved. Furthermore, it is shown that, in comparison to SSIM-based retrieval (a classical perceptual metric) and random retrieval of lesions, the proposed method retrieves the most relevant lesions 81% of the time, according to the evaluation of two independent radiologists, compared to 42% of the time for SSIM.
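The retrieval step described above can be illustrated with a minimal sketch: once the detector backbone has produced a feature vector per lesion, similar lesions are found by ranking a gallery by cosine similarity. The function name and interface are illustrative, not taken from the LXIR paper.

```python
import numpy as np

def retrieve_similar_lesions(query_feat, gallery_feats, k=5):
    """Rank gallery lesions by cosine similarity to a query lesion feature."""
    q = query_feat / np.linalg.norm(query_feat)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sims = g @ q                    # cosine similarity against every gallery lesion
    return np.argsort(-sims)[:k]    # indices of the k most similar lesions
```

In practice the gallery features would come from the same detection network that localizes the lesions, so that the similarity reflects lesion-level rather than whole-image characteristics.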
2022
Authors
Pedrosa, J; Aresta, G; Ferreira, C; Carvalho, C; Silva, J; Sousa, P; Ribeiro, L; Mendonca, AM; Campilho, A;
Publication
SCIENTIFIC REPORTS
Abstract
The coronavirus disease 2019 (COVID-19) pandemic has impacted healthcare systems across the world. Chest radiography (CXR) can be used as a complementary method for diagnosing/following COVID-19 patients. However, the experience level and workload of technicians and radiologists may affect the decision process. Recent studies suggest that deep learning can be used to assess CXRs, providing an important second opinion for radiologists and technicians in the decision process, and super-human performance in the detection of COVID-19 has been reported in multiple studies. In this study, the clinical applicability of deep learning systems for COVID-19 screening was assessed by testing the performance of deep learning systems for the detection of COVID-19. Specifically, four datasets were used: (1) a collection of multiple public datasets (284,793 CXRs); (2) the BIMCV dataset (16,631 CXRs); (3) COVIDGR (852 CXRs); and (4) a private dataset (6,361 CXRs). All datasets were collected retrospectively and consist of only frontal CXR views. A ResNet-18 was trained on each of the datasets for the detection of COVID-19. It is shown that a high dataset bias was present, leading to high performance in intradataset train-test scenarios (area under the curve > 0.98). Significantly lower performance was obtained in interdataset train-test scenarios, however (area under the curve 0.55-0.84 on the collection of public datasets). A subset of the data was then assessed by radiologists for comparison to the automatic systems. Finetuning with radiologist annotations significantly increased performance across datasets (area under the curve 0.61-0.88) and improved the attention on clinical findings in positive COVID-19 CXRs. Nevertheless, tests on CXRs from different hospital services indicate that the screening performance of CXR and automatic systems is limited (area under the curve < 0.6 on emergency service CXRs). However, COVID-19 manifestations can be accurately detected when present, motivating the use of these tools for evaluating disease progression in mild to severe COVID-19 patients.
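The intra- and interdataset comparisons above all hinge on the area under the ROC curve. As a self-contained illustration (not code from the study), the AUC of a binary screening task can be computed from the rank-sum (Mann-Whitney) statistic:

```python
def auc_score(labels, scores):
    """Area under the ROC curve via the rank-sum statistic (no tied scores assumed).

    labels: binary ground truth (1 = COVID-19 positive), scores: model outputs.
    """
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    ranks = {i: r + 1 for r, i in enumerate(order)}   # 1-based rank of each sample
    pos = [i for i, y in enumerate(labels) if y == 1]
    neg = [i for i, y in enumerate(labels) if y == 0]
    rank_sum = sum(ranks[i] for i in pos)
    return (rank_sum - len(pos) * (len(pos) + 1) / 2) / (len(pos) * len(neg))
```

An interdataset evaluation simply applies this metric to scores produced by a model trained on one dataset and tested on another, which is where the drop to 0.55-0.84 appears.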
2021
Authors
Pedrosa, J; Aresta, G; Ferreira, C; Mendonça, A; Campilho, A;
Publication
PROCEEDINGS OF THE 15TH INTERNATIONAL JOINT CONFERENCE ON BIOMEDICAL ENGINEERING SYSTEMS AND TECHNOLOGIES (BIOIMAGING), VOL 2
Abstract
Chest radiography is one of the most ubiquitous medical imaging exams used for the diagnosis and follow-up of a wide array of pathologies. However, chest radiography analysis is time-consuming and often challenging, even for experts. This has led to the development of numerous automatic solutions for multipathology detection in chest radiography, particularly after the advent of deep learning. However, the black-box nature of deep learning solutions, together with the inherent class imbalance of medical imaging problems, often leads to weak generalization capabilities, with models learning features based on spurious correlations such as the appearance and position of laterality, patient position, equipment, and hospital markers. In this study, an automatic method based on the YOLOv3 framework was therefore developed for the detection of markers and written labels in chest radiography images. It is shown that this model successfully detects a large proportion of markers in chest radiography, even in datasets different from the training source, with a low rate of false positives per image. As such, this method could be used to perform automatic obscuration of markers in large datasets, so that more generic and meaningful features can be learned, thus improving classification performance and robustness.
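The obscuration step that follows detection can be sketched as below; the helper is hypothetical and assumes the detector returns axis-aligned bounding boxes in pixel coordinates.

```python
import numpy as np

def obscure_markers(image, boxes, fill=0):
    """Blank out detected marker/label bounding boxes in a radiograph.

    image: 2D grayscale array; boxes: iterable of (x1, y1, x2, y2) pixel coords.
    Returns a copy so the original image is left untouched.
    """
    out = image.copy()
    for x1, y1, x2, y2 in boxes:
        out[y1:y2, x1:x2] = fill   # overwrite the marker region with a constant
    return out
```

Training downstream classifiers on such obscured images is what removes the spurious marker-based shortcuts described in the abstract.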
2022
Authors
Meiburger, KM; Marzola, F; Zahnd, G; Faita, F; Loizou, CP; Lainé, N; Carvalho, C; Steinman, DA; Gibello, L; Bruno, RM; Clarenbach, R; Francesconi, M; Nicolaides, AN; Liebgott, H; Campilho, A; Ghotbi, R; Kyriacou, E; Navab, N; Griffin, M; Panayiotou, AG; Gherardini, R; Varetto, G; Bianchini, E; Pattichis, CS; Ghiadoni, L; Rouco, J; Orkisz, M; Molinari, F;
Publication
COMPUTERS IN BIOLOGY AND MEDICINE
Abstract
After publishing an in-depth study that analyzed the ability of computerized methods to assist or replace human experts in obtaining carotid intima-media thickness (CIMT) measurements leading to correct therapeutic decisions, the same consortium here joins to present technical outlooks on computerized CIMT measurement systems and provide considerations for the community regarding the development and comparison of these methods, including considerations to encourage the standardization of computerized CIMT measurements and results presentation. A multi-center database of 500 images was collected, upon which three manual segmentations and seven computerized methods were employed to measure the CIMT, including traditional methods based on dynamic programming, deformable models, the first-order absolute moment, anisotropic Gaussian derivative filters, and deep learning-based image processing approaches based on U-Net convolutional neural networks. An inter- and intra-analyst variability analysis was conducted and segmentation results were analyzed by dividing the database based on carotid morphology, image signal-to-noise ratio, and research center. The computerized methods obtained CIMT absolute bias results that were comparable with studies in the literature and were generally similar to, and often better than, the observed inter- and intra-analyst variability. Several computerized methods showed promising segmentation results, including one deep learning method (CIMT absolute bias = 106 ± 89 μm vs. 160 ± 140 μm intra-analyst variability) and three other traditional image processing methods (CIMT absolute bias = 139 ± 119 μm, 143 ± 118 μm and 139 ± 136 μm). The entire database used has been made publicly available for the community to facilitate future studies and to encourage an open comparison and technical analysis.
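The absolute bias figures quoted above are the mean ± standard deviation of the per-image absolute CIMT error against a manual reference. A minimal illustration (not the consortium's evaluation code):

```python
import numpy as np

def cimt_absolute_bias(auto_um, manual_um):
    """Mean and standard deviation of the absolute CIMT error, in micrometres."""
    err = np.abs(np.asarray(auto_um, float) - np.asarray(manual_um, float))
    return err.mean(), err.std()
```

Reporting the bias this way makes a computerized method directly comparable to the inter- and intra-analyst variability measured with the same statistic.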
2022
Authors
Rocha, J; Pereira, SC; Pedrosa, J; Campilho, A; Mendonça, AM;
Publication
2022 IEEE 35TH INTERNATIONAL SYMPOSIUM ON COMPUTER-BASED MEDICAL SYSTEMS (CBMS)
Abstract
Backed by more powerful computational resources and optimized training routines, deep learning models have attained unprecedented performance in extracting information from chest X-ray data. Preceding other tasks, an automated abnormality detection stage can be useful to prioritize certain exams and enable a more efficient clinical workflow. However, the presence of image artifacts such as lettering often generates a harmful bias in the classifier, leading to an increase in false positive results. Consequently, healthcare would benefit from a system that selects the thoracic region of interest prior to deciding whether an image is possibly pathologic. The current work tackles this binary classification exercise using an attention-driven and spatially unsupervised Spatial Transformer Network (STN). The results indicate that the STN achieves results similar to using YOLO-cropped images, at a lower computational cost and without the need for localization labels. More specifically, the system is able to distinguish between normal and abnormal CheXpert images with a mean AUC of 84.22%.
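In an STN, a small localization network predicts affine parameters that warp the input so the classifier sees only the thoracic region, with no crop labels required. For the special case of a pure scale-and-translate transform, the predicted parameters map to a pixel crop window as sketched below; this is an illustration of the mechanism, not the paper's implementation.

```python
def stn_crop_window(theta, img_h, img_w):
    """Map affine parameters (scale, tx, ty), with translations in the [-1, 1]
    normalized grid convention used by spatial transformers, to a pixel crop
    window (x1, y1, x2, y2), clamped to the image bounds."""
    s, tx, ty = theta
    cx, cy = (tx + 1) / 2 * img_w, (ty + 1) / 2 * img_h   # crop centre in pixels
    half_w, half_h = s * img_w / 2, s * img_h / 2          # half-extent of the crop
    x1, x2 = int(round(cx - half_w)), int(round(cx + half_w))
    y1, y2 = int(round(cy - half_h)), int(round(cy + half_h))
    return max(x1, 0), max(y1, 0), min(x2, img_w), min(y2, img_h)
```

Because the crop parameters are produced by a differentiable sampling grid, the localization network can be trained end-to-end from the classification loss alone, which is what makes the approach spatially unsupervised.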
2021
Authors
Remeseiro, B; Mendonça, AM; Campilho, A;
Publication
VISUAL COMPUTER
Abstract
Several systemic diseases affect the retinal blood vessels, and thus their assessment allows an accurate clinical diagnosis. This assessment entails the estimation of the arteriolar-to-venular ratio (AVR), a predictive biomarker of cerebral atrophy and cardiovascular events in adults. In this context, different automatic and semiautomatic image-based approaches for artery/vein (A/V) classification and AVR estimation have been proposed in the literature, to the point of having become a hot research topic in recent decades. Most of these approaches use a wide variety of image properties, often redundant and/or irrelevant, requiring a training process that limits their generalization ability when applied to other datasets. This paper presents a new automatic method for A/V classification that uses only the local contrast between blood vessels and their surrounding background, computes a graph that represents the vascular structure, and applies multilevel thresholding to obtain a preliminary classification. Next, a novel graph propagation approach was developed to obtain the final A/V classification and to compute the AVR. Our approach has been tested on two public datasets (INSPIRE and DRIVE), obtaining high classification accuracy rates, especially in the main vessels, and AVR ratios very similar to those provided by human experts. Therefore, our fully automatic method provides reliable results without any training step, which makes it suitable for use with different retinal image datasets and as part of any clinical routine.
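At its simplest, the AVR is the ratio of mean arteriolar calibre to mean venular calibre; given vessel widths from the final A/V classification, a minimal sketch follows. Note this is a simplification for illustration: clinical AVR protocols typically compute central retinal artery/vein equivalents from the largest vessels rather than a plain mean.

```python
def arteriolar_venular_ratio(artery_widths, vein_widths):
    """Simplified AVR: mean arteriolar width divided by mean venular width."""
    mean_a = sum(artery_widths) / len(artery_widths)
    mean_v = sum(vein_widths) / len(vein_widths)
    return mean_a / mean_v
```

Since veins are normally wider than arteries, a healthy AVR is below 1; arteriolar narrowing drives the ratio further down, which is what makes it a useful biomarker.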