2021
Authors
Sousa, MQE; Pedrosa, J; Rocha, J; Pereira, SC; Mendonça, AM; Campilho, A;
Publication
IEEE International Conference on Bioinformatics and Biomedicine, BIBM 2021, Houston, TX, USA, December 9-12, 2021
Abstract
Chest radiography is one of the most ubiquitous imaging modalities, playing an essential role in screening, diagnosis and disease management. However, chest radiography interpretation is a time-consuming and complex task that requires experienced radiologists. As such, automated diagnosis systems for pathology detection have been proposed with the aim of reducing the burden on radiologists and the variability in image interpretation. While promising results have been obtained, particularly since the advent of deep learning, the developed solutions have significant limitations, namely the lack of representative data for less frequent pathologies and the learning of biases from the training data, such as patient position, medical devices and other markers acting as proxies for certain pathologies. The lack of explainability is also a challenge for the adoption of these solutions in clinical practice.

Generative adversarial networks could play a significant role in addressing these challenges, as they make it possible to create new, realistic images artificially. Synthetic chest radiography images could thus be used to increase the prevalence of under-represented pathology classes, reduce model biases and improve the explainability of automatic decisions by generating samples that serve as examples or counter-examples to the image being analysed, all while preserving patient privacy.

In this study, a few-shot generative adversarial network is used to generate synthetic chest radiography images. A minimum Fréchet Inception Distance score of 17.83 was obtained, indicating that convincing synthetic images could be generated. Perceptual validation was then performed by asking multiple readers to classify a mixed set of synthetic and real images. An average accuracy of 83.5% was obtained, but a strong dependency on reader experience level was observed. While synthetic images showed structural irregularities, overall image sharpness was a major factor in the readers' decisions. The synthetic images were then validated using a MobileNet abnormality classifier, and over 99% of images were classified correctly, indicating that the generated images were correctly interpreted by the classifier. Finally, adding the synthetic images to the training data of a YOLOv5 pathology detector led to an improvement in mean average precision of 0.05 across 14 pathologies.

In conclusion, the use of few-shot generative adversarial networks for chest radiography image generation was demonstrated and tested in multiple scenarios, establishing a baseline for future experiments to increase the applicability of generative models in clinical scenarios of automatic CXR screening and diagnosis tools.
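The Fréchet Inception Distance reported above compares the statistics of Inception features extracted from real and generated images, with lower values indicating more realistic synthetic samples. The snippet below is a minimal sketch of how such a score can be computed with the torchmetrics library; it is not the authors' code, and the random tensors are placeholders standing in for real and GAN-generated chest radiographs.

    import torch
    from torchmetrics.image.fid import FrechetInceptionDistance

    # FID compares the mean and covariance of Inception-v3 features of the
    # real and the synthetic image sets; lower values are better.
    fid = FrechetInceptionDistance(feature=2048)

    # Placeholder batches: uint8 RGB tensors in [0, 255] with shape (N, 3, H, W).
    real_imgs = torch.randint(0, 256, (32, 3, 299, 299), dtype=torch.uint8)
    fake_imgs = torch.randint(0, 256, (32, 3, 299, 299), dtype=torch.uint8)

    fid.update(real_imgs, real=True)
    fid.update(fake_imgs, real=False)
    print(f"FID: {fid.compute().item():.2f}")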
2022
Authors
Rocha, J; Pereira, SC; Pedrosa, J; Campilho, A; Mendonça, AM;
Publication
2022 IEEE 35th International Symposium on Computer-Based Medical Systems (CBMS)
Abstract
Backed by more powerful computational resources and optimized training routines, deep learning models have attained unprecedented performance in extracting information from chest X-ray data. Preceding other tasks, an automated abnormality detection stage can be useful to prioritize certain exams and enable a more efficient clinical workflow. However, the presence of image artifacts such as lettering often introduces a harmful bias in the classifier, leading to an increase in false-positive results. Consequently, healthcare would benefit from a system that selects the thoracic region of interest before deciding whether an image is possibly pathologic. The current work tackles this binary classification exercise using an attention-driven and spatially unsupervised Spatial Transformer Network (STN). The results indicate that the STN achieves performance similar to that obtained with YOLO-cropped images, at a lower computational cost and without the need for localization labels. More specifically, the system is able to distinguish between normal and abnormal CheXpert images with a mean AUC of 84.22%.
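As a rough sketch of the spatial transformer mechanism described above (not the authors' implementation; the layer sizes, single-channel input and class name STNCrop are assumptions for illustration), a small localization network can predict an affine transform that re-samples the radiograph before the downstream abnormality classifier, so the thoracic region of interest is selected without localization labels.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class STNCrop(nn.Module):
        """Spatial transformer front-end: a localization network predicts an
        affine transform that re-samples the input radiograph before it is
        passed to the abnormality classifier."""
        def __init__(self):
            super().__init__()
            self.loc = nn.Sequential(
                nn.Conv2d(1, 8, kernel_size=7), nn.MaxPool2d(2), nn.ReLU(inplace=True),
                nn.Conv2d(8, 16, kernel_size=5), nn.MaxPool2d(2), nn.ReLU(inplace=True),
                nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                nn.Linear(16 * 4 * 4, 32), nn.ReLU(inplace=True),
                nn.Linear(32, 6),
            )
            # Initialise to the identity transform so training starts from the full image.
            self.loc[-1].weight.data.zero_()
            self.loc[-1].bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

        def forward(self, x):
            theta = self.loc(x).view(-1, 2, 3)                      # predicted affine parameters
            grid = F.affine_grid(theta, x.size(), align_corners=False)
            return F.grid_sample(x, grid, align_corners=False)      # re-sampled (cropped) image

    x = torch.randn(2, 1, 224, 224)   # placeholder grayscale chest X-rays
    cropped = STNCrop()(x)            # output would be fed to the binary normal/abnormal classifier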