2021
Authors
Marques, S; Schiavo, F; Ferreira, CA; Pedrosa, J; Cunha, A; Campilho, A;
Publication
EXPERT SYSTEMS WITH APPLICATIONS
Abstract
Lung cancer is the type of cancer with the highest mortality worldwide. Low-dose computed tomography is the main tool used for lung cancer screening in clinical practice, allowing the visualization of lung nodules and the assessment of their malignancy. However, this evaluation is a complex task subject to inter-observer variability, which has fueled the need for computer-aided diagnosis systems for lung nodule malignancy classification. While promising results have been obtained with automatic methods, it is often not straightforward to determine which features a given model bases its decisions on, and this lack of explainability can be a significant stumbling block to the adoption of automatic systems in clinical scenarios. Though visual malignancy assessment has a subjective component, radiologists base their decisions strongly on nodule features such as spiculation and texture, and a malignancy classification model should therefore follow the same rationale. As such, this study focuses on the characterization of lung nodules as a means for classifying nodules in terms of malignancy. For this purpose, different model architectures for nodule characterization are proposed and compared, with the final goal of malignancy classification. It is shown that models combining direct malignancy prediction with specific branches for nodule characterization outperform the remaining models, achieving an Area Under the Curve of 0.783. The features most relevant for malignancy classification according to the model were lobulation, spiculation and texture, which is in line with current clinical practice.
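The reported Area Under the Curve (AUC) of 0.783 summarizes ranking quality: the probability that a randomly chosen malignant nodule is scored above a randomly chosen benign one. A minimal, dependency-free sketch of this computation (the labels and scores below are illustrative, not from the paper):

```python
def roc_auc(labels, scores):
    """Area Under the ROC Curve via pairwise comparison: the fraction of
    (positive, negative) pairs where the positive is scored higher
    (ties count as 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: a classifier that mostly ranks malignant (1) above benign (0).
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]
print(roc_auc(labels, scores))  # 8 of 9 pairs ranked correctly, ≈ 0.889
```

This pairwise formulation is equivalent to integrating the ROC curve, which is how the metric is usually computed in practice.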
2021
Authors
Rocha, J; Pereira, SC; Campilho, A; Mendonça, AM;
Publication
BHI
Abstract
The worldwide pandemic caused by the novel coronavirus disease (COVID-19) has encouraged the development of multiple computer-aided diagnosis systems to automate daily clinical tasks, such as abnormality detection and classification. Among these tasks, the segmentation of COVID lesions is of high interest to the scientific community, enabling further lesion characterization. Automating the segmentation process can be a useful strategy to provide a fast and accurate second opinion to physicians, and thus increase the reliability of diagnosis and disease stratification. The current work explores a CNN-based approach to segment multiple COVID lesions. It includes the implementation of a U-Net structure with a ResNet34 encoder, able to deal with the highly imbalanced nature of the problem as well as the great variability of COVID lesions, namely in terms of size, shape, and quantity. This approach yields a Dice score of 64.1% when evaluated on the publicly available COVID-19-20 Lung CT Lesion Segmentation Grand Challenge data set.
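The reported Dice score of 64.1% measures the overlap between predicted and ground-truth lesion masks. A minimal sketch of the metric on flattened binary masks (the toy masks below are illustrative, not from the challenge data):

```python
def dice_score(pred, target):
    """Dice coefficient for two binary masks given as flat 0/1 lists:
    2*|P ∩ T| / (|P| + |T|); defined as 1.0 when both masks are empty."""
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 1.0 if total == 0 else 2.0 * inter / total

# Toy 2x4 masks, flattened row by row.
pred   = [1, 1, 0, 0,  0, 1, 0, 0]
target = [1, 0, 0, 0,  0, 1, 1, 0]
print(dice_score(pred, target))  # 2*2 / (3+3) ≈ 0.667
```

In training, a differentiable "soft" variant of this same expression (over predicted probabilities rather than hard 0/1 masks) is a common loss choice for heavily imbalanced segmentation problems like this one.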
2021
Authors
Rocha, J; Mendonça, AM; Campilho, A;
Publication
U.Porto Journal of Engineering
Abstract
Backed by more powerful computational resources and optimized training routines, Deep Learning models have demonstrated unprecedented performance and several benefits in extracting information from chest X-ray data. This is one of the most common imaging exams, whose increasing demand is reflected in radiologists' aggravated workload. Consequently, healthcare would benefit from computer-aided diagnosis systems that prioritize certain exams and further identify possible pathologies. Pioneering work in chest X-ray analysis has focused on the identification of specific diseases, but to the best of the authors' knowledge no paper has specifically reviewed relevant work on abnormality detection and multi-label thoracic pathology classification. This paper focuses on those issues, selecting the leading chest X-ray based deep learning strategies for comparison. In addition, the paper surveys the currently available annotated public chest X-ray databases, covering the common thorax diseases.
2021
Authors
Costa, P; Campilho, A; Cardoso, JS;
Publication
CIARP
Abstract
Cancer is a leading cause of death worldwide. The detection and diagnosis of most cancers are confirmed by a tissue biopsy analyzed under an optical microscope. These samples are then scanned into giga-pixel images for further digital processing by pathologists. An automated method to segment the malignant regions of these images could be of great interest to detect cancer earlier and increase the agreement between specialists. However, annotating these giga-pixel images is very expensive, time-consuming and error-prone. We evaluate four existing annotation-efficient methods, including transfer learning and self-supervised learning approaches. The best-performing approach was to pretrain a model to colourize a grayscale histopathological image and then fine-tune that model on a dataset with manually annotated examples. This method improved the Intersection over Union from 0.2702 to 0.3702.
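The improvement is reported as Intersection over Union (IoU, 0.2702 → 0.3702), which relates monotonically to the Dice score via IoU = D / (2 − D). A minimal sketch of the metric on flattened binary masks (toy data, illustrative only):

```python
def iou(pred, target):
    """Intersection over Union for two binary masks given as flat 0/1
    lists; defined as 1.0 when both masks are empty."""
    inter = sum(1 for p, t in zip(pred, target) if p and t)
    union = sum(1 for p, t in zip(pred, target) if p or t)
    return inter / union if union else 1.0

# Toy masks: two overlapping three-pixel segments.
pred   = [1, 1, 1, 0, 0]
target = [0, 1, 1, 1, 0]
print(iou(pred, target))  # 2 / 4 = 0.5
```

Because IoU penalizes disagreement more harshly than Dice, the same segmentation always scores lower (or equal) in IoU, which is worth keeping in mind when comparing numbers across papers.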
2021
Authors
Sousa, MQE; Pedrosa, J; Rocha, J; Pereira, SC; Mendonça, AM; Campilho, A;
Publication
BIBM
Abstract
Chest radiography is one of the most ubiquitous imaging modalities, playing an essential role in screening, diagnosis and disease management. However, chest radiography interpretation is a time-consuming and complex task, requiring the availability of experienced radiologists. As such, automated diagnosis systems for pathology detection have been proposed, aiming to reduce the burden on radiologists and the variability in image interpretation. While promising results have been obtained, particularly since the advent of deep learning, there are significant limitations in the developed solutions, namely the lack of representative data for less frequent pathologies and the learning of biases from the training data, such as patient position, medical devices and other markers acting as proxies for certain pathologies. The lack of explainability is also a challenge for the adoption of these solutions in clinical practice. Generative adversarial networks could play a significant role in addressing these challenges, as they make it possible to artificially create new realistic images. This way, new synthetic chest radiography images could be used to increase the prevalence of less represented pathology classes and decrease model biases, as well as to improve the explainability of automatic decisions by generating samples that serve as examples or counter-examples to the image being analysed, all while ensuring patient privacy. In this study, a few-shot generative adversarial network is used to generate synthetic chest radiography images. A minimum Fréchet Inception Distance score of 17.83 was obtained, allowing convincing synthetic images to be generated. Perceptual validation was then performed by asking multiple readers to classify a mixed set of synthetic and real images. An average accuracy of 83.5% was obtained, but a strong dependency on reader experience level was observed.
While synthetic images showed structural irregularities, overall image sharpness was a major factor in the readers' decisions. The synthetic images were then validated using a MobileNet abnormality classifier, and it was shown that over 99% of the images were classified correctly, indicating that the generated images were correctly interpreted by the classifier. Finally, using the synthetic images during training of a YOLOv5 pathology detector showed that their addition led to an improvement in mean average precision of 0.05 across 14 pathologies. In conclusion, the use of few-shot generative adversarial networks for chest radiography image generation was demonstrated and tested in multiple scenarios, establishing a baseline for future experiments to increase the applicability of generative models in clinical scenarios of automatic CXR screening and diagnosis tools.
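The Fréchet Inception Distance (FID) of 17.83 compares the Gaussian statistics of Inception features extracted from real and synthetic images; lower is better. As an illustration of the underlying distance only, a sketch of its closed form for univariate Gaussians (the actual metric uses multivariate means and covariances of deep features, with a matrix square-root term):

```python
def frechet_1d(mu1, sigma1, mu2, sigma2):
    """Squared Fréchet (2-Wasserstein) distance between two univariate
    Gaussians N(mu1, sigma1^2) and N(mu2, sigma2^2):
    d^2 = (mu1 - mu2)^2 + (sigma1 - sigma2)^2."""
    return (mu1 - mu2) ** 2 + (sigma1 - sigma2) ** 2

# Identical distributions give distance zero; shifting the mean or
# changing the spread both increase it.
print(frechet_1d(0.0, 1.0, 3.0, 2.0))  # 9 + 1 = 10.0
```

The intuition carries over directly: FID grows when the synthetic feature distribution drifts in location (mean) or shape (covariance) relative to the real one.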
2021
Authors
Wanderley, DS; Ferreira, CA; Campilho, A; Silva, JA;
Publication
CENTERIS/ProjMAN/HCist
Abstract
The detection of ovarian structures in ultrasound images is an important task in gynecological and reproductive medicine. An automatic detection system for ovarian structures can provide a second opinion for less experienced physicians or for complex ultrasound interpretations. This work presents a study of three popular CNN-based object detectors applied to the detection of healthy ovarian structures, namely the ovary and follicles, in B-mode ultrasound images. Faster R-CNN presented the best results, with a precision of 95.5% and a recall of 94.7% over both classes, detecting all ovaries correctly. RetinaNet showed competitive results, exceeding 90% precision and recall. Despite being very fast and suitable for real-time applications, YOLOv3 was ineffective at detecting ovaries and had the worst results at detecting follicles. The CNN results are also compared with classical computer vision methods from the ovarian follicle detection literature.
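Detection precision and recall of the kind reported above (95.5% and 94.7%) follow from matching predicted boxes to ground-truth boxes, commonly at an IoU threshold of 0.5. A minimal sketch with a greedy match; the threshold and toy boxes are illustrative assumptions, not taken from the paper:

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def precision_recall(preds, gts, thr=0.5):
    """Greedily match each prediction to an unused ground-truth box;
    matched pairs are true positives, the rest are FP / FN."""
    matched, tp = set(), 0
    for p in preds:
        for i, g in enumerate(gts):
            if i not in matched and box_iou(p, g) >= thr:
                matched.add(i)
                tp += 1
                break
    fp, fn = len(preds) - tp, len(gts) - tp
    return tp / (tp + fp), tp / (tp + fn)

# Toy follicle boxes: two good detections, one spurious box, one miss.
gts   = [(0, 0, 10, 10), (20, 20, 30, 30), (50, 50, 60, 60)]
preds = [(1, 1, 11, 11), (20, 20, 30, 30), (80, 80, 90, 90)]
print(precision_recall(preds, gts))  # ≈ (0.667, 0.667)
```

Evaluation toolkits additionally sort predictions by confidence before matching; the greedy pass above is the simplest version of the same bookkeeping.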