About

Joana Rocha began her integrated master's degree in Bioengineering at the University of Porto in 2014, focusing on computer vision and artificial intelligence for biomedical applications. As a researcher at Swansea University, she studied human movement patterns, developing a measurement technique for the automated assessment of physical activity in children. In 2018 she joined INESC TEC, where she contributed to computer-aided diagnosis systems for lung cancer and to biometrics-based methodologies for presentation attack detection, and where she now works on explainable AI for the diagnosis of thoracic diseases.

Topics of interest
Details


  • Name

    Joana Maria Rocha
  • Position

    Research Assistant
  • Since

    18 June 2019
  • Nationality

    Portuguese
  • Contacts

    +351 222 094 000
    joana.m.rocha@inesctec.pt
Publications

2022

Attention-driven Spatial Transformer Network for Abnormality Detection in Chest X-Ray Images

Authors
Rocha, J; Pereira, SC; Pedrosa, J; Campilho, A; Mendonca, AM;

Publication
2022 IEEE 35TH INTERNATIONAL SYMPOSIUM ON COMPUTER-BASED MEDICAL SYSTEMS (CBMS)

Abstract
Backed by more powerful computational resources and optimized training routines, deep learning models have attained unprecedented performance in extracting information from chest X-ray data. Preceding other tasks, an automated abnormality detection stage can be useful to prioritize certain exams and enable a more efficient clinical workflow. However, the presence of image artifacts such as lettering often generates a harmful bias in the classifier, leading to an increase of false positive results. Consequently, healthcare would benefit from a system that selects the thoracic region of interest prior to deciding whether an image is possibly pathologic. The current work tackles this binary classification exercise using an attention-driven and spatially unsupervised Spatial Transformer Network (STN). The results indicate that the STN achieves similar results to using YOLO-cropped images, with fewer computational expenses and without the need for localization labels. More specifically, the system is able to distinguish between normal and abnormal CheXpert images with a mean AUC of 84.22%.
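As context for the reported metric, the mean AUC can be read as the probability that a randomly chosen abnormal image receives a higher classifier score than a randomly chosen normal one. A minimal stdlib-only sketch of this rank-based computation (the labels and scores below are illustrative, not taken from the paper):

```python
def auc(labels, scores):
    """ROC AUC via the Mann-Whitney U formulation: the fraction of
    (abnormal, normal) pairs ranked correctly, counting ties as half."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    if not pos or not neg:
        raise ValueError("need at least one example of each class")
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative scores for 4 abnormal (1) and 4 normal (0) images
labels = [1, 1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.6, 0.4, 0.7, 0.3, 0.2, 0.1]
print(auc(labels, scores))  # → 0.875
```

A perfect classifier would score every abnormal image above every normal one, giving an AUC of 1.0; random scoring gives 0.5.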

2021

Segmentation of COVID-19 Lesions in CT Images

Authors
Rocha, J; Pereira, S; Campilho, A; Mendonça, AM;

Publication
IEEE EMBS International Conference on Biomedical and Health Informatics, BHI 2021, Athens, Greece, July 27-30, 2021

Abstract
The worldwide pandemic caused by the new coronavirus (COVID-19) has encouraged the development of multiple computer-aided diagnosis systems to automate daily clinical tasks, such as abnormality detection and classification. Among these tasks, the segmentation of COVID lesions is of high interest to the scientific community, enabling further lesion characterization. Automating the segmentation process can be a useful strategy to provide a fast and accurate second opinion to the physicians, and thus increase the reliability of the diagnosis and disease stratification. The current work explores a CNN-based approach to segment multiple COVID lesions. It includes the implementation of a U-Net structure with a ResNet34 encoder able to deal with the highly imbalanced nature of the problem, as well as the great variability of the COVID lesions, namely in terms of size, shape, and quantity. This approach yields a Dice score of 64.1%, when evaluated on the publicly available COVID-19-20 Lung CT Lesion Segmentation Grand Challenge data set.
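For reference, the Dice score reported above measures the overlap between a predicted lesion mask and the ground-truth annotation. A minimal sketch on toy binary masks (the mask values are illustrative, not from the challenge data):

```python
def dice(pred, target):
    """Dice coefficient for binary masks given as flat 0/1 sequences:
    2*|A ∩ B| / (|A| + |B|). Returns 1.0 when both masks are empty."""
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 1.0 if total == 0 else 2.0 * inter / total

# Illustrative 4x4 lesion masks, flattened row by row
pred   = [0, 1, 1, 0,  0, 1, 1, 0,  0, 0, 0, 0,  0, 0, 0, 0]
target = [0, 1, 1, 0,  0, 1, 0, 0,  0, 0, 0, 0,  0, 0, 0, 0]
print(dice(pred, target))  # ≈ 0.857
```

A score of 1.0 means the predicted and annotated lesions coincide exactly; 0.0 means no overlap at all, which makes Dice a natural target for highly imbalanced segmentation problems.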

2021

A Review on Deep Learning Methods for Chest X-Ray based Abnormality Detection and Thoracic Pathology Classification

Authors
Rocha, J; Mendonça, AM; Campilho, A;

Publication
U.Porto Journal of Engineering

Abstract
Backed by more powerful computational resources and optimized training routines, Deep Learning models have demonstrated unprecedented performance and several benefits in extracting information from chest X-ray data. This is one of the most common imaging exams, whose increasing demand is reflected in the aggravated radiologists' workload. Consequently, healthcare would benefit from computer-aided diagnosis systems to prioritize certain exams and further identify possible pathologies. Pioneering work in chest X-ray analysis has focused on the identification of specific diseases, but to the best of the authors' knowledge no paper has specifically reviewed relevant work on abnormality detection and multi-label thoracic pathology classification. This paper focuses on those issues, selecting the leading chest X-ray based deep learning strategies for comparison. In addition, the paper discloses the current annotated public chest X-ray databases, covering the common thorax diseases.

2021

Chest Radiography Few-Shot Image Synthesis for Automated Pathology Screening Applications

Authors
Sousa, MQE; Pedrosa, J; Rocha, J; Pereira, SC; Mendonça, AM; Campilho, A;

Publication
IEEE International Conference on Bioinformatics and Biomedicine, BIBM 2021, Houston, TX, USA, December 9-12, 2021

Abstract
Chest radiography is one of the most ubiquitous imaging modalities, playing an essential role in screening, diagnosis and disease management. However, chest radiography interpretation is a time-consuming and complex task, requiring the availability of experienced radiologists. As such, automated diagnosis systems for pathology detection have been proposed aiming to reduce the burden on radiologists and reduce variability in image interpretation. While promising results have been obtained, particularly since the advent of deep learning, there are significant limitations in the developed solutions, namely the lack of representative data for less frequent pathologies and the learning of biases from the training data, such as patient position, medical devices and other markers as proxies for certain pathologies. The lack of explainability is also a challenge for the adoption of these solutions in clinical practice. Generative adversarial networks could play a significant role as a solution for these challenges, as they make it possible to artificially create new realistic images. This way, new synthetic chest radiography images could be used to increase the prevalence of less represented pathology classes and decrease model biases, as well as improving the explainability of automatic decisions by generating samples that serve as examples or counter-examples to the image being analysed, ensuring patient privacy. In this study, a few-shot generative adversarial network is used to generate synthetic chest radiography images. A minimum Fréchet Inception Distance score of 17.83 was obtained, allowing the generation of convincing synthetic images. Perceptual validation was then performed by asking multiple readers to classify a mixed set of synthetic and real images. An average accuracy of 83.5% was obtained, but a strong dependency on reader experience level was observed. While synthetic images showed structural irregularities, the overall image sharpness was a major factor in the decision of readers. The synthetic images were then validated using a MobileNet abnormality classifier and it was shown that over 99% of images were classified correctly, indicating that the generated images were correctly interpreted by the classifier. Finally, the use of the synthetic images during training of a YOLOv5 pathology detector showed that the addition of the synthetic images led to an improvement of mean average precision of 0.05 across 14 pathologies. In conclusion, the usage of few-shot generative adversarial networks for chest radiography image generation was shown and tested in multiple scenarios, establishing a baseline for future experiments to increase the applicability of generative models in clinical scenarios of automatic CXR screening and diagnosis tools.
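For context, the Fréchet Inception Distance compares the Gaussian statistics (mean and covariance) of Inception features extracted from real versus synthetic images; lower is better. The sketch below uses a simplified diagonal-covariance form of the Fréchet distance with illustrative numbers, not the full matrix-square-root computation used in practice:

```python
import math

def fid_diagonal(mu1, var1, mu2, var2):
    """Fréchet distance between two Gaussians with diagonal covariances:
    ||mu1 - mu2||^2 + sum((sqrt(v1) - sqrt(v2))^2).
    The real FID applies the general formula, which involves a matrix
    square root of the full feature covariance matrices."""
    mean_term = sum((a - b) ** 2 for a, b in zip(mu1, mu2))
    cov_term = sum((math.sqrt(a) - math.sqrt(b)) ** 2
                   for a, b in zip(var1, var2))
    return mean_term + cov_term

# Illustrative 2-D feature statistics for "real" and "synthetic" sets
mu_real, var_real = [0.0, 1.0], [1.0, 4.0]
mu_fake, var_fake = [1.0, 1.0], [1.0, 1.0]
print(fid_diagonal(mu_real, var_real, mu_fake, var_fake))  # → 2.0
```

Identical feature distributions give a distance of 0; the score grows as either the means or the spreads of the two distributions diverge.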

2020

Conventional Filtering Versus U-Net Based Models for Pulmonary Nodule Segmentation in CT Images

Authors
Rocha, J; Cunha, A; Mendonca, AM;

Publication
JOURNAL OF MEDICAL SYSTEMS

Abstract
Lung cancer is considered one of the deadliest diseases in the world. An early and accurate diagnosis aims to promote the detection and characterization of pulmonary nodules, which is of vital importance to increase the patients' survival rates. The mentioned characterization is done through a segmentation process, facing several challenges due to the diversity in nodular shape, size, and texture, as well as the presence of adjacent structures. This paper tackles pulmonary nodule segmentation in computed tomography scans proposing three distinct methodologies. First, a conventional approach which applies the Sliding Band Filter (SBF) to estimate the filter's support points, matching the border coordinates. The remaining approaches are Deep Learning based, using the U-Net and a novel network called SegU-Net to achieve the same goal. Their performance is compared, as this work aims to identify the most promising tool to improve nodule characterization. All methodologies used 2653 nodules from the LIDC database, achieving a Dice score of 0.663, 0.830, and 0.823 for the SBF, U-Net and SegU-Net respectively. This way, the U-Net based models yield results closer to the ground truth reference annotated by specialists, thus being a more reliable approach for the proposed exercise. The novel network revealed similar scores to the U-Net, while at the same time reducing computational cost and improving memory efficiency. Consequently, this study may contribute to the possible implementation of this model in a decision support system, assisting the physicians in establishing a reliable diagnosis of lung pathologies based on this segmentation task.