About

My name is Ana Maria Mendonça and I am an Associate Professor in the Department of Electrical Engineering (DEEC) of the Faculty of Engineering of the University of Porto (FEUP). It was at this university that I completed my PhD, in 1994. I was a researcher at the Instituto de Engenharia Biomédica (INEB) until 2014; since 2015 I have been a senior researcher at the Centre for Biomedical Engineering Research of INESC TEC.

In my higher education and research management activity, I was a member of the Executive Board of DEEC and, more recently, Deputy Director of FEUP. At INEB, I served on the Institute's Board of Directors, initially as a member and later as its President.

I was an elected member of FEUP's Scientific Council and am currently a member of the school's Pedagogical Council. I have served on the scientific committees of several FEUP study programmes and am currently Director of the Bachelor's and Master's programmes in Bioengineering, the Master's in Biomedical Engineering, and the Doctoral Programme in Biomedical Engineering at FEUP.

I have participated, as a researcher or as principal investigator, in several research projects, mainly in the area of biomedical imaging. My research has focused essentially on the development of image analysis and classification methodologies aimed at extracting useful information from medical images to support medical diagnosis. Past work addressed mainly retinal and lung pathologies and genetic diseases, while current work focuses on the development of computer-aided diagnosis systems for ophthalmology and radiology.


Details

  • Name

    Ana Maria Mendonça
  • Position

    Senior Researcher
  • Since

    01 January 2015
  • Nationality

    Portugal
  • Contacts

    +351222094106
    ana.mendonca@inesctec.pt
Publications

2026

Multitask Learning Approach for Foveal Avascular Zone Segmentation in OCTA Images

Authors
Melo, M; Carneiro, A; Campilho, A; Mendonça, AM;

Publication
PATTERN RECOGNITION AND IMAGE ANALYSIS, IBPRIA 2025, PT II

Abstract
The segmentation of the foveal avascular zone (FAZ) in optical coherence tomography angiography (OCTA) images plays a crucial role in diagnosing and monitoring ocular diseases such as diabetic retinopathy (DR) and age-related macular degeneration (AMD). However, accurate FAZ segmentation remains challenging due to image quality and variability. This paper provides a comprehensive review of FAZ segmentation techniques, including traditional image processing methods and recent deep learning-based approaches. We propose two novel deep learning methodologies: a multitask learning framework that integrates vessel and FAZ segmentation, and a conditionally trained network that employs vessel-aware loss functions. The performance of the proposed methods was evaluated on the OCTA-500 dataset using the Dice coefficient, Jaccard index, 95% Hausdorff distance, and average symmetric surface distance. Experimental results demonstrate that the multitask segmentation framework outperforms existing state-of-the-art methods, achieving superior FAZ boundary delineation and segmentation accuracy. The conditionally trained network also improves upon standard U-Net-based approaches but exhibits limitations in refining the FAZ contours.
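
A rough sketch of the multitask idea summarized above (a shared encoder feeding two decoder heads, one for vessel and one for FAZ masks, trained with a weighted joint loss) might look like the following minimal PyTorch example. The architecture, layer sizes, loss weighting, and the MultitaskSeg name are illustrative assumptions, not the authors' network.

```python
# Illustrative sketch only: shared encoder, two decoder heads (vessel, FAZ),
# jointly optimized with a weighted sum of per-task Dice losses.
import torch
import torch.nn as nn

class MultitaskSeg(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared encoder (downsamples by 2)
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        # One lightweight decoder head per task
        def head():
            return nn.Sequential(
                nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
                nn.Conv2d(16, 1, 1),  # per-pixel logits
            )
        self.vessel_head = head()
        self.faz_head = head()

    def forward(self, x):
        z = self.encoder(x)
        return self.vessel_head(z), self.faz_head(z)

def dice_loss(logits, target, eps=1e-6):
    p = torch.sigmoid(logits)
    inter = (p * target).sum()
    return 1 - (2 * inter + eps) / (p.sum() + target.sum() + eps)

model = MultitaskSeg()
x = torch.randn(2, 1, 64, 64)                  # dummy OCTA patches
vessel_gt = torch.randint(0, 2, (2, 1, 64, 64)).float()
faz_gt = torch.randint(0, 2, (2, 1, 64, 64)).float()
v_logits, f_logits = model(x)
# Joint objective: vessel segmentation acts as an auxiliary task for FAZ.
loss = dice_loss(f_logits, faz_gt) + 0.5 * dice_loss(v_logits, vessel_gt)
loss.backward()
```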

2025

Grad-CAM: The impact of large receptive fields and other caveats

Authors
Santos, R; Pedrosa, J; Mendonça, AM; Campilho, A;

Publication
COMPUTER VISION AND IMAGE UNDERSTANDING

Abstract
The increase in complexity of deep learning models demands explanations that can be obtained with methods like Grad-CAM. This method computes an importance map for the last convolutional layer relative to a specific class, which is then upsampled to match the size of the input. However, this final step assumes that there is a spatial correspondence between the last feature map and the input, which may not be the case. We hypothesize that, for models with large receptive fields, the feature spatial organization is not kept during the forward pass, which may render the explanations devoid of meaning. To test this hypothesis, common architectures were applied to a medical scenario on the public VinDr-CXR dataset, to a subset of ImageNet and to datasets derived from MNIST. The results show a significant dispersion of the spatial information, which goes against the assumption of Grad-CAM, and that explainability maps are affected by this dispersion. Furthermore, we discuss several other caveats regarding Grad-CAM, such as feature map rectification, empty maps and the impact of global average pooling or flatten layers. Altogether, this work addresses some key limitations of Grad-CAM which may go unnoticed for common users, taking one step further in the pursuit for more reliable explainability methods.
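
For reference, the Grad-CAM computation discussed above can be sketched in a few lines of PyTorch: channel weights come from global-average-pooling the gradients of a class score with respect to the last convolutional feature map, and the ReLU'd weighted sum is then upsampled to the input size, the step whose spatial-correspondence assumption the paper questions. The toy CNN below is an assumption added for self-containment, not a model from the paper.

```python
# Minimal Grad-CAM sketch over a toy CNN (stand-in, not a paper model).
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),  # "last conv layer"
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 10),
)

feats, grads = {}, {}
last_conv = model[3]
last_conv.register_forward_hook(lambda m, i, o: feats.update(a=o))
last_conv.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

x = torch.randn(1, 3, 32, 32)
score = model(x)[0, 5]          # logit of the class being explained
score.backward()

w = grads["a"].mean(dim=(2, 3), keepdim=True)   # GAP of the gradients
cam = F.relu((w * feats["a"]).sum(dim=1, keepdim=True))
# Final upsampling step: assumes spatial correspondence with the input,
# which is exactly the assumption the paper challenges.
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear",
                    align_corners=False)
```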

2025

Anatomically-Guided Inpainting for Local Synthesis of Normal Chest Radiographs

Authors
Pedrosa, J; Pereira, SC; Silva, J; Mendonça, AM; Campilho, A;

Publication
DEEP GENERATIVE MODELS, DGM4MICCAI 2024

Abstract
Chest radiography (CXR) is one of the most used medical imaging modalities. Nevertheless, the interpretation of CXR images is time-consuming and subject to variability. As such, automated systems for pathology detection have been proposed and promising results have been obtained, particularly using deep learning. However, these tools suffer from poor explainability, which represents a major hurdle for their adoption in clinical practice. One proposed explainability method in CXR is through contrastive examples, i.e. by showing an alternative version of the CXR except without the lesion being investigated. While image-level normal/healthy image synthesis has been explored in literature, normal patch synthesis via inpainting has received little attention. In this work, a method to synthesize contrastive examples in CXR based on local synthesis of normal CXR patches is proposed. Based on a contextual attention inpainting network (CAttNet), an anatomically-guided inpainting network (AnaCAttNet) is proposed that leverages anatomical information of the original CXR through segmentation to guide the inpainting for a more realistic reconstruction. A quantitative evaluation of the inpainting is performed, showing that AnaCAttNet outperforms CAttNet (FID of 0.0125 and 0.0132 respectively). Qualitative evaluation by three readers also showed that AnaCAttNet delivers superior reconstruction quality and anatomical realism. In conclusion, the proposed anatomical segmentation module for inpainting is shown to improve inpainting performance.
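
The conditioning idea can be sketched as follows (this is not AnaCAttNet, whose contextual-attention design is more elaborate): the network receives the masked CXR, the binary mask, and an anatomical segmentation map as extra input channels, and only the masked region is replaced in the output. All names and shapes below are illustrative assumptions.

```python
# Sketch of anatomy-conditioned inpainting: segmentation map as extra input.
import torch
import torch.nn as nn

class ToyGuidedInpainter(nn.Module):
    def __init__(self):
        super().__init__()
        # channels: masked image (1) + mask (1) + anatomy map (1)
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, image, mask, anatomy):
        masked = image * (1 - mask)            # zero out the patch to synthesize
        out = self.net(torch.cat([masked, mask, anatomy], dim=1))
        # Only the masked region is replaced; the rest is kept verbatim.
        return masked + out * mask

cxr = torch.rand(1, 1, 128, 128)               # dummy chest radiograph
mask = torch.zeros_like(cxr); mask[..., 40:80, 40:80] = 1.0
anatomy = (torch.rand_like(cxr) > 0.5).float() # dummy lung-field segmentation
normal_patch = ToyGuidedInpainter()(cxr, mask, anatomy)
```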

2024

CLARE-XR: explainable regression-based classification of chest radiographs with label embeddings

Authors
Rocha, J; Pereira, SC; Sousa, P; Campilho, A; Mendonça, AM;

Publication
SCIENTIFIC REPORTS

Abstract
An automatic system for pathology classification in chest X-ray scans needs more than predictive performance, since providing explanations is deemed essential for fostering end-user trust, improving decision-making, and regulatory compliance. CLARE-XR is a novel methodology that, when presented with an X-ray image, identifies the associated pathologies and provides explanations based on the presentation of similar cases. The diagnosis is achieved using a regression model that maps an image into a 2D latent space containing the reference coordinates of all findings. The references are generated once through label embedding, before the regression step, by converting the original binary ground-truth annotations to 2D coordinates. The classification is inferred minding the distance from the coordinates of an inference image to the reference coordinates. Furthermore, as the regressor is trained on a known set of images, the distance from the coordinates of an inference image to the coordinates of the training set images also allows retrieving similar instances, mimicking the common clinical practice of comparing scans to confirm diagnoses. This inherently interpretable framework discloses specific classification rules and visual explanations through automatic image retrieval methods, outperforming the multi-label ResNet50 classification baseline across multiple evaluation settings on the NIH ChestX-ray14 dataset.
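
A hypothetical sketch of the distance-based readout described above: given the 2D point that the regressor predicts for a new image, findings whose reference coordinates fall within a threshold are predicted positive, and the nearest training points double as retrieved similar cases. The coordinates, threshold, and finding names below are made up for illustration.

```python
# Toy version of distance-based classification and retrieval in a 2D
# label-embedding space; all values are illustrative assumptions.
import numpy as np

# Reference coordinate per finding (produced by label embedding in the paper)
refs = {"cardiomegaly": np.array([0.2, 0.8]),
        "effusion":     np.array([0.7, 0.3])}

train_coords = np.random.rand(100, 2)          # embedded training images
query = np.array([0.25, 0.75])                 # regressor output for a new image

# Multi-label decision: positive if close enough to a finding's reference
tau = 0.15                                     # assumed decision threshold
preds = {k: np.linalg.norm(query - v) <= tau for k, v in refs.items()}

# Retrieval: nearest training images explain the decision by similar cases
nearest = np.argsort(np.linalg.norm(train_coords - query, axis=1))[:5]
print(preds, nearest)
```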

2024

DeepClean - Contrastive Learning Towards Quality Assessment in Large-Scale CXR Data Sets

Authors
Pereira, SC; Pedrosa, J; Rocha, J; Sousa, P; Campilho, A; Mendonça, AM;

Publication
2024 IEEE INTERNATIONAL CONFERENCE ON BIOINFORMATICS AND BIOMEDICINE, BIBM

Abstract
Large-scale datasets are essential for training deep learning models in medical imaging. However, many of these datasets contain poor-quality images that can compromise model performance and clinical reliability. In this study, we propose a framework to detect non-compliant images, such as corrupted scans, incomplete thorax X-rays, and images of non-thoracic body parts, by leveraging contrastive learning for feature extraction and parametric or non-parametric scoring methods for out-of-distribution ranking. Our approach was developed and tested on the CheXpert dataset, achieving an AUC of 0.75 in a manually labeled subset of 1,000 images, and further qualitatively and visually validated on the external PadChest dataset, where it also performed effectively. Our results demonstrate the potential of contrastive learning to detect non-compliant images in large-scale medical datasets, laying the foundation for future work on reducing dataset pollution and improving the robustness of deep learning models in clinical practice.
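
A sketch of the non-parametric scoring stage only, assuming embeddings from a separately trained contrastive encoder are already available: an image is ranked as out-of-distribution by the mean distance to its k nearest compliant training embeddings. Feature dimensions, k, and the random data are illustrative.

```python
# Non-parametric kNN-distance ranking over contrastive embeddings.
import numpy as np

rng = np.random.default_rng(0)
train_feats = rng.normal(size=(1000, 128))     # embeddings of compliant CXRs
test_feats = rng.normal(size=(10, 128))        # embeddings to be ranked

def knn_ood_score(x, bank, k=10):
    d = np.linalg.norm(bank - x, axis=1)       # distances to the feature bank
    return np.sort(d)[:k].mean()               # higher score = more suspect

scores = np.array([knn_ood_score(f, train_feats) for f in test_feats])
ranking = np.argsort(-scores)                  # most non-compliant first
print(ranking)
```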