
About

My name is Ana Maria Mendonça and I am currently an Associate Professor at the Department of Electrical and Computer Engineering (DEEC) of the Faculty of Engineering of the University of Porto (FEUP), where I received my PhD in 1994. I was a researcher at the Institute for Biomedical Engineering (INEB) until 2014, and since 2015 I have been a senior researcher at INESC.

In my management activities in higher education and research, I was a member of the Executive Board of DEEC and, more recently, Deputy Director of FEUP. At INEB, I served on the Institute's Board of Directors, first as a member and later as its President.

I was an elected member of FEUP's Scientific Council and am currently a member of the school's Pedagogical Council. I have served on the scientific committees of several academic programmes and am currently Director of the Bachelor's and Master's degrees in Bioengineering, of the Master's in Biomedical Engineering, and of the Doctoral Programme in Biomedical Engineering.

I have participated, both as a researcher and as principal investigator, in several research projects, mostly dedicated to the development of image analysis and classification methodologies for extracting essential information from medical images in order to support the diagnosis process. Past work has been mostly devoted to three main areas: retinal pathologies, lung diseases and genetic disorders, while ongoing work is mainly focused on the development of Computer-Aided Diagnosis systems in Ophthalmology and Radiology.


Details

  • Name

    Ana Maria Mendonça
  • Role

    Senior Researcher
  • Since

    1st January 2015
  • Nationality

    Portuguese
  • Contacts

    +351222094106
    ana.mendonca@inesctec.pt
Publications

2024

STERN: Attention-driven Spatial Transformer Network for abnormality detection in chest X-ray images

Authors
Rocha, J; Pereira, SC; Pedrosa, J; Campilho, A; Mendonça, AM;

Publication
ARTIFICIAL INTELLIGENCE IN MEDICINE

Abstract
Chest X-ray scans are frequently requested to detect the presence of abnormalities, due to their low-cost and non-invasive nature. The interpretation of these images can be automated to prioritize more urgent exams through deep learning models, but the presence of image artifacts, e.g. lettering, often generates a harmful bias in the classifiers and an increase of false positive results. Consequently, healthcare would benefit from a system that selects the thoracic region of interest prior to deciding whether an image is possibly pathologic. The current work tackles this binary classification exercise, in which an image is either normal or abnormal, using an attention-driven and spatially unsupervised Spatial Transformer Network (STERN) that takes advantage of a novel domain-specific loss to better frame the region of interest. Unlike the state of the art, in which this type of network is usually employed for image alignment, this work proposes a spatial transformer module that is used specifically for attention, as an alternative to the standard object detection models that typically precede the classifier to crop out the region of interest. In sum, the proposed end-to-end architecture dynamically scales and aligns the input images to maximize the classifier's performance, by selecting the thorax with translation and non-isotropic scaling transformations, and thus eliminating artifacts. Additionally, this paper provides an extensive and objective analysis of the selected regions of interest, by proposing a set of mathematical evaluation metrics. The results indicate that STERN achieves results similar to using YOLO-cropped images, with reduced computational cost and without the need for localization labels. More specifically, the system is able to distinguish abnormal frontal images from the CheXpert dataset with a mean AUC of 85.67%, a 2.55% improvement over a standard baseline classifier (vs. the 0.98% improvement achieved by the YOLO-based counterpart). At the same time, the STERN approach requires less than 2/3 of the training parameters, while increasing the inference time per batch by less than 2 ms. Code available via GitHub.
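The translation and non-isotropic scaling step described in the abstract can be illustrated with a small grid-sampling routine. The sketch below is a hypothetical numpy reconstruction of that idea, not the paper's code: the function name, normalized-coordinate convention, and nearest-neighbour sampling are all assumptions made for illustration.

```python
import numpy as np

def affine_crop(img, sx, sy, tx, ty):
    """Resample a 2D image through a translation + non-isotropic scaling
    transform, spatial-transformer style. Coordinates are normalized to
    [-1, 1]; sampling is nearest-neighbour for simplicity."""
    H, W = img.shape
    ys, xs = np.meshgrid(np.linspace(-1, 1, H), np.linspace(-1, 1, W),
                         indexing="ij")
    # map each output coordinate to a source coordinate (scale, then shift)
    src_x = sx * xs + tx
    src_y = sy * ys + ty
    # convert back to pixel indices, clipped to the image bounds
    ix = np.clip(((src_x + 1) / 2 * (W - 1)).round().astype(int), 0, W - 1)
    iy = np.clip(((src_y + 1) / 2 * (H - 1)).round().astype(int), 0, H - 1)
    return img[iy, ix]

img = np.arange(16.0).reshape(4, 4)
# sx = sy = 0.5 zooms into the central region, as a crop-like attention step
center = affine_crop(img, sx=0.5, sy=0.5, tx=0.0, ty=0.0)
```

In the actual network, the four parameters (sx, sy, tx, ty) would be predicted by a localization sub-network and the sampling made differentiable, so the transform can be learned end-to-end with the classifier.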

2024

Automated image label extraction from radiology reports — A review

Authors
Pereira, SC; Mendonça, AM; Campilho, A; Sousa, P; Teixeira Lopes, C;

Publication
Artificial Intelligence in Medicine

Abstract
Machine Learning models need large amounts of annotated data for training. In the field of medical imaging, labeled data is especially difficult to obtain because the annotations have to be performed by qualified physicians. Natural Language Processing (NLP) tools can be applied to radiology reports to extract labels for medical images automatically. Compared to manual labeling, this approach requires smaller annotation efforts and can therefore facilitate the creation of labeled medical image data sets. In this article, we summarize the literature on this topic spanning from 2013 to 2023, starting with a meta-analysis of the included articles, followed by a qualitative and quantitative systematization of the results. Overall, we found four types of studies on the extraction of labels from radiology reports: those describing systems based on symbolic NLP, statistical NLP, neural NLP, and those describing systems combining or comparing two or more of the latter. Despite the large variety of existing approaches, there is still room for further improvement. This work can contribute to the development of new techniques or the improvement of existing ones. © 2024 The Author(s)

2023

Lightweight multi-scale classification of chest radiographs via size-specific batch normalization

Authors
Pereira, SC; Rocha, J; Campilho, A; Sousa, P; Mendonca, AM;

Publication
COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE

Abstract
Background and Objective: Convolutional neural networks are widely used to detect radiological findings in chest radiographs. Standard architectures are optimized for images of relatively small size (for example, 224 x 224 pixels), which suffices for most application domains. However, in medical imaging, larger inputs are often necessary to analyze disease patterns. A single scan can display multiple types of radiological findings varying greatly in size, and most models do not explicitly account for this. For a given network, whose layers have fixed-size receptive fields, smaller input images result in coarser features, which better characterize larger objects in an image. In contrast, larger inputs result in finer-grained features, beneficial for the analysis of smaller objects. By compromising to a single resolution, existing frameworks fail to acknowledge that the ideal input size will not necessarily be the same for classifying every pathology of a scan. The goal of our work is to address this shortcoming by proposing a lightweight framework for multi-scale classification of chest radiographs, where finer and coarser features are combined in a parameter-efficient fashion. Methods: We experiment on CheXpert, a large chest X-ray database. A lightweight multi-resolution (224 x 224, 448 x 448 and 896 x 896 pixels) network is developed based on a Densenet-121 model where batch normalization layers are replaced with the proposed size-specific batch normalization. Each input size undergoes batch normalization with dedicated scale and shift parameters, while the remaining parameters are shared across sizes. Additional external validation of the proposed approach is performed on the VinDr-CXR data set. Results: The proposed approach (AUC 83.27 +/- 0.17, 7.1M parameters) outperforms standard single-scale models (AUC 81.76 +/- 0.18, 82.62 +/- 0.11 and 82.39 +/- 0.13 for input sizes 224 x 224, 448 x 448 and 896 x 896, respectively, 6.9M parameters). It also achieves a performance similar to an ensemble of one individual model per scale (AUC 83.27 +/- 0.11, 20.9M parameters), while relying on significantly fewer parameters. The model leverages features of different granularities, resulting in a more accurate classification of all findings, regardless of their size, highlighting the advantages of this approach. Conclusions: Different chest X-ray findings are better classified at different scales. Our study shows that multi-scale features can be obtained with nearly no additional parameters, boosting performance. (c) 2023 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)
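The size-specific batch normalization described above keeps one dedicated scale/shift pair per input resolution while sharing all other parameters. The class below is a minimal numpy sketch of that selection mechanism only; it is not the authors' implementation (which modifies a DenseNet-121 in a deep learning framework), and the class name and shapes are illustrative assumptions.

```python
import numpy as np

class SizeSpecificBatchNorm:
    """Batch normalization with one (gamma, beta) pair per input size.

    Statistics are computed per batch as usual; only the affine
    scale (gamma) and shift (beta) parameters are duplicated across
    the supported resolutions. Everything else in the network stays shared.
    """

    def __init__(self, channels, sizes, eps=1e-5):
        self.eps = eps
        self.gamma = {s: np.ones(channels) for s in sizes}
        self.beta = {s: np.zeros(channels) for s in sizes}

    def __call__(self, x):
        # x: (N, C, H, W); the spatial size selects the parameter set
        size = x.shape[-1]
        mean = x.mean(axis=(0, 2, 3), keepdims=True)
        var = x.var(axis=(0, 2, 3), keepdims=True)
        x_hat = (x - mean) / np.sqrt(var + self.eps)
        g = self.gamma[size].reshape(1, -1, 1, 1)
        b = self.beta[size].reshape(1, -1, 1, 1)
        return g * x_hat + b

bn = SizeSpecificBatchNorm(channels=3, sizes=(224, 448, 896))
out = bn(np.random.rand(2, 3, 224, 224))  # uses the 224-specific parameters
```

Because only the per-size gamma/beta vectors are added, the parameter overhead of supporting extra resolutions is tiny, which is the point of the approach.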

2023

Retinal layer and fluid segmentation in optical coherence tomography images using a hierarchical framework

Authors
Melo, T; Carneiro, A; Campilho, A; Mendonca, AM;

Publication
JOURNAL OF MEDICAL IMAGING

Abstract
Purpose: The development of accurate methods for retinal layer and fluid segmentation in optical coherence tomography images can help the ophthalmologists in the diagnosis and follow-up of retinal diseases. Recent works based on joint segmentation presented good results for the segmentation of most retinal layers, but the fluid segmentation results are still not satisfactory. We report a hierarchical framework that starts by distinguishing the retinal zone from the background, then separates the fluid-filled regions from the rest, and finally, discriminates the several retinal layers. Approach: Three fully convolutional networks were trained sequentially. The weighting scheme used for computing the loss function during training is derived from the outputs of the networks trained previously. To reinforce the relative position between retinal layers, the mutex Dice loss (included for optimizing the last network) was further modified so that errors between more distant layers are more penalized. The method's performance was evaluated using a public dataset. Results: The proposed hierarchical approach outperforms previous works in the segmentation of the inner segment ellipsoid layer and fluid (Dice coefficient = 0.95 and 0.82, respectively). The results achieved for the remaining layers are at a state-of-the-art level. Conclusions: The proposed framework led to significant improvements in fluid segmentation, without compromising the results in the retinal layers. Thus, its output can be used by ophthalmologists as a second opinion or as input for automatic extraction of relevant quantitative biomarkers.
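The Dice coefficient used to report the segmentation results above measures the overlap between a predicted mask and a reference mask. The snippet below is a generic sketch of the metric, not the paper's evaluation code.

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice overlap between two binary masks: 2*|A ∩ B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, target).sum() / denom

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
score = dice_coefficient(a, b)  # 2*2 / (3+3) ≈ 0.667
```

A Dice of 1.0 means the masks match exactly and 0.0 means no overlap, so the reported values of 0.95 (inner segment ellipsoid layer) and 0.82 (fluid) indicate high agreement with the reference annotations.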

2023

OCT Image Synthesis through Deep Generative Models

Authors
Melo, T; Cardoso, J; Carneiro, A; Campilho, A; Mendonça, AM;

Publication
2023 IEEE 36TH INTERNATIONAL SYMPOSIUM ON COMPUTER-BASED MEDICAL SYSTEMS, CBMS

Abstract
The development of accurate methods for OCT image analysis is highly dependent on the availability of large annotated datasets. As such datasets are usually expensive and hard to obtain, novel approaches based on deep generative models have been proposed for data augmentation. In this work, a flow-based network (SRFlow) and a generative adversarial network (ESRGAN) are used for synthesizing high-resolution OCT B-scans from low-resolution versions of real OCT images. The quality of the images generated by the two models is assessed using two standard fidelity-oriented metrics and a learned perceptual quality metric. The performance of two classification models trained on real and synthetic images is also evaluated. The obtained results show that the images generated by SRFlow preserve higher fidelity to the ground truth, while the outputs of ESRGAN present, on average, better perceptual quality. Independently of the architecture of the network chosen to classify the OCT B-scans, the model's performance always improves when images generated by SRFlow are included in the training set.

Supervised Theses

2022

Interpretable Machine Learning and its Application to Medical Decision Support Systems

Author
Tiago Filipe Sousa Gonçalves

Institution
UP-FEUP

2022

Técnicas de aprendizagem máquina aplicadas à covid-19 [Machine learning techniques applied to COVID-19]

Author
Milene Sofia Alves Fraga

Institution
UTAD

2022

Automatic Eyetracking-Assisted Chest Radiography Pathology Screening

Author
Rui Manuel Azevedo dos Santos

Institution
UP-FEUP

2022

Development of a neurophysiologic intraoperative monitoring system for spine surgical procedure

Author
Pedro Filipe Pereira da Fonseca

Institution
UP-FEUP

2022

Improving Data Visualization Reports for a Financial Services Company

Author
Rafael Martins Nogueira

Institution
UP-FEUP