Details


  • Name

    Aurélio Campilho
  • Position

    Affiliated Researcher
  • Since

    01 January 2014
  • Nationality

    Portugal
  • Contacts

    +351222094106
    aurelio.campilho@inesctec.pt
Publications

2024

STERN: Attention-driven Spatial Transformer Network for abnormality detection in chest X-ray images

Authors
Rocha, J; Pereira, SC; Pedrosa, J; Campilho, A; Mendonça, AM;

Publication
ARTIFICIAL INTELLIGENCE IN MEDICINE

Abstract
Chest X-ray scans are frequently requested to detect the presence of abnormalities, due to their low-cost and non-invasive nature. The interpretation of these images can be automated to prioritize more urgent exams through deep learning models, but the presence of image artifacts, e.g. lettering, often generates a harmful bias in the classifiers and an increase of false positive results. Consequently, healthcare would benefit from a system that selects the thoracic region of interest prior to deciding whether an image is possibly pathologic. The current work tackles this binary classification exercise, in which an image is either normal or abnormal, using an attention-driven and spatially unsupervised Spatial Transformer Network (STERN) that takes advantage of a novel domain-specific loss to better frame the region of interest. Unlike the state of the art, in which this type of network is usually employed for image alignment, this work proposes a spatial transformer module that is used specifically for attention, as an alternative to the standard object detection models that typically precede the classifier to crop out the region of interest. In sum, the proposed end-to-end architecture dynamically scales and aligns the input images to maximize the classifier's performance, by selecting the thorax with translation and non-isotropic scaling transformations, and thus eliminating artifacts. Additionally, this paper provides an extensive and objective analysis of the selected regions of interest, by proposing a set of mathematical evaluation metrics. The results indicate that STERN achieves similar results to using YOLO-cropped images, with reduced computational cost and without the need for localization labels. More specifically, the system is able to distinguish abnormal frontal images from the CheXpert dataset with a mean AUC of 85.67% (a 2.55% improvement), vs.
the 0.98% improvement achieved by the YOLO-based counterpart in comparison to a standard baseline classifier. At the same time, the STERN approach requires less than 2/3 of the training parameters, while increasing the inference time per batch by less than 2 ms. Code available via GitHub.
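The restricted transform family described in this abstract (translation plus non-isotropic scaling, no rotation or shear) can be sketched with a plain NumPy sampler. This is an illustrative reconstruction, not the authors' code; all function and parameter names are ours:

```python
import numpy as np

def constrained_theta(sx, sy, tx, ty):
    # Affine matrix restricted to non-isotropic scaling (sx, sy) and
    # translation (tx, ty): the off-diagonal terms are fixed at zero,
    # so no rotation or shear is possible.
    return np.array([[sx, 0.0, tx],
                     [0.0, sy, ty]])

def affine_sample(image, theta):
    """Resample a (H, W) image on a normalized [-1, 1] grid warped by
    `theta`, with bilinear interpolation and zero padding outside."""
    H, W = image.shape
    ys, xs = np.meshgrid(np.linspace(-1, 1, H), np.linspace(-1, 1, W),
                         indexing="ij")
    ones = np.ones_like(xs)
    # Source coordinates for every output pixel: [x, y, 1] @ theta^T.
    coords = np.stack([xs, ys, ones], axis=-1) @ theta.T
    # Map normalized coordinates back to pixel indices.
    px = (coords[..., 0] + 1) * (W - 1) / 2
    py = (coords[..., 1] + 1) * (H - 1) / 2
    x0, y0 = np.floor(px).astype(int), np.floor(py).astype(int)
    out = np.zeros_like(image, dtype=float)
    for dy in (0, 1):
        for dx in (0, 1):
            xi, yi = x0 + dx, y0 + dy
            # Bilinear weight of this corner; out-of-bounds corners
            # contribute nothing (zero padding).
            w = (1 - np.abs(px - xi)) * (1 - np.abs(py - yi))
            valid = (xi >= 0) & (xi < W) & (yi >= 0) & (yi < H)
            out[valid] += w[valid] * image[yi[valid], xi[valid]]
    return out
```

In an end-to-end model of this kind, a small localization branch would predict the four scalars and the classifier would consume the resampled crop; here the identity transform reproduces the input, while sx = sy = 0.5 zooms into the central region.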

2024

Automated image label extraction from radiology reports - A review

Authors
Pereira, SC; Mendonca, AM; Campilho, A; Sousa, P; Lopes, CT;

Publication
ARTIFICIAL INTELLIGENCE IN MEDICINE

Abstract
Machine Learning models need large amounts of annotated data for training. In the field of medical imaging, labeled data is especially difficult to obtain because the annotations have to be performed by qualified physicians. Natural Language Processing (NLP) tools can be applied to radiology reports to extract labels for medical images automatically. Compared to manual labeling, this approach requires smaller annotation efforts and can therefore facilitate the creation of labeled medical image data sets. In this article, we summarize the literature on this topic spanning from 2013 to 2023, starting with a meta-analysis of the included articles, followed by a qualitative and quantitative systematization of the results. Overall, we found four types of studies on the extraction of labels from radiology reports: those describing systems based on symbolic NLP, statistical NLP, neural NLP, and those describing systems combining or comparing two or more of the latter. Despite the large variety of existing approaches, there is still room for further improvement. This work can contribute to the development of new techniques or the improvement of existing ones.

2024

Towards automatic forecasting of lung nodule diameter with tabular data and CT imaging

Authors
Ferreira, CA; Venkadesh, KV; Jacobs, C; Coimbra, M; Campilho, A;

Publication
Biomed. Signal Process. Control.

Abstract
Objective: This study aims to forecast the progression of lung cancer by estimating the future diameter of lung nodules. Methods: This approach uses tabular data, axial images from tomography scans, or both data types as input, employing a ResNet50 model for image feature extraction and direct analysis of patient information for tabular data. The data are processed through a neural network before prediction. In the training phase, class weights are assigned based on the rarity of different types of nodules within the dataset, in alignment with nodule management guidelines. Results: Tabular data alone yielded the most accurate results, with a mean absolute deviation of 0.99 mm. For malignant nodules, the best performance, marked by a deviation of 2.82 mm, was achieved using tabular data and applying Lung-RADS class weights during training. The tabular data results highlight the influence of using the initial nodule size as an input feature. These results surpass the literature reference of a 348-day volume doubling time for malignant nodules. Conclusion: The developed predictive model is optimized for integration into a clinical workflow after detecting, segmenting, and classifying nodules. It provides accurate growth forecasts, establishing a more objective basis for determining follow-up intervals. Significance: Given lung cancer's low survival rates, the capacity for precise nodule growth prediction represents a significant breakthrough. This methodology promises to revolutionize patient care and management, enhancing the chances for early risk assessment and effective intervention. © 2024 The Author(s)
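The rarity-based class weighting mentioned in the training phase can be sketched as plain inverse-frequency weighting over the nodule categories. The function name and the normalization by the number of classes are our own assumptions, not the paper's exact scheme:

```python
import numpy as np

def rarity_class_weights(labels):
    """Inverse-frequency class weights: rarer nodule categories receive
    larger weights, so the loss is not dominated by the common classes.
    Illustrative sketch; the paper's exact weighting may differ."""
    classes, counts = np.unique(labels, return_counts=True)
    # total / (n_classes * count) keeps the average weight near 1.
    weights = counts.sum() / (len(classes) * counts)
    return dict(zip(classes.tolist(), weights.tolist()))
```

With a skewed label distribution (say eight Lung-RADS 2 nodules and two Lung-RADS 4B nodules), the rarer class receives the larger weight.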

2024

A Comparative Study of Feature-Based and End-to-End Approaches for Lung Nodule Classification in CT Volumes to Lung-RADS Follow-up Recommendation

Authors
Ferreira, A; Ramos, I; Coimbra, M; Campilho, A;

Publication
2024 IEEE 22nd Mediterranean Electrotechnical Conference, MELECON 2024

Abstract
Lung cancer represents a significant health concern necessitating diligent monitoring of individuals at risk. While the detection of pulmonary nodules warrants clinical attention, not all cases require immediate surgical intervention, often calling for a strategic approach to follow-up decisions. The Lung-RADS guideline serves as a cornerstone in clinical practice, furnishing structured recommendations based on various nodule characteristics, including size, calcification, and texture, outlined within established reference tables. However, the reliance on labor-intensive manual measurements underscores the potential advantages of integrating decision support systems into this process. Herein, we propose a feature-based methodology aimed at enhancing clinical decision-making by automating the assessment of nodules in computed tomography scans. Leveraging algorithms tailored for nodule calcification, texture analysis, and segmentation, our approach facilitates the automated classification of follow-up recommendations aligned with Lung-RADS criteria. Comparison with a previously reported end-to-end image-based classification method revealed competitive performance, with the feature-based approach achieving an accuracy of 0.701 ± 0.026, while the end-to-end method attained 0.727 ± 0.020. The inherent explainability of the feature-based approach offers distinct advantages, allowing clinicians to scrutinize and modify individual features to address disagreements or rectify inaccuracies, thereby tailoring follow-up recommendations to patient profiles. © 2024 IEEE.

2023

Lightweight multi-scale classification of chest radiographs via size-specific batch normalization

Authors
Pereira, SC; Rocha, J; Campilho, A; Sousa, P; Mendonca, AM;

Publication
COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE

Abstract
Background and Objective: Convolutional neural networks are widely used to detect radiological findings in chest radiographs. Standard architectures are optimized for images of relatively small size (for example, 224 x 224 pixels), which suffices for most application domains. However, in medical imaging, larger inputs are often necessary to analyze disease patterns. A single scan can display multiple types of radiological findings varying greatly in size, and most models do not explicitly account for this. For a given network, whose layers have fixed-size receptive fields, smaller input images result in coarser features, which better characterize larger objects in an image. In contrast, larger inputs result in finer grained features, beneficial for the analysis of smaller objects. By compromising to a single resolution, existing frameworks fail to acknowledge that the ideal input size will not necessarily be the same for classifying every pathology of a scan. The goal of our work is to address this shortcoming by proposing a lightweight framework for multi-scale classification of chest radiographs, where finer and coarser features are combined in a parameter-efficient fashion. Methods: We experiment on CheXpert, a large chest X-ray database. A lightweight multi-resolution (224 x 224, 448 x 448 and 896 x 896 pixels) network is developed based on a Densenet-121 model where batch normalization layers are replaced with the proposed size-specific batch normalization. Each input size undergoes batch normalization with dedicated scale and shift parameters, while the remaining parameters are shared across sizes. Additional external validation of the proposed approach is performed on the VinDr-CXR data set. Results: The proposed approach (AUC 83.27 ± 0.17, 7.1M parameters) outperforms standard single-scale models (AUC 81.76 ± 0.18, 82.62 ± 0.11 and 82.39 ± 0.13 for input sizes 224 x 224, 448 x 448 and 896 x 896, respectively, 6.9M parameters). It also achieves a performance similar to an ensemble of one individual model per scale (AUC 83.27 ± 0.11, 20.9M parameters), while relying on significantly fewer parameters. The model leverages features of different granularities, resulting in a more accurate classification of all findings, regardless of their size, highlighting the advantages of this approach. Conclusions: Different chest X-ray findings are better classified at different scales. Our study shows that multi-scale features can be obtained with nearly no additional parameters, boosting performance. (c) 2023 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)
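The core idea of this abstract (per-resolution scale and shift parameters on top of shared normalization) can be sketched with a minimal NumPy batch-norm layer. This is an illustrative reconstruction under our own naming, not the authors' implementation:

```python
import numpy as np

class SizeSpecificBatchNorm:
    """Batch normalization whose learnable affine parameters (gamma,
    beta) are looked up by input resolution, while everything else is
    shared across sizes. Illustrative sketch; training-time running
    statistics are omitted for brevity."""

    def __init__(self, channels, sizes, eps=1e-5):
        self.eps = eps
        # One dedicated (gamma, beta) pair per supported input size.
        self.params = {s: {"gamma": np.ones(channels),
                           "beta": np.zeros(channels)}
                       for s in sizes}

    def __call__(self, x):  # x: (N, C, H, W)
        p = self.params[x.shape[-1]]  # pick params for this resolution
        mean = x.mean(axis=(0, 2, 3), keepdims=True)
        var = x.var(axis=(0, 2, 3), keepdims=True)
        xhat = (x - mean) / np.sqrt(var + self.eps)
        return (p["gamma"][None, :, None, None] * xhat
                + p["beta"][None, :, None, None])
```

The rest of the network (convolutions, dense blocks) would be shared verbatim across the 224/448/896 inputs; only these per-size affine parameters differ, which is why the overhead over a single-scale model is nearly zero.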

Supervised
theses

2022

Computer-aided diagnosis and follow-up of prevalent eye diseases using OCT/OCTA images

Author
Tânia Filipa Fernandes de Melo

Institution
UP-FEUP

2022

Artificial Intelligence-based Decision Support Models for COVID-19 Detection

Author
Sofia Perestrelo de Vasconcelos Cardoso Pereira

Institution
UP-FEUP

2022

Collaborative Tools for Lung Cancer Diagnosis in Computed Tomography

Author
Carlos Alexandre Nunes Ferreira

Institution
UP-FEUP

2022

Explainable Artificial Medical Intelligence for Automated Thoracic Pathology Screening

Author
Joana Maria Neves da Rocha

Institution
UP-FEUP

2022

Content-Based Image Retrieval as a Computer-Aided Diagnosis Tool for Radiologists

Author
José Ricardo Ferreira de Castro Ramos

Institution
UP-FEUP