
Publications by Jaime Cardoso

2023

Author Correction: Computer-aided diagnosis through medical image retrieval in radiology (Scientific Reports, (2022), 12, 1, (20732), 10.1038/s41598-022-25027-2)

Authors
Silva, W; Gonçalves, T; Härmä, K; Schröder, E; Obmann, VC; Barroso, MC; Poellinger, A; Reyes, M; Cardoso, JS;

Publication
Scientific Reports

Abstract
The original version of this Article contained an error in the Acknowledgements section. “This work was partially funded by the Project TAMI—Transparent Artificial Medical Intelligence (NORTE-01-0247-FEDER-045905) financed by ERDF—European Regional Fund through the North Portugal Regional Operational Program—NORTE 2020 and by the Portuguese Foundation for Science and Technology—FCT under the CMU—Portugal International Partnership, and also by the Portuguese Foundation for Science and Technology—FCT within PhD grants SFRH/BD/139468/2018 and 2020.06434.BD. The authors thank the Swiss National Science Foundation grant number 198388, as well as the Lindenhof foundation for their grant support.” now reads: “This work was supported by National Funds through the Portuguese Funding Agency, FCT–Foundation for Science and Technology Portugal, under Project LA/P/0063/2020, and also by the Portuguese Foundation for Science and Technology - FCT within PhD grants SFRH/BD/139468/2018 and 2020.06434.BD. The authors thank the Swiss National Science Foundation grant number 198388, as well as the Lindenhof foundation for their grant support.” The original Article has been corrected. © The Author(s) 2023.

2023

Deep Edge Detection Methods for the Automatic Calculation of the Breast Contour

Authors
Freitas, N; Silva, D; Mavioso, C; Cardoso, MJ; Cardoso, JS;

Publication
BIOENGINEERING-BASEL

Abstract
Breast cancer conservative treatment (BCCT) is a form of treatment commonly used for patients with early breast cancer. This procedure consists of removing the cancer and a small margin of surrounding tissue, while leaving the healthy tissue intact. In recent years, this procedure has become increasingly common due to identical survival rates and better cosmetic outcomes than other alternatives. Although significant research has been conducted on BCCT, there is no gold standard for evaluating the aesthetic results of the treatment. Recent works have proposed the automatic classification of cosmetic results based on breast features extracted from digital photographs. The computation of most of these features requires the representation of the breast contour, which becomes key to the aesthetic evaluation of BCCT. State-of-the-art methods use conventional image processing tools that automatically detect breast contours based on the shortest path applied to the Sobel filter result in a 2D digital photograph of the patient. However, because the Sobel filter is a general edge detector, it treats edges indistinguishably, i.e., it detects too many edges that are not relevant to breast contour detection and too few weak breast contours. In this paper, we propose an improvement to this method that replaces the Sobel filter with a novel neural network solution to improve breast contour detection based on the shortest path. The proposed solution learns effective representations for the edges between the breasts and the torso wall. We obtain state-of-the-art results on a dataset that was used for developing previous models. Furthermore, we tested these models on a new dataset that contains more variable photographs, and show that this new approach generalizes better, as the previously developed deep models do not perform as well when faced with a different dataset for testing.
The main contribution of this paper is to further improve the capabilities of models that automatically perform the objective classification of BCCT aesthetic results, by improving upon the current standard technique for detecting breast contours in digital photographs. To that end, the models introduced are simple to train and test on new datasets, which makes this approach easily reproducible.
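As a hedged illustration of the shortest-path idea this abstract builds on (the function and its details are our own sketch, not the paper's implementation), a contour can be traced through an edge-strength map by dynamic programming, where strong edges become cheap to traverse:

```python
import numpy as np

def shortest_path_contour(edge_map: np.ndarray) -> np.ndarray:
    """Trace a left-to-right contour through an edge-strength map.

    Treats (1 - edge_map) as a cost image and finds, by dynamic
    programming, the 8-connected path of minimal total cost from the
    leftmost to the rightmost column; strong edges have low cost, so
    the optimal path follows the contour.
    """
    cost = 1.0 - edge_map                       # strong edges -> low cost
    h, w = cost.shape
    acc = np.full((h, w), np.inf)               # accumulated path cost
    acc[:, 0] = cost[:, 0]
    back = np.zeros((h, w), dtype=int)          # backtracking pointers
    for x in range(1, w):
        for y in range(h):
            # candidate predecessors in the previous column (8-connectivity)
            ys = range(max(0, y - 1), min(h, y + 2))
            prev = min(ys, key=lambda yy: acc[yy, x - 1])
            acc[y, x] = cost[y, x] + acc[prev, x - 1]
            back[y, x] = prev
    # backtrack from the cheapest endpoint in the last column
    y = int(np.argmin(acc[:, -1]))
    path = [y]
    for x in range(w - 1, 0, -1):
        y = back[y, x]
        path.append(y)
    return np.array(path[::-1])                 # one row index per column
```

In the papers' pipeline the `edge_map` would come from the Sobel filter (earlier work) or from the learned neural edge detector (this work); here it is simply any array in [0, 1].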

2022

Computer-aided diagnosis through medical image retrieval in radiology

Authors
Silva, W; Gonçalves, T; Härmä, K; Schröder, E; Obmann, VC; Barroso, MC; Poellinger, A; Reyes, M; Cardoso, JS;

Publication
SCIENTIFIC REPORTS

Abstract
Currently, radiologists face an excessive workload, which leads to high levels of fatigue, and consequently, to undesired diagnosis mistakes. Decision support systems can be used to prioritize and to help radiologists make quicker decisions. In this sense, medical content-based image retrieval systems can be of extreme utility by providing well-curated similar examples. Nonetheless, most medical content-based image retrieval systems work by finding the most similar image, which is not equivalent to finding the most similar image in terms of disease and its severity. Here, we propose an interpretability-driven and an attention-driven medical image retrieval system. We conducted experiments in a large and publicly available dataset of chest radiographs with structured labels derived from free-text radiology reports (MIMIC-CXR-JPG). We evaluated the methods on two common conditions: pleural effusion and (potential) pneumonia. As ground truth to perform the evaluation, query/test and catalogue images were classified and ordered by an experienced board-certified radiologist. For a profound and complete evaluation, additional radiologists also provided their rankings, which allowed us to infer inter-rater variability and yield qualitative performance levels. Based on our ground-truth ranking, we also quantitatively evaluated the proposed approaches by computing the normalized Discounted Cumulative Gain (nDCG). We found that the interpretability-guided approach outperforms the other state-of-the-art approaches and shows the best agreement with the most experienced radiologist. Furthermore, its performance lies within the observed inter-rater variability.
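The nDCG metric used in this evaluation is a standard ranking measure; a minimal sketch (helper names are ours) divides the discounted gain of the system's ranking by that of the ideal, relevance-sorted ranking:

```python
import math

def dcg(relevances):
    """Discounted Cumulative Gain: relevance discounted by log2 of rank."""
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances))

def ndcg(retrieved, ideal):
    """nDCG: DCG of the system ranking over the DCG of the ideal ranking."""
    return dcg(retrieved) / dcg(ideal)
```

A perfect ranking yields exactly 1.0, and any other ordering of the same relevance scores yields a value strictly between 0 and 1, which makes the metric comparable across queries with different numbers of relevant items.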

2023

A CAD system for automatic dysplasia grading on H&E cervical whole-slide images

Authors
Oliveira, SP; Montezuma, D; Moreira, A; Oliveira, D; Neto, PC; Monteiro, A; Monteiro, J; Ribeiro, L; Goncalves, S; Pinto, IM; Cardoso, JS;

Publication
SCIENTIFIC REPORTS

Abstract
Cervical cancer is the fourth most common female cancer worldwide and the fourth leading cause of cancer-related death in women. Nonetheless, it is also among the most successfully preventable and treatable types of cancer, provided it is early identified and properly managed. As such, the detection of pre-cancerous lesions is crucial. These lesions are detected in the squamous epithelium of the uterine cervix and are graded as low- or high-grade intraepithelial squamous lesions, known as LSIL and HSIL, respectively. Due to their complex nature, this classification can become very subjective. Therefore, the development of machine learning models, particularly directly on whole-slide images (WSI), can assist pathologists in this task. In this work, we propose a weakly-supervised methodology for grading cervical dysplasia, using different levels of training supervision, in an effort to gather a bigger dataset without the need of having all samples fully annotated. The framework comprises an epithelium segmentation step followed by a dysplasia classifier (non-neoplastic, LSIL, HSIL), making the slide assessment completely automatic, without the need for manual identification of epithelial areas. The proposed classification approach achieved a balanced accuracy of 71.07% and sensitivity of 72.18%, at the slide-level testing on 600 independent samples, which are publicly available upon reasonable request.
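The balanced accuracy reported above is the mean per-class recall, which prevents the majority class from dominating the score in imbalanced grading problems such as non-neoplastic vs. LSIL vs. HSIL. A minimal sketch (our own helper, not the paper's code):

```python
from collections import defaultdict

def balanced_accuracy(y_true, y_pred):
    """Mean per-class recall: each class contributes equally,
    regardless of how many samples it has."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        correct[t] += int(t == p)
    return sum(correct[c] / total[c] for c in total) / len(total)
```

For example, with two LSIL slides (one misclassified), one HSIL and one non-neoplastic slide (both correct), the per-class recalls are 0.5, 1.0 and 1.0, giving a balanced accuracy of about 0.83 even though 3 of 4 predictions were correct.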

2023

A simple machine learning-based framework for faster multi-scale simulations of path-independent materials at large strains

Authors
Carneiro, AMC; Alves, AFC; Coelho, RPC; Cardoso, JS; Pires, FMA;

Publication
FINITE ELEMENTS IN ANALYSIS AND DESIGN

Abstract
Coupled multi-scale finite element analyses have gained traction over the last years due to the increasing available computational resources. Nevertheless, in the pursuit of accurate results within a reasonable time frame, replacing these high-fidelity micromechanical simulations with reduced-order data-driven models has been explored recently by the modelling community. In this work, two classes of machine learning models are trained for a porous hyperelastic microstructure to predict (i) whether the microscopic equilibrium problem is likely to fail and (ii) the stress-strain response. The former may be used to identify critical macroscopic points where one may fall back to the high-fidelity analysis and possibly apply convergence bowl-widening techniques. For the latter, both a linear regression with polynomial features and artificial Neural Networks have been used, and the required stress-strain derivatives for solving the equilibrium problem have been derived analytically. A weight regularisation is introduced to stabilise the tangent operator and several strategies are discussed for imposing null stresses in undeformed configurations for both regression models. The regression techniques, here analysed exclusively in the context of porous hyperelastic materials, evidence very promising prospects to accelerate multi-scale analyses of solids under large deformation.
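As a hedged illustration of the surrogate idea described above (a one-dimensional toy, not the paper's multi-axial formulation), a polynomial regression of the stress-strain response can provide the analytic derivative needed for the tangent operator:

```python
import numpy as np

def fit_poly_surrogate(strain, stress, degree=3):
    """Least-squares polynomial fit of stress(strain).

    Returns the fitted predictor and its analytic derivative
    d(stress)/d(strain), the 1D analogue of the tangent operator
    required when solving the macroscopic equilibrium problem.
    """
    coeffs = np.polyfit(strain, stress, degree)
    predict = np.poly1d(coeffs)
    tangent = predict.deriv()   # exact derivative of the fitted polynomial
    return predict, tangent
```

Because the derivative of the fitted model is available in closed form, no finite-difference perturbations of the microscopic problem are needed at evaluation time, which is where the speed-up over the high-fidelity analysis comes from.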

2024

Classification of Pulmonary Nodules in 2-[18F]FDG PET/CT Images with a 3D Convolutional Neural Network

Authors
Alves, VM; Cardoso, JD; Gama, J;

Publication
NUCLEAR MEDICINE AND MOLECULAR IMAGING

Abstract
Purpose: 2-[F-18]FDG PET/CT plays an important role in the management of pulmonary nodules. Convolutional neural networks (CNNs) automatically learn features from images and have the potential to improve the discrimination between malignant and benign pulmonary nodules. The purpose of this study was to develop and validate a CNN model for classification of pulmonary nodules from 2-[F-18]FDG PET images.
Methods: One hundred thirteen participants were retrospectively selected, with one nodule per participant. The 2-[F-18]FDG PET images were preprocessed and annotated with the reference standard. The deep learning experiment entailed random data splitting in five sets. A test set was held out for evaluation of the final model. Four-fold cross-validation was performed from the remaining sets for training and evaluating a set of candidate models and for selecting the final model. Models of three types of 3D CNN architectures were trained from random weight initialization (Stacked 3D CNN, VGG-like and Inception-v2-like models), both in original and augmented datasets. Transfer learning, from ImageNet with ResNet-50, was also used.
Results: The final model (Stacked 3D CNN model) obtained an area under the ROC curve of 0.8385 (95% CI: 0.6455-1.0000) in the test set. The model had a sensitivity of 80.00%, a specificity of 69.23% and an accuracy of 73.91%, in the test set, for an optimised decision threshold that assigns a higher cost to false negatives.
Conclusion: A 3D CNN model was effective at distinguishing benign from malignant pulmonary nodules in 2-[F-18]FDG PET images.
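The cost-sensitive threshold mentioned in the Results can be sketched as follows (function name and costs are our own illustration, assuming malignant = 1); false negatives, i.e. missed malignancies, are weighted more heavily than false positives:

```python
def best_threshold(scores, labels, fn_cost=2.0, fp_cost=1.0):
    """Pick the decision threshold minimising expected misclassification
    cost, with false negatives (missed malignancies, label 1) costing
    more than false positives."""
    candidates = sorted(set(scores))

    def cost(t):
        # positives scored below t are missed; negatives at/above t are flagged
        fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < t)
        fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= t)
        return fn_cost * fn + fp_cost * fp

    return min(candidates, key=cost)
```

Raising `fn_cost` relative to `fp_cost` pushes the chosen threshold lower, trading specificity for sensitivity, which matches the sensitivity-leaning operating point reported in the abstract.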
