Publications

Publications by CTM

2024

Reproducing Asymmetries Caused by Breast Cancer Treatment in Pre-operative Breast Images

Authors
Freitas, N; Montenegro, H; Cardoso, MJ; Cardoso, JS;

Publication
IEEE International Symposium on Biomedical Imaging, ISBI 2024

Abstract
Breast cancer locoregional treatment causes alterations to the physical aspect of the breast, often negatively impacting the self-esteem of patients unaware of the possible aesthetic outcomes of those treatments. To improve patients' self-esteem and enable a more informed choice when multiple treatment options are available, the ability to predict how the patient might look after surgery would be invaluable. However, no work has yet been proposed to predict the aesthetic outcomes of breast cancer treatment. As a first step, we compare traditional computer vision and deep learning approaches to reproducing the asymmetries of post-operative patients on pre-operative breast images. The results suggest that the traditional approach is better at altering the contour of the breast, whereas the deep learning approach succeeds in realistically altering the position and direction of the nipple.

2024

On the Suitability of B-cos Networks for the Medical Domain

Authors
Rio-Torto, I; Gonçalves, T; Cardoso, JS; Teixeira, LF;

Publication
IEEE International Symposium on Biomedical Imaging, ISBI 2024

Abstract
In fields that rely on high-stakes decisions, such as medicine, interpretability plays a key role in promoting trust and facilitating the adoption of deep learning models by the clinical community. In medical image analysis, gradient-based class activation maps are the most widely used explanation methods, and the field lacks a more in-depth investigation into inherently interpretable models that integrate knowledge ensuring the model learns the correct rules. B-cos networks, a recent approach that increases the interpretability of deep neural networks by inducing weight-input alignment during training, showed promising results on natural image classification. In this work, we study the suitability of B-cos networks for the medical domain by testing them on different use cases (skin lesions, diabetic retinopathy, cervical cytology, and chest X-rays) and conducting a thorough evaluation with several explanation quality assessment metrics. We find that, as in natural image classification, B-cos explanations yield more localised maps, but it is not clear that they outperform other methods' explanations when more explanation properties are considered.
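The B-cos transform referenced in the abstract above replaces a layer's linear response with one that is scaled down when input and weight are misaligned. Below is a minimal sketch of such a unit in PyTorch, assuming the formulation of the original B-cos paper (Böhle et al., CVPR 2022); the class name BcosLinear and the default B are illustrative, not taken from the authors' code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BcosLinear(nn.Module):
    """Illustrative B-cos unit: out = |cos(x, w)|^(B-1) * (w_hat . x)."""
    def __init__(self, in_features: int, out_features: int, B: float = 2.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features))
        self.B = B

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w_hat = F.normalize(self.weight, dim=1)   # unit-norm weight vectors
        lin = F.linear(x, w_hat)                  # w_hat . x = ||x|| * cos(x, w)
        x_norm = x.norm(dim=-1, keepdim=True).clamp(min=1e-8)
        cos = lin / x_norm                        # cosine between x and each w
        # Downscale misaligned responses; this pressures the weights to align
        # with class-relevant input patterns during training.
        return lin * cos.abs().pow(self.B - 1)

For B = 1 this reduces to a linear layer with unit-norm weights; larger B enforces stronger alignment, which is the property behind the more localised explanation maps the abstract reports.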

2024

Weather and Meteorological Optical Range Classification for Autonomous Driving

Authors
Pereira, C; Cruz, RPM; Fernandes, JND; Pinto, JR; Cardoso, JS;

Publication
IEEE Transactions on Intelligent Vehicles

Abstract

2024

Learning Ordinality in Semantic Segmentation

Authors
Cristino, R; Cruz, RPM; Cardoso, JS;

Publication
CoRR

Abstract

2024

Deep Learning-based Prediction of Breast Cancer Tumor and Immune Phenotypes from Histopathology

Authors
Gonçalves, T; Arias, DP; Willett, J; Hoebel, KV; Cleveland, MC; Ahmed, SR; Gerstner, ER; Cramer, JK; Cardoso, JS; Bridge, CP; Kim, AE;

Publication
CoRR

Abstract

2024

Interpretable AI for medical image analysis: methods, evaluation, and clinical considerations

Authors
Gonçalves, T; Hedström, A; Pahud de Mortanges, A; Li, X; Müller, H; Cardoso, S; Reyes, M;

Publication
Trustworthy AI in Medical Imaging

Abstract
In the healthcare context, artificial intelligence (AI) has the potential to power decision support systems and help health professionals in their clinical decisions. However, given its complexity, AI is usually seen as a black box that receives data and outputs a prediction. This behavior may jeopardize the adoption of the technology by the healthcare community, which values explanations that justify a clinical decision. In addition, developers must have a strategy to assess and audit these systems to ensure their reproducibility and quality in production. The field of interpretable artificial intelligence emerged to study how these algorithms work and to clarify their behavior. This chapter reviews several interpretability methods for AI algorithms in medical imaging, discussing their functioning, limitations, benefits, applications, and evaluation strategies. The chapter concludes with considerations that might help bring these methods closer to the daily routine of healthcare professionals.
