Publications

Publications by CTM

2024

Abstract PO3-19-11: CINDERELLA Clinical Trial (NCT05196269): using artificial intelligence-driven healthcare to enhance breast cancer locoregional treatment decisions

Authors
Eduard-Alexandru Bonci; Orit Kaidar-Person; Marília Antunes; Oriana Ciani; Helena Cruz; Rosa Di Micco; Oreste Davide Gentilini; Nicole Rotmensz; Pedro Gouveia; Jörg Heil; Pawel Kabata; Nuno Freitas; Tiago Gonçalves; Miguel Romariz; Helena Montenegro; Hélder P. Oliveira; Jaime S. Cardoso; Henrique Martins; Daniela Lopes; Marta Martinho; Ludovica Borsoi; Elisabetta Listorti; Carlos Mavioso; Martin Mika; André Pfob; Timo Schinköthe; Giovani Silva; Maria-Joao Cardoso;

Publication
Cancer Research

Abstract
Background. Breast cancer treatment has improved overall survival rates, with different locoregional approaches offering patients similar locoregional control but variable aesthetic outcomes that may lead to disappointment and poor quality of life (QoL). There are no standardized methods for informing patients about the different therapies prior to intervention, nor validated tools for evaluating aesthetics and patients' expectations. The CINDERELLA Project builds on years of research and development of new healthcare technologies by various partners and aims to provide an artificial intelligence (AI) tool to aid shared decision-making by showing breast cancer patients the predicted aesthetic outcomes of their locoregional treatment. The clinical trial will evaluate the use of this tool within an AI cloud-based platform (CINDERELLA App) versus a standard approach. We anticipate that the CINDERELLA App will lead to improved satisfaction, psychosocial well-being and health-related QoL while maintaining the quality of care and providing environmental and economic benefits.

Trial design. CINDERELLA is an international, multicentric, interventional, randomized, controlled, open-label clinical trial. Using the CINDERELLA App, the AI and Digital Health arm will provide patients with complete information about the proposed types of locoregional treatment and photographs of similar patients previously treated with the same techniques. The control arm will follow the standard approach of each clinical site. Randomization will be conducted online using the digital health platform CANKADO, ensuring a balanced distribution of participants between the two groups. CANKADO is the underlying platform through which physicians control the patients' app content and conduct all data collection. Privacy, data protection and ethical principles in AI usage were taken into account.

Eligibility criteria. Patients diagnosed with primary breast cancer without evidence of systemic disease. All patients must sign an informed consent and be able to use a web-based app autonomously or with home-based support.

Specific aims. Primary objective: to assess the level of agreement between patients' expectations regarding the aesthetic outcome, recorded before locoregional treatment, and the outcome obtained 12 months after treatment. The trial will also evaluate the level of agreement on the aesthetic outcome between the AI evaluation tool and patient self-evaluation. Secondary objectives: health-related QoL (EQ-5D-5L and BREAST-Q ICHOM questionnaires) and resource consumption (e.g., time spent in the hospital, out-of-pocket expenses). The questionnaires and photographs will be administered prior to any treatment, at wound healing, and at 6 and 12 months following the completion of locoregional therapy.

Statistical methods. The Wilcoxon signed-rank test will be used to assess the intervention's impact on the level of agreement between expectations and obtained results. Weighted Cohen's kappa will be calculated to measure the improvement in classifying aesthetic results with the intervention. Statistical tests and/or bootstrap techniques will compare results between arms. A similarity measure will be calculated between self-evaluation and the outcome obtained with the AI tool for each participant, and a beta regression model will be used to analyze the intervention's effect. Secondary objectives will be evaluated by scoring the questionnaires according to the provided guidelines.

Target accrual. The clinical trial, led by the Champalimaud Clinical Centre, will enroll a minimum of 515 patients in each arm between July 2023 and January 2025. Recruitment is currently open at five study sites in Germany, Israel, Italy, Poland and Portugal, and the trial remains open to further international study sites.

Funding. European Union grant HORIZON-HLTH-2021-DISEASE-04-04, Agreement No. 101057389.

Citation Format: Eduard-Alexandru Bonci, Orit Kaidar-Person, Marília Antunes, Oriana Ciani, Helena Cruz, Rosa Di Micco, Oreste Davide Gentilini, Nicole Rotmensz, Pedro Gouveia, Jörg Heil, Pawel Kabata, Nuno Freitas, Tiago Gonçalves, Miguel Romariz, Helena Montenegro, Hélder P. Oliveira, Jaime S. Cardoso, Henrique Martins, Daniela Lopes, Marta Martinho, Ludovica Borsoi, Elisabetta Listorti, Carlos Mavioso, Martin Mika, André Pfob, Timo Schinköthe, Giovani Silva, Maria-Joao Cardoso. CINDERELLA Clinical Trial (NCT05196269): using artificial intelligence-driven healthcare to enhance breast cancer locoregional treatment decisions [abstract]. In: Proceedings of the 2023 San Antonio Breast Cancer Symposium; 2023 Dec 5-9; San Antonio, TX. Philadelphia (PA): AACR; Cancer Res 2024;84(9 Suppl):Abstract nr PO3-19-11.
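As a rough illustration of the agreement analyses described in the statistical-methods paragraph above, the sketch below shows how a Wilcoxon signed-rank test and a weighted Cohen's kappa could be computed in Python with SciPy and scikit-learn. The simulated data, the variable names and the 4-point aesthetic scale are assumptions for illustration only; this is not the trial's analysis code.

```python
# Hedged sketch: the agreement statistics named above (Wilcoxon signed-rank,
# weighted Cohen's kappa), computed on simulated ratings. The 4-point scale
# and all data below are illustrative assumptions, not trial data.
import numpy as np
from scipy.stats import wilcoxon
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

# Hypothetical per-patient ratings on a 1-4 aesthetic scale (1 = Poor ... 4 = Excellent)
expected_before = rng.integers(1, 5, size=100)   # patient expectation before treatment
self_rated_after = rng.integers(1, 5, size=100)  # patient self-evaluation at 12 months
ai_rated_after = rng.integers(1, 5, size=100)    # AI tool evaluation at 12 months

# Paired Wilcoxon signed-rank test: expectation vs. obtained result
stat, p_value = wilcoxon(expected_before, self_rated_after)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p_value:.3f}")

# Weighted Cohen's kappa: agreement between the AI evaluation and self-evaluation
kappa = cohen_kappa_score(ai_rated_after, self_rated_after, weights="quadratic")
print(f"Quadratic-weighted kappa = {kappa:.2f}")
```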

2024

Predicting Aesthetic Outcomes in Breast Cancer Surgery: A Multimodal Retrieval Approach

Authors
Zolfagharnasab, MH; Freitas, N; Gonçalves, T; Bonci, E; Mavioso, C; Cardoso, MJ; Oliveira, HP; Cardoso, JS;

Publication
Artificial Intelligence and Imaging for Diagnostic and Treatment Challenges in Breast Care - First Deep Breast Workshop, Deep-Breath 2024, Held in Conjunction with MICCAI 2024, Marrakesh, Morocco, October 10, 2024, Proceedings

Abstract
Breast cancer treatments often affect patients’ body image, making aesthetic outcome predictions vital. This study introduces a Deep Learning (DL) multimodal retrieval pipeline using a dataset of 2,193 instances combining clinical attributes and RGB images of patients’ upper torsos. We evaluate four retrieval techniques: Weighted Euclidean Distance (WED) with various configurations and shallow Artificial Neural Network (ANN) for tabular data, pre-trained and fine-tuned Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs), and a multimodal approach combining both data types. The dataset, categorised into Excellent/Good and Fair/Poor outcomes, is organised into over 20K triplets for training and testing. Results show fine-tuned multimodal ViTs notably enhance performance, achieving up to 73.85% accuracy and 80.62% Adjusted Discounted Cumulative Gain (ADCG). This framework not only aids in managing patient expectations by retrieving the most relevant post-surgical images but also promises broad applications in medical image analysis and retrieval. The main contributions of this paper are the development of a multimodal retrieval system for breast cancer patients based on post-surgery aesthetic outcome and the evaluation of different models on a new dataset annotated by clinicians for image retrieval. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.
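To make the weighted-Euclidean-distance (WED) retrieval idea over tabular clinical attributes concrete, here is a minimal sketch; the attribute count, the weights and the random data are invented for illustration and do not reflect the paper's dataset or configurations.

```python
# Hedged sketch of weighted Euclidean distance (WED) retrieval over tabular
# clinical attributes. Feature dimensionality, weights and data are invented.
import numpy as np

def weighted_euclidean(query, gallery, weights):
    """Distance from one query vector to every row of a gallery matrix."""
    diff = gallery - query                       # (n_gallery, n_features)
    return np.sqrt(((diff ** 2) * weights).sum(axis=1))

gallery = np.random.rand(500, 4)                 # previously treated patients (normalised attributes)
query = np.random.rand(4)                        # new patient
weights = np.array([1.0, 0.5, 2.0, 1.0])         # assumed per-attribute importance

dists = weighted_euclidean(query, gallery, weights)
top_k = np.argsort(dists)[:5]                    # indices of the 5 most similar past cases
print("Most similar cases:", top_k)
```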

2024

Endpoint Detection in Breast Images for Automatic Classification of Breast Cancer Aesthetic Results

Authors
Freitas, N; Veloso, C; Mavioso, C; Cardoso, MJ; Oliveira, HP; Cardoso, JS;

Publication
Artificial Intelligence and Imaging for Diagnostic and Treatment Challenges in Breast Care - First Deep Breast Workshop, Deep-Breath 2024, Held in Conjunction with MICCAI 2024, Marrakesh, Morocco, October 10, 2024, Proceedings

Abstract
Breast cancer is the most common type of cancer in women worldwide. Because of high survival rates, there has been an increased interest in patient Quality of Life after treatment. Aesthetic results play an important role in this aspect, as these treatments can leave a mark on a patient’s self-image. Despite that, there are no standard ways of assessing aesthetic outcomes. Commonly used software such as BCCT.core or BAT require the manual annotation of keypoints, which makes them time-consuming for clinical use and can lead to result variability depending on the user. Recently, there have been attempts to leverage both traditional and Deep Learning algorithms to detect keypoints automatically. In this paper, we compare several methods for the detection of Breast Endpoints across two datasets. Furthermore, we present an extended evaluation of using these models as input for full contour prediction and aesthetic evaluation using the BCCT.core software. Overall, the YOLOv9 model, fine-tuned for this task, presents the best results considering both accuracy and usability, making this architecture the best choice for this application. The main contribution of this paper is the development of a pipeline for full breast contour prediction, which reduces clinician workload and user variability for automatic aesthetic assessment. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.
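The paper's pipeline fine-tunes YOLOv9 for endpoint detection, which is not reproduced here; as a generic stand-in, the sketch below shows a tiny PyTorch model that regresses normalised (x, y) coordinates for a fixed number of breast endpoints. The architecture, the assumption of four endpoints and the 256x256 input resolution are illustrative only.

```python
# Hedged, generic keypoint-regression sketch (NOT the paper's YOLOv9 pipeline).
# Layer sizes, the four-endpoint assumption and the input size are illustrative.
import torch
import torch.nn as nn

class EndpointRegressor(nn.Module):
    """Tiny CNN mapping an RGB image to normalised (x, y) endpoint coordinates."""
    def __init__(self, num_keypoints: int = 4):
        super().__init__()
        self.num_keypoints = num_keypoints
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_keypoints * 2)

    def forward(self, x):
        feats = self.backbone(x).flatten(1)            # (batch, 32)
        coords = torch.sigmoid(self.head(feats))       # values in [0, 1]
        return coords.view(-1, self.num_keypoints, 2)  # (batch, keypoints, xy)

model = EndpointRegressor()
dummy_batch = torch.rand(1, 3, 256, 256)               # placeholder image batch
keypoints = model(dummy_batch)                          # (1, 4, 2)
print(keypoints.shape)
```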

2024

Parameter-Efficient Generation of Natural Language Explanations for Chest X-ray Classification

Authors
Torto, IR; Cardoso, JS; Teixeira, LF;

Publication
Medical Imaging with Deep Learning, 3-5 July 2024, Paris, France.

Abstract

2024

Explainable Deep Learning Methods in Medical Image Classification: A Survey

Authors
Patrício, C; Neves, JC; Teixeira, LF;

Publication
ACM COMPUTING SURVEYS

Abstract
The remarkable success of deep learning has prompted interest in its application to medical imaging diagnosis. Even though state-of-the-art deep learning models have achieved human-level accuracy on the classification of different types of medical data, these models are hardly adopted in clinical workflows, mainly due to their lack of interpretability. The black-box nature of deep learning models has raised the need for devising strategies to explain the decision process of these models, leading to the creation of the topic of eXplainable Artificial Intelligence (XAI). In this context, we provide a thorough survey of XAI applied to medical imaging diagnosis, including visual, textual, example-based and concept-based explanation methods. Moreover, this work reviews the existing medical imaging datasets and the existing metrics for evaluating the quality of the explanations. In addition, we include a performance comparison among a set of report generation-based methods. Finally, the major challenges in applying XAI to medical imaging and the future research directions on the topic are discussed.
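As a toy example of the "visual explanation" family surveyed above, the sketch below computes a plain input-gradient saliency map for an image classifier; the untrained torchvision ResNet-18 and the random input stand in for a medical model and a medical image, and no specific method from the survey is reproduced.

```python
# Hedged toy example of a visual explanation: input-gradient saliency for a
# classifier. The untrained ResNet-18 and random input are placeholders only.
import torch
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
image = torch.rand(1, 3, 224, 224, requires_grad=True)   # placeholder image

logits = model(image)
logits.max().backward()                                  # gradient of the top logit w.r.t. pixels

saliency = image.grad.abs().max(dim=1).values            # (1, 224, 224) per-pixel importance
print("Saliency map shape:", saliency.shape)
```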

2024

Towards Concept-Based Interpretability of Skin Lesion Diagnosis Using Vision-Language Models

Authors
Patricio, C; Teixeira, LF; Neves, JC;

Publication
IEEE INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING, ISBI 2024

Abstract
Concept-based models naturally lend themselves to the development of inherently interpretable skin lesion diagnosis, as medical experts make decisions based on a set of visual patterns of the lesion. Nevertheless, the development of these models depends on the existence of concept-annotated datasets, whose availability is scarce due to the specialized knowledge and expertise required in the annotation process. In this work, we show that vision-language models can be used to alleviate the dependence on a large number of concept-annotated samples. In particular, we propose an embedding learning strategy to adapt CLIP to the downstream task of skin lesion classification using concept-based descriptions as textual embeddings. Our experiments reveal that vision-language models not only attain better accuracy when using concepts as textual embeddings, but also require a smaller number of concept-annotated samples to attain comparable performance to approaches specifically devised for automatic concept generation.
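The sketch below illustrates the general idea of scoring a lesion image against concept-style textual descriptions with CLIP, via the Hugging Face transformers API; the concept phrases are invented examples rather than the paper's annotated concepts, and the paper's embedding-learning adaptation is not reproduced here.

```python
# Hedged sketch: zero-shot-style scoring of a lesion image against concept-based
# class descriptions with CLIP. The descriptions below are invented examples.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

class_descriptions = [
    "a dermoscopy image of a lesion with asymmetry, irregular borders and multiple colours",
    "a dermoscopy image of a symmetric lesion with regular borders and uniform colour",
]
image = Image.new("RGB", (224, 224))                      # placeholder lesion image

inputs = processor(text=class_descriptions, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

probs = outputs.logits_per_image.softmax(dim=-1)          # image-to-description similarity
print(probs)
```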
