
Publications by CTM

2024

Classification of Keratitis from Eye Corneal Photographs using Deep Learning

Authors
Beirão, MM; Matos, J; Gonçalves, T; Kase, C; Nakayama, LF; de Freitas, D; Cardoso, JS;

Publication
2024 IEEE INTERNATIONAL CONFERENCE ON BIOINFORMATICS AND BIOMEDICINE, BIBM

Abstract
Keratitis is an inflammatory corneal condition responsible for 10% of visual impairment in low- and middle-income countries (LMICs), with bacteria, fungi, or amoeba as the most common infection etiologies. While an accurate and timely diagnosis is crucial for treatment selection and the patients' sight outcomes, the high cost and limited availability of laboratory diagnostics in LMICs mean that diagnosis is often made by clinical observation alone, despite its lower accuracy. In this study, we investigate and compare different deep learning approaches to diagnose the source of infection: 1) three separate binary models for infection type prediction; 2) a multitask model with a shared backbone and three parallel classification layers (Multitask V1); and 3) a multitask model with a shared backbone and a multi-head classification layer (Multitask V2). We used a private Brazilian cornea dataset to conduct the empirical evaluation. We achieved the best results with Multitask V2, with area under the receiver operating characteristic curve (AUROC) confidence intervals of 0.7413-0.7740 (bacteria), 0.8395-0.8725 (fungi), and 0.9448-0.9616 (amoeba). A statistical analysis of the impact of patient features on the models' performance revealed that sex significantly affects amoeba infection prediction, and age seems to affect fungi and bacteria predictions.
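
For illustration only, here is a minimal PyTorch sketch of the Multitask V2 idea described above: a shared backbone feeding a single multi-head classification layer that emits one binary logit per etiology. The ResNet-18 backbone, layer sizes, and loss are assumptions for the sketch, not the authors' configuration.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class MultitaskKeratitisNet(nn.Module):
    """Shared backbone with one multi-head classification layer that
    emits a binary logit per etiology (bacteria, fungi, amoeba)."""
    def __init__(self, num_tasks: int = 3):
        super().__init__()
        backbone = models.resnet18(weights=None)  # assumed backbone choice
        in_features = backbone.fc.in_features
        backbone.fc = nn.Identity()  # strip the ImageNet classifier
        self.backbone = backbone
        # Multitask V1 would use three parallel nn.Linear(in_features, 1)
        # heads here; V2 collapses them into a single layer.
        self.head = nn.Linear(in_features, num_tasks)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(x))  # shape: (batch, num_tasks)

model = MultitaskKeratitisNet()
logits = model(torch.randn(2, 3, 224, 224))
loss = nn.BCEWithLogitsLoss()(logits, torch.randint(0, 2, (2, 3)).float())
```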

2024

Classification of Keratitis from Eye Corneal Photographs using Deep Learning

Authors
Beirão, MM; Matos, J; Gonçalves, T; Kase, C; Nakayama, LF; de Freitas, D; Cardoso, JS;

Publication
CoRR

Abstract

2024

Abstract PO3-19-11: CINDERELLA Clinical Trial (NCT05196269): using artificial intelligence-driven healthcare to enhance breast cancer locoregional treatment decisions

Authors
Eduard-Alexandru Bonci; Orit Kaidar-Person; Marília Antunes; Oriana Ciani; Helena Cruz; Rosa Di Micco; Oreste Davide Gentilini; Nicole Rotmensz; Pedro Gouveia; Jörg Heil; Pawel Kabata; Nuno Freitas; Tiago Gonçalves; Miguel Romariz; Helena Montenegro; Hélder P. Oliveira; Jaime S. Cardoso; Henrique Martins; Daniela Lopes; Marta Martinho; Ludovica Borsoi; Elisabetta Listorti; Carlos Mavioso; Martin Mika; André Pfob; Timo Schinköthe; Giovani Silva; Maria-Joao Cardoso;

Publication
Cancer Research

Abstract
Background. Breast cancer treatment has improved overall survival rates, with different locoregional approaches offering patients similar locoregional control but variable aesthetic outcomes that may lead to disappointment and poor quality of life (QoL). There are no standardized methods for informing patients about the different therapies before intervention, nor validated tools for evaluating aesthetics and patients' expectations. The CINDERELLA Project builds on years of research and development of new healthcare technologies by various partners and aims to provide an artificial intelligence (AI) tool that supports shared decision-making by showing breast cancer patients the predicted aesthetic outcomes of their locoregional treatment. The clinical trial will evaluate the use of this tool within an AI cloud-based platform (CINDERELLA App) versus a standard approach. We anticipate that the CINDERELLA App will improve satisfaction, psychosocial well-being, and health-related QoL while maintaining the quality of care and providing environmental and economic benefits.
Trial design. CINDERELLA is an international, multicentric, interventional, randomized, controlled, open-label clinical trial. Using the CINDERELLA App, the AI and Digital Health arm will provide patients with complete information about the proposed types of locoregional treatment and photographs of similar patients previously treated with the same techniques. The control arm will follow the standard approach of each clinical site. Randomization will be conducted online using the digital health platform CANKADO, ensuring a balanced distribution of participants between the two groups. CANKADO is the underlying platform through which physicians control the patients' app content and conduct all data collection. Privacy, data protection, and ethical principles in AI usage were taken into account.
Eligibility criteria. Patients diagnosed with primary breast cancer without evidence of systemic disease. All patients must sign an informed consent and be able to use a web-based app autonomously or with home-based support.
Specific aims. Primary objective: to assess the level of agreement between patients' expectations regarding the aesthetic outcome before locoregional treatment and the outcome 12 months after. The trial will also evaluate the level of agreement on the aesthetic outcome between the AI evaluation tool and self-evaluation. Secondary objectives: health-related QoL (EQ-5D-5L and BREAST-Q ICHOM questionnaires) and resource consumption (e.g., time spent in the hospital, out-of-pocket expenses). The questionnaires and photographs will be collected prior to any treatment, at wound healing, and at 6 and 12 months after completion of locoregional therapy.
Statistical methods. The Wilcoxon signed-rank test will be used to assess the intervention's impact on the level of agreement between expectations and obtained results. Weighted Cohen's kappa will be calculated to measure the improvement in classifying aesthetic results with the intervention. Statistical tests and/or bootstrap techniques will compare results between arms. A similarity measure will be calculated between each participant's self-evaluation and the outcome obtained with the AI tool, and a beta regression model will be used to analyze the intervention's effect. Secondary objectives will be evaluated by scoring questionnaires according to the provided guidelines.
Target accrual. The clinical trial, led by the Champalimaud Clinical Centre, will enroll a minimum of 515 patients in each arm between July 2023 and January 2025. Recruitment is currently open at five study sites in Germany, Israel, Italy, Poland, and Portugal. The clinical trial is still open to further international study sites.
Funding. European Union grant HORIZON-HLTH-2021-DISEASE-04-04, Agreement No. 101057389.
Citation Format: Eduard-Alexandru Bonci, Orit Kaidar-Person, Marília Antunes, Oriana Ciani, Helena Cruz, Rosa Di Micco, Oreste Davide Gentilini, Nicole Rotmensz, Pedro Gouveia, Jörg Heil, Pawel Kabata, Nuno Freitas, Tiago Gonçalves, Miguel Romariz, Helena Montenegro, Hélder P. Oliveira, Jaime S. Cardoso, Henrique Martins, Daniela Lopes, Marta Martinho, Ludovica Borsoi, Elisabetta Listorti, Carlos Mavioso, Martin Mika, André Pfob, Timo Schinköthe, Giovani Silva, Maria-Joao Cardoso. CINDERELLA Clinical Trial (NCT05196269): using artificial intelligence-driven healthcare to enhance breast cancer locoregional treatment decisions [abstract]. In: Proceedings of the 2023 San Antonio Breast Cancer Symposium; 2023 Dec 5-9; San Antonio, TX. Philadelphia (PA): AACR; Cancer Res 2024;84(9 Suppl):Abstract nr PO3-19-11.
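
To make the analysis plan under "Statistical methods" concrete, here is a hedged Python sketch of the two named statistics on stand-in data; the 1-4 ordinal rating scale, sample size, and quadratic kappa weighting are illustrative assumptions, not the trial's actual scoring rules.

```python
import numpy as np
from scipy.stats import wilcoxon
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

# Paired ordinal ratings (hypothetical 1-4 scale): expectation before
# treatment vs. self-evaluated outcome 12 months after.
expected = rng.integers(1, 5, size=100)
observed = rng.integers(1, 5, size=100)

# Wilcoxon signed-rank test on the paired differences.
stat, p_value = wilcoxon(expected, observed)
print(f"Wilcoxon statistic={stat:.1f}, p={p_value:.3f}")

# Weighted Cohen's kappa between the AI tool's rating and self-evaluation.
ai_rating = rng.integers(1, 5, size=100)
kappa = cohen_kappa_score(observed, ai_rating, weights="quadratic")
print(f"Weighted kappa={kappa:.2f}")
```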

2024

Parameter-Efficient Generation of Natural Language Explanations for Chest X-ray Classification

Authors
Rio-Torto, I; Cardoso, JS; Teixeira, LF;

Publication
MEDICAL IMAGING WITH DEEP LEARNING

Abstract
The increased interest in and importance of explaining neural networks' predictions, especially in the medical community, together with the known unreliability of saliency maps (the most common explainability method), has sparked research into other types of explanations. Natural Language Explanations (NLEs) emerge as an alternative, with the advantage of being inherently understandable by humans and being the standard way radiologists explain their diagnoses. We extend previous work on NLE generation for multi-label chest X-ray diagnosis by replacing the traditional decoder-only NLE generator with an encoder-decoder architecture. This constitutes a first step towards Reinforcement Learning-free adversarial generation of NLEs when no (or few) ground-truth NLEs are available for training, since the generation is done in the continuous encoder latent space instead of in the discrete decoder output space. However, in the current scenario, large amounts of annotated examples are still required, and these are especially costly to obtain in the medical domain, given that they need to be provided by clinicians. Thus, we explore how recent developments in Parameter-Efficient Fine-Tuning (PEFT) can be leveraged for this use case. We compare different PEFT methods and find that integrating the visual information into the NLE generator layers, instead of only at the input, achieves the best results, even outperforming the fully fine-tuned encoder-decoder-based model while training only 12% of the model parameters. Additionally, we empirically demonstrate the viability of supervising the NLE generation process on the encoder latent space, thus laying the foundation for RL-free adversarial training in low ground-truth NLE availability regimes. The code is publicly available at https://github.com/icrto/peft-nles.
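
As a rough sketch of the PEFT idea, and not the paper's specific mechanism of injecting visual features into the generator layers, the snippet below adds LoRA adapters to a generic encoder-decoder with the HuggingFace peft library; T5 stands in for the NLE generator, and the target modules and hyperparameters are assumptions.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer
from peft import LoraConfig, get_peft_model

# T5 is a text-only stand-in; the paper's generator also conditions on
# chest X-ray features inside the model, which is omitted here.
tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Inject low-rank adapters into the attention projections; only the
# adapters are trained while the base weights stay frozen.
config = LoraConfig(r=8, lora_alpha=16, target_modules=["q", "v"],
                    lora_dropout=0.1, task_type="SEQ_2_SEQ_LM")
model = get_peft_model(model, config)
model.print_trainable_parameters()  # reports the small trainable fraction

inputs = tokenizer("explain: cardiomegaly", return_tensors="pt")
labels = tokenizer("the cardiac silhouette is enlarged",
                   return_tensors="pt").input_ids
loss = model(**inputs, labels=labels).loss  # standard seq2seq fine-tuning
```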

2024

Explainable Deep Learning Methods in Medical Image Classification: A Survey

Authors
Patrício, C; Neves, JC; Teixeira, LF;

Publication
ACM COMPUTING SURVEYS

Abstract
The remarkable success of deep learning has prompted interest in its application to medical imaging diagnosis. Even though state-of-the-art deep learning models have achieved human-level accuracy on the classification of different types of medical data, these models are hardly adopted in clinical workflows, mainly due to their lack of interpretability. The black-box nature of deep learning models has raised the need for devising strategies to explain the decision process of these models, leading to the creation of the topic of eXplainable Artificial Intelligence (XAI). In this context, we provide a thorough survey of XAI applied to medical imaging diagnosis, including visual, textual, example-based and concept-based explanation methods. Moreover, this work reviews the existing medical imaging datasets and the existing metrics for evaluating the quality of the explanations. In addition, we include a performance comparison among a set of report generation-based methods. Finally, the major challenges in applying XAI to medical imaging and the future research directions on the topic are discussed.

2024

Towards Concept-based Interpretability of Skin Lesion Diagnosis using Vision-Language Models

Authors
Patrício, C; Teixeira, LF; Neves, JC;

Publication
IEEE INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING, ISBI 2024

Abstract
Concept-based models naturally lend themselves to the development of inherently interpretable skin lesion diagnosis, as medical experts make decisions based on a set of visual patterns of the lesion. Nevertheless, the development of these models depends on the existence of concept-annotated datasets, whose availability is scarce due to the specialized knowledge and expertise required in the annotation process. In this work, we show that vision-language models can be used to alleviate the dependence on a large number of concept-annotated samples. In particular, we propose an embedding learning strategy to adapt CLIP to the downstream task of skin lesion classification using concept-based descriptions as textual embeddings. Our experiments reveal that vision-language models not only attain better accuracy when using concepts as textual embeddings, but also require a smaller number of concept-annotated samples to attain comparable performance to approaches specifically devised for automatic concept generation.
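
For intuition, here is a hedged sketch of scoring a skin-lesion image against concept-based textual descriptions with off-the-shelf CLIP; the concept prompts, checkpoint, and file name are illustrative, and the paper adapts CLIP's embeddings rather than using raw prompts as done here.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical concept-based descriptions of dermoscopic patterns.
concepts = [
    "a skin lesion with irregular borders and asymmetry",
    "a skin lesion with regular borders and uniform colour",
]
image = Image.open("lesion.jpg")  # hypothetical input photograph

inputs = processor(text=concepts, images=image,
                   return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # image-concept similarity
print(dict(zip(concepts, logits.softmax(dim=-1)[0].tolist())))
```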
