2024
Authors
Beirão, MM; Matos, J; Gonçalves, T; Kase, C; Nakayama, LF; Freitas, Dd; Cardoso, JS;
Publication
CoRR
Abstract
2024
Authors
Rio-Torto, I; Cardoso, JS; Teixeira, LF;
Publication
MEDICAL IMAGING WITH DEEP LEARNING
Abstract
The increased interest and importance of explaining neural networks' predictions, especially in the medical community, associated with the known unreliability of saliency maps, the most common explainability method, has sparked research into other types of explanations. Natural Language Explanations (NLEs) emerge as an alternative, with the advantage of being inherently understandable by humans and the standard way that radiologists explain their diagnoses. We extend upon previous work on NLE generation for multi-label chest X-ray diagnosis by replacing the traditional decoder-only NLE generator with an encoder-decoder architecture. This constitutes a first step towards Reinforcement Learning-free adversarial generation of NLEs when no (or few) ground-truth NLEs are available for training, since the generation is done in the continuous encoder latent space, instead of in the discrete decoder output space. However, in the current scenario, large amounts of annotated examples are still required, which are especially costly to obtain in the medical domain, given that they need to be provided by clinicians. Thus, we explore how the recent developments in Parameter-Efficient Fine-Tuning (PEFT) can be leveraged for this use case. We compare different PEFT methods and find that integrating the visual information into the NLE generator layers instead of only at the input achieves the best results, even outperforming the fully fine-tuned encoder-decoder-based model, while only training 12% of the model parameters. Additionally, we empirically demonstrate the viability of supervising the NLE generation process on the encoder latent space, thus laying the foundation for RL-free adversarial training in low ground-truth NLE availability regimes. The code is publicly available at https://github.com/icrto/peft-nles.
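As a rough illustration of the idea described in the abstract, the sketch below shows parameter-efficient conditioning of a frozen text generator on visual features injected inside every layer, rather than only at the input. This is not the authors' implementation (available at https://github.com/icrto/peft-nles); the module names, gated-residual injection mechanism, and layer sizes are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): a frozen language backbone whose layers
# are conditioned on image features through small trainable adapter blocks.
import torch
import torch.nn as nn

class VisualAdapter(nn.Module):
    """Small trainable block that injects image features into one decoder layer."""
    def __init__(self, d_model: int, d_visual: int):
        super().__init__()
        self.proj = nn.Linear(d_visual, d_model)
        self.gate = nn.Parameter(torch.zeros(1))  # starts as identity mapping

    def forward(self, hidden, visual):
        # hidden: (batch, seq, d_model); visual: (batch, d_visual)
        injected = self.proj(visual).unsqueeze(1)   # (batch, 1, d_model)
        return hidden + self.gate * injected        # gated residual injection

class PEFTNLEGenerator(nn.Module):
    def __init__(self, vocab=1000, d_model=256, d_visual=512, n_layers=6):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            for _ in range(n_layers)
        )
        self.adapters = nn.ModuleList(
            VisualAdapter(d_model, d_visual) for _ in range(n_layers)
        )
        self.lm_head = nn.Linear(d_model, vocab)
        # Freeze the language backbone; only adapters and the head are trained.
        for p in self.embed.parameters():
            p.requires_grad = False
        for p in self.layers.parameters():
            p.requires_grad = False

    def forward(self, tokens, visual):
        h = self.embed(tokens)
        for layer, adapter in zip(self.layers, self.adapters):
            h = adapter(layer(h), visual)  # visual information inside every layer
        return self.lm_head(h)

model = PEFTNLEGenerator()
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable fraction: {trainable / total:.1%}")
```

With these toy sizes only a small fraction of the parameters receives gradients, which is the general effect the PEFT setup in the paper exploits; the exact methods compared there (and the reported 12% figure) come from the paper itself.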
2024
Authors
Gonçalves, T; Hedström, A; Pahud de Mortanges, A; Li, X; Müller, H; Cardoso, S; Reyes, M;
Publication
Trustworthy Ai in Medical Imaging
Abstract
In the healthcare context, artificial intelligence (AI) has the potential to power decision support systems and help health professionals in their clinical decisions. However, given its complexity, AI is usually seen as a black box that receives data and outputs a prediction. This behavior may jeopardize the adoption of this technology by the healthcare community, which values the existence of explanations to justify a clinical decision. Moreover, developers must have a strategy to assess and audit these systems to ensure their reproducibility and quality in production. The field of interpretable artificial intelligence emerged to study how these algorithms work and clarify their behavior. This chapter reviews several interpretability methods for AI algorithms in medical imaging, discussing their functioning, limitations, benefits, applications, and evaluation strategies. The chapter concludes with considerations that might contribute to bringing these methods closer to the daily routine of healthcare professionals.
2024
Authors
Caldeira, E; Neto, PC; Gonçalves, T; Damer, N; Sequeira, AF; Cardoso, JS;
Publication
Science Talks
Abstract
2024
Authors
Beirão, MM; Matos, J; Gonçalves, T; Kase, C; Nakayama, LF; de Freitas, D; Cardoso, JS;
Publication
2024 IEEE INTERNATIONAL CONFERENCE ON BIOINFORMATICS AND BIOMEDICINE, BIBM
Abstract
Keratitis is an inflammatory corneal condition responsible for 10% of visual impairment in low- and middle-income countries (LMICs), with bacteria, fungi, or amoeba as the most common infection etiologies. While an accurate and timely diagnosis is crucial for the selected treatment and the patients' sight outcomes, due to the high cost and limited availability of laboratory diagnostics in LMICs, diagnosis is often made by clinical observation alone, despite its lower accuracy. In this study, we investigate and compare different deep learning approaches to diagnose the source of infection: 1) three separate binary models for infection type predictions; 2) a multitask model with a shared backbone and three parallel classification layers (Multitask V1); and 3) a multitask model with a shared backbone and a multi-head classification layer (Multitask V2). We used a private Brazilian cornea dataset to conduct the empirical evaluation. We achieved the best results with Multitask V2, with area under the receiver operating characteristic curve (AUROC) confidence intervals of 0.7413-0.7740 (bacteria), 0.8395-0.8725 (fungi), and 0.9448-0.9616 (amoeba). A statistical analysis of the impact of patient features on models' performance revealed that sex significantly affects amoeba infection prediction, and age seems to affect fungi and bacteria predictions.
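A minimal sketch of the shared-backbone, parallel-heads idea described in the abstract, assuming PyTorch and a ResNet-18 feature extractor; the backbone choice, head sizes, and etiology names are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch only: one shared CNN backbone with three parallel binary
# heads, in the spirit of the "Multitask V1" setup described above.
import torch
import torch.nn as nn
from torchvision import models

class MultitaskKeratitis(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)   # shared feature extractor
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()                # expose pooled features
        self.backbone = backbone
        # One binary classification head per infection etiology.
        self.heads = nn.ModuleDict({
            name: nn.Linear(feat_dim, 1)
            for name in ("bacteria", "fungi", "amoeba")
        })

    def forward(self, x):
        feats = self.backbone(x)
        # Each head predicts the logit of one infection type from shared features.
        return {name: head(feats).squeeze(-1) for name, head in self.heads.items()}

model = MultitaskKeratitis()
logits = model(torch.randn(2, 3, 224, 224))
print({name: tuple(v.shape) for name, v in logits.items()})
```

In training, each head would typically be optimized with its own binary cross-entropy loss and the per-head losses summed; the paper's Multitask V2 variant instead uses a single multi-head classification layer on top of the shared backbone.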
2024
Authors
Eduard-Alexandru Bonci; Orit Kaidar-Person; Marília Antunes; Oriana Ciani; Helena Cruz; Rosa Di Micco; Oreste Davide Gentilini; Nicole Rotmensz; Pedro Gouveia; Jörg Heil; Pawel Kabata; Nuno Freitas; Tiago Gonçalves; Miguel Romariz; Helena Montenegro; Hélder P. Oliveira; Jaime S. Cardoso; Henrique Martins; Daniela Lopes; Marta Martinho; Ludovica Borsoi; Elisabetta Listorti; Carlos Mavioso; Martin Mika; André Pfob; Timo Schinköthe; Giovani Silva; Maria-Joao Cardoso;
Publication
Cancer Research
Abstract