2024
Authors
Brás, C; Montenegro, H; Cai, Y; Corbetta, V; Huo, Y; Silva, W; Cardoso, S; Landman, A; Išgum, I;
Publication
Trustworthy AI in Medical Imaging
Abstract
The rising adoption of AI-driven solutions in medical imaging brings an emerging need for strategies that introduce explainability as an important aspect of the trustworthiness of AI models. This chapter addresses the most commonly used explainability techniques in medical image analysis, namely methods generating visual, example-based, textual, and concept-based explanations. To obtain visual explanations, we explore backpropagation- and perturbation-based methods. To yield example-based explanations, we focus on prototype-, distance-, and retrieval-based techniques, as well as counterfactual explanations. Finally, to produce textual and concept-based explanations, we delve into image captioning and testing with concept activation vectors, respectively. This chapter aims to provide an understanding of the conceptual underpinnings, advantages, and limitations of each method, and to interpret the generated explanations in the context of medical image analysis. © 2025 Elsevier Inc. All rights reserved.