Publications

Publications by CTM

2024

CINDERELLA Trial: validation of an artificial-intelligence cloud-based platform to improve the shared decision-making process and outcomes in breast cancer patients proposed for locoregional treatment

Authors
Eduard-Alexandru Bonci; Orit Kaidar-Person; Marilia Antunes; Oriana Ciani; Helena Cruz; Rosa Di Micco; Oreste Gentilini; Pedro Gouveia; Jörg Heil; Pawel Kabata; Nuno Freitas; Tiago Gonçalves; Miguel Romariz; Henrique Martins; Carlos Mavioso; Martin Mika; André Pfob; Timo Schinköthe; Giovani Silva; Maria-João Cardoso;

Publication
European Journal of Surgical Oncology

Abstract

2024

CINDERELLA Clinical trial (NCT05196269): using artificial intelligence-driven healthcare to enhance breast cancer locoregional treatment decisions

Authors
Bonci, EA; Kaidar-Person, O; Antunes, M; Ciani, O; Cruz, H; Di Micco, R; Gentilini, O; Heil, J; Kabata, P; Romariz, M; Gonçalves, T; Martins, H; Borsoi, L; Mika, M; Pfob, A; Romem, N; Schinköthe, T; Silva, G; Senkus, E; Cardoso, MJ;

Publication
ANNALS OF SURGICAL ONCOLOGY

Abstract

2024

Realistic Model Parameter Optimization: Shadow Robot Dexterous Hand Use-Case

Authors
Correia, T; Ribeiro, FM; Pinto, VH;

Publication
OPTIMIZATION, LEARNING ALGORITHMS AND APPLICATIONS, PT II, OL2A 2023

Abstract
Technologies related to automated processes have expanded notably in recent years, largely driven by the significant advantages they provide across diverse industries. Concurrently, there has been a rise in simulation technologies aimed at replicating these complex systems. Nevertheless, to fully leverage their potential, it is crucial to ensure that simulations resemble real-world scenarios as closely as possible. In brief, this work develops a data acquisition and processing pipeline that enables a subsequent search for the optimal physical parameters in the MuJoCo simulator, yielding a more accurate simulation of a dexterous robotic hand. Finally, a Random Search optimization algorithm was used to validate this pipeline.
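
A minimal, self-contained sketch of the random-search step described in the abstract, assuming a user-supplied simulate(params) function that runs the MuJoCo model and a recorded real trajectory; the parameter names, ranges, and error metric below are illustrative placeholders, not the ones used in the paper.

import numpy as np

# Hypothetical search space for physical parameters (illustrative names only).
SEARCH_SPACE = {
    "joint_damping": (0.01, 1.0),
    "joint_friction": (0.0, 0.5),
    "actuator_gain": (0.5, 5.0),
}

def sample_params(rng):
    """Draw one random candidate uniformly from the search space."""
    return {k: rng.uniform(lo, hi) for k, (lo, hi) in SEARCH_SPACE.items()}

def simulation_error(params, real_trajectory, simulate):
    """Mean squared error between the simulated and the recorded trajectories.

    `simulate` is assumed to run the MuJoCo model with the given parameters
    and return an array with the same shape as `real_trajectory`.
    """
    sim_trajectory = simulate(params)
    return float(np.mean((sim_trajectory - real_trajectory) ** 2))

def random_search(real_trajectory, simulate, n_iterations=1000, seed=0):
    """Keep the parameter set that best reproduces the real-world data."""
    rng = np.random.default_rng(seed)
    best_params, best_error = None, np.inf
    for _ in range(n_iterations):
        params = sample_params(rng)
        error = simulation_error(params, real_trajectory, simulate)
        if error < best_error:
            best_params, best_error = params, error
    return best_params, best_error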

2024

DeViL: Decoding Vision features into Language

Authors
Dani, M; Rio Torto, I; Alaniz, S; Akata, Z;

Publication
PATTERN RECOGNITION, DAGM GCPR 2023

Abstract
Post-hoc explanation methods have often been criticised for abstracting away the decision-making process of deep neural networks. In this work, we would like to provide natural language descriptions for what different layers of a vision backbone have learned. Our DeViL method generates textual descriptions of visual features at different layers of the network as well as highlights the attribution locations of learned concepts. We train a transformer network to translate individual image features of any vision layer into a prompt that a separate off-the-shelf language model decodes into natural language. By employing dropout both per-layer and per-spatial-location, our model can generalize training on image-text pairs to generate localized explanations. As it uses a pre-trained language model, our approach is fast to train and can be applied to any vision backbone. Moreover, DeViL can create open-vocabulary attribution maps corresponding to words or phrases even outside the training scope of the vision model. We demonstrate that DeViL generates textual descriptions relevant to the image content on CC3M, surpassing previous lightweight captioning models and attribution maps, uncovering the learned concepts of the vision backbone. Further, we analyze fine-grained descriptions of layers as well as specific spatial locations and show that DeViL outperforms the current state-of-the-art on the neuron-wise descriptions of the MILANNOTATIONS dataset.
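
A rough sketch of the core idea described above, translating a vision feature vector into soft prompt embeddings for a frozen language model, written in PyTorch. The module name, dimensions, prompt length, and dropout rate are assumptions for illustration; this is not the published DeViL implementation.

import torch
import torch.nn as nn

class FeatureTranslator(nn.Module):
    """Maps a single vision feature vector to a short sequence of soft prompt
    embeddings for a frozen language model (illustrative dimensions)."""

    def __init__(self, feature_dim=768, lm_dim=768, prompt_len=10, dropout=0.3):
        super().__init__()
        self.prompt_len = prompt_len
        self.lm_dim = lm_dim
        # Plain dropout stands in for the per-layer / per-location dropout
        # described in the abstract.
        self.dropout = nn.Dropout(dropout)
        self.proj = nn.Linear(feature_dim, prompt_len * lm_dim)

    def forward(self, features):                    # features: (batch, feature_dim)
        prompts = self.proj(self.dropout(features))
        return prompts.view(-1, self.prompt_len, self.lm_dim)

# Usage sketch: prepend the soft prompts to the language model's input
# embeddings and train only the translator with a captioning loss.
# lm = AutoModelForCausalLM.from_pretrained("gpt2")           # frozen decoder
# prompt_embeds = translator(vision_features)                 # (B, P, lm_dim)
# token_embeds = lm.get_input_embeddings()(caption_ids)       # (B, T, lm_dim)
# inputs = torch.cat([prompt_embeds, token_embeds], dim=1)
# out = lm(inputs_embeds=inputs)   # next-token loss over caption tokens
#                                  # would then be computed from out.logits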

2024

An interpretable machine learning system for colorectal cancer diagnosis from pathology slides

Authors
Neto, PC; Montezuma, D; Oliveira, SP; Oliveira, D; Fraga, J; Monteiro, A; Monteiro, J; Ribeiro, L; Gonçalves, S; Reinhard, S; Zlobec, I; Pinto, IM; Cardoso, JS;

Publication
NPJ PRECISION ONCOLOGY

Abstract
Considering the profound transformation affecting pathology practice, we aimed to develop a scalable artificial intelligence (AI) system to diagnose colorectal cancer from whole-slide images (WSI). For this, we propose a deep learning (DL) system that learns from weak labels, a sampling strategy that reduces the number of training samples by a factor of six without compromising performance, an approach to leverage a small subset of fully annotated samples, and a prototype with explainable predictions, active learning features and parallelisation. Addressing some shortcomings in the literature, this study is conducted on one of the largest colorectal WSI datasets, with approximately 10,500 WSIs, of which 900 are testing samples. Furthermore, the robustness of the proposed method is assessed with two additional external datasets (TCGA and PAIP) and a dataset of samples collected directly from the proposed prototype. Our proposed method predicts, for the patch-based tiles, a class based on the severity of the dysplasia and uses that information to classify the whole slide. It is trained with an interpretable mixed-supervision scheme to leverage the domain knowledge introduced by pathologists through spatial annotations. The mixed-supervision scheme allowed for an intelligent sampling strategy, effectively evaluated in several different scenarios without compromising performance. On the internal dataset, the method shows an accuracy of 93.44% and a sensitivity between positive (low-grade and high-grade dysplasia) and non-neoplastic samples of 0.996. Performance on the external test samples varied, with TCGA being the most challenging dataset, showing an overall accuracy of 84.91% and a sensitivity of 0.996.
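
A simplified sketch of the patch-to-slide aggregation described in the abstract: each tile receives a dysplasia-severity class and the slide takes the most severe tile prediction. The class names and the three-tile example are illustrative assumptions, not the paper's actual configuration.

import numpy as np

# Illustrative ordinal classes from least to most severe; the real system's
# class definitions may differ.
CLASSES = ["non_neoplastic", "low_grade_dysplasia", "high_grade_dysplasia"]

def classify_slide(tile_probabilities):
    """Aggregate per-tile predictions into a slide-level diagnosis.

    tile_probabilities: array of shape (n_tiles, n_classes) with one softmax
    distribution per tile. The slide is assigned the most severe class that
    any tile predicts, mirroring how the worst region drives the grade.
    """
    tile_labels = np.argmax(tile_probabilities, axis=1)
    slide_label = int(tile_labels.max())          # most severe tile wins
    return CLASSES[slide_label]

# Example with three tiles: two benign, one high-grade.
probs = np.array([[0.9, 0.08, 0.02],
                  [0.8, 0.15, 0.05],
                  [0.1, 0.20, 0.70]])
print(classify_slide(probs))   # -> "high_grade_dysplasia"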

2024

Explainable AI for medical image analysis

Authors
Brás, C; Montenegro, H; Cai, Y; Corbetta, V; Huo, Y; Silva, W; Cardoso, S; Landman, A; Išgum, I;

Publication
Trustworthy Ai in Medical Imaging

Abstract
The rising adoption of AI-driven solutions in medical imaging comes with an emerging need to develop strategies that introduce explainability as an important aspect of the trustworthiness of AI models. This chapter addresses the most commonly used explainability techniques in medical image analysis, namely methods generating visual, example-based, textual, and concept-based explanations. To obtain visual explanations, we explore backpropagation- and perturbation-based methods. To yield example-based explanations, we focus on prototype-, distance-, and retrieval-based techniques, as well as counterfactual explanations. Finally, to produce textual and concept-based explanations, we delve into image captioning and testing with concept activation vectors, respectively. The chapter aims to provide an understanding of the conceptual underpinnings, advantages, and limitations of each method, and to interpret the explanations they generate in the context of medical image analysis. © 2025 Elsevier Inc. All rights reserved.
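
As a hedged illustration of the backpropagation-based visual explanations mentioned in the abstract, the sketch below computes a plain gradient saliency map for a torchvision classifier; the choice of model and the channel-wise reduction are assumptions for demonstration, not the chapter's reference implementation.

import torch
from torchvision import models

def gradient_saliency(model, image, target_class):
    """Plain gradient saliency: |d score(target_class) / d input|, reduced
    over the channel dimension to one heat-map per image."""
    model.eval()
    image = image.clone().requires_grad_(True)        # (1, 3, H, W)
    score = model(image)[0, target_class]
    score.backward()
    return image.grad.abs().max(dim=1).values         # (1, H, W)

# Usage with a pretrained classifier and a random stand-in image.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
image = torch.rand(1, 3, 224, 224)
saliency = gradient_saliency(model, image, target_class=0)
print(saliency.shape)   # torch.Size([1, 224, 224])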
