About

Miguel Coimbra holds a degree in Electrical and Computer Engineering (Faculty of Engineering, University of Porto) and a PhD in Electronic Engineering (Queen Mary, University of London), and is a Full Professor at the Department of Computer Science of the Faculty of Sciences of the University of Porto. He has been a member of the Executive Board of the Faculty of Sciences of the University of Porto since April 2019, coordinator of the TEC4Health line of INESC TEC since January 2019, and coordinator of the BioImaging Lab of INESC TEC since January 2022. He was president of the Portugal Chapter of the IEEE Engineering in Medicine and Biology Society between 2018 and 2022. In 2007 he was one of the founders of the Porto Delegation of the Instituto de Telecomunicações, which he coordinated between 2015 and 2019; there he created and coordinated the Interactive Multimedia group between 2008 and 2014. He was director of the Master in Medical Informatics of the University of Porto between 2014 and 2016, and co-founder in 2013 of IS4H – Interactive Systems for Healthcare, a spin-off company of the University of Porto that licenses and sells products based on the interactive auscultation technologies developed by his team.

In terms of scientific activity, he has led or participated in multiple projects at the interface between computer science and healthcare, namely in cardiology, gastroenterology and rheumatology, with current and past collaborations with healthcare institutions in Portugal, Brazil (Pernambuco, Paraíba, Minas Gerais, São Paulo), Germany and Sweden. Nearly 15 years of experience in computer science, more specifically in health informatics (computer vision, biomedical signal processing, human-computer interaction), have led to the development and deployment of systems for the collection and analysis of auscultation signals, echocardiography image processing for rheumatic fever screening, stress and fatigue monitoring of firefighters in action, endoscopic image analysis for cancer detection, decision support systems for capsule endoscopy, and quantification of 3D movement patterns for epilepsy, among others. He is (co-)author of 133 scientific publications, including 3 book chapters and 29 journal articles, 25 of which in first-quartile journals and 17 of those in the prestigious IEEE Transactions. In terms of advanced training, he has successfully supervised 4 post-doctoral researchers, 6 PhD students and 47 MSc students. Over the last 13 years he has attracted and managed more than €2M in research funding, distributed across a total of 16 national and international projects in which he acted as principal investigator or as leader of his institution's research team.


Topics of interest

Details

  • Name

    Miguel Coimbra
  • Position

    TEC4 Coordinator
  • Since

    15 September 1998
  • Nationality

    Portugal
  • Contacts

    +351222094106
    miguel.coimbra@inesctec.pt
Publications

2025

QUAIDE - Quality assessment of AI preclinical studies in diagnostic endoscopy

Authors
Antonelli, G; Libanio, D; De Groof, AJ; van der Sommen, F; Mascagni, P; Sinonquel, P; Abdelrahim, M; Ahmad, O; Berzin, T; Bhandari, P; Bretthauer, M; Coimbra, M; Dekker, E; Ebigbo, A; Eelbode, T; Frazzoni, L; Gross, SA; Ishihara, R; Kaminski, MF; Messmann, H; Mori, Y; Padoy, N; Parasa, S; Pilonis, ND; Renna, F; Repici, A; Simsek, C; Spadaccini, M; Bisschops, R; Bergman, JJGHM; Hassan, C; Ribeiro, MD;

Publication
GUT

Abstract
Artificial intelligence (AI) holds significant potential for enhancing quality of gastrointestinal (GI) endoscopy, but the adoption of AI in clinical practice is hampered by the lack of rigorous standardisation and development methodology ensuring generalisability. The aim of the Quality Assessment of pre-clinical AI studies in Diagnostic Endoscopy (QUAIDE) Explanation and Checklist was to develop recommendations for standardised design and reporting of preclinical AI studies in GI endoscopy. The recommendations were developed based on a formal consensus approach with an international multidisciplinary panel of 32 experts among endoscopists and computer scientists. The Delphi methodology was employed to achieve consensus on statements, with a predetermined threshold of 80% agreement. A maximum three rounds of voting were permitted. Consensus was reached on 18 key recommendations, covering 6 key domains: data acquisition and annotation (6 statements), outcome reporting (3 statements), experimental setup and algorithm architecture (4 statements) and result presentation and interpretation (5 statements). QUAIDE provides recommendations on how to properly design (1. Methods, statements 1-14), present results (2. Results, statements 15-16) and integrate and interpret the obtained results (3. Discussion, statements 17-18). The QUAIDE framework offers practical guidance for authors, readers, editors and reviewers involved in AI preclinical studies in GI endoscopy, aiming at improving design and reporting, thereby promoting research standardisation and accelerating the translation of AI innovations into clinical practice.

2024

Using generative adversarial networks for endoscopic image augmentation of stomach precancerous lesions

Authors
Magalhães, B; Neto, A; Almeida, E; Libânio, D; Chaves, J; Ribeiro, MD; Coimbra, MT; Cunha, A;

Publication
CENTERIS 2024 - International Conference on ENTERprise Information Systems / ProjMAN - International Conference on Project MANagement / HCist - International Conference on Health and Social Care Information Systems and Technologies 2024, Funchal, Madeira Island, Portugal, November 13-15, 2024.

Abstract
The medical imaging field contends with limited data for training deep learning (DL) models. Our study evaluated traditional data augmentation (DA) and Generative Adversarial Networks (GANs) in enhancing DL models for identifying stomach precancerous lesions. Classic DA consistently outperformed GAN-based methods, with ResNet50 (0.94 vs 0.93 accuracy) and ViT (0.85 vs 0.84 accuracy) models achieving higher accuracy and other performance metrics with DA compared to GANs. Despite this, GAN augmentation showed significant improvements when compared to training with the original dataset, highlighting its role in diversifying datasets and aiding generalisation across different medical imaging datasets. Combining both augmentation techniques can enhance model robustness and generalisation capabilities in DL applications for medical diagnostics, leveraging DA's consistency and GANs' diversity.
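As an illustration of the "classic DA" baseline the abstract compares against, the sketch below generates geometric and photometric variants of a single image frame with NumPy. It is a minimal illustration only, not the paper's pipeline; the function name and the choice of transforms are assumptions.

```python
import numpy as np

def augment_classic(image: np.ndarray) -> list[np.ndarray]:
    """Classic data augmentation: flipped, rotated and
    brightness-jittered variants of one image frame
    (values assumed to be floats in [0, 1])."""
    return [
        np.fliplr(image),                # horizontal flip
        np.flipud(image),                # vertical flip
        np.rot90(image, k=1),            # 90-degree rotation
        np.clip(image * 1.2, 0.0, 1.0),  # brightness increase
        np.clip(image * 0.8, 0.0, 1.0),  # brightness decrease
    ]

# A dummy 64x64 RGB frame standing in for an endoscopic image.
frame = np.random.default_rng(0).random((64, 64, 3))
augmented = augment_classic(frame)
print(len(augmented))  # five extra samples per original frame
```

A GAN-based alternative would instead sample new synthetic frames from a generator trained on the lesion dataset; combining both, as the abstract suggests, simply concatenates the two augmented pools before training.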

2024

Foundational Models for Pathology and Endoscopy Images: Application for Gastric Inflammation

Authors
Kerdegari, H; Higgins, K; Veselkov, D; Laponogov, I; Polaka, I; Coimbra, M; Pescino, JA; Leja, M; Dinis-Ribeiro, M; Kanonnikoff, TF; Veselkov, K;

Publication
DIAGNOSTICS

Abstract
The integration of artificial intelligence (AI) in medical diagnostics represents a significant advancement in managing upper gastrointestinal (GI) cancer, which is a major cause of global cancer mortality. Specifically for gastric cancer (GC), chronic inflammation causes changes in the mucosa such as atrophy, intestinal metaplasia (IM), dysplasia, and ultimately cancer. Early detection through endoscopic regular surveillance is essential for better outcomes. Foundation models (FMs), which are machine or deep learning models trained on diverse data and applicable to broad use cases, offer a promising solution to enhance the accuracy of endoscopy and its subsequent pathology image analysis. This review explores the recent advancements, applications, and challenges associated with FMs in endoscopy and pathology imaging. We started by elucidating the core principles and architectures underlying these models, including their training methodologies and the pivotal role of large-scale data in developing their predictive capabilities. Moreover, this work discusses emerging trends and future research directions, emphasizing the integration of multimodal data, the development of more robust and equitable models, and the potential for real-time diagnostic support. This review aims to provide a roadmap for researchers and practitioners in navigating the complexities of incorporating FMs into clinical practice for the prevention/management of GC cases, thereby improving patient outcomes.

2024

Explainable Multimodal Deep Learning for Heart Sounds and Electrocardiogram Classification

Authors
Oliveira, B; Lobo, A; Botelho Costa, CIA; Carvalho, RF; Coimbra, MT; Renna, F;

Publication
46th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2024, Orlando, FL, USA, July 15-19, 2024

Abstract
We introduce a Gradient-weighted Class Activation Mapping (Grad-CAM) methodology to assess the performance of five distinct models for binary classification (normal/abnormal) of synchronized heart sounds and electrocardiograms. The applied models comprise a one-dimensional convolutional neural network (1D-CNN) using solely ECG signals, a two-dimensional convolutional neural network (2D-CNN) applied separately to PCG and ECG signals, and two multimodal models that employ both signals. In the multimodal models, we implement two fusion approaches: an early fusion and a late fusion. The results indicate a performance improvement in using an early fusion model for the joint classification of both signals, as opposed to using a PCG 2D-CNN or ECG 1D-CNN alone (e.g., ROC-AUC score of 0.81 vs. 0.79 and 0.79, respectively). Although the ECG 2D-CNN demonstrates a higher ROC-AUC score (0.82) compared to the early fusion model, it exhibits a lower F1-score (0.85 vs. 0.86). Grad-CAM unveils that the models tend to yield higher gradients in the QRS complex and T/P-wave of the ECG signal, as well as between the two PCG fundamental sounds (S1 and S2), for discerning normalcy or abnormality, thus showcasing that the models focus on clinically relevant features of the recorded data.
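The early-versus-late fusion distinction in this abstract can be sketched in a few lines: early fusion concatenates the per-modality feature vectors before a single classifier, while late fusion runs one classifier per modality and combines the scores. This is a toy NumPy illustration with random weights, not the paper's trained networks; all names and dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def linear_head(features, w, b=0.0):
    """Stand-in for a trained binary classification head."""
    return sigmoid(features @ w + b)

# Toy feature vectors extracted from synchronized PCG and ECG windows.
pcg_feat = rng.random(16)
ecg_feat = rng.random(16)

# Early fusion: concatenate modality features, classify once.
w_early = rng.normal(size=32)
p_early = linear_head(np.concatenate([pcg_feat, ecg_feat]), w_early)

# Late fusion: classify each modality separately, then average scores.
w_pcg = rng.normal(size=16)
w_ecg = rng.normal(size=16)
p_late = 0.5 * (linear_head(pcg_feat, w_pcg) + linear_head(ecg_feat, w_ecg))

print(p_early, p_late)  # both are abnormality probabilities in (0, 1)
```

Early fusion lets the classifier learn cross-modal interactions (e.g., between QRS timing and S1/S2 positions), which is one plausible reason the abstract reports it outperforming the unimodal baselines.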

2024

Improving Endoscopy Lesion Classification Using Self-Supervised Deep Learning

Authors
Lopes, I; Vakalopoulou, M; Ferrante, E; Libânio, D; Ribeiro, MD; Coimbra, MT; Renna, F;

Publication
46th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2024, Orlando, FL, USA, July 15-19, 2024

Abstract
In this work, we assess the impact of self-supervised learning (SSL) approaches on the detection of gastritis atrophy (GA) and intestinal metaplasia (IM) conditions. GA and IM are precancerous gastric lesions. Detecting these lesions is crucial to intervene early and prevent their progression to cancer. A set of experiments is conducted over the Chengdu dataset, by considering different amounts of annotated data in the training phase. Our results reveal that, when all available data is used for training, SSL approaches achieve a classification accuracy on par with a supervised learning baseline (81.52% vs 81.76%). Interestingly, we observe that in low-data regimes (here represented as retaining only 12.5% of annotated data for training), the SSL model guarantees an accuracy gain with respect to the supervised learning baseline of approximately 1.5% (73.00% vs 71.52%). This observation hints at the potential of SSL models in leveraging unlabeled data, thus showcasing more robust performance improvements and generalization. Experimental results also show that SSL performance is significantly dependent on the specific data augmentation techniques and parameters adopted for contrastive learning, thus advocating for further investigations into the definition of optimal data augmentation frameworks specifically tailored for gastric lesion detection applications.
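The contrastive-learning core that the abstract says is sensitive to augmentation choices can be sketched as an NT-Xent loss (the SimCLR-style objective): two augmented views of each sample form a positive pair, all other samples in the batch act as negatives. This NumPy sketch assumes random embeddings and Gaussian-noise "augmentation"; it is illustrative only and not the paper's actual setup.

```python
import numpy as np

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss for a batch of embedding pairs
    produced from two augmented views of the same samples."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # cosine space
    sim = z @ z.T / temperature
    n = z1.shape[0]
    np.fill_diagonal(sim, -np.inf)  # exclude self-similarity
    # The positive for sample i is its other augmented view.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
    log_prob = sim[np.arange(2 * n), pos] - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()

rng = np.random.default_rng(0)
batch = rng.normal(size=(8, 32))                      # toy embeddings
view1 = batch + 0.05 * rng.normal(size=batch.shape)   # "augmentation" noise
view2 = batch + 0.05 * rng.normal(size=batch.shape)
loss = nt_xent(view1, view2)
print(round(float(loss), 3))
```

The augmentation step (here just additive noise) is exactly where the abstract locates the sensitivity: stronger or weaker view transformations change which invariances the encoder learns, and hence downstream lesion-classification accuracy.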