
Publications by Miguel Coimbra

2009

ECCA - Endoscopic Capsule Capview cAtaloguer

Authors
Lima, S; Silva Cunha, JPS; Coimbra, M; Soares, JM;

Publication
WORLD CONGRESS ON MEDICAL PHYSICS AND BIOMEDICAL ENGINEERING, VOL 25, PT 5

Abstract
Statistical pattern recognition research, namely in applied computer vision, typically needs massive, highly accurate datasets to train and test its classifiers. This paper presents extensive work on creating a large clinically annotated dataset of high-confidence events for gastroenterology. More specifically, we address images and videos obtained using endoscopic capsule imaging technology that contain some kind of lesion. The purpose of such a dataset is to boost scientific research in computer-aided diagnostic systems for a technology that would clearly benefit from them.

2006

Combining color with spatial and temporal position of the endoscopic capsule for improved topographic classification and segmentation

Authors
Coimbra, M; Kustra, J; Campos, P; Silva Cunha, JP;

Publication
CEUR Workshop Proceedings

Abstract
Capsule endoscopy is a recent technology with a clear need for automatic tools that reduce the long annotation times of exams. We have previously developed a topographic segmentation method, which is now improved by using spatial and temporal position information. Two approaches are studied: using this information as a confidence measure for our previous segmentation method, and directly integrating it into the image classification process. These allow us not only to know automatically when we have obtained results with error magnitudes close to human errors, but also to reduce these automatic errors to much lower values. All the developed methods have been integrated into the CapView annotation software, currently used in clinical practice in hospitals responsible for over 250 capsule exams per year, where we estimate that the two-hour annotation times are reduced by around 15 minutes.
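A minimal sketch of one way the temporal position of a capsule frame could be directly integrated with a colour description before topographic classification, in the spirit of the "direct integration" approach the abstract mentions. The HSV hue histogram, the normalisation, the SVM classifier and the toy two-zone labels are illustrative assumptions, not the CapView implementation.

```python
# Hypothetical sketch: append the normalised temporal position of a frame to a
# colour histogram and classify the combined feature vector. All parameter
# choices here are assumptions for illustration only.
import numpy as np
from sklearn.svm import SVC

def frame_features(image_hsv: np.ndarray, frame_idx: int, n_frames: int) -> np.ndarray:
    """Hue histogram plus normalised temporal position within the exam."""
    # Hue range 0-179 as in OpenCV-style HSV images (an assumption).
    hist, _ = np.histogram(image_hsv[..., 0], bins=16, range=(0, 180), density=True)
    t = frame_idx / max(n_frames - 1, 1)  # 0 at the start of the exam, 1 at the end
    return np.concatenate([hist, [t]])

# Toy data: random "frames", with the second half of the exam labelled as the next zone.
rng = np.random.default_rng(0)
X = np.stack([frame_features(rng.integers(0, 180, (64, 64, 3)), i, 100) for i in range(100)])
y = (np.arange(100) >= 50).astype(int)

clf = SVC(probability=True).fit(X, y)
print(clf.predict_proba(X[:3]))  # per-zone probabilities for the first frames
```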

2009

A TOOL FOR ENDOSCOPIC CAPSULE DATASET PREPARATION FOR CLINICAL VIDEO EVENT DETECTOR ALGORITHMS

Authors
Lima, S; Cunha, JP; Coimbra, M; Soares, JM;

Publication
HEALTHINF 2009: PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON HEALTH INFORMATICS

Abstract
All R&D projects include at least one phase of model verification and accuracy assessment, and when working with visual information (such as pictures and video) this phase deserves particular emphasis. When working with medical information and clinical trials, the ground truth behind automatic results must be accurate. This work addresses the need for a large, well-annotated dataset of images retrieved from the endoscopic capsule. Such datasets are intended to train computer vision algorithms focused on endoscopic capsule video processing and event detection.

2009

A SURVEY OF AUDIO PROCESSING ALGORITHMS FOR DIGITAL STETHOSCOPES

Authors
Hedayioglu, FD; Coimbra, MT; Mattos, SD;

Publication
HEALTHINF 2009: PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON HEALTH INFORMATICS

Abstract
Digital stethoscopes have been drawing the attention of the biomedical engineering community for some time now, as seen from patent applications and scientific publications. In the future, we expect 'intelligent stethoscopes' to assist the clinician in cardiac exam analysis and diagnosis, enabling functionalities such as the teaching of auscultation, telemedicine, and personalized healthcare. In this paper we review the most recent heart sound processing publications, discussing their adequacy for implementation in digital stethoscopes. Our results show a body of interesting and promising work, although we identify three important limitations of this research field: the lack of a set of universally accepted heart-sound features, poorly described experimental methodologies, and the absence of a clinical validation step. Correcting these flaws is vital for creating convincing next-generation 'intelligent' digital stethoscopes that the medical community can use and trust.

2022

Preliminary Study of Deep Learning Algorithms for Metaplasia Detection in Upper Gastrointestinal Endoscopy

Authors
Neto, A; Ferreira, S; Libânio, D; Ribeiro, MD; Coimbra, MT; Cunha, A;

Publication
MobiHealth

Abstract
Precancerous conditions such as intestinal metaplasia (IM) have a key role in gastric cancer development and can be detected during endoscopy. During upper gastrointestinal endoscopy (UGIE), misdiagnosis can occur due to technical and human factors or the nature of the lesions, leading to a wrong diagnosis that can result in no surveillance/treatment and impair the prevention of gastric cancer. Deep learning systems show great potential in detecting precancerous gastric conditions and lesions from endoscopic images, thus aiding physicians in this task and resulting in higher detection rates and fewer operation errors. This study aims to develop deep learning algorithms capable of detecting IM in UGIE images, with a focus on model explainability and interpretability. In this work, white light and narrow-band imaging UGIE images collected at the Portuguese Institute of Oncology of Porto were used to train deep learning models for IM classification. Standard models such as ResNet50, VGG16 and InceptionV3 were compared to more recent algorithms that rely on attention mechanisms, namely the Vision Transformer (ViT), trained on 818 UGIE images (409 normal and 409 IM). All the models were trained using a 5-fold cross-validation technique and, for validation, an external dataset of 100 UGIE images (50 normal and 50 IM) will be tested. In the end, explainability methods (Grad-CAM and attention rollout) were used for clearer and more interpretable results. The model that performed best was ResNet50, with a sensitivity of 0.75 (±0.05), an accuracy of 0.79 (±0.01), and a specificity of 0.82 (±0.04). This model obtained an AUC of 0.83 with a standard deviation of only 0.01, which means that the iterations of the 5-fold cross-validation agree more closely in classifying the samples than those of the other models. The ViT model showed promising performance, reaching results similar to the remaining models.
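A minimal sketch of the kind of pipeline the abstract describes: fine-tuning ResNet50 with 5-fold cross-validation for binary IM vs. normal classification of UGIE frames. The folder layout, image size, optimiser and training schedule below are assumptions for illustration and do not reproduce the study's setup.

```python
# Hypothetical sketch: 5-fold cross-validation of a ResNet50 binary classifier
# (normal vs. intestinal metaplasia). Paths and hyperparameters are assumptions.
import torch
import torch.nn as nn
from sklearn.model_selection import StratifiedKFold
from torch.utils.data import DataLoader, Subset
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Assumed layout: ugie_images/normal/*.png and ugie_images/im/*.png
dataset = datasets.ImageFolder("ugie_images", transform=transform)
labels = [y for _, y in dataset.samples]
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
device = "cuda" if torch.cuda.is_available() else "cpu"

for fold, (train_idx, val_idx) in enumerate(skf.split(dataset.samples, labels)):
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: normal vs. IM
    model = model.to(device)
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()
    train_loader = DataLoader(Subset(dataset, train_idx), batch_size=16, shuffle=True)
    val_loader = DataLoader(Subset(dataset, val_idx), batch_size=16)

    model.train()
    for epoch in range(5):  # deliberately short schedule, for illustration only
        for x, y in train_loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()

    # Per-fold accuracy; sensitivity, specificity and AUC would be computed similarly.
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in val_loader:
            pred = model(x.to(device)).argmax(dim=1).cpu()
            correct += (pred == y).sum().item()
            total += y.numel()
    print(f"fold {fold}: accuracy = {correct / total:.3f}")
```

Grad-CAM visualisations, as mentioned in the abstract, would typically be produced afterwards from the activations of the last convolutional block of the trained model.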

2023

Colonoscopic Polyp Detection with Deep Learning Assist

Authors
Neto, A; Couto, D; Coimbra, MT; Cunha, A;

Publication
VISIGRAPP (4: VISAPP)

Abstract
Colorectal cancer is the third most common cancer and the second cause of cancer-related deaths in the world. Colonoscopic surveillance is extremely important to find cancer precursors such as adenomas or serrated polyps. Identifying small or flat polyps can be challenging during colonoscopy and is highly dependent on the colonoscopist's skills. Deep learning algorithms can improve the polyp detection rate and consequently help reduce physician subjectiveness and operation errors. This study aims to compare the YOLO object detection architecture with self-attention models. The Kvasir-SEG polyp dataset, composed of 1000 annotated colonoscopy still images, was used to train (700 images) and validate (300 images) the polyp detection algorithms. Well-established architectures such as YOLOv4 and different YOLOv5 models were compared with more recent algorithms that rely on self-attention mechanisms, namely the DETR model, to understand which technique can be more helpful and reliable in clinical practice. In the end, YOLOv5 proved to be the model achieving the best polyp detection results with 0.81 mAP; however, DETR reached 0.80 mAP, showing the potential to match the performance of more well-established architectures.
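A minimal sketch of a single-class average precision computation at IoU 0.5, the kind of mAP metric used above to compare YOLO and DETR. The box format, the greedy matching rule, the 11-point interpolation and the toy data are illustrative assumptions, not the paper's evaluation code.

```python
# Hypothetical sketch: AP@0.5 for one class (polyp) from detections and ground truth.
from typing import Dict, List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)

def iou(a: Box, b: Box) -> float:
    """Intersection over union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def average_precision(
    detections: List[Tuple[int, float, Box]],   # (image_id, confidence, box)
    ground_truth: Dict[int, List[Box]],         # image_id -> list of boxes
    iou_thr: float = 0.5,
) -> float:
    detections = sorted(detections, key=lambda d: d[1], reverse=True)
    matched = {img: [False] * len(boxes) for img, boxes in ground_truth.items()}
    n_gt = sum(len(b) for b in ground_truth.values())
    tp = fp = 0
    precisions, recalls = [], []
    for img, _, box in detections:
        best, best_j = 0.0, -1
        for j, gt in enumerate(ground_truth.get(img, [])):
            o = iou(box, gt)
            if o > best:
                best, best_j = o, j
        if best >= iou_thr and not matched[img][best_j]:
            matched[img][best_j] = True
            tp += 1
        else:
            fp += 1
        precisions.append(tp / (tp + fp))
        recalls.append(tp / n_gt)
    # 11-point interpolation of the precision-recall curve.
    ap = 0.0
    for r in [i / 10 for i in range(11)]:
        p = max([p for p, rec in zip(precisions, recalls) if rec >= r], default=0.0)
        ap += p / 11
    return ap

# Toy usage: one image, one ground-truth polyp, two detections.
gt = {0: [(10.0, 10.0, 50.0, 50.0)]}
dets = [(0, 0.9, (12.0, 11.0, 48.0, 52.0)), (0, 0.4, (60.0, 60.0, 90.0, 90.0))]
print(f"AP@0.5 = {average_precision(dets, gt):.2f}")
```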
