2009
Authors
Hedayioglu, FD; Coimbra, MT; Mattos, SD;
Publication
HEALTHINF 2009: PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON HEALTH INFORMATICS
Abstract
Digital stethoscopes have been drawing the attention of the biomedical engineering community for some time now, as seen from patent applications and scientific publications. In the future, we expect 'intelligent stethoscopes' to assist the clinician in cardiac exam analysis and diagnosis, enabling functionalities such as the teaching of auscultation, telemedicine, and personalized healthcare. In this paper we review the most recent heart sound processing publications, discussing their adequacy for implementation in digital stethoscopes. Our results show a body of interesting and promising work, although we identify three important limitations of this research field: the lack of a set of universally accepted heart-sound features, poorly described experimental methodologies and the absence of a clinical validation step. Correcting these flaws is vital for creating convincing next-generation 'intelligent' digital stethoscopes that the medical community can use and trust.
2022
Authors
Neto, A; Ferreira, S; Libânio, D; Ribeiro, MD; Coimbra, MT; Cunha, A;
Publication
Wireless Mobile Communication and Healthcare - 11th EAI International Conference, MobiHealth 2022, Virtual Event, November 30 - December 2, 2022, Proceedings
Abstract
Precancerous conditions such as intestinal metaplasia (IM) play a key role in gastric cancer development and can be detected during endoscopy. During upper gastrointestinal endoscopy (UGIE), misdiagnosis can occur due to technical and human factors or the nature of the lesions, leading to a wrong diagnosis that may result in no surveillance/treatment and impair the prevention of gastric cancer. Deep learning systems show great potential in detecting precancerous gastric conditions and lesions from endoscopic images, aiding physicians in this task and resulting in higher detection rates and fewer operation errors. This study aims to develop deep learning algorithms capable of detecting IM in UGIE images, with a focus on model explainability and interpretability. In this work, white light and narrow-band imaging UGIE images collected at the Portuguese Institute of Oncology of Porto were used to train deep learning models for IM classification. Standard models such as ResNet50, VGG16 and InceptionV3 were compared to more recent algorithms that rely on attention mechanisms, namely the Vision Transformer (ViT), trained on 818 UGIE images (409 normal and 409 IM). All the models were trained using 5-fold cross-validation and tested on an external dataset of 100 UGIE images (50 normal and 50 IM). Finally, explainability methods (Grad-CAM and attention rollout) were used to obtain clearer and more interpretable results. The best-performing model was ResNet50, with a sensitivity of 0.75 (±0.05), an accuracy of 0.79 (±0.01), and a specificity of 0.82 (±0.04). This model obtained an AUC of 0.83 (±0.01); the low standard deviation indicates that the 5-fold cross-validation iterations agree more closely in classifying the samples than those of the other models. The ViT model showed promising performance, reaching results similar to the remaining models. © 2023, ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering.
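For illustration, the cross-validation setup described above could look roughly like the sketch below. This assumes a PyTorch/scikit-learn stack, in-memory image tensors, and illustrative hyperparameters (image size, batch size, learning rate, number of epochs) that are not taken from the paper.

# Minimal sketch (not the authors' code): 5-fold cross-validation of a
# ResNet50 binary classifier for IM vs. normal UGIE frames.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision import models
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

def build_model():
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, 1)  # single logit: IM vs. normal
    return model

def run_cv(images, labels, epochs=10, device="cuda"):
    # images: float tensor [N, 3, 224, 224]; labels: tensor [N], 0 = normal, 1 = IM
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
    aucs = []
    for train_idx, val_idx in skf.split(images.numpy(), labels.numpy()):
        model = build_model().to(device)
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
        loss_fn = nn.BCEWithLogitsLoss()
        loader = DataLoader(TensorDataset(images[train_idx], labels[train_idx]),
                            batch_size=16, shuffle=True)
        model.train()
        for _ in range(epochs):
            for x, y in loader:
                logits = model(x.to(device)).squeeze(1)
                loss = loss_fn(logits, y.float().to(device))
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
        model.eval()
        with torch.no_grad():
            probs = torch.sigmoid(model(images[val_idx].to(device)).squeeze(1)).cpu()
        aucs.append(roc_auc_score(labels[val_idx].numpy(), probs.numpy()))
    return sum(aucs) / len(aucs)  # mean AUC over the 5 folds

The same loop structure applies to the other backbones (VGG16, InceptionV3, ViT) by swapping the model constructor.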
2023
Authors
Neto, A; Couto, D; Coimbra, MT; Cunha, A;
Publication
Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, VISIGRAPP 2023, Volume 4: VISAPP, Lisbon, Portugal, February 19-21, 2023.
Abstract
Colorectal cancer is the third most common cancer and the second leading cause of cancer-related deaths in the world. Colonoscopic surveillance is extremely important to find cancer precursors such as adenomas or serrated polyps. Identifying small or flat polyps can be challenging during colonoscopy and is highly dependent on the colonoscopist's skills. Deep learning algorithms can improve the polyp detection rate and consequently help reduce physician subjectivity and operation errors. This study aims to compare the YOLO object detection architecture with self-attention models. The Kvasir-SEG polyp dataset, composed of 1000 annotated colonoscopy still images, was used to train (700 images) and validate (300 images) the polyp detection algorithms. Well-established architectures such as YOLOv4 and different YOLOv5 models were compared with more recent algorithms that rely on self-attention mechanisms, namely the DETR model, to understand which technique can be more helpful and reliable in clinical practice. In the end, YOLOv5 achieved the best polyp detection results with 0.81 mAP; however, DETR reached 0.80 mAP, showing the potential to match more well-established architectures. © 2023 by SCITEPRESS - Science and Technology Publications, Lda.
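As a rough illustration of the evaluation behind the reported mAP figures, the sketch below computes average precision at IoU 0.5 for a single "polyp" class. The data layout and the 101-point interpolation are assumptions, not the paper's actual evaluation code.

# Minimal sketch: average precision at IoU 0.5 for one detection class.
import numpy as np

def iou(box_a, box_b):
    # boxes as [x1, y1, x2, y2]
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def average_precision(detections, ground_truths, iou_thr=0.5):
    # detections: list of (image_id, score, box); ground_truths: dict image_id -> list of boxes
    detections = sorted(detections, key=lambda d: d[1], reverse=True)
    matched = {img: [False] * len(boxes) for img, boxes in ground_truths.items()}
    n_gt = sum(len(b) for b in ground_truths.values())
    tp, fp = [], []
    for img, _, box in detections:
        ious = [iou(box, gt) for gt in ground_truths.get(img, [])]
        best = int(np.argmax(ious)) if ious else -1
        if best >= 0 and ious[best] >= iou_thr and not matched[img][best]:
            matched[img][best] = True   # first match of this ground truth is a true positive
            tp.append(1); fp.append(0)
        else:
            tp.append(0); fp.append(1)  # duplicate or unmatched detection is a false positive
    tp, fp = np.cumsum(tp), np.cumsum(fp)
    recall = tp / max(n_gt, 1)
    precision = tp / np.maximum(tp + fp, 1e-9)
    # 101-point interpolation of the precision-recall curve (COCO-style)
    return float(np.mean([precision[recall >= r].max() if np.any(recall >= r) else 0.0
                          for r in np.linspace(0, 1, 101)]))

With a single class, this average precision is the mAP; multi-class mAP averages it over classes.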
2023
Authors
Ferraz, S; Coimbra, M; Pedrosa, J;
Publication
FRONTIERS IN CARDIOVASCULAR MEDICINE
Abstract
Echocardiography is the most frequently used imaging modality in cardiology. However, its acquisition is affected by inter-observer variability and is largely dependent on the operator's experience. In this context, artificial intelligence techniques could reduce these variabilities and provide a user-independent system. In recent years, machine learning (ML) algorithms have been used to automate echocardiographic acquisition. This review focuses on the state-of-the-art studies that use ML to automate tasks regarding the acquisition of echocardiograms, including quality assessment (QA), recognition of cardiac views and assisted probe guidance during the scanning process. The results indicate that the performance of automated acquisition was overall good, but most studies lack variability in their datasets. From our comprehensive review, we believe automated acquisition has the potential not only to improve the accuracy of diagnosis, but also to help novice operators build expertise and facilitate point-of-care healthcare in medically underserved areas.
2022
Authors
Gaudio, A; Coimbra, MT; Campilho, A; Smailagic, A; Schmidt, SE; Renna, F;
Publication
Computing in Cardiology, CinC 2022, Tampere, Finland, September 4-7, 2022
Abstract
Late diagnosis of pulmonary artery hypertension (PH) is associated with poor outcomes. This observation has led to a call for earlier, non-invasive PH detection. Cardiac auscultation offers a non-invasive and cost-effective alternative to both right heart catheterization and Doppler analysis for PH assessment. We propose to detect PH by analyzing digital heart sound recordings with over-parameterized deep neural networks. In contrast with previous approaches in the literature, we assess the impact of a pre-processing step that separates the S2 sound into its aortic (A2) and pulmonary (P2) components. We obtain an area under the ROC curve of 0.95, improving over our adaptation of a state-of-the-art Gaussian mixture model PH detector by +0.17. Post-hoc explanations and analysis show that the availability of separated A2 and P2 components contributes significantly to prediction. Analysis of stethoscope heart sound recordings with deep networks is an effective, low-cost and non-invasive solution for the detection of pulmonary hypertension. © 2022 Creative Commons.
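As an illustration of the kind of pre-processing commonly applied before feeding heart sounds to deep networks, the sketch below converts a PCG recording into a log-mel spectrogram. The file name, sampling rate and spectrogram parameters are assumptions, not the authors' pipeline.

# Minimal sketch: log-mel spectrogram features from a heart sound recording.
import librosa
import numpy as np

def pcg_to_logmel(path, sr=4000, n_mels=64, n_fft=256, hop_length=64):
    audio, _ = librosa.load(path, sr=sr, mono=True)            # resample PCG to 4 kHz
    mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_fft=n_fft,
                                         hop_length=hop_length, n_mels=n_mels,
                                         fmax=1000)            # heart sound energy sits below ~1 kHz
    return librosa.power_to_db(mel, ref=np.max)                # log scale for network input

features = pcg_to_logmel("heart_sound.wav")   # hypothetical file name
print(features.shape)                          # (n_mels, n_frames)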
2022
Authors
Reyna, MA; Kiarashi, Y; Elola, A; Oliveira, J; Renna, F; Gu, A; Perez Alday, EA; Sadr, N; Sharma, A; Silva Mattos, Sd; Coimbra, MT; Sameni, R; Rad, AB; Clifford, GD;
Publication
Computing in Cardiology, CinC 2022, Tampere, Finland, September 4-7, 2022
Abstract
The George B. Moody PhysioNet Challenge 2022 explored the detection of abnormal heart function from phonocardiogram (PCG) recordings. Although ultrasound imaging is becoming more common for investigating heart defects, the PCG still has the potential to assist with rapid and low-cost screening, and the automated annotation of PCG recordings has the potential to further improve access. Therefore, for this Challenge, we asked participants to design working, open-source algorithms that use PCG recordings to identify heart murmurs and clinical outcomes. This Challenge introduced several innovations. First, we sourced 5272 PCG recordings from 1568 patients in Brazil, providing high-quality data for an underrepresented population. Second, we required the Challenge teams to submit working code for training and running their models, improving the reproducibility and reusability of the algorithms. Third, we devised a cost-based evaluation metric that reflects the costs of screening, treatment, and diagnostic errors, facilitating the development of more clinically relevant algorithms. A total of 87 teams submitted 779 algorithms during the Challenge. These algorithms represent a diversity of approaches from both academia and industry for detecting abnormal cardiac function from PCG recordings. © 2022 Creative Commons.
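As an illustration of how a cost-based metric of this kind can be computed, the sketch below weighs screening, treatment and missed-diagnosis costs over a binary confusion matrix. The cost values are placeholders, not the Challenge's official scoring parameters.

# Minimal sketch: cost-weighted evaluation over a binary confusion matrix.
def total_cost(tp, fp, fn, tn,
               cost_screen=10.0,      # every screened patient incurs a screening cost
               cost_treat=100.0,      # true and false positives are referred for treatment
               cost_missed=1000.0):   # false negatives incur the cost of a missed diagnosis
    n = tp + fp + fn + tn
    return (n * cost_screen
            + (tp + fp) * cost_treat
            + fn * cost_missed)

# Example: compare two hypothetical algorithms on the same cohort; lower cost is better.
print(total_cost(tp=40, fp=20, fn=10, tn=130))
print(total_cost(tp=45, fp=60, fn=5, tn=90))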