2024
Authors
Amaro, M; Oliveira, HP; Pereira, T;
Publication
2024 IEEE 37TH INTERNATIONAL SYMPOSIUM ON COMPUTER-BASED MEDICAL SYSTEMS, CBMS 2024
Abstract
Lung Cancer (LC) remains among the leading causes of death worldwide and accounts for the highest number of deaths among all cancers. Several AI-based methods have been developed for the early detection of LC, using Computed Tomography (CT) images to identify initial signs of the disease. Survival prediction could help clinicians adapt the treatment plan and related procedures by identifying the most severe cases that need closer attention. In this study, several deep learning models were compared for predicting the survival of LC patients using CT images. The best-performing model, a CNN with 3 layers, achieved an AUC of 0.80, a Precision of 0.56 and a Recall of 0.64. The obtained results showed that CT images carry information that can be used to assess the survival of LC patients.
2025
Authors
Nunes, JD; Montezuma, D; Oliveira, D; Pereira, T; Cardoso, JS;
Publication
MEDICAL IMAGE ANALYSIS
Abstract
Nuclear-derived morphological features and biomarkers provide relevant insights regarding the tumour microenvironment, while also allowing diagnosis and prognosis in specific cancer types. However, manually annotating nuclei from gigapixel Haematoxylin and Eosin (H&E)-stained Whole Slide Images (WSIs) is a laborious and costly task, meaning automated algorithms for cell nuclei instance segmentation and classification could alleviate the workload of pathologists and clinical researchers and at the same time facilitate the automatic extraction of clinically interpretable features for artificial intelligence (AI) tools. But due to the high intra- and inter-class variability of nuclei morphological and chromatic features, as well as the susceptibility of H&E stains to artefacts, state-of-the-art algorithms cannot correctly detect and classify instances with the necessary performance. In this work, we hypothesize that context and attention inductive biases in artificial neural networks (ANNs) could increase the performance and generalization of algorithms for cell nuclei instance segmentation and classification. To understand the advantages, use-cases, and limitations of context- and attention-based mechanisms in instance segmentation and classification, we start by reviewing works in computer vision and medical imaging. We then conduct a thorough survey on context and attention methods for cell nuclei instance segmentation and classification from H&E-stained microscopy imaging, while providing a comprehensive discussion of the challenges being tackled with context and attention. In addition, we illustrate some limitations of current approaches and present ideas for future research. As a case study, we extend both a general (Mask-RCNN) and a customized (HoVer-Net) instance segmentation and classification method with context- and attention-based mechanisms and perform a comparative analysis on a multicentre dataset for colon nuclei identification and counting.
Although pathologists rely on context at multiple levels while paying attention to specific Regions of Interest (RoIs) when analysing and annotating WSIs, our findings suggest that translating that domain knowledge into algorithm design is no trivial task; to fully exploit these mechanisms in ANNs, the scientific understanding of these methods should first be addressed.
2024
Authors
Teiga, I; Sousa, JV; Silva, F; Pereira, T; Oliveira, HP;
Publication
UNIVERSAL ACCESS IN HUMAN-COMPUTER INTERACTION, PT III, UAHCI 2024
Abstract
Medical image visualization and annotation tools tailored for clinical users play a crucial role in disease diagnosis and treatment. Developing algorithms for annotation assistance, particularly machine learning (ML)-based ones, can be intricate, emphasizing the need for a user-friendly graphical interface for developers. Many software tools are available to meet these requirements, but there is still room for improvement, making research into new tools highly compelling. The envisioned tool focuses on navigating sequences of DICOM images from diverse modalities, including Magnetic Resonance Imaging (MRI), Computed Tomography (CT) scans, Ultrasound (US), and X-rays. Specific requirements involve implementing manual annotation features such as freehand drawing, copying, pasting, and modifying annotations. A scripting plugin interface is essential for running Artificial Intelligence (AI)-based models and adjusting their results. Additionally, adaptable surveys complement graphical annotations with textual notes, enhancing the information provided. The user evaluation results pinpointed areas for improvement, including the incorporation of some useful functionalities, as well as enhancements to the user interface for a more intuitive and convenient experience. Despite these suggestions, participants praised the application's simplicity and consistency, highlighting its suitability for the proposed tasks. The ability to revisit annotations ensures flexibility and ease of use in this context.
2023
Authors
Charlton, PH; Allen, J; Bailon, R; Baker, S; Behar, JA; Chen, F; Clifford, GD; Clifton, DA; Davies, HJ; Ding, C; Ding, XR; Dunn, J; Elgendi, M; Ferdoushi, M; Franklin, D; Gil, E; Hassan, MF; Hernesniemi, J; Hu, X; Ji, N; Khan, Y; Kontaxis, S; Korhonen, I; Kyriacou, PA; Laguna, P; Lazaro, J; Lee, CK; Levy, J; Li, YM; Liu, CY; Liu, J; Lu, L; Mandic, DP; Marozas, V; Mejía-Mejía, E; Mukkamala, R; Nitzan, M; Pereira, T; Poon, CCY; Ramella-Roman, JC; Saarinen, H; Shandhi, MMH; Shin, H; Stansby, G; Tamura, T; Vehkaoja, A; Wang, WK; Zhang, YT; Zhao, N; Zheng, DC; Zhu, TT;
Publication
PHYSIOLOGICAL MEASUREMENT
Abstract
Photoplethysmography is a key sensing technology which is used in wearable devices such as smartwatches and fitness trackers. Currently, photoplethysmography sensors are used to monitor physiological parameters including heart rate and heart rhythm, and to track activities like sleep and exercise. Yet, wearable photoplethysmography has potential to provide much more information on health and wellbeing, which could inform clinical decision making. This Roadmap outlines directions for research and development to realise the full potential of wearable photoplethysmography. Experts discuss key topics within the areas of sensor design, signal processing, clinical applications, and research directions. Their perspectives provide valuable guidance to researchers developing wearable photoplethysmography technology.
2023
Authors
Ribeiro, L; Oliveira, HP; Hu, X; Pereira, T;
Publication
IEEE International Conference on Bioinformatics and Biomedicine, BIBM 2023, Istanbul, Turkiye, December 5-8, 2023
Abstract
The PPG signal is a valuable resource for continuous heart rate monitoring; however, it suffers from motion artifacts, which are particularly pronounced during physical exercise and make this biomedical signal difficult to use for heart rate detection during those activities. The purpose of this study was to develop learning models that determine heart rate from wearable data (PPG and acceleration signals) while dealing with noise during physical exercise. Learning models based on CNNs and LSTMs were developed to predict the heart rate. The PPG signal was combined with accelerometer data to overcome the motion noise in the PPG signal. Two datasets were used in this work: the 2015 IEEE Signal Processing Cup (SPC) dataset was used for training and testing, and another dataset (PPG-DaLiA) was used for validation of the learning model. The predictions obtained by the learning model yielded a mean absolute error of 7.033±5.376 bpm for the SPC dataset and 9.520±8.443 bpm for the validation set. The use of acceleration data increased the performance of the learning models in predicting heart rate, showing the benefit of using this source of data to overcome the motion-noise problem in the PPG signal. Combining the PPG signal with acceleration data allows the learning models to use more information regarding the motion artifacts that affect the PPG and to improve performance in physiological event detection, which will broaden the use of wearables in healthcare applications for continuous monitoring of the physiological state, allowing early and accurate detection of pathological events.
2023
Authors
Gomes, A; Pereira, T; Silva, F; Franco, P; Carvalho, DC; Dias, SC; Oliveira, HP;
Publication
IEEE International Conference on Bioinformatics and Biomedicine, BIBM 2023, Istanbul, Turkiye, December 5-8, 2023
Abstract
Bone marrow edema (BME), or bone marrow lesion, is the term attributed to an observed signal change within the bone marrow in magnetic resonance imaging (MRI). BME can originate from multiple mechanisms, with pain being the main symptom. The presence of BME is a non-specific but sensitive sign with a wide differential diagnosis, and it may guide a systematic and correct interpretation of the magnetic resonance examination. An automatic approach for BME detection and quantification aims to reduce the overload on clinicians, decreasing human error and accelerating the time to the correct diagnosis. In this work, the bone region of each MRI slice was split into several patches, and a CNN-based model was trained to detect BME in each patch. The learning model developed achieved an AUC of 0.853 ± 0.056, showing that a CNN-based model can be used to detect BME in MRI and confirming the patch strategy implemented to deal with the small dataset size, which allows the neural network to learn the information specific to the classification task by reducing the image region considered. A learning model that can help clinicians with BME identification will decrease the time and error of diagnosis, and it represents a first step toward a more objective assessment of BME. © 2023 IEEE.