Details
Name
Joana Vale Sousa
Role
Research Assistant
Since
1st December 2020
Nationality
Portugal
Centre
Telecommunications and Multimedia
Contacts
+351222094000
joana.v.sousa@inesctec.pt
2025
Authors
Freire, AM; Rodrigues, EM; Sousa, JV; Gouveia, M; Ferreira-Santos, D; Pereira, T; Oliveira, HP; Sousa, P; Silva, AC; Fernandes, MS; Hespanhol, V; Araújo, J;
Publication
UNIVERSAL ACCESS IN HUMAN-COMPUTER INTERACTION, UAHCI 2025, PT I
Abstract
Lung cancer remains one of the most common and lethal forms of cancer, with approximately 1.8 million deaths annually, often diagnosed at advanced stages. Early detection is crucial, but it depends on physicians' accurate interpretation of computed tomography (CT) scans, a process susceptible to human limitations and variability. ByMe has developed a medical image annotation and anonymization tool designed to address these challenges through a human-centered approach. The tool enables physicians to seamlessly add structured attribute-based annotations (e.g., size, location, morphology) directly within their established workflows, ensuring intuitive interaction. Integrated with Picture Archiving and Communication Systems (PACS), the tool streamlines the annotation process and enhances usability by offering a dedicated worklist for retrospective and prospective case analysis. Robust anonymization features ensure compliance with privacy regulations such as the General Data Protection Regulation (GDPR), enabling secure dataset sharing for research and the development of artificial intelligence (AI) models. Designed to empower AI integration, the tool not only facilitates the creation of high-quality datasets but also lays the foundation for incorporating AI-driven insights directly into clinical workflows. Focusing on usability, workflow integration, and privacy, this innovation bridges the gap between precision medicine and advanced technology. By providing the means to develop and train AI models for lung cancer detection, it holds the potential to significantly accelerate diagnosis as well as enhance its accuracy and consistency.
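The abstract does not describe the anonymization implementation. As a rough illustration of the kind of DICOM de-identification step such a tool performs before datasets are shared, here is a minimal Python sketch using pydicom; the library choice and the short tag list are assumptions for illustration, not the authors' implementation (a GDPR-compliant pipeline would follow the full DICOM PS3.15 de-identification profile).

```python
# Minimal sketch of DICOM de-identification for dataset sharing.
# Assumption: pydicom is used and only a small, fixed list of identifying
# tags is cleared; this is illustrative, not the tool's actual logic.
from pathlib import Path
import pydicom

IDENTIFYING_TAGS = [
    "PatientName", "PatientID", "PatientBirthDate",
    "PatientAddress", "ReferringPhysicianName", "InstitutionName",
]

def anonymize_file(src: Path, dst: Path) -> None:
    ds = pydicom.dcmread(src)
    for tag in IDENTIFYING_TAGS:
        if tag in ds:
            ds.data_element(tag).value = ""   # blank the identifying value
    ds.remove_private_tags()                  # drop vendor-specific private tags
    ds.save_as(dst)

if __name__ == "__main__":
    anonymize_file(Path("case_001.dcm"), Path("case_001_anon.dcm"))
```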
2025
Authors
Amaro, M; Sousa, JV; Gouveia, M; Oliveira, HP; Pereira, T;
Publication
Measurement and Evaluations in Cancer Care
2025
Authors
Sousa, JV; Oliveira, HP; Pereira, T;
Publication
2025 IEEE 25th International Conference on Bioinformatics and Bioengineering (BIBE)
2024
Authors
Teiga, I; Sousa, JV; Silva, F; Pereira, T; Oliveira, HP;
Publication
UNIVERSAL ACCESS IN HUMAN-COMPUTER INTERACTION, PT III, UAHCI 2024
Abstract
Significant medical image visualization and annotation tools, tailored for clinical users, play a crucial role in disease diagnosis and treatment. Developing algorithms for annotation assistance, particularly machine learning (ML)-based ones, can be intricate, emphasizing the need for a user-friendly graphical interface for developers. Many software tools are available to meet these requirements, but there is still room for improvement, making the research for new tools highly compelling. The envisioned tool focuses on navigating sequences of DICOM images from diverse modalities, including Magnetic Resonance Imaging (MRI), Computed Tomography (CT) scans, Ultrasound (US), and X-rays. Specific requirements involve implementing manual annotation features such as freehand drawing, copying, pasting, and modifying annotations. A scripting plugin interface is essential for running Artificial Intelligence (AI)-based models and adjusting results. Additionally, adaptable surveys complement graphical annotations with textual notes, enhancing information provision. The user evaluation results pinpointed areas for improvement, including incorporating some useful functionalities, as well as enhancements to the user interface for a more intuitive and convenient experience. Despite these suggestions, participants praised the application's simplicity and consistency, highlighting its suitability for the proposed tasks. The ability to revisit annotations ensures flexibility and ease of use in this context.
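As context for the navigation requirement described above, the following is a minimal Python sketch, assuming pydicom (the paper does not specify the library), of how a viewer loads a DICOM series and orders its slices for slice-by-slice browsing.

```python
# Minimal sketch of loading and ordering a DICOM series for slice-by-slice
# navigation, as a viewer/annotation tool would do. The library choice and
# the sort key are illustrative assumptions, not the tool's code.
from pathlib import Path
import pydicom

def load_series(folder: Path):
    """Read all DICOM files in a folder and sort them into scan order."""
    slices = [pydicom.dcmread(p) for p in sorted(folder.glob("*.dcm"))]
    # InstanceNumber gives the acquisition order for most CT/MRI series.
    slices.sort(key=lambda ds: int(ds.InstanceNumber))
    return slices

if __name__ == "__main__":
    series = load_series(Path("./study/ct_series"))
    current = 0                               # index of the slice on screen
    pixels = series[current].pixel_array      # 2D numpy array for display
    print(f"{len(series)} slices, slice shape {pixels.shape}")
```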
2023
Authors
Ribeiro, G; Pereira, T; Silva, F; Sousa, J; Carvalho, DC; Dias, SC; Oliveira, HP;
Publication
APPLIED SCIENCES-BASEL
Abstract
Bone marrow edema (BME) is the term given to the abnormal fluid signal seen within the bone marrow on magnetic resonance imaging (MRI). It usually indicates the presence of underlying pathology and is associated with a myriad of conditions/causes. However, it can be misleading, as in some cases, it may be associated with normal changes in the bone, especially during the growth period of childhood, and objective methods for assessment are lacking. In this work, learning models for BME detection were developed. Transfer learning was used to overcome the size limitations of the dataset, and two different regions of interest (ROI) were defined and compared to evaluate their impact on the performance of the model: bone segmentation and intensity mask. The best model was obtained for the high-intensity masking technique, which achieved a balanced accuracy of 0.792 ± 0.034. This study represents a comparison of different models and data regularization techniques for BME detection and showed promising results, even in the most difficult range of ages: children and adolescents. The application of machine learning methods will help to decrease the dependence on clinicians, providing an initial stratification of the patients based on the probability of edema presence and supporting their decisions on the diagnosis.
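For readers unfamiliar with this setup, below is a minimal sketch of the kind of transfer-learning pipeline and balanced-accuracy evaluation described above; the backbone (ResNet-18), class count, and optimizer settings are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch: pretrained backbone fine-tuned for binary BME detection,
# evaluated with balanced accuracy. Illustrative only; the training loop
# and data loading are omitted for brevity.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.metrics import balanced_accuracy_score

# Start from ImageNet weights to compensate for the small dataset size.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)     # two classes: edema present / absent

criterion = nn.CrossEntropyLoss()                 # fine-tuning loop not shown
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def evaluate(model, loader, device="cpu"):
    """Balanced accuracy compensates for the class imbalance typical of BME data."""
    model.eval()
    y_true, y_pred = [], []
    with torch.no_grad():
        for images, labels in loader:             # loader yields (image batch, label batch)
            logits = model(images.to(device))
            y_pred.extend(logits.argmax(dim=1).cpu().tolist())
            y_true.extend(labels.tolist())
    return balanced_accuracy_score(y_true, y_pred)
```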