Details

  • Name: João Pedro Monteiro
  • Position: External Student
  • Since: 20 October 2011
Publications

2022

3D Breast Volume Estimation

Authors
Gouveia, PF; Oliveira, HP; Monteiro, JP; Teixeira, JF; Silva, NL; Pinto, D; Mavioso, C; Anacleto, J; Martinho, M; Duarte, I; Cardoso, JS; Cardoso, F; Cardoso, MJ;

Publication
EUROPEAN SURGICAL RESEARCH

Abstract
Introduction: Breast volume estimation is considered crucial for breast cancer surgery planning. A single, easy, and reproducible method to estimate breast volume is not available. This study aims to evaluate, in patients proposed for mastectomy, the accuracy of the calculation of breast volume from a low-cost 3D surface scan (Microsoft Kinect) compared to the breast MRI and water displacement technique. Material and Methods: Patients with a Tis/T1-T3 breast cancer proposed for mastectomy between July 2015 and March 2017 were assessed for inclusion in the study. Breast volume calculations were performed using a 3D surface scan and the breast MRI and water displacement technique. Agreement between volumes obtained with both methods was assessed with the Spearman and Pearson correlation coefficients. Results: Eighteen patients with invasive breast cancer were included in the study and submitted to mastectomy. The level of agreement of the 3D breast volume compared to surgical specimens and breast MRI volumes was evaluated. For mastectomy specimen volume, an average (standard deviation) of 0.823 (0.027) and 0.875 (0.026) was obtained for the Pearson and Spearman correlations, respectively. With respect to MRI annotation, we obtained 0.828 (0.038) and 0.715 (0.018). Discussion: Although values obtained by both methodologies still differ, the strong linear correlation coefficient suggests that 3D breast volume measurement using a low-cost surface scan device is feasible and can approximate both the MRI breast volume and mastectomy specimen with sufficient accuracy. Conclusion: 3D breast volume measurement using a depth-sensor low-cost surface scan device is feasible and can parallel MRI breast and mastectomy specimen volumes with enough accuracy. Differences between methods need further development to reach clinical applicability. A possible approach could be the fusion of breast MRI and the 3D surface scan to harmonize anatomic limits and improve volume delimitation.
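
The agreement analysis above rests on Pearson and Spearman correlation coefficients between paired volume measurements. As a minimal sketch of how such coefficients can be computed (the volume values below are hypothetical placeholders, not data from the study):

```python
# Minimal sketch: correlating two sets of breast volume estimates.
# The numbers are hypothetical placeholders, not data from the study.
from scipy.stats import pearsonr, spearmanr

# Hypothetical volumes (in mL) for the same patients, measured two ways
scan_volumes = [420.0, 510.5, 380.2, 615.8, 472.3]   # 3D surface scan
mri_volumes = [401.7, 535.0, 366.9, 640.1, 455.6]    # MRI annotation

pearson_r, pearson_p = pearsonr(scan_volumes, mri_volumes)
spearman_r, spearman_p = spearmanr(scan_volumes, mri_volumes)

print(f"Pearson r = {pearson_r:.3f} (p = {pearson_p:.3f})")
print(f"Spearman rho = {spearman_r:.3f} (p = {spearman_p:.3f})")
```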

2019

Geometry-Based Skin Colour Estimation for Bare Torso Surface Reconstruction

Authors
Monteiro, JP; Zolfagharnasab, H; Oliveira, HP;

Publication
IMAGE ANALYSIS AND PROCESSING - ICIAP 2019, PT II

Abstract
Three-dimensional imaging techniques have been steadily moving towards affordable ubiquity. Nevertheless, their use in clinical practice can be hampered by less-than-natural-looking surfaces that greatly impact visual inspection. This work considers the task of surface reconstruction from point clouds of non-rigid scenes acquired through structured-light-based methods, wherein the reconstructed surface contains some level of imperfection to be inpainted before being visualized by experts in a clinically oriented context. Pertaining to this topic, the recovery of colour information for missing or damaged partial regions is considered. A local geometry-based interpolation method is proposed for the reconstruction of the bare human torso and compared against a reference differential-equations-based inpainting method. Widely used perceptual distance-based metrics, such as PSNR, SSIM and MS-SSIM, and the evaluation from a panel of experienced breast cancer surgeons are presented for the discussion on inpainting quality assessment.
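
The inpainting evaluation above relies on standard perceptual image metrics. As a minimal sketch of how PSNR and SSIM can be computed between a reference image and an inpainted result using scikit-image (MS-SSIM typically requires a separate library; the file names below are hypothetical):

```python
# Minimal sketch: comparing an inpainted texture against a reference with
# PSNR and SSIM. File names are hypothetical placeholders.
from skimage.io import imread
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

reference = imread("reference_texture.png")   # ground-truth colour image
inpainted = imread("inpainted_texture.png")   # region filled by interpolation

psnr = peak_signal_noise_ratio(reference, inpainted)
# channel_axis=-1 treats the last dimension as colour channels (RGB)
ssim = structural_similarity(reference, inpainted, channel_axis=-1)

print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.4f}")
```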

2018

Three-dimensional planning tool for breast conserving surgery: A technological review

Authors
Oliveira, SP; Morgado, P; Gouveia, PF; Teixeira, JF; Bessa, S; Monteiro, JP; Zolfagharnasab, H; Reis, M; Silva, NL; Veiga, D; Cardoso, MJ; Oliveira, HP; Ferreira, MJ;

Publication
Critical Reviews in Biomedical Engineering

Abstract
Breast cancer is one of the most common malignancies affecting women worldwide. However, although its incidence has increased, the mortality rate has significantly decreased. The primary concern in any cancer treatment is the oncological outcome, but in the case of breast cancer the aesthetic result of surgery has become an important quality indicator for patients. In this sense, an adequate surgical planning and prediction tool would empower the patient in the treatment decision process, enabling better communication between surgeon and patient and a better understanding of the impact of each surgical option. To develop such a tool, it is necessary to create a complete 3D model of the breast, integrating both inner and outer breast data. In this review, we thoroughly explore and review the major existing works that address, directly or indirectly, the technical challenges involved in the development of a 3D software planning tool in the field of breast conserving surgery. © 2018 by Begell House, Inc.

2017

Multi-modal Complete Breast Segmentation

Authors
Zolfagharnasab, H; Monteiro, JP; Teixeira, JF; Borlinhas, F; Oliveira, HP;

Publication
PATTERN RECOGNITION AND IMAGE ANALYSIS (IBPRIA 2017)

Abstract
Automatic segmentation of the breast is an important step in the context of providing a planning tool for breast cancer conservative treatment, where it is important to segment the complete breast region in an objective way; however, current methodologies require user interaction or only detect the breast contour partially. In this paper, we propose a methodology to detect the complete breast contour, including the pectoral muscle, using multi-modality data. The exterior contour is obtained from 3D data reconstructed from low-cost RGB-D sensors, and the interior contour (pectoral muscle) is obtained from Magnetic Resonance Imaging (MRI) data. Quantitative evaluation indicates that the proposed methodology performs an acceptable detection of the breast contour, which is also confirmed by visual evaluation.

2016

Cognition inspired format for the expression of computer vision metadata

Authors
Castro, H; Monteiro, J; Pereira, A; Silva, D; Coelho, G; Carvalho, P;

Publication
MULTIMEDIA TOOLS AND APPLICATIONS

Abstract
Over the last decade, noticeable progress has occurred in the automated computer interpretation of visual information. Computers running artificial intelligence algorithms are increasingly capable of extracting perceptual and semantic information from images and registering it as metadata. There is also a growing body of manually produced image annotation data. All of this data is of great importance for scientific purposes as well as for commercial applications. Optimizing the usefulness of this manually or automatically produced information implies its precise and adequate expression at its different logical levels, making it easily accessible, manipulable and shareable. It also implies the development of associated manipulation tools. However, the expression and manipulation of computer vision results has received less attention than the actual extraction of such results, and has therefore advanced less. Existing metadata tools are poorly structured in logical terms, as they intermix the declaration of visual detections with that of the observed entities, events and surrounding context. This poor structuring renders such tools rigid, limited and cumbersome to use. Moreover, they are unprepared to deal with more advanced situations, such as the coherent expression of the information extracted from, or annotated onto, multi-view video resources. The work presented here comprises the specification of an advanced XML-based syntax for the expression and processing of metadata relevant to Computer Vision. This proposal takes inspiration from the natural cognition process for the adequate expression of the information, with a particular focus on scenarios with varying numbers of sensory devices, notably multi-view video.
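
The abstract argues for keeping low-level visual detections logically separate from the entities and events they support. As an illustration only (the element and attribute names below are hypothetical assumptions, not the syntax actually proposed in the paper), a small Python sketch of metadata structured along those lines:

```python
# Hypothetical illustration of metadata that separates low-level visual
# detections from the higher-level entities they support. Element names
# are assumptions for this sketch, not the syntax proposed in the paper.
import xml.etree.ElementTree as ET

root = ET.Element("SceneMetadata")

# Layer 1: raw detections, one per camera view
detections = ET.SubElement(root, "Detections")
det = ET.SubElement(detections, "Detection", id="d1", view="camera-02", frame="1437")
ET.SubElement(det, "BoundingBox", x="312", y="145", w="80", h="190")
ET.SubElement(det, "Confidence").text = "0.91"

# Layer 2: observed entities, which reference detections instead of embedding them
entities = ET.SubElement(root, "Entities")
person = ET.SubElement(entities, "Entity", id="e1", type="person")
ET.SubElement(person, "SupportingDetection", ref="d1")

print(ET.tostring(root, encoding="unicode"))
```

Keeping the two layers in separate elements, linked only by references, is what lets several views (or several detectors) contribute evidence for the same entity without duplicating its description.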