Publications by C-BER

2019

Full-body motion assessment: Concurrent validation of two body tracking depth sensors versus a gold standard system during gait

Authors
Vilas Boas, MDC; Choupina, HMP; Rocha, AP; Fernandes, JM; Cunha, JPS;

Publication
Journal of Biomechanics

Abstract
RGB-D cameras provide 3-D body joint data in a low-cost, portable and non-intrusive way when compared with the reference motion capture systems used in laboratory settings. In this contribution, we evaluate the validity of both Microsoft Kinect versions (v1 and v2) for motion analysis against a Qualisys system in a simultaneous protocol. Two walking directions relative to the Kinect were explored: towards it (WT) and away from it (WA). For each gait trial, measures related to all body parts were computed: the velocity of all joints, the distance between symmetrical joints, and the angle at some joints. For each measure, we compared each Kinect version with Qualisys by computing the mean true error, mean absolute error, Pearson's correlation coefficient, and optical-to-depth ratio. Although Kinect v1 and v2 (and WT and WA data) show similar accuracy for some measures, the best overall results were achieved with WT data from the Kinect v2, especially for velocity measures. Moreover, velocity and distance measures gave better results than angle measures. Our results show that both Kinect versions can be an alternative to more expensive systems such as Qualisys for obtaining distance and velocity measures, as well as some angle metrics (namely the knee angles). This conclusion is important towards the off-lab, non-intrusive assessment of motor function in different areas, including sports and healthcare. © 2019 Elsevier Ltd
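The comparison described in this abstract reduces to standard agreement statistics between time-aligned measure series from the two systems. The following is a minimal sketch (not the authors' code), assuming two already-synchronised 1-D arrays holding the same measure; the optical-to-depth ratio shown is one plausible reading, as the abstract does not define it:

```python
import numpy as np
from scipy.stats import pearsonr

def agreement_metrics(depth: np.ndarray, optical: np.ndarray) -> dict:
    """Agreement between a Kinect-derived measure and the Qualisys reference.

    Both inputs are 1-D arrays of the same measure (e.g. a joint velocity)
    sampled at matching instants.
    """
    error = depth - optical
    return {
        "mean_true_error": float(np.mean(error)),            # signed bias
        "mean_absolute_error": float(np.mean(np.abs(error))),
        "pearson_r": float(pearsonr(depth, optical)[0]),
        # Assumption: ratio of the mean optical (reference) value to the
        # mean depth-sensor value; the paper's exact definition may differ.
        "optical_to_depth_ratio": float(np.mean(optical) / np.mean(depth)),
    }

# Synthetic example: a noisy, slightly biased depth-sensor signal.
rng = np.random.default_rng(0)
optical = np.sin(np.linspace(0, 4 * np.pi, 200)) + 1.5   # reference measure
depth = 0.95 * optical + rng.normal(0, 0.05, optical.shape)
print(agreement_metrics(depth, optical))
```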

2019

An unsupervised metaheuristic search approach for segmentation and volume measurement of pulmonary nodules in lung CT scans

Authors
Shakibapour, E; Cunha, A; Aresta, G; Mendonca, AM; Campilho, A;

Publication
Expert Systems with Applications

Abstract
This paper proposes a new methodology to automatically segment and measure the volume of pulmonary nodules in lung computed tomography (CT) scans. Estimating the malignancy likelihood of a pulmonary nodule from its lesion characteristics motivated the development of an unsupervised nodule segmentation and volume measurement method as a preliminary stage for nodule characterization. The idea is to optimally cluster a set of feature vectors, composed of intensity and shape-related features, extracted from a pre-detected nodule in a given feature space. For that purpose, a metaheuristic search based on evolutionary computation is used to cluster the corresponding feature vectors. The proposed method is simple, unsupervised, and able to segment nodules of different locations and textures without the need for any manual annotation. We validate the proposed segmentation and volume measurement on two groups of nodules from the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) dataset. The first group contains 705 solid and sub-solid (assessed as part-solid and non-solid) nodules located in different regions of the lungs; the second, more challenging, group contains 59 sub-solid nodules. Average Dice scores of 82.35% and 71.05% on the two groups show the good performance of the segmentation proposal. Comparisons with previous state-of-the-art techniques also show acceptable and comparable segmentation results. The volumes of the segmented nodules are measured via ellipsoid approximation. The correlation between the measured volumes and the ground truth was assessed with the Pearson correlation coefficient, yielding R = 92.16% at a 5% significance level. © 2018 Elsevier Ltd
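Two quantities in this abstract are easy to make concrete: the Dice score used to assess segmentation overlap, and an ellipsoid-based volume estimate. Below is a minimal sketch (not the authors' code); fitting the ellipsoid from the mask's bounding-box extents is an assumption, since the abstract does not specify the fitting procedure:

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice overlap between two binary 3-D segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

def ellipsoid_volume(mask: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """Approximate a nodule's volume (mm^3) by an ellipsoid whose semi-axes
    come from the mask's axis-aligned bounding box (an assumed fit)."""
    coords = np.argwhere(mask)
    extents = (coords.max(axis=0) - coords.min(axis=0) + 1) * np.asarray(spacing)
    a, b, c = extents / 2.0                      # semi-axes in mm
    return (4.0 / 3.0) * np.pi * a * b * c

# Tiny example: a 6x4x4-voxel nodule at 1 mm isotropic spacing.
mask = np.zeros((16, 16, 16), dtype=bool)
mask[5:11, 6:10, 6:10] = True
print(dice_score(mask, mask))     # 1.0 (perfect overlap)
print(ellipsoid_volume(mask))     # ~50.27 mm^3 (4/3 * pi * 3 * 2 * 2)
```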

2019

Convolutional Neural Network Architectures for Texture Classification of Pulmonary Nodules

Authors
Ferreira, CA; Cunha, A; Mendonça, AM; Campilho, A;

Publication
Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications - Lecture Notes in Computer Science

Abstract

2019

CATARACTS: Challenge on automatic tool annotation for cataRACT surgery

Authors
Al Hajj, H; Lamard, M; Conze, PH; Roychowdhury, S; Hu, XW; Marsalkaite, G; Zisimopoulos, O; Dedmari, MA; Zhao, FQ; Prellberg, J; Sahu, M; Galdran, A; Araujo, T; Vo, DM; Panda, C; Dahiya, N; Kondo, S; Bian, ZB; Vandat, A; Bialopetravicius, J; Flouty, E; Qiu, CH; Dill, S; Mukhopadhyay, A; Costa, P; Aresta, G; Ramamurthys, S; Lee, SW; Campilho, A; Zachow, S; Xia, SR; Conjeti, S; Stoyanov, D; Armaitis, J; Heng, PA; Macready, WG; Cochener, B; Quellec, G;

Publication
Medical Image Analysis

Abstract
Surgical tool detection is attracting increasing attention from the medical image analysis community. The goal is generally not to precisely locate tools in images, but rather to indicate which tools are being used by the surgeon at each instant. The main motivation for annotating tool usage is to design efficient solutions for surgical workflow analysis, with potential applications in report generation, surgical training and even real-time decision support. Most existing tool annotation algorithms focus on laparoscopic surgeries. However, with 19 million interventions per year, the most common surgical procedure in the world is cataract surgery. The CATARACTS challenge was organized in 2017 to evaluate tool annotation algorithms in the specific context of cataract surgery. It relies on more than nine hours of videos, from 50 cataract surgeries, in which the presence of 21 surgical tools was manually annotated by two experts. With 14 participating teams, this challenge can be considered a success. As might be expected, the submitted solutions are based on deep learning. This paper thoroughly evaluates these solutions: in particular, the quality of their annotations is compared to that of human interpretations. Next, lessons learnt from the differential analysis of these solutions are discussed. We expect that they will guide the design of efficient surgery monitoring tools in the near future. © 2018 Elsevier B.V.

2019

Wide Residual Network for Lung-Rads™ Screening Referral

Authors
Ferreira, CA; Aresta, G; Cunha, A; Mendonca, AM; Campilho, A;

Publication
2019 IEEE 6th Portuguese Meeting on Bioengineering (ENBENG)

Abstract

2019

Analysis of the performance of specialists and an automatic algorithm in retinal image quality assessment

Authors
Wanderley, DS; Araujo, T; Carvalho, CB; Maia, C; Penas, S; Carneiro, A; Mendonca, AM; Campilho, A;

Publication
2019 IEEE 6th Portuguese Meeting on Bioengineering (ENBENG)

Abstract
