
Publications by HumanISE

2015

Voxel-based registration of simulated and real patient CBCT data for accurate dental implant pose estimation

Authors
Moreira, AHJ; Queiros, S; Morais, P; Rodrigues, NF; Correia, AR; Fernandes, V; Pinho, ACM; Fonseca, JC; Vilaca, JL;

Publication
MEDICAL IMAGING 2015: COMPUTER-AIDED DIAGNOSIS

Abstract
The success of a dental implant-supported prosthesis is directly linked to the accuracy obtained during estimation of the implant's pose (position and orientation). Although traditional impression techniques and recent digital acquisition methods are acceptably accurate, a simultaneously fast, accurate and operator-independent methodology is still lacking. To this end, an image-based framework is proposed to estimate the patient-specific implant's pose using cone-beam computed tomography (CBCT) and prior knowledge of the implanted model. The pose estimation is accomplished in a three-step approach: (1) a region of interest is extracted from the CBCT data using two operator-defined points on the implant's main axis; (2) a simulated CBCT volume of the known implanted model is generated through Feldkamp-Davis-Kress reconstruction and coarsely aligned to the defined axis; and (3) a voxel-based rigid registration is performed to optimally align the patient and simulated CBCT data, extracting the implant's pose from the optimal transformation. Three experiments were performed to evaluate the framework: (1) an in silico study using 48 implants distributed across 12 three-dimensional synthetic mandibular models; (2) an in vitro study using an artificial mandible with two dental implants acquired with an i-CAT system; and (3) two clinical case studies. The results showed positional errors of 67±34 μm and 108 μm, and angular misfits of 0.15±0.08° and 1.4°, for experiments 1 and 2, respectively. Moreover, in experiment 3, visual assessment of the clinical data showed a coherent alignment of the reference implant. Overall, a novel image-based framework for implant pose estimation from CBCT data was proposed, showing accurate results in agreement with dental prosthesis modelling requirements.
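Steps (1) and (2) of the approach above, deriving the implant's main axis from the two operator-defined points and coarsely rotating the simulated volume onto it, can be sketched with a few lines of linear algebra. This is an illustrative reconstruction, not the authors' implementation; the function names and the choice of the z-axis as the simulated volume's reference axis are assumptions.

```python
import numpy as np

def axis_from_points(p_apex, p_top):
    """Unit vector of the implant's main axis from two operator-defined points."""
    v = np.asarray(p_top, float) - np.asarray(p_apex, float)
    return v / np.linalg.norm(v)

def coarse_alignment(axis, ref=np.array([0.0, 0.0, 1.0])):
    """Rotation matrix mapping the simulated volume's reference axis onto the
    patient-defined axis, via the Rodrigues rotation formula."""
    a = np.cross(ref, axis)          # rotation axis scaled by sin(theta)
    c = float(np.dot(ref, axis))     # cos(theta)
    if np.isclose(c, 1.0):           # already aligned
        return np.eye(3)
    if np.isclose(c, -1.0):          # antiparallel: rotate pi about x
        return np.diag([1.0, -1.0, -1.0])
    s = np.linalg.norm(a)            # sin(theta)
    K = np.array([[0.0, -a[2], a[1]],
                  [a[2], 0.0, -a[0]],
                  [-a[1], a[0], 0.0]])
    return np.eye(3) + K + K @ K * ((1.0 - c) / s**2)
```

The voxel-based rigid registration of step (3) would then refine this coarse rotation, together with a translation, by optimizing an intensity-similarity metric between the patient and simulated CBCT volumes.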

2015

A-scan ultrasound system for real-time puncture safety assessment during percutaneous nephrolithotomy

Authors
Rodrigues, PL; Rodrigues, NF; Fonseca, JC; von Kruger, MA; Pereira, WCA; Vilaca, JL;

Publication
MEDICAL IMAGING 2015: ULTRASONIC IMAGING AND TOMOGRAPHY

Abstract
Background: Kidney stones are a major universal health problem, affecting 10% of the population worldwide. Percutaneous nephrolithotomy is a first-line, established procedure for the disintegration and removal of renal stones. Its surgical success depends on precise needle puncture of the renal calyces, which remains the most challenging task for surgeons. This work describes and tests a new ultrasound-based system that alerts the surgeon when undesirable anatomical structures lie along the puncture path defined by a tracked needle. Methods: Two circular ultrasound transducers were built, each with a single 3.3 MHz piezoelectric ceramic (PZT SN8), a 25.4 mm radius, and resin-epoxy matching and backing layers. One matching layer was designed with a concave curvature to act as an acoustic lens with a long focus. The A-scan signals were filtered and processed to automatically detect reflected echoes. Results: The transducers were mapped in a water tank and tested in a study involving 45 phantoms. Each phantom mimics a different needle insertion trajectory with a percutaneous path length between 80 and 150 mm. Results showed that the beam cross-sectional area oscillates around the ceramic's radius and that echo signals could be automatically detected in phantoms with path lengths greater than 80 mm. Conclusions: This new solution may alert the surgeon to anatomical tissue changes during needle insertion, which may decrease the need for X-ray radiation exposure and ultrasound image evaluation during percutaneous puncture.
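The echo-detection step described in the Methods (filtering the A-scan signals and automatically detecting reflected echoes) can be illustrated with a simple envelope-and-threshold scheme. This is a hypothetical sketch, not the authors' algorithm; the sampling rate, smoothing window and threshold ratio are assumed parameters.

```python
import numpy as np

def detect_echoes(ascan, fs, threshold_ratio=0.3, window_us=2.0):
    """Return sample indices where reflected echoes begin in one A-scan line.

    The raw RF signal is rectified and smoothed with a moving-average window
    to approximate its envelope; contiguous runs of samples whose envelope
    exceeds a fraction of the global maximum are grouped into echoes, and the
    onset (rising edge) of each run is reported. An onset index i maps to a
    reflector depth of roughly i * c / (2 * fs) for sound speed c.
    """
    env = np.abs(np.asarray(ascan, float))
    win = max(1, int(fs * window_us * 1e-6))
    env = np.convolve(env, np.ones(win) / win, mode="same")
    above = env > threshold_ratio * env.max()
    onsets = np.flatnonzero(np.diff(above.astype(int)) == 1) + 1
    if above[0]:
        onsets = np.insert(onsets, 0, 0)
    return onsets
```

With two synthetic 3.3 MHz tone bursts embedded in a quiet trace, the function reports two onsets near the burst starts; in practice the threshold would have to be tuned against the transducer's noise floor and the attenuation over the 80-150 mm path.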

2015

Computer-aided recognition of dental implants in X-ray images

Authors
Morais, P; Queiros, S; Moreira, AHJ; Ferreira, A; Ferreira, E; Duque, D; Rodrigues, NF; Vilaca, JL;

Publication
MEDICAL IMAGING 2015: COMPUTER-AIDED DIAGNOSIS

Abstract
Dental implant recognition in patients without available records is a time-consuming and not straightforward task. The traditional method is a completely user-dependent process in which an expert compares a 2D X-ray image of the dental implant against a generic database. Due to the high number of implants available and the similarity between them, automatic/semi-automatic frameworks to aid implant model detection are essential. In this study, a novel computer-aided framework for dental implant recognition is suggested. The proposed method relies on image processing concepts, namely: (i) a segmentation strategy for semi-automatic implant delineation; and (ii) a machine learning approach for implant model recognition. Although the segmentation technique is the main focus of the current study, preliminary details of the machine learning approach are also reported. Two different scenarios were used to validate the framework: (1) comparison of the semi-automatic contours against manual implant contours in 125 X-ray images; and (2) classification of 11 known implants using a large reference database of 601 implants. In experiment 1, a Dice metric of 0.97±0.01, a mean absolute distance of 2.24±0.85 pixels, and a Hausdorff distance of 11.12±6 pixels were obtained. In experiment 2, 91% of the implants were successfully recognized while reducing the reference database to 5% of its original size. Overall, the segmentation technique achieved accurate implant contours. Although the preliminary classification results prove the concept of the current work, more features and an extended database should be used in future work.
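The Dice metric reported for experiment 1 measures the region overlap between the semi-automatic and manual contours, taken as filled binary masks (the mean absolute and Hausdorff distances, by contrast, measure pointwise contour separation). A minimal implementation:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks:
    2 * |A ∩ B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical)."""
    a = np.asarray(mask_a, bool)
    b = np.asarray(mask_b, bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())
```

A score of 0.97±0.01, as reported above, indicates near-complete overlap between the semi-automatic and manual delineations.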

2015

CanIHelp: A Platform for Inclusive Collaboration

Authors
Paredes, H; Fernandes, H; Sousa, A; Fortes, R; Koch, F; Filipe, V; Barroso, J;

Publication
UNIVERSAL ACCESS IN HUMAN-COMPUTER INTERACTION: ACCESS TO INTERACTION, PT II

Abstract
Technology plays a key role in the daily life of people with special needs, serving as a means of integration and even communication with society. From accumulated experience, we find that support tools play a crucial part in the empowerment of people with special needs, and that small advances may represent significant shifts and opportunities. The diversity of solutions and the need for dedicated hardware for each feature represent a barrier to their use, compromising the success of these solutions through, among other things, problems of usability and scale. This paper aims to explore the concept of inclusive collaboration to enhance mutual interaction and assistance. The proposed approach combines and generalizes the use of human computation in a collaborative environment with assistive technologies, creating redundancy and complementarity in the solutions provided and contributing to enhancing the quality of life of people with special needs and the elderly. The CanIHelp platform is an embodiment of this concept, resulting from an orchestrated model that uses mechanisms of collective intelligence through social inclusion initiatives. The platform provides for the integration of assistive technologies, collaborative tools and multiple multimedia communication channels, accessible through multimodal interfaces for universal access. A discussion of the impact of fostering collaboration, broadening from the research concepts to their societal impact, is presented. As final remarks, a set of future research challenges and guidelines is identified.

2015

Context-aware, accessibility and dynamic adaptation of mobile interfaces in business environments

Authors
Sousa, A; Barroso, J; Paredes, H; Fernandes, H; Filipe, V;

Publication
PROCEEDINGS OF THE 6TH INTERNATIONAL CONFERENCE ON SOFTWARE DEVELOPMENT AND TECHNOLOGIES FOR ENHANCING ACCESSIBILITY AND FIGHTING INFO-EXCLUSION

Abstract
Technology has entered our lives and changed not only the way we communicate and interact with each other, but also our habits and our experiences in the real and digital worlds. Because of this rapid progress we use technology at every moment of our day, and this sometimes causes frustration because the way we interact with applications is not the most effective for the context we are in. This problem is even more significant in business environments, where the time taken to finish a task can mean profit or loss for the business. A key to these problems may lie in adapting the interface to user needs and constraints, as is done in solutions for situationally induced impairments and disabilities (SIID). This can be achieved by inferring the context the user is in, using the different sensors available on mobile platforms and different sources of information such as the user's profile, agenda and usage history. In this paper we present a review of the main challenges of dynamic interface adaptation, with a case of application in a business environment. (c) 2015 The Authors. Published by Elsevier B.V.

2015

Exploring Smart Environments Through Human Computation for Enhancing Blind Navigation

Authors
Paredes, H; Fernandes, H; Sousa, A; Fernandes, L; Koch, FL; Fortes, RPM; Filipe, V; Barroso, J;

Publication
CARE/MFSC@AAMAS

Abstract
In this paper, the orchestration of wearable sensors with human computation is explored to provide map metadata for blind navigation. Technological navigation aids for the blind must provide accurate information about the environment and select the best path to reach a chosen destination. Urban barriers represent dangers for blind users, and the dynamism of smart cities promotes constant change in these dangers, creating a potentially “dangerous territory” for these users. Previous work demonstrated that redundant solutions in smart environments, complemented by human computation, can provide a reliable and trustworthy data source for a new generation of blind navigation systems. We propose and discuss a modular architecture that interacts with environmental sensors to gather information and processes the acquired data with advanced algorithms empowered by human computation. The gathered metadata should enable the creation of “happy maps” that are delivered to blind users through a previously developed navigation system.
