2019
Authors
Carneiro, G; Tavares, JMRS; Bradley, AP; Papa, JP; Nascimento, JC; Cardoso, JS; Lu, Z; Belagiannis, V;
Publication
COMPUTER METHODS IN BIOMECHANICS AND BIOMEDICAL ENGINEERING-IMAGING AND VISUALIZATION
Abstract
2017
Authors
Rosado, L; Oliveira, J; Vasconcelos, MJM; da Costa, JMC; Elias, D; Cardoso, JS;
Publication
PROCEEDINGS OF THE 10TH INTERNATIONAL JOINT CONFERENCE ON BIOMEDICAL ENGINEERING SYSTEMS AND TECHNOLOGIES, VOL 1: BIODEVICES
Abstract
Microscopic examination is currently the gold standard test for the diagnosis of several neglected tropical diseases. However, reliable identification of parasitic infections requires in-depth training and access to proper equipment for subsequent microscopic analysis. These requirements are closely related to the increasing interest in the development of computer-aided diagnosis systems, and Mobile Health is starting to play an important role when it comes to health in Africa, allowing for distributed solutions that provide access to complex diagnosis even in rural areas. In this paper, we present a 3D-printed microscope that can easily be attached to a wide range of mobile device models. To the best of our knowledge, this is the first proposed smartphone-based alternative to conventional microscopy that allows autonomous acquisition of a pre-defined number of images at 1000x magnification with suitable resolution, by using a motorized automated stage fully powered and controlled by a smartphone, without the need for manual focusing of the smear slide. Reference smear slides with different parasites were used to test the device. The acquired images showed that it was possible to visually detect those agents, which clearly illustrates the potential of this device, especially in developing countries with limited access to healthcare services.
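A plausible software ingredient for the hands-free focusing described above is a focus sweep: the motorized stage steps through the focal axis, the phone scores each captured frame, and the sharpest one is kept. The Python/OpenCV sketch below only illustrates that idea under assumed inputs; it is not the device's actual firmware, and the stage.capture() call in the comment is hypothetical.

# Illustrative sketch (an assumption, not the device firmware): pick the
# best-focused frame from a z-stack acquired during a motorized focus sweep,
# using the variance-of-Laplacian focus measure.
import cv2
import numpy as np


def focus_measure(image_bgr):
    """Higher variance of the Laplacian means a sharper (better focused) image."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()


def best_focused(frames):
    """Return the sharpest frame of the sweep and its focus score."""
    scores = [focus_measure(f) for f in frames]
    idx = int(np.argmax(scores))
    return frames[idx], scores[idx]


# Hypothetical usage: frames = [stage.capture() for _ in range(n_steps)]
# sharp_frame, score = best_focused(frames)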
2019
Authors
Pinto, JR; Cardoso, JS; Lourenço, A;
Publication
The Biometric Computing
Abstract
2020
Authors
Ferreira, PM; Pernes, D; Rebelo, A; Cardoso, JS;
Publication
TWELFTH INTERNATIONAL CONFERENCE ON MACHINE VISION (ICMV 2019)
Abstract
Sign Language Recognition (SLR) has become an appealing topic in modern societies because such technology can ideally be used to bridge the gap between deaf and hearing people. Although important steps have been made towards the development of real-world SLR systems, signer-independent SLR is still one of the bottleneck problems of this research field. In this regard, we propose a deep neural network along with an adversarial training objective, specifically designed to address the signer-independent problem. Concretely, the proposed model consists of an encoder, mapping from input images to latent representations, and two classifiers operating on these underlying representations: (i) the sign-classifier, for predicting the class/sign labels, and (ii) the signer-classifier, for predicting their signer identities. During the learning stage, the encoder is simultaneously trained to help the sign-classifier as much as possible while trying to fool the signer-classifier. This adversarial training procedure allows learning signer-invariant latent representations that are, in fact, highly discriminative for sign recognition. Experimental results demonstrate the effectiveness of the proposed model and its capability of dealing with large inter-signer variations.
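As a rough illustration of the adversarial objective described in this abstract, the PyTorch-style sketch below pairs an encoder with a sign classifier and a signer classifier, and uses a gradient-reversal layer so that the encoder helps the former while fooling the latter. The layer sizes, hyperparameters and the specific use of gradient reversal are assumptions made for illustration, not the authors' implementation.

# Minimal sketch (assumptions, not the authors' code) of adversarial training
# for signer-invariant representations via a gradient-reversal layer.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses (and scales) gradients backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class SignerInvariantNet(nn.Module):
    def __init__(self, n_signs, n_signers, feat_dim=128, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.encoder = nn.Sequential(          # maps images to latent codes
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        self.sign_clf = nn.Linear(feat_dim, n_signs)      # predicts sign labels
        self.signer_clf = nn.Linear(feat_dim, n_signers)  # predicts signer identity

    def forward(self, x):
        z = self.encoder(x)
        sign_logits = self.sign_clf(z)
        # reversed gradients push the encoder to confuse the signer classifier
        signer_logits = self.signer_clf(GradReverse.apply(z, self.lambd))
        return sign_logits, signer_logits


def training_step(model, optimizer, images, sign_labels, signer_labels):
    ce = nn.CrossEntropyLoss()
    sign_logits, signer_logits = model(images)
    loss = ce(sign_logits, sign_labels) + ce(signer_logits, signer_labels)
    optimizer.zero_grad()
    loss.backward()   # the encoder receives reversed gradients from the signer branch
    optimizer.step()
    return loss.item()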
2020
Authors
Mavioso, C; Araujo, RJ; Oliveira, HP; Anacleto, JC; Vasconcelos, MA; Pinto, D; Gouveia, PF; Alves, C; Cardoso, F; Cardoso, JS; Cardoso, MJ;
Publication
BREAST
Abstract
The deep inferior epigastric perforator (DIEP) is the most commonly used free flap in mastectomy reconstruction. Preoperative imaging techniques are routinely used to detect the location, diameter and course of perforators, with direct intervention from the imaging team, who subsequently draw a chart that helps surgeons choose the best vascular support for the reconstruction. In this work, the feasibility of using computer software to support the preoperative planning of 40 patients proposed for breast reconstruction with a DIEP flap is evaluated for the first time. Blood vessel centreline extraction and local characterization algorithms are applied to identify perforators and compared with the manual mapping, aiming to reduce the time spent by the imaging team as well as the subjectivity inherent to the task. Compared with the measurements taken during surgery, the software calibre estimates were worse for vessels smaller than 1.5 mm (P = 6e-4) but better for the remaining ones (P = 2e-3). Regarding vessel location, the vertical component of the software output was significantly different from the manual measure (P = 0.02); nonetheless, this was irrelevant during surgery, as errors on the order of 2-3 mm have no impact on the dissection step. Our trials support that a reduction of the time spent (about 2 h per case) is achievable using the automatic tool. The introduction of artificial intelligence in clinical practice intends to simplify the work of health professionals and to provide better outcomes to patients. This pilot study paves the way for a success story. (C) 2020 The Authors. Published by Elsevier Ltd.
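The centreline extraction and local characterization step can be pictured, in simplified form, as skeletonizing a binary vessel segmentation and reading the local radius off a distance transform. The Python sketch below (scikit-image and SciPy) is a generic illustration of that idea under assumed inputs, not the clinical software evaluated in the study; the pixel spacing and the toy mask are made up.

# Illustrative sketch (not the study's software): derive centreline points and
# local calibre estimates from a binary vessel segmentation.
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.morphology import skeletonize


def centreline_and_calibre(vessel_mask, pixel_spacing_mm=1.0):
    """Return centreline pixel coordinates and estimated diameters in mm."""
    mask = vessel_mask.astype(bool)
    skeleton = skeletonize(mask)                 # 1-pixel-wide centreline
    radius_px = distance_transform_edt(mask)     # distance to nearest background pixel
    ys, xs = np.nonzero(skeleton)
    diameters_mm = 2.0 * radius_px[ys, xs] * pixel_spacing_mm
    return np.stack([ys, xs], axis=1), diameters_mm


# Toy example: a synthetic 5-pixel-wide vertical "vessel" with assumed 0.5 mm spacing
mask = np.zeros((64, 64), dtype=np.uint8)
mask[8:56, 30:35] = 1
points, diam = centreline_and_calibre(mask, pixel_spacing_mm=0.5)
print(points.shape, diam.mean())   # centreline points and their mean diameter estimate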
2020
Authors
Goncalves, T; Silva, W; Cardoso, J;
Publication
XV MEDITERRANEAN CONFERENCE ON MEDICAL AND BIOLOGICAL ENGINEERING AND COMPUTING - MEDICON 2019
Abstract
Breast cancer is a highly mutable and rapidly evolving disease with a large worldwide incidence. Even so, it is estimated that approximately 90% of cases are treatable and curable if detected at an early stage and given the best treatment. Nowadays, with routine breast cancer screening habits, better clinical treatment plans and proper management of the disease, it is possible to treat most cancers with conservative approaches, also known as breast cancer conservative treatments (BCCT). With such a treatment methodology, it is possible to focus on the aesthetic results of the surgery and the patient's Quality of Life, which may influence BCCT outcomes. In the past, this assessment was done through subjective methods, requiring a panel of experts; however, with the development of computer vision techniques, objective methods such as BAT© and BCCT.core, which perform the assessment based on asymmetry measurements, have been used. On the other hand, they still require information given by the user, and none of them has been considered the gold standard for this task. Recently, with the advent of deep learning techniques, algorithms capable of improving the performance of traditional methods on the detection of breast fiducial points (required for asymmetry measurements) have been proposed and showed promising results. There is still, however, a large margin for investigation on how to integrate such algorithms in a complete application capable of performing an end-to-end classification of BCCT outcomes. Taking this into account, this work presents a comparative study between deep convolutional networks for image segmentation and two different quality-driven keypoint detection architectures for the detection of the breast contour. The first uses a deep learning model that has learned to predict the quality (given by the mean squared error) of an array of keypoints and, based on this quality, applies the backpropagation algorithm, with gradient descent, to improve them; the second uses a deep learning model trained with the quality as a regularization method and with iterative refinement, in each training step, to improve the quality of the keypoints fed into the network. Although neither method surpasses the current state of the art, both present promising results for the creation of alternative methodologies to address other regression problems in which the learning of the quality metric may be easier. Following the current trend in web development, and with the objective of transferring BCCT.core to an online format, a prototype of a web application for automatic keypoint detection was developed and is presented in this document. Currently, the user may upload an image and automatically detect and/or manipulate its keypoints. This prototype is completely scalable and can be upgraded with new functionalities according to the user's needs.
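The first quality-driven architecture described above can be sketched as follows: a network trained to predict the MSE of a candidate keypoint array is frozen, and the keypoints themselves are then refined by gradient descent on that predicted quality. The PyTorch sketch below is an illustration only; the network layers, the number of keypoints and the refinement hyperparameters are all assumptions, not the code from this work.

# Illustrative sketch (assumptions, not the authors' code) of quality-driven
# keypoint refinement: gradient descent on the keypoints, not on the weights.
import torch
import torch.nn as nn


class KeypointQualityNet(nn.Module):
    """Predicts the (non-negative) MSE of a flattened keypoint array for an image."""

    def __init__(self, n_keypoints=37, feat_dim=64):
        super().__init__()
        self.image_branch = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, feat_dim), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Linear(feat_dim + 2 * n_keypoints, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, 1), nn.Softplus(),   # predicted MSE >= 0
        )

    def forward(self, image, keypoints):
        feats = self.image_branch(image)
        return self.head(torch.cat([feats, keypoints], dim=1))


def refine_keypoints(model, image, keypoints, steps=50, lr=1e-2):
    """Lower the predicted MSE by gradient descent on the keypoint coordinates."""
    model.eval()
    for p in model.parameters():          # freeze the quality network's weights
        p.requires_grad_(False)
    kps = keypoints.clone().requires_grad_(True)
    opt = torch.optim.SGD([kps], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        model(image, kps).sum().backward()   # d(predicted quality)/d(keypoints)
        opt.step()
    return kps.detach()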