
Publications by João Paulo Cunha

2018

Experimental and Theoretical Evaluation of the Trapping Performance of Polymeric Lensed Optical Fibers: Single Biological Cells versus Synthetic Structures

Authors
Paiva, JS; Ribeiro, RSR; Jorge, PAS; Rosa, CC; Azevedo, MM; Sampaio, P; Cunha, JPS;

Publication
BIOPHOTONICS: PHOTONIC SOLUTIONS FOR BETTER HEALTH CARE VI

Abstract
Optical Tweezers (OTs) have been widely applied in Biology due to their outstanding focusing abilities, which enable them to exert forces on micro-sized particles. The magnitude of such forces (pN) is strong enough to trap their targets. However, most conventional OT setups are based on complex configurations and are associated with focusing difficulties when handling biological samples. Optical Fiber Tweezers (OFTs), which consist of optical fibers with a lens at one of their extremities, are valuable alternatives to Conventional Optical Tweezers (COTs). OFTs are flexible, simpler, low-cost and easy to handle. However, their trapping performance when manipulating biological and complex structures remains poorly characterized. In this study, we experimentally characterized the optical trapping of a biological cell found within a culture of rodent glial neuronal cells, using a polymeric lens fabricated through a photo-polymerization method on the tip of a fiber. Its trapping performance was compared with that of two synthetic microspheres (PMMA, polystyrene) and two simple cells (a yeast and a Drosophila melanogaster cell). Moreover, the experimental results were also compared with theoretical calculations made using a numerical model based on the Finite-Difference Time-Domain method. It was found that, although the mammalian neuronal cell had larger dimensions, the magnitude of the forces exerted on it was the lowest among all particles. Our results allowed us to quantify, for the first time, the degree of complexity of manipulating such "demanding" cells in comparison with known targets. Thus, they can provide valuable insights into the influence of particle parameters such as size, refractive index, degree of homogeneity and nature (biological, synthetic). Furthermore, the theoretical results matched the experimental ones, which validates the proposed model.
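
The numerical model mentioned above ultimately yields an optical force of the magnitude quoted (pN). As a purely illustrative sketch, and not necessarily the exact scheme used by the authors, FDTD-based trapping models commonly obtain the time-averaged force on a particle by integrating the Maxwell stress tensor over a closed surface S enclosing it:

\[
\langle \mathbf{F} \rangle = \oint_{S} \langle \overleftrightarrow{T} \rangle \cdot \hat{\mathbf{n}} \, dA,
\qquad
T_{ij} = \varepsilon \left( E_i E_j - \tfrac{1}{2}\,\delta_{ij} |\mathbf{E}|^2 \right)
       + \mu \left( H_i H_j - \tfrac{1}{2}\,\delta_{ij} |\mathbf{H}|^2 \right),
\]

where E and H are the fields computed on the FDTD grid and the angle brackets denote time averaging over an optical cycle.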

2018

A wearable system for the stress monitoring of air traffic controllers during an air traffic control refresher training and the trier social stress test: A comparative study

Authors
Rodrigues, S; Paiva, JS; Dias, D; Aleixo, M; Filipe, R; Cunha, JPS;

Publication
Open Bioinformatics Journal

Abstract
Background: Air Traffic Control (ATC) is a complex and demanding process, exposing Air Traffic Controllers (ATCs) to high stress. Recently, efforts have been made in ATC to maintain safety and efficiency in the face of increasing air traffic demands. Computer simulations have been a useful tool for ATC training, improving ATCs' skills and, consequently, traffic safety. Objectives: This study aims to: a) evaluate psychophysiological indices of stress in an ATC simulation environment using a wearable biomonitoring platform; in order to obtain a measure of ATCs' stress levels, results from an experimental study with the same participants, which included a stress-inducing task, were used as a stress ground truth; b) understand whether there are differences in the stress levels of ATCs with different job functions (“advisors” vs. “operationals”) when performing an ATC Refresher Training in a simulator environment. Methods: Two studies were conducted with ATCs: Study 1, which included a stress-inducing task (the Trier Social Stress Test, TSST), and Study 2, which included an ATC simulation task. Linear Heart Rate Variability (HRV) features from ATCs were acquired using a medical-grade wearable Electrocardiogram (ECG) device. Self-reports were used to measure perceived stress. Results: The TSST was self-reported as being much more stressful than the simulation task, and the physiological data support these results. Results from Study 2 showed more stress among the “advisors” group when compared to the “operationals” group. Conclusion: Results point to the importance of developing quantified Occupational Health (qOHealth) devices to allow monitoring and differentiation of ATCs' stress responses.
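
The linear HRV features referred to in the Methods are typically simple time-domain statistics of the R-R interval series extracted from the ECG. The sketch below is an illustrative Python implementation under that assumption, not the authors' code or feature set:

```python
# Minimal sketch (not the authors' code): common linear time-domain HRV
# features computed from ECG R-R intervals given in milliseconds.
import math

def linear_hrv_features(rr_ms):
    """Return basic time-domain HRV measures from a list of R-R intervals (ms)."""
    n = len(rr_ms)
    mean_rr = sum(rr_ms) / n                      # average R-R interval (ms)
    mean_hr = 60000.0 / mean_rr                   # mean heart rate (bpm)
    sdnn = math.sqrt(sum((x - mean_rr) ** 2 for x in rr_ms) / (n - 1))
    diffs = [rr_ms[i + 1] - rr_ms[i] for i in range(n - 1)]
    rmssd = math.sqrt(sum(d ** 2 for d in diffs) / len(diffs))
    pnn50 = 100.0 * sum(1 for d in diffs if abs(d) > 50) / len(diffs)
    return {"mean_rr": mean_rr, "mean_hr": mean_hr,
            "sdnn": sdnn, "rmssd": rmssd, "pnn50": pnn50}

# Example: acute stress is typically reflected in a higher heart rate and
# lower SDNN/RMSSD values.
print(linear_hrv_features([812, 790, 805, 770, 760, 795, 810, 780]))
```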

2018

Fabrication of Multimode-Single Mode Polymer Fiber Tweezers for Single Cell Trapping and Identification with Improved Performance

Authors
Rodrigues, SM; Paiva, JS; Ribeiro, RSR; Soppera, O; Cunha, JPS; Jorge, PAS;

Publication
SENSORS

Abstract
Optical fiber tweezers have been gaining prominence in several applications in Biology and Medicine. Due to their outstanding focusing abilities, they are able to trap and manipulate microparticles, including cells, without needing any physical contact and with a low degree of invasiveness to the trapped cell. Recently, we proposed a fiber tweezer configuration based on a polymeric micro-lens on the tip of a single mode fiber, obtained by a self-guided photopolymerization process. This configuration is able to both trap and identify the target through the analysis of short-term portions of the back-scattered signal. In this paper, we propose a variant of this fabrication method, capable of producing more robust fiber tips, which produce stronger trapping effects on targets, by as much as two- to ten-fold. These novel lenses maintain the capability of distinguishing the different classes of trapped particles based on the back-scattered signal. This novel fabrication method consists of introducing a multimode fiber section on the tip of a single mode (SM) fiber. A detailed description of how relevant fabrication parameters, such as the length of the multimode section and the photopolymerization laser power, can be tuned for different purposes (e.g., microparticle trapping only, or simultaneous trapping and sensing) is also provided, based on both experimental and theoretical evidence.
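
As a rough illustration of the trap-and-identify idea (short-term portions of the back-scattered signal used to tell particle classes apart), the following Python sketch extracts a few generic window features and feeds them to an off-the-shelf classifier; the feature set and classifier are assumptions, not the authors' pipeline:

```python
# Illustrative sketch only: generic features from windows of a back-scattered
# signal, followed by a standard classifier. Not the published method.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def window_features(signal, fs, win_s=0.5):
    """Split a 1-D back-scattered trace into windows and compute basic features."""
    signal = np.asarray(signal, dtype=float)
    win = int(win_s * fs)
    feats = []
    for start in range(0, len(signal) - win + 1, win):
        w = signal[start:start + win]
        spectrum = np.abs(np.fft.rfft(w - w.mean()))
        dominant_bin = int(np.argmax(spectrum[1:]) + 1)   # dominant frequency bin
        feats.append([w.std(), w.max() - w.min(), dominant_bin])
    return np.array(feats)

# Hypothetical usage: X_train/y_train would hold features and labels from known
# targets (e.g. PMMA, polystyrene, yeast); new windows are then classified.
# clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
# predicted = clf.predict(window_features(new_trace, fs=1000))
```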

2018

NeuroKinect 3.0: Multi-bed 3Dvideo-EEG system for epilepsy clinical motion monitoring

Authors
Choupina, HMP; Rocha, AP; Fernandes, JM; Vollmar, C; Noachtar, S; Cunha, JPS;

Publication
Studies in Health Technology and Informatics

Abstract
Epilepsy diagnosis is typically performed through 2Dvideo-EEG monitoring, relying on the viewer's subjective interpretation of the patient's movements of interest. Several attempts at quantifying seizure movements have been made in the past using 2D marker-based approaches, which have several drawbacks for the clinical routine (e.g. occlusions, lack of precision, and discomfort for the patient). These drawbacks are overcome with a 3D markerless approach. Recently, we published the development of a single-bed 3Dvideo-EEG system using a single RGB-D camera (Kinect v1). In this contribution, we describe how we expanded the previous single-bed system to a multi-bed departmental one that has been managing 6.61 Terabytes per day since March 2016. Our unique dataset collected so far includes 2.13 Terabytes of multimedia data, corresponding to 278 3Dvideo-EEG seizures from 111 patients. To the best of the authors' knowledge, this system is unique and has the potential to be deployed in multiple EMUs around the world for the benefit of a greater number of patients. © 2018 European Federation for Medical Informatics (EFMI) and IOS Press.

2018

Quantitative and qualitative analysis of ictal vocalization in focal epilepsy syndromes

Authors
Hartl, E; Knoche, T; Choupina, HMP; Remi, J; Vollmar, C; Cunha, JPS; Noachtar, S;

Publication
SEIZURE-EUROPEAN JOURNAL OF EPILEPSY

Abstract
Purpose: To investigate the frequency, localizing significance, and intensity characteristics of ictal vocalization in different focal epilepsy syndromes. Methods: Up to four consecutive focal seizures were evaluated in 277 patients with lesional focal epilepsy, excluding isolated auras and subclinical EEG seizure patterns. Vocalization was considered present if it was observed in at least one of the analyzed seizures and was not of speech quality. Intensity features of ictal vocalization were analyzed in a subsample of 17 patients with a temporal and 19 with an extratemporal epilepsy syndrome. Results: Ictal vocalization was observed in 37% of the patients (102/277), with similar frequency amongst the different focal epilepsy syndromes. Localizing significance was found for its co-occurrence with ictal automatisms, which identified patients with temporal seizure onset with a sensitivity of 92% and a specificity of 70%. Quantitative analysis of vocalization intensity made it possible to distinguish seizures of frontal from temporal lobe origin based on the intensity range (p = 0.0003), intensity variation (p < 0.0001), and the intensity increase rate at the beginning of the vocalization (p = 0.003), which were significantly higher in frontal lobe seizures. No significant difference was found for mean intensity and mean vocalization duration. Conclusions: Although ictal vocalization is similarly common in different focal epilepsies, it shows localizing significance when the co-occurring seizure semiology is taken into account. It especially increases the localizing value of automatisms, predicting a temporal seizure onset with a sensitivity of 92% and a specificity of 70%. Quantitative parameters of the intensity dynamics objectively distinguished frontal lobe seizures, establishing an observer-independent tool for semiological seizure evaluation.
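
For readers interested in how such intensity parameters can be computed, the sketch below shows plausible definitions of intensity range, intensity variation, and the initial increase rate from a vocalization intensity envelope; these are assumptions for illustration, not the exact definitions used in the study:

```python
# Minimal sketch, assuming an intensity envelope in dB sampled at rate fs (Hz).
# The feature definitions are plausible stand-ins, not the study's own.
import numpy as np

def intensity_features(intensity_db, fs, onset_s=0.5):
    """Coarse intensity descriptors of an ictal vocalization."""
    x = np.asarray(intensity_db, dtype=float)
    intensity_range = x.max() - x.min()          # dB span over the vocalization
    intensity_variation = x.std()                # spread of intensity values (dB)
    n_onset = max(2, int(onset_s * fs))          # samples in the initial segment
    onset = x[:n_onset]
    t = np.arange(len(onset)) / fs
    increase_rate = np.polyfit(t, onset, 1)[0]   # slope at onset (dB per second)
    return intensity_range, intensity_variation, increase_rate
```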

2018

System for automatic gait analysis based on a single RGB-D camera

Authors
Rocha, AP; Pereira Choupina, HMP; Vilas Boas, MD; Fernandes, JM; Silva Cunha, JPS;

Publication
PLOS ONE

Abstract
Human gait analysis provides valuable information regarding the way a given subject walks. Low-cost RGB-D cameras, such as the Microsoft Kinect, are able to estimate the 3-D position of several body joints without requiring the use of markers. This 3-D information can be used to perform objective gait analysis in an affordable, portable, and non-intrusive way. In this contribution, we present a system for fully automatic gait analysis using a single RGB-D camera, namely the second version of the Kinect. Our system does not require any manual intervention (except for starting/stopping the data acquisition), since it first recognizes whether the subject is walking or not, and identifies the different gait cycles only when walking is detected. For each gait cycle, it then computes several gait parameters, which can provide useful information in various contexts, such as sports, healthcare, and biometric identification. The activity recognition is performed by a predictive model that distinguishes between three activities (walking, standing, and marching) and between two postures of the subject (facing the sensor, and facing away from it). The model was built using a multilayer perceptron algorithm and several measures extracted from 3-D joint data, achieving an overall accuracy and F1 score of 98%. For gait cycle detection, we implemented an algorithm that estimates the instants corresponding to left and right heel strikes, relying on the distance between the ankles and on the velocities of the left and right ankles. The algorithm achieved errors for heel strike instant and stride duration estimation of 15 +/- 25 ms and 1 +/- 29 ms (walking towards the sensor), and 12 +/- 23 ms and 2 +/- 24 ms (walking away from the sensor). Our gait cycle detection solution can be used with any other RGB-D camera that provides the 3-D position of the main body joints.
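
The heel strike detection step described above lends itself to a short sketch. The version below assumes heel strikes coincide with local maxima of the inter-ankle distance and uses the ankle velocities just before each maximum to decide which foot struck; it is an interpretation of the abstract, not the published implementation:

```python
# Sketch under stated assumptions (not the published code): heel-strike
# detection from 3-D ankle trajectories estimated by an RGB-D camera.
import numpy as np
from scipy.signal import find_peaks

def detect_heel_strikes(left_ankle, right_ankle, fs):
    """left_ankle, right_ankle: (N, 3) arrays of ankle positions; fs in Hz."""
    dist = np.linalg.norm(left_ankle - right_ankle, axis=1)   # inter-ankle distance
    peaks, _ = find_peaks(dist, distance=int(0.4 * fs))       # candidate heel strikes
    # Ankle speeds; the foot that was swinging (faster) just before a peak
    # is taken as the one that strikes the ground.
    v_left = np.linalg.norm(np.gradient(left_ankle, axis=0), axis=1) * fs
    v_right = np.linalg.norm(np.gradient(right_ankle, axis=0), axis=1) * fs
    events = []
    for p in peaks:
        lo = max(0, p - int(0.2 * fs))
        side = "left" if v_left[lo:p + 1].mean() > v_right[lo:p + 1].mean() else "right"
        events.append((p / fs, side))                         # (time in s, striking foot)
    return events

# Stride duration for each foot is the interval between its consecutive strikes.
```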
