Publications

Publications by HumanISE

2018

O PAPEL DO DOCUMENTO FOTOGRÁFICO NOS ARQUIVOS [The role of the photographic document in archives]

Authors
Rodrigues, JS;

Publication
Páginas a&b Arquivos & Bibliotecas

Abstract

2018

Keep my head on my shoulders! Why third-person is bad for navigation in VR

Authors
Medeiros, D; dos Anjos, RK; Mendes, D; Pereira, JM; Raposo, A; Jorge, J;

Publication
24TH ACM SYMPOSIUM ON VIRTUAL REALITY SOFTWARE AND TECHNOLOGY (VRST 2018)

Abstract
Head-Mounted Displays are useful to place users in virtual reality (VR). They do this by totally occluding the physical world, including users' bodies, which can make self-awareness problematic. Indeed, researchers have shown that users' feelings of presence and spatial awareness are highly influenced by their virtual representations, and that self-embodied representations (avatars) of their anatomy can make the experience more engaging. On the other hand, recent user studies show a penchant for a third-person view of one's own body to seemingly improve spatial awareness. However, due to its unnaturalness, we argue that a third-person perspective is not as effective or convenient as a first-person view for task execution in VR. In this paper, we investigate, through a user evaluation, how these perspectives affect task performance and embodiment, focusing on navigation tasks, namely walking while avoiding obstacles. For each perspective, we also compare three different levels of realism for users' representation: a stylized abstract avatar, a mesh-based generic human, and a real-time point-cloud rendering of the users' own body. Our results show that a third-person perspective provides a comparable sense of embodiment and spatial awareness only when coupled with a realistic representation. In all other cases, a first-person perspective remains better suited for navigation tasks, regardless of representation.

2018

Smart Choices for Deviceless and Device-Based Manipulation in Immersive Virtual Reality

Authors
Caputo, FM; Mendes, D; Bonetti, A; Saletti, G; Giachetti, A;

Publication
2018 IEEE Conference on Virtual Reality and 3D User Interfaces, VR 2018, Tuebingen/Reutlingen, Germany, 18-22 March 2018

Abstract
The choice of a suitable method for object manipulation is one of the most critical aspects of virtual environment design. It has been shown that some environments or applications benefit from direct manipulation approaches, while others are more usable with indirect ones that exploit, for example, three-dimensional virtual widgets. When it comes to mid-air interaction, the success of a manipulation technique is defined not only by the kind of application but also by the hardware setup, especially when specific restrictions exist. In this paper we present an experimental evaluation of different techniques and hardware for mid-air object manipulation in immersive virtual environments (IVE). We compared task performances using both deviceless and device-based tracking solutions, combined with direct and widget-based approaches. For freehand manipulation, we also tested the effects of different visual feedback, comparing a realistic virtual hand rendering with a simple cursor-like visualization.

2018

A Study on Natural 3D Shape Manipulation in VR

Authors
Cordeiro, E; Giannini, F; Monti, M; Mendes, D; Ferreira, A;

Publication
Italian Chapter Conference 2018 - Smart Tools and Apps in computer Graphics, STAG 2018, Brescia, Italy, October 18-19, 2018

Abstract
Current immersive modeling environments use non-natural tools and interfaces to support traditional shape manipulation operations. In the future, we expect natural methods of interaction with 3D models in immersive environments to become increasingly important in several industrial applications. In this paper, we present a study conducted on a group of potential users with the aim of verifying whether there is a common strategy in gestural and vocal interaction in immersive environments when the objective is modifying a 3D shape model. The results indicate that users adopt different strategies to perform the different tasks, but within a specific activity it is possible to identify a set of similar and recurrent gestures. In general, the gestures made are physically plausible. During the experiment, vocal interaction was used quite rarely and never to express a command to the system, but rather to better specify what the user was doing with gestures.

2018

Segmentation of kidney and renal collecting system on 3D computed tomography images

Authors
Oliveira, B; Torres, HR; Queiros, SF; Morais, P; Fonseca, JC; D'hooge, J; Rodrigues, NF; Vilaça, JL;

Publication
6th IEEE International Conference on Serious Games and Applications for Health, SeGAH 2018, Vienna, Austria, May 16-18, 2018

Abstract
Surgical training for minimally invasive kidney interventions (MIKI) is of great importance within the urology field. In this context, simulating MIKI in a patient-specific virtual environment can support pre-operative planning on the real patient's anatomy, possibly reducing intra-operative medical complications. However, the validated VR simulators perform the training on a set of standard models and do not allow patient-specific training. For patient-specific training, the standard simulator would need to be adapted with personalized models, which can be extracted from pre-operative images using segmentation strategies. To date, several methods have been proposed to accurately segment the kidney in computed tomography (CT) images. However, most of these works focused on kidney segmentation only, neglecting the extraction of its internal compartments. In this work, we propose to adapt a coupled formulation of the B-Spline Explicit Active Surfaces (BEAS) framework to simultaneously segment the kidney and the renal collecting system (CS) from CT images. Moreover, from the difference between the kidney and CS segmentations, the renal parenchyma can also be extracted. The segmentation process is guided by a new energy functional that combines gradient- and region-based energies. The method was evaluated on 10 kidneys from 5 CT datasets with different image properties. Overall, the results demonstrate the accuracy of the proposed strategy, with a Dice overlap of 92.5%, 86.9% and 63.5%, and a point-to-surface error of around 1.6 mm, 1.9 mm and 4 mm for the kidney, renal parenchyma and CS, respectively.
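For readers unfamiliar with the evaluation metric, the Dice overlap reported above is the standard region-agreement score between a computed segmentation and its ground truth. A minimal sketch, assuming segmentations represented as collections of voxel coordinates (the function name and representation are illustrative, not from the paper):

```python
def dice_overlap(seg, ref):
    """Dice similarity coefficient between two collections of voxel
    coordinates (e.g. a segmented kidney vs. its ground truth)."""
    seg, ref = set(seg), set(ref)
    if not seg and not ref:
        return 1.0  # two empty masks agree perfectly
    return 2.0 * len(seg & ref) / (len(seg) + len(ref))

# Toy 2D example: 2 shared voxels, masks of size 3 and 2 -> 2*2/(3+2)
print(dice_overlap([(0, 0), (0, 1), (1, 0)], [(0, 0), (0, 1)]))  # 0.8
```

A score of 1.0 means perfect overlap and 0.0 none, which is why the kidney (92.5%) is judged well segmented while the thin, branching collecting system (63.5%) remains harder.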

2018

6th IEEE International Conference on Serious Games and Applications for Health, SeGAH 2018, Vienna, Austria, May 16-18, 2018

Authors
Vilaça, JL; Grechenig, T; Duque, D; Rodrigues, N; Dias, N;

Publication
SeGAH

Abstract
