Publications

Publications by HumanISE

2019

Proceedings of the 24th European Conference on Pattern Languages of Programs, EuroPLoP 2019, Irsee, Germany, July 3-7, 2019

Authors
Sousa, TB;

Publication
EuroPLoP

Abstract

2019

Using Virtual Reality Environments to Predict Pedestrian Behaviour

Authors
Costa, JF; Jacob, J; Rúbio, TRPM; Silva, DC; Cardoso, HL; Ferreira, S; Rodrigues, R; Oliveira, E; Rossetti, RJF;

Publication
ISC2

Abstract
Pedestrian behaviour modelling and simulation play a fundamental role in reducing traffic risks and the implementation costs of new policies. However, representing human behaviour in this dynamic environment is not a trivial task, and such models require an accurate representation of pedestrian behaviour. Virtual environments have been gaining prominence as a behaviour elicitation tool, but it is still necessary to understand the validity of this technique in the context of pedestrian studies, as well as to create guidelines for its use. This work proposes a methodology for pedestrian behaviour elicitation using virtual reality environments in conjunction with surveys or questionnaires. The methodology focuses on gathering data about the subject, the context, and the action taken, as well as on analyzing the collected data to finally output a behavioural model. The resulting model can be used as a feedback signal to improve environment conditions in subsequent experiment iterations. A concrete implementation was built based on this methodology, serving as an example for future studies. A virtual reality traffic environment and two surveys were used as data sources for pedestrian crossing experiments. The subjects controlled a virtual avatar using an HTC Vive and were asked to traverse the distance between two points in a city. The data collected during the experiment were analyzed and used as input to a machine learning model capable of predicting pedestrian speed, taking into account the pedestrians' actions and perceptions. The proposed methodology allowed for successful data gathering and for predicting pedestrian behaviour with acceptable accuracy.
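The abstract describes feeding the VR-collected actions and perceptions into a machine learning model that predicts pedestrian speed. Purely as a minimal illustration of that idea (the features, data, and model below are invented for this sketch and are not the paper's), a least-squares regressor over two hypothetical features could look like:

```python
import numpy as np

# Toy stand-in for a learned speed model (NOT the authors' model or
# data): fit crossing speed from two hypothetical features, the gap to
# the nearest vehicle (m) and subject age (years), by least squares.
X = np.array([[30.0, 25.0],   # large gap, young subject
              [10.0, 25.0],   # small gap, young subject
              [30.0, 70.0],   # large gap, older subject
              [10.0, 70.0]])  # small gap, older subject
y = np.array([1.5, 1.2, 1.1, 0.9])  # invented crossing speeds (m/s)

Xb = np.c_[X, np.ones(len(X))]              # append a bias column
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)  # least-squares weights

def predict(features):
    """Predict crossing speed (m/s) for [gap_m, age_years]."""
    f = np.append(np.asarray(features, dtype=float), 1.0)  # add bias
    return float(f @ w)
```

In practice the paper's model would be trained on the logged VR trajectories and survey answers rather than on a hand-made table like this.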

2019

The Feeling of Presence: An Immersive Perspective

Authors
Assaf, R; Rodrigues, R;

Publication
ARTECH

Abstract
The main goal of the conference is to promote interest in the current digital culture and its intersection with art and technology as an important research field, and also to create a common space for discussion and exchange of new experiences. It seeks to foster greater understanding of digital arts and culture across a wide spectrum of cultural, disciplinary, and professional practices. To this end, many scholars, teachers, researchers, artists, computer professionals, and others who are working within the broadly defined areas of digital arts, culture, and education across the world submitted their innovative work to the conference.

2019

Extended Reality Framework for Remote Collaborative Interactions in Virtual Environments

Authors
Pereira, V; Matos, T; Rodrigues, R; Nóbrega, R; Jacob, J;

Publication
PROCEEDINGS OF THE 2019 INTERNATIONAL CONFERENCE ON GRAPHICS AND INTERACTION (ICGI 2019)

Abstract
This paper proposes the implementation of a framework for the development of collaborative extended reality (XR) applications. Using the framework, developers can focus on understanding which collaborative mechanisms they need to implement for the respective reality model application. In this paper we specifically study collaborative mechanisms around object manipulation in Virtual Reality (VR). As such, we planned a VR prototype using the proposed framework, which was used to validate the various interaction and collaboration features in VR. Data gathered from the user tests revealed that participants enjoyed the experience and that the collaborative mechanisms helped them work together. Furthermore, to understand whether the framework allowed for the development of XR applications, we decided to implement an augmented reality prototype as well. Afterwards, we ran an experiment with 4 VR and 3 AR users sharing the same virtual environment. The experiment was successful at allowing them to interact in real-time in the same shared environment. Therefore, the framework enables the development of XR applications that support different mixed-reality technologies.

2019

ISVC - Digital Platform for Detection and Prevention of Computer Vision Syndrome

Authors
Vieira, F; Oliveira, E; Rodrigues, N;

Publication
2019 IEEE 7th International Conference on Serious Games and Applications for Health, SeGAH 2019

Abstract
This paper describes the research, development, and evaluation process of a solution based on computer vision for the detection and prevention of Computer Vision Syndrome, a type of eye fatigue characterized by the appearance of ocular symptoms during or after prolonged periods watching digital screens. The system developed targets users of computers and mobile devices, detecting eye fatigue situations, warning users of their occurrence, and suggesting corrective behaviours in order to prevent more serious health consequences. The implementation relies on machine learning techniques, using eye image datasets for training the eye state detection algorithm. The OpenCV library was used for eye segmentation and subsequent fatigue analysis. The final goal of the system is to provide users and health professionals with quality data analysis of eye fatigue levels, in order to raise awareness of accumulated stress and promote behaviour change.
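The system itself trains an eye-state classifier on image datasets; purely as an illustrative sketch of the fatigue-warning idea (not the authors' algorithm), a geometric eye-aspect-ratio heuristic over detected eye landmarks can flag prolonged eye closure:

```python
import numpy as np

def eye_aspect_ratio(eye):
    """Eye aspect ratio (EAR) over the standard 6-point eye landmark
    layout (corner, two upper, corner, two lower): the ratio of the
    vertical landmark distances to the horizontal one. Low values
    indicate a closed eye."""
    eye = np.asarray(eye, dtype=float)
    v1 = np.linalg.norm(eye[1] - eye[5])
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])
    return (v1 + v2) / (2.0 * h)

def fatigue_warning(ear_samples, closed_thresh=0.2, max_closed_ratio=0.5):
    """Flag fatigue when the eye reads as closed in more than
    `max_closed_ratio` of the sampled frames (thresholds are
    illustrative, not from the paper)."""
    closed = sum(1 for e in ear_samples if e < closed_thresh)
    return closed / len(ear_samples) > max_closed_ratio
```

In a full pipeline, the landmarks fed to `eye_aspect_ratio` would come from the eye segmentation step, and the warning would trigger the suggested corrective behaviours.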

2019

Top-Down Human Pose Estimation with Depth Images and Domain Adaptation

Authors
Rodrigues, N; Torres, H; Oliveira, B; Borges, J; Queiros, S; Mendes, J; Fonseca, J; Coelho, V; Brito, JH;

Publication
PROCEEDINGS OF THE 14TH INTERNATIONAL JOINT CONFERENCE ON COMPUTER VISION, IMAGING AND COMPUTER GRAPHICS THEORY AND APPLICATIONS (VISAPP), VOL 5

Abstract
In this paper, a method for human pose estimation is proposed, making use of ToF (Time of Flight) cameras. For this, a YOLO-based object detection method was used to develop a top-down approach. In the first stage, a network was developed to detect people in the image. In the second stage, a network was developed to estimate the joints of each person, using the image result from the first stage. We show that a deep learning network trained from scratch with ToF images yields better results than taking a deep neural network pretrained on RGB data and retraining it with ToF data. We also show that a top-down detector, with a person detector followed by a joint detector, performs better than detecting the body joints over the entire image.
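The two-stage, top-down structure described above can be sketched generically; `person_detector` and `joint_estimator` below stand in for the paper's two trained networks and are hypothetical stubs:

```python
import numpy as np

def crop(depth_image, box):
    """Cut out one detected person region (stage-one output),
    with box given as (x, y, w, h)."""
    x, y, w, h = box
    return depth_image[y:y + h, x:x + w]

def top_down_pose(depth_image, person_detector, joint_estimator):
    """Top-down pipeline: stage one finds people, stage two estimates
    joints only inside each person crop, and the joint coordinates are
    then mapped back into full-image coordinates."""
    poses = []
    for box in person_detector(depth_image):
        joints = joint_estimator(crop(depth_image, box))
        x, y, _, _ = box
        poses.append([(jx + x, jy + y) for jx, jy in joints])
    return poses
```

Restricting the joint estimator to detected crops is what distinguishes this top-down design from running joint detection over the whole image.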
