2019
Authors
Assaf, R; Rodrigues, R;
Publication
ARTECH 2019: 9th International Conference on Digital and Interactive Arts, Braga, Portugal, October 23-25, 2019
Abstract
The main goal of the conference is to promote interest in current digital culture and its intersection with art and technology as an important research field, and to create a common space for discussion and the exchange of new experiences. It seeks to foster greater understanding of digital arts and culture across a wide spectrum of cultural, disciplinary, and professional practices. To this end, many scholars, teachers, researchers, artists, computer professionals, and others working within the broadly defined areas of digital arts, culture, and education across the world submitted their innovative work to the conference. © 2019 ACM.
2019
Authors
Pereira, V; Matos, T; Rodrigues, R; Nobrega, R; Jacob, J;
Publication
PROCEEDINGS OF THE 2019 INTERNATIONAL CONFERENCE ON GRAPHICS AND INTERACTION (ICGI 2019)
Abstract
This paper proposes a framework for the development of collaborative extended reality (XR) applications. Using the framework, developers can focus on identifying which collaborative mechanisms they need to implement for the respective reality model of the application. In this paper we specifically study collaborative mechanisms for object manipulation in Virtual Reality (VR). To that end, we built a VR prototype with the proposed framework and used it to validate the various interaction and collaboration features in VR. Data gathered from the user tests revealed that participants enjoyed the experience and that the collaborative mechanisms helped them work together. Furthermore, to verify that the framework supports the broader XR spectrum, we also implemented an augmented reality (AR) prototype. We then ran an experiment with 4 VR and 3 AR users sharing the same virtual environment; the experiment succeeded in allowing them to interact in real time in the same shared environment. The framework therefore enables the development of XR applications that span different mixed-reality technologies.
2019
Authors
Fernandes, H; Costa, P; Filipe, V; Paredes, H; Barroso, J;
Publication
UNIVERSAL ACCESS IN THE INFORMATION SOCIETY
Abstract
The overall objective of this work is to review the assistive technologies that researchers have proposed in recent years to address the limitations in user mobility posed by visual impairment. This work presents an umbrella review. Visually impaired people often want more than just information about their location; they often need to relate their current location to the features of the surrounding environment. Extensive research has been dedicated to building assistive systems. Assistive systems for human navigation generally aim to allow their users to navigate unfamiliar environments safely and efficiently by dynamically planning the path based on the user's location, while respecting the constraints posed by their special needs. Modern mobile assistive technologies are becoming more discreet and include a wide range of mobile computerized devices, including ubiquitous technologies such as mobile phones. Technology can be used to determine the user's location and its relation to the surroundings (context), to generate navigation instructions, and to deliver all this information to the blind user.
2019
Authors
Reis, A; Liberato, M; Paredes, H; Martins, P; Barroso, J;
Publication
Universal Access in Human-Computer Interaction. Multimodality and Assistive Environments - 13th International Conference, UAHCI 2019, Held as Part of the 21st HCI International Conference, HCII 2019, Orlando, FL, USA, July 26-31, 2019, Proceedings, Part II
Abstract
Information can be conveyed to the user by means of a narrative, modeled according to the user's context. A case in point is the weather, which can be perceived differently and with distinct levels of importance according to the user's context. For example, for a blind person, the weather is an important element in planning and moving between locations. In fact, weather can make it very difficult or even impossible for a blind person to successfully negotiate a path and navigate from one place to another. To provide proper information, narrated and delivered according to the person's context, this paper proposes a project for the creation of weather narratives targeted at specific types of users and contexts. The proposal's main objective is to add value to the data acquired through the observation of weather systems by interpreting that data in order to identify relevant information and automatically create narratives, either in a conversational form or in a machine-readable metadata language. These narratives should communicate specific aspects of the evolution of weather systems efficiently, providing knowledge and insight in specific contexts and for specific purposes. Several language generation systems currently exist that automatically create weather forecast reports based on previously processed and synthesized information. This paper proposes a wider and more comprehensive approach to weather system phenomena, covering the full process from raw data to contextualized narration, thus providing a methodology and a tool that can be used for various contexts and weather systems. © 2019, Springer Nature Switzerland AG.
2019
Authors
Paulino, D; Reis, A; Paredes, H; Fernandes, H; Barroso, J;
Publication
International Journal of Recent Technology and Engineering
Abstract
This study aims to select the cloud-based image processing and recognition service best suited for use in systems that aid and improve the daily lives of blind people. To accomplish this purpose, a set of candidate services was assembled, including Microsoft Cognitive Services and Google Cloud Vision. A test mobile app was developed to automatically take pictures, which are sent to the online cloud services for processing. The results and the functionalities were evaluated with the aim of measuring their accuracy and relevance. The following variables were registered: relative accuracy, represented by the ratio of the number of accurate results to the number of results shown; confidence degree, representing the service's own accuracy estimate (when provided by the service); and relevance, identifying situations that can be useful in the daily lives of blind people. The results show that both services, Microsoft Cognitive Services and Google Cloud Vision, provided good accuracy and significance for supporting systems that help blind people in their daily tasks. Selected functionalities of the two cloud service APIs were evaluated, such as face identification, image description, and object and text recognition. © BEIESP.
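The relative-accuracy variable described in the abstract can be sketched as follows. This is a minimal illustration only: the helper function, the example labels, and the ground-truth set are hypothetical and not taken from the paper or from either service's API.

```python
def relative_accuracy(returned_labels, ground_truth):
    """Ratio of accurate results to results shown, as defined in the abstract.

    returned_labels: labels a cloud vision service returned for one picture.
    ground_truth: the set of labels a human judge considers accurate.
    """
    if not returned_labels:
        return 0.0
    accurate = sum(1 for label in returned_labels if label in ground_truth)
    return accurate / len(returned_labels)

# Hypothetical output for one test picture: 3 of the 4 returned labels
# match the human judgment, giving a relative accuracy of 0.75.
labels = ["person", "street", "car", "tree"]
truth = {"person", "street", "crosswalk", "car"}
print(relative_accuracy(labels, truth))
```

In the study's setup, this ratio would be computed per picture over the labels each service returns, and then compared across services alongside the confidence degree and relevance.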
2019
Authors
Carvalho, J; Santos, A; Paredes, H;
Publication
PROCEEDINGS OF THE 2019 IEEE 23RD INTERNATIONAL CONFERENCE ON COMPUTER SUPPORTED COOPERATIVE WORK IN DESIGN (CSCWD)
Abstract
A multiplicity of innovative applications has been developed based on the mobile crowdsourcing (MCS) paradigm. One group of applications addresses the creation of accessibility maps in large cities. In this context, a conceptual model of a system for the detection and timely notification of temporary obstacles and other dangers in the urban environment was proposed in "Pervasive Crowd Mapping for Dynamic Environments". This concept (PCM4DE) encompasses, among other technologies, the use of crowdsourcing. The system will be particularly useful to people with disabilities and elderly people. An exploratory literature review showed that data quality and the motivation strategies for participating in MCS-based systems remain two of the key challenges to the effectiveness of those systems. This paper aims to contribute to the implementation of the PCM4DE concept by proposing a mechanism that should improve data quality through motivation forms that enable a positive personal user experience, i.e., an experience that meets the participant's objectives, needs, and preferences.