Details

  • Name

    Daniel Mendes
  • Cluster

    Informática
  • Position

    Investigador Auxiliar
  • Since

    01 April 2020
Publications

2020

Collaborative Tabletops for Blind People: The Effect of Auditory Design on Workspace Awareness

Authors
Mendes, D; Reis, S; Guerreiro, J; Nicolau, H;

Publication
Proc. ACM Hum. Comput. Interact.

Abstract
Interactive tabletops offer unique collaborative features, particularly their size, geometry, orientation and, more importantly, the ability to support multi-user interaction. Although previous efforts were made to make interactive tabletops accessible to blind people, the potential to use them in collaborative activities remains unexplored. In this paper, we present the design and implementation of a multi-user auditory display for interactive tabletops, supporting three feedback modes that vary in how much information about the partners' actions is conveyed. We conducted a user study with ten blind people to assess the effect of feedback modes on workspace awareness and task performance. Furthermore, we analyze the type of awareness information exchanged and the emergent collaboration strategies. Finally, we provide implications for the design of future tabletop collaborative tools for blind users. © 2020 ACM.
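The feedback modes described above can be pictured as filters over a stream of workspace actions: each mode passes through more or less information about the partner's activity. A minimal sketch in Python; the mode names, `Action` fields, and cue strings are illustrative assumptions, not the paper's actual design:

```python
from enum import Enum
from dataclasses import dataclass

class FeedbackMode(Enum):
    # Hypothetical mode names: from no partner information to full detail.
    OWN_ONLY = 0
    PARTNER_EVENTS = 1
    PARTNER_FULL = 2

@dataclass
class Action:
    user: str
    target: str
    detail: str

def audible_cues(actions, me, mode):
    """Select which actions produce an auditory cue for user `me`."""
    cues = []
    for a in actions:
        if a.user == me:
            cues.append(f"{a.target}: {a.detail}")          # always hear own actions
        elif mode is FeedbackMode.PARTNER_EVENTS:
            cues.append(f"partner touched {a.target}")      # event only, no detail
        elif mode is FeedbackMode.PARTNER_FULL:
            cues.append(f"partner {a.target}: {a.detail}")  # full awareness info
    return cues
```

The trade-off the study examines falls out of this structure: richer modes improve workspace awareness at the cost of more auditory traffic competing with the user's own feedback.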

2020

Evaluating Animated Transitions between Contiguous Visualizations for Streaming Big Data

Authors
Pereira, T; Moreira, J; Mendes, D; Gonçalves, D;

Publication
31st IEEE Visualization Conference, IEEE VIS 2020 - Short Papers, Virtual Event, USA, October 25-30, 2020

Abstract

2020

Incidental Visualizations: Pre-Attentive Primitive Visual Tasks

Authors
Moreira, J; Mendes, D; Gonçalves, D;

Publication
AVI '20: International Conference on Advanced Visual Interfaces, Island of Ischia, Italy, September 28 - October 2, 2020

Abstract
In InfoVis design, visualizations make use of pre-attentive features to highlight visual artifacts and guide users' perception into relevant information during primitive visual tasks. These are supported by visual marks such as dots, lines, and areas. However, research assumes our pre-attentive processing only allows us to detect specific features in charts. We argue that a visualization can be completely perceived pre-attentively and still convey relevant information. In this work, by combining cognitive perception and psychophysics, we executed a user study with six primitive visual tasks to verify if they could be performed pre-attentively. The tasks were to find: horizontal and vertical positions, length and slope of lines, size of areas, and color luminance intensity. Users were presented with very simple visualizations, with one encoded value at a time, allowing us to assess the accuracy and response time. Our results showed that horizontal position identification is the most accurate and fastest task to do, and the color luminance intensity identification task is the worst. We believe our study is the first step into a fresh field called Incidental Visualizations, where visualizations are meant to be seen at-a-glance, and with little effort. © 2020 ACM.

2019

VisMillion: A novel interactive visualization technique for real-time big data

Authors
Pires, G; Mendes, D; Gonçalves, D;

Publication
PROCEEDINGS OF THE 2019 INTERNATIONAL CONFERENCE ON GRAPHICS AND INTERACTION (ICGI 2019)

Abstract
The rapid increase of connected devices causes more and more data to be generated and, in some cases, this data needs to be analyzed as it is received. As such, the challenge of presenting streaming data in such a way that changes in the regular flow can be detected needs to be tackled, so that timely and informed decisions can be made. This requires users to be able to perceive the information being received in the moment in detail, while maintaining the context. In this paper, we propose VisMillion, a visualization technique for large amounts of streaming data, following the concept of graceful degradation. It comprises several modules positioned side by side, corresponding to different contiguous time spans, from the last few seconds to a historical view of all data received in the stream so far. Data flows through each one from right to left and, the more recent the data, the more detailed its presentation. To this end, each module uses a different technique to aggregate and process information, with special care taken to ensure visual continuity between modules and facilitate analysis. VisMillion was validated through a usability evaluation with 21 participants, as well as performance tests. Results show that it fulfills its objective, successfully aiding users to detect changes, patterns and anomalies in the information being received.
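The graceful-degradation idea above, where recent data is kept in full detail while older data is progressively aggregated, can be sketched as a simple two-stage split. This is a minimal illustration under assumed window and bin parameters, not VisMillion's actual pipeline:

```python
import statistics

def degrade(points, now, detail_window=5.0, bin_width=10.0):
    """Split a stream of (timestamp, value) points into a detailed recent
    segment and coarser aggregated bins for older data, so that the more
    recent the data, the more detail is retained."""
    detailed = [(t, v) for t, v in points if now - t <= detail_window]
    older = [(t, v) for t, v in points if now - t > detail_window]
    bins = {}
    for t, v in older:
        bins.setdefault(int(t // bin_width), []).append(v)  # group into time bins
    summary = {b: statistics.mean(vs) for b, vs in sorted(bins.items())}
    return detailed, summary
```

A real implementation would chain several such stages (e.g. raw points, binned means, a long-term histogram), each rendered by a different module with visual continuity across their boundaries.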

2019

WARPING DEIXIS: Distorting Gestures to Enhance Collaboration

Authors
Sousa, M; dos Anjos, RK; Mendes, D; Billinghurst, M; Jorge, J;

Publication
CHI 2019: PROCEEDINGS OF THE 2019 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS

Abstract
When engaged in communication, people often rely on pointing gestures to refer to out-of-reach content. However, observers frequently misinterpret the target of a pointing gesture. Previous research suggests that to perform a pointing gesture, people place the index finger on or close to a line connecting the eye to the referent, while observers interpret pointing gestures by extrapolating the referent using a vector defined by the arm and index finger. In this paper we present Warping Deixis, a novel approach to improving the perception of pointing gestures and facilitating communication in collaborative Extended Reality environments. By warping the virtual representation of the pointing individual, we are able to match the pointing expression to the observer's perception. We evaluated our approach in a co-located, side-by-side virtual reality scenario. Results suggest that our approach is effective in improving the interpretation of pointing gestures in shared virtual environments.
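The mismatch the abstract describes, between the producer's eye-finger line and the observer's arm-finger extrapolation, is easy to quantify by intersecting both rays with the referent plane. A small sketch with made-up body positions (the coordinates and the plane distance are illustrative assumptions, not data from the paper):

```python
import numpy as np

def ray_hit(origin, through, plane_z):
    """Intersect the ray from `origin` through `through` with the plane z = plane_z."""
    d = through - origin
    t = (plane_z - origin[2]) / d[2]
    return origin + t * d

eye      = np.array([0.0, 1.6, 0.0])   # assumed eye position (metres)
shoulder = np.array([0.2, 1.4, 0.0])   # assumed shoulder position
finger   = np.array([0.3, 1.5, 0.5])   # fingertip placed on the eye-referent line

plane_z = 3.0                           # referent surface 3 m away
intended  = ray_hit(eye, finger, plane_z)       # producer: eye-finger line
perceived = ray_hit(shoulder, finger, plane_z)  # observer: arm-finger extrapolation
error = np.linalg.norm(perceived - intended)    # observer's misinterpretation (m)
```

Warping the pointer's virtual body amounts to adjusting the rendered arm so that the observer's arm-finger extrapolation lands on the intended target, driving this error toward zero.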