About

Associate Professor with Habilitation (Agregação) at UTAD and Senior Researcher at INESC TEC.

He received his PhD in Electrical Engineering from UTAD in 2002 and, in 2007, completed the public examinations for the Habilitation (Agregação) in Informatics/Accessibility. He became an Associate Professor at UTAD in December 2012.

He was Pro-Rector for Innovation and Information Management at UTAD from 23 July 2010 to 29 July 2013.

He has produced more than 150 academic works, including book chapters, journal articles and papers in the proceedings of scientific events. He has supervised 40 postgraduate works (master's and doctoral theses).

He has participated in 35 research and development projects, serving as principal investigator in 15 of them.

He has participated in the organization of several international scientific meetings: in 2006 he coordinated the team that created the conference "Software Development for Enhancing Accessibility and Fighting Info-exclusion" (www.dsai.ws/2016) and, in 2016, the conference "Technology and Innovation in Sports, Health and Wellbeing" (www.tishw.ws/2016).

His main research areas are Digital Image Processing, Accessibility and Human-Computer Interaction.

Google Scholar: http://scholar.google.com/citations?user=HBVvNYQAAAAJ&hl=en

SCOPUS: http://www.scopus.com/authid/detail.url?authorId=20435746800

Details

  • Name

    João Barroso
  • Cluster

    Informatics
  • Position

    Coordinating Researcher
  • Since

    01 October 2012
Publications

2019

Submitted to the WorldCIST'17: The AppVox mobile application, a tool for speech and language training sessions

Authors
Rocha, T; Goncalves, C; Fernandes, H; Reis, A; Barroso, J;

Publication
Expert Systems

Abstract
AppVox is a mobile application that provides support for children with speech and language impairments in their speech therapy sessions, while also allowing autonomous training at home. The application simulates a vocalizer with an audio stimulus feature, which can be used to train and amend the pronunciation of specific words through repetition. In this paper, we aim to present the development of the application as an assistive technology option, by adding new features to the vocalizer as well as assessing it as a usable option for daily training interaction for children with speech and language impairments. In this regard, we invited 15 children with speech and language impairments and 20 with no impairments to perform training activities with the application. Likewise, we asked three speech therapists and three usability experts to interact, assess, and give their feedback. In this assessment, we include the following parameters: successful conclusion of the training tasks (effectiveness); number of errors made, as well as number and type of difficulties found (efficiency); and the acceptance and level of comfort in completing the requested tasks (satisfaction). Overall, the results showed that children conclude the training tasks successfully and helped to improve their language and speech capabilities. Therapists and children gave positive feedback to the AppVox interface. © 2019 John Wiley & Sons, Ltd
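
To make the assessment described in the abstract concrete, the following minimal Python sketch shows one way the three usability measures (effectiveness, efficiency, satisfaction) could be aggregated from per-child session records; the field names and the satisfaction scale are illustrative assumptions, not the instruments used in the paper.

# Minimal sketch (not from the paper) of aggregating the three usability
# measures from per-child session logs. Field names and the 1-5 satisfaction
# scale are assumptions for illustration only.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Session:
    completed: bool        # did the child finish the training task?
    errors: int            # number of errors made during the task
    satisfaction: int      # assumed 1-5 comfort/acceptance rating

def summarize(sessions: list[Session]) -> dict:
    return {
        "effectiveness": sum(s.completed for s in sessions) / len(sessions),  # task success rate
        "efficiency": mean(s.errors for s in sessions),                       # average error count
        "satisfaction": mean(s.satisfaction for s in sessions),               # average rating
    }

print(summarize([Session(True, 1, 5), Session(True, 3, 4), Session(False, 6, 3)]))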

2019

A review of assistive spatial orientation and navigation technologies for the visually impaired

Authors
Fernandes, H; Costa, P; Filipe, V; Paredes, H; Barroso, J;

Publication
Universal Access in the Information Society

Abstract
The overall objective of this work is to review the assistive technologies that have been proposed by researchers in recent years to address the limitations in user mobility posed by visual impairment. This work presents an “umbrella review.” Visually impaired people often want more than just information about their location and often need to relate their current location to the features existing in the surrounding environment. Extensive research has been dedicated into building assistive systems. Assistive systems for human navigation, in general, aim to allow their users to safely and efficiently navigate in unfamiliar environments by dynamically planning the path based on the user’s location, respecting the constraints posed by their special needs. Modern mobile assistive technologies are becoming more discrete and include a wide range of mobile computerized devices, including ubiquitous technologies such as mobile phones. Technology can be used to determine the user’s location, his relation to the surroundings (context), generate navigation instructions and deliver all this information to the blind user. © 2017 Springer-Verlag GmbH Germany
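
The generic pipeline the review describes (determine the user's location, relate it to the surrounding context, generate a navigation instruction and deliver it non-visually) can be sketched in Python as below; all class and method names are illustrative assumptions rather than an API from any of the reviewed systems.

# Minimal sketch of the generic assistive-navigation pipeline discussed in the
# review: sense location -> relate it to surrounding features -> generate the
# next instruction -> deliver it through audio or haptics. Names are assumed.
from typing import Protocol

class Locator(Protocol):
    def position(self) -> tuple[float, float]: ...                       # e.g. GPS or indoor positioning

class ContextModel(Protocol):
    def nearby_features(self, pos: tuple[float, float]) -> list[str]: ...  # obstacles, landmarks, crossings

class InstructionGenerator(Protocol):
    def next_step(self, pos, features, destination) -> str: ...

def guidance_step(locator: Locator, context: ContextModel,
                  generator: InstructionGenerator, destination: str, speak) -> None:
    pos = locator.position()
    features = context.nearby_features(pos)
    speak(generator.next_step(pos, features, destination))               # non-visual delivery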

2019

Classification of Physical Exercise Intensity Based on Facial Expression Using Deep Neural Network

Authors
Khanal, SR; Sampaio, J; Barroso, J; Filipe, V;

Publication
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Abstract
If done properly, physical exercise can help maintain fitness and health. The benefits of physical exercise could be increased with real time monitoring by measuring physical exercise intensity, which refers to how hard it is for a person to perform a specific task. This parameter can be estimated using various sensors, including contactless technology. Physical exercise intensity is usually synchronous to heart rate; therefore, if we measure heart rate, we can define a particular level of physical exercise. In this paper, we proposed a Convolutional Neural Network (CNN) to classify physical exercise intensity based on the analysis of facial images extracted from a video collected during sub-maximal exercises in a stationary bicycle, according to standard protocol. The time slots of the video used to extract the frames were determined by heart rate. We tested different CNN models using as input parameters the individual color components and grayscale images. The experiments were carried out separately with various numbers of classes. The ground truth level for each class was defined by the heart rate. The dataset was prepared to classify the physical exercise intensity into two, three, and four classes. For each color model a CNN was trained and tested. The model performance was presented using confusion matrix as metrics for each case. The most significant color channel in terms of accuracy was Green. The average model accuracy was 100%, 99% and 96%, for two, three and four classes classification, respectively. © 2019, Springer Nature Switzerland AG.
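
As an illustration of the approach, a minimal PyTorch sketch of a small CNN that classifies single-channel face crops (for example, only the green colour component, reported in the paper as the most accurate channel) into a configurable number of intensity classes is given below; the architecture, image size and layer sizes are assumptions and do not reproduce the authors' model.

# Minimal sketch (assumed architecture, not the authors') of a CNN that maps a
# single-channel 64x64 face crop to N exercise-intensity classes, where the
# ground-truth class would come from heart-rate ranges.
import torch
import torch.nn as nn

class IntensityCNN(nn.Module):
    def __init__(self, num_classes: int = 3):          # 2, 3 or 4 classes in the paper
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (batch, 1, 64, 64) green channel
        return self.classifier(self.features(x))

model = IntensityCNN(num_classes=3)
logits = model(torch.randn(8, 1, 64, 64))                  # dummy batch of face crops
print(logits.shape)                                         # torch.Size([8, 3])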

2019

Creating Weather Narratives

Authors
Reis, A; Liberato, M; Paredes, H; Martins, P; Barroso, J;

Publication
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Abstract
Information can be conveyed to the user by means of a narrative, modeled according to the user’s context. A case in point is the weather, which can be perceived differently and with distinct levels of importance according to the user’s context. For example, for a blind person, the weather is an important element to plan and move between locations. In fact, weather can make it very difficult or even impossible for a blind person to successfully negotiate a path and navigate from one place to another. To provide proper information, narrated and delivered according to the person’s context, this paper proposes a project for the creation of weather narratives, targeted at specific types of users and contexts. The proposal’s main objective is to add value to the data, acquired through the observation of weather systems, by interpreting that data, in order to identify relevant information and automatically create narratives, in a conversational way or with machine metadata language. These narratives should communicate specific aspects of the evolution of the weather systems in an efficient way, providing knowledge and insight in specific contexts and for specific purposes. Currently, there are several language generator’ systems, which automatically create weather forecast reports, based on previously processed and synthesized information. This paper, proposes a wider and more comprehensive approach to the weather systems phenomena, proposing a full process, from the raw data to a contextualized narration, thus providing a methodology and a tool that might be used for various contexts and weather systems. © 2019, Springer Nature Switzerland AG.
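
A minimal, rule-based Python sketch of the core idea (turning raw weather observations into a short narrative adapted to the user's context, such as a blind pedestrian planning a route) is shown below; the thresholds, field names and wording are illustrative assumptions, not the proposed system.

# Minimal rule-based sketch of contextualized weather narration. The input
# fields, thresholds and phrasing are assumptions chosen only to illustrate
# the raw-data-to-narrative step described in the abstract.
def weather_narrative(obs: dict, context: str = "blind_pedestrian") -> str:
    parts = []
    if obs.get("precip_mm_h", 0) > 2:
        parts.append("heavy rain is expected")
        if context == "blind_pedestrian":
            parts.append("pavements may be slippery and ambient sound cues will be masked")
    if obs.get("wind_kmh", 0) > 40:
        parts.append("strong wind may make walking with a cane or guide dog harder")
    if not parts:
        parts.append("conditions are calm for moving between locations")
    return "Weather update: " + "; ".join(parts) + "."

print(weather_narrative({"precip_mm_h": 5, "wind_kmh": 20}))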

2019

Interactive audio novel: A Story and Usability preliminary study

Authors
Rocha, T; Reis, A; Paredes, H; Barroso, J;

Publication
2019 14TH IBERIAN CONFERENCE ON INFORMATION SYSTEMS AND TECHNOLOGIES (CISTI)

Abstract
In this article we present a game interface, using audio input and output, aiming to provide the concept of interactive narrative to users with visual or motor disability. The solution lets users choose the direction of the story, triggering several alternate endings and thus creating a dynamic and creative narrative. The application development process is described here from the design, implementation and evaluation. In the evaluation phase, we performed user tests with five participants with visual and motor disability. Thus, we record three metrics: effectiveness, success of the task (reaching one of the possible endings); efficiency, time needed to complete the story; and satisfaction, comfort and wellbeing of the user during the interaction. The result was positive, all participants successfully completed the application, and there were no withdrawals. Four in five wanted to repeat the experience and try to reach another end of the story.
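
A minimal Python sketch of the underlying mechanism (a branching story graph navigated by spoken choices, with several alternate endings) is shown below; the story graph and the speak/listen callbacks are illustrative assumptions, not the application's actual content or design.

# Minimal sketch of a branching narrative driven by audio choices, with
# alternate endings. Scene texts and node names are invented for illustration.
STORY = {
    "start":   {"text": "You wake in a quiet house. Explore or call out?",
                "choices": {"explore": "hallway", "call": "ending_found"}},
    "hallway": {"text": "A door creaks ahead. Open it or turn back?",
                "choices": {"open": "ending_garden", "back": "ending_found"}},
    "ending_found":  {"text": "Someone answers. The end.", "choices": {}},
    "ending_garden": {"text": "You step into a sunny garden. The end.", "choices": {}},
}

def play(speak, listen, node: str = "start") -> None:
    while True:
        scene = STORY[node]
        speak(scene["text"])                       # audio output of the scene
        if not scene["choices"]:                   # reached one of the endings
            return
        answer = listen().strip().lower()          # audio input, e.g. speech recognition
        node = scene["choices"].get(answer, node)  # unrecognised answer: repeat the scene

# Text stand-ins for the audio interfaces, just to exercise the loop:
# play(speak=print, listen=input)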

Supervised theses

2017

E-Mentoring: a evolução do mentoring com novas tecnologias

Author
Pedro Henrique Pelúcia Samuel

Institution
UTAD

2017

Democratização de ordens profissionais através da votação eletrónica

Author
José Manuel da Cunha Ferreira

Institution
UTAD

2017

Identificação de patologias pulmonares em exames de tomografia computorizada

Author
Verónica Maria Marques Carreiro Silva Vasconcelos

Institution
UTAD

2017

Computer vision to assist visually impaired people's navigation

Author
Paulo Manuel Almeida Costa

Institution
UTAD

2017

Facial image processing to monitor physical exercise intensity

Author
Salik Ram Khanal

Institution
UTAD