About

Associate Professor with Habilitation at University of Trás-os-Montes e Alto Douro (UTAD) and Senior Researcher at INESC TEC.

He earned his doctorate in Electrical Engineering from UTAD in 2002 and obtained the Habilitation in Informatics/Accessibility in 2008. He became Associate Professor in December 2012.

He was Pro-Rector for Innovation and Information Management at UTAD from 23 July 2010 to 29 July 2013.

He has produced over 150 scientific papers, including book chapters, journal articles and papers in the proceedings of scientific events, and has supervised 40 postgraduate students (masters and doctorates).
He was a member of the research team in 35 research and development projects.

He was a member of several organizing committees of international scientific meetings. In 2006 he led the team that created the conference "Software Development and Technologies for Enhancing Accessibility and Fighting Info-exclusion" (www.dsai.ws/2016), and in 2016 the conference "Technology and Innovation in Sports, Health and Wellbeing" (www.tishw.ws/2016).
His main research interests are Digital Image Processing, Accessibility and Human-Computer Interaction.

Google Scholar: http://scholar.google.com/citations?user=HBVvNYQAAAAJ&hl=en

SCOPUS: http://www.scopus.com/authid/detail.url?authorId=20435746800

Details

  • Name

    João Barroso
  • Cluster

    Computer Science
  • Role

    Research Coordinator
  • Since

    1st October 2012
Publications

2019

The AppVox mobile application, a tool for speech and language training sessions

Authors
Rocha, T; Goncalves, C; Fernandes, H; Reis, A; Barroso, J;

Publication
Expert Systems

Abstract
AppVox is a mobile application that provides support for children with speech and language impairments in their speech therapy sessions, while also allowing autonomous training at home. The application simulates a vocalizer with an audio stimulus feature, which can be used to train and amend the pronunciation of specific words through repetition. In this paper, we aim to present the development of the application as an assistive technology option, by adding new features to the vocalizer as well as assessing it as a usable option for daily training interaction for children with speech and language impairments. In this regard, we invited 15 children with speech and language impairments and 20 with no impairments to perform training activities with the application. Likewise, we asked three speech therapists and three usability experts to interact, assess, and give their feedback. In this assessment, we include the following parameters: successful conclusion of the training tasks (effectiveness); number of errors made, as well as number and type of difficulties found (efficiency); and the acceptance and level of comfort in completing the requested tasks (satisfaction). Overall, the results showed that children concluded the training tasks successfully and that the application helped to improve their language and speech capabilities. Therapists and children gave positive feedback on the AppVox interface. © 2019 John Wiley & Sons, Ltd
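For illustration only, a minimal sketch (not taken from the paper) of how the three usability measures described above might be aggregated from session records; the Session fields and the example values are invented:

    # Hypothetical aggregation of the effectiveness/efficiency/satisfaction
    # measures described in the abstract above; fields and values are illustrative.
    from dataclasses import dataclass
    from statistics import mean

    @dataclass
    class Session:
        completed: bool      # did the child finish the training task?
        errors: int          # number of errors made during the task
        difficulties: int    # number of difficulties observed
        satisfaction: int    # questionnaire score, e.g. 1 (low) to 5 (high)

    def usability_summary(sessions: list[Session]) -> dict:
        finished = [s for s in sessions if s.completed]
        return {
            "effectiveness": len(finished) / len(sessions),   # task completion rate
            "mean_errors": mean(s.errors for s in sessions),  # efficiency (errors)
            "mean_difficulties": mean(s.difficulties for s in sessions),
            "mean_satisfaction": mean(s.satisfaction for s in sessions),
        }

    # Three mock sessions
    print(usability_summary([Session(True, 1, 0, 5), Session(True, 3, 2, 4), Session(False, 5, 3, 3)]))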

2019

A review of assistive spatial orientation and navigation technologies for the visually impaired

Authors
Fernandes, H; Costa, P; Filipe, V; Paredes, H; Barroso, J;

Publication
Universal Access in the Information Society

Abstract
The overall objective of this work is to review the assistive technologies that have been proposed by researchers in recent years to address the limitations in user mobility posed by visual impairment. This work presents an “umbrella review.” Visually impaired people often want more than just information about their location and often need to relate their current location to the features existing in the surrounding environment. Extensive research has been dedicated to building assistive systems. Assistive systems for human navigation, in general, aim to allow their users to safely and efficiently navigate in unfamiliar environments by dynamically planning the path based on the user’s location, respecting the constraints posed by their special needs. Modern mobile assistive technologies are becoming more discreet and include a wide range of mobile computerized devices, including ubiquitous technologies such as mobile phones. Technology can be used to determine the user’s location and their relation to the surroundings (context), generate navigation instructions and deliver all this information to the blind user. © 2017 Springer-Verlag GmbH Germany
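As a purely illustrative sketch of the path-planning idea mentioned in the abstract (dynamically planning a route that respects the user's constraints), the snippet below runs Dijkstra's algorithm over a tiny invented graph in which some edges are flagged as unsafe for the user; the graph, weights and flags are assumptions, not data from the review:

    import heapq

    # Invented indoor graph: (neighbour, cost, safe_for_this_user)
    graph = {
        "entrance": [("hall", 10, True), ("stairs", 4, False)],
        "stairs":   [("office", 3, False)],
        "hall":     [("office", 6, True)],
        "office":   [],
    }

    def plan_path(start: str, goal: str) -> list[str]:
        """Dijkstra restricted to edges that satisfy the user's constraint."""
        queue = [(0, start, [start])]
        visited = set()
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == goal:
                return path
            if node in visited:
                continue
            visited.add(node)
            for nxt, weight, safe in graph.get(node, []):
                if safe and nxt not in visited:
                    heapq.heappush(queue, (cost + weight, nxt, path + [nxt]))
        return []

    print(plan_path("entrance", "office"))  # ['entrance', 'hall', 'office'] (avoids the stairs)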

2019

Classification of Physical Exercise Intensity Based on Facial Expression Using Deep Neural Network

Authors
Khanal, SR; Sampaio, J; Barroso, J; Filipe, V;

Publication
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Abstract
If done properly, physical exercise can help maintain fitness and health. The benefits of physical exercise could be increased with real-time monitoring by measuring physical exercise intensity, which refers to how hard it is for a person to perform a specific task. This parameter can be estimated using various sensors, including contactless technology. Physical exercise intensity is usually synchronous with heart rate; therefore, if we measure heart rate, we can define a particular level of physical exercise. In this paper, we proposed a Convolutional Neural Network (CNN) to classify physical exercise intensity based on the analysis of facial images extracted from a video collected during sub-maximal exercises on a stationary bicycle, according to a standard protocol. The time slots of the video used to extract the frames were determined by heart rate. We tested different CNN models using as input parameters the individual color components and grayscale images. The experiments were carried out separately with various numbers of classes. The ground truth level for each class was defined by the heart rate. The dataset was prepared to classify the physical exercise intensity into two, three, and four classes. For each color model a CNN was trained and tested. The model performance was presented using a confusion matrix for each case. The most significant color channel in terms of accuracy was Green. The average model accuracy was 100%, 99% and 96% for two-, three- and four-class classification, respectively. © 2019, Springer Nature Switzerland AG.
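The sketch below illustrates, in PyTorch, the kind of single-channel CNN classifier described above (e.g. a green-channel face crop mapped to an intensity class). The architecture, the 64x64 input size and the three-class setting are assumptions for illustration, not the configuration reported in the paper:

    import torch
    import torch.nn as nn

    class IntensityCNN(nn.Module):
        """Toy CNN: one-channel face image -> exercise-intensity class."""
        def __init__(self, num_classes: int = 3):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
                nn.Linear(64, num_classes),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.classifier(self.features(x))

    # A batch of 8 fake 64x64 green-channel frames; labels would come from heart rate.
    model = IntensityCNN(num_classes=3)
    frames = torch.rand(8, 1, 64, 64)
    print(model(frames).argmax(dim=1))  # predicted intensity class per frame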

2019

Creating Weather Narratives

Authors
Reis, A; Liberato, M; Paredes, H; Martins, P; Barroso, J;

Publication
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Abstract
Information can be conveyed to the user by means of a narrative, modeled according to the user’s context. A case in point is the weather, which can be perceived differently and with distinct levels of importance according to the user’s context. For example, for a blind person, the weather is an important element to plan and move between locations. In fact, weather can make it very difficult or even impossible for a blind person to successfully negotiate a path and navigate from one place to another. To provide proper information, narrated and delivered according to the person’s context, this paper proposes a project for the creation of weather narratives, targeted at specific types of users and contexts. The proposal’s main objective is to add value to the data acquired through the observation of weather systems, by interpreting that data in order to identify relevant information and automatically create narratives, in a conversational way or with machine metadata language. These narratives should communicate specific aspects of the evolution of the weather systems in an efficient way, providing knowledge and insight in specific contexts and for specific purposes. Currently, there are several language generation systems that automatically create weather forecast reports based on previously processed and synthesized information. This paper proposes a wider and more comprehensive approach to weather system phenomena, covering the full process, from the raw data to a contextualized narration, thus providing a methodology and a tool that might be used for various contexts and weather systems. © 2019, Springer Nature Switzerland AG.
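As a toy illustration of the data-to-narrative step discussed above, the function below turns raw weather observations into a short narration for one specific user context; the thresholds, field names and wording are all invented for this sketch:

    def weather_narrative(obs: dict, context: str = "blind_pedestrian") -> str:
        """Interpret raw observations and render them for a given user context."""
        remarks = []
        if obs.get("precipitation_mm", 0) > 1:
            remarks.append("rain is expected, so surfaces may be slippery")
        if obs.get("wind_kmh", 0) > 40:
            remarks.append("strong wind may mask ambient sound cues")
        if not remarks:
            remarks.append("conditions look stable")
        detail = "; ".join(remarks)
        if context == "blind_pedestrian":
            return f"Before leaving, note that {detail}. Consider an alternative route or time."
        return f"Weather summary: {detail}."

    print(weather_narrative({"precipitation_mm": 3.2, "wind_kmh": 55}))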

2019

Interactive audio novel: A Story and Usability preliminary study

Authors
Rocha, T; Reis, A; Paredes, H; Barroso, J;

Publication
2019 14TH IBERIAN CONFERENCE ON INFORMATION SYSTEMS AND TECHNOLOGIES (CISTI)

Abstract
In this article we present a game interface, using audio input and output, that aims to provide the concept of interactive narrative to users with visual or motor disability. The solution lets users choose the direction of the story, triggering several alternate endings and thus creating a dynamic and creative narrative. The application development process is described here, from design and implementation to evaluation. In the evaluation phase, we performed user tests with five participants with visual and motor disability. We recorded three metrics: effectiveness, success of the task (reaching one of the possible endings); efficiency, time needed to complete the story; and satisfaction, comfort and wellbeing of the user during the interaction. The results were positive: all participants successfully completed the application, and there were no withdrawals. Four out of five wanted to repeat the experience and try to reach another ending of the story.
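A minimal sketch of the branching-story mechanism described above, with the audio input and output replaced by console stand-ins; the story nodes and choices are invented for illustration:

    # Each node has a narration and a mapping from spoken choice to the next node.
    story = {
        "start":   {"text": "You hear footsteps. Follow them or stay?", "choices": {"follow": "alley", "stay": "silence"}},
        "alley":   {"text": "The footsteps lead to an open door. The end.", "choices": {}},
        "silence": {"text": "The night goes quiet. The end.", "choices": {}},
    }

    def play(node: str = "start") -> None:
        while True:
            scene = story[node]
            print(scene["text"])                  # stand-in for text-to-speech output
            if not scene["choices"]:
                return                            # one of the possible endings reached
            answer = input("> ").strip().lower()  # stand-in for speech recognition
            node = scene["choices"].get(answer, node)

    if __name__ == "__main__":
        play()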

Supervised Theses

2017

Facial image processing to monitor physical exercise intensity

Author
Salik Ram Khanal

Institution
UTAD

2017

E-Mentoring: the evolution of mentoring with new technologies

Author
Pedro Henrique Pelúcia Samuel

Institution
UTAD

2017

Democratization of professional associations through electronic voting

Author
José Manuel da Cunha Ferreira

Institution
UTAD

2017

Identification of pulmonary pathologies in computed tomography exams

Author
Verónica Maria Marques Carreiro Silva Vasconcelos

Institution
UTAD

2017

Computer vision to assist visually impaired people's navigation

Author
Paulo Manuel Almeida Costa

Institution
UTAD