Publications

Publications by Salik Ram Khanal

2018

Physical exercise intensity monitoring through eye-blink and mouth's shape analysis

Authors
Khanal, SR; Fonseca, A; Marques, A; Barroso, J; Filipe, V;

Publication
PROCEEDINGS OF THE 2018 2ND INTERNATIONAL CONFERENCE ON TECHNOLOGY AND INNOVATION IN SPORTS, HEALTH AND WELLBEING (TISHW)

Abstract
The continuous use of the muscles in any kind of physical exercise results in muscular fatigue, which can be defined as the inability of the muscle to perform with the same effectiveness over the course of time. The analysis of physical exercise intensity has great importance in various fields, including sports and physiotherapy. In this paper, the eye-blink rate and the change in mouth shape throughout physical exercise are analyzed using computer vision techniques and compared with the perceived exertion. The experiments were done using facial video of three athletes, captured during stationary-cycle exercise until maximal muscle activity was achieved. The perceived exertion was reported at the end of each minute. Eye blinking and mouth opening were detected by counting the number of bright pixels in the region of interest of an eye and of the mouth; these regions were detected using the Viola and Jones algorithm. We show that the mouth opening/closing rate and the eye-blinking rate correlate with physical exercise intensity (i.e., the higher the exercise intensity, the higher the rate of eye blinking and mouth opening and closing). We obtained 95% accuracy in eye-blink detection.
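
The pipeline described above (Viola–Jones regions of interest plus bright-pixel counting) can be sketched with OpenCV's Haar-cascade implementation of the Viola and Jones detector. The cascades, the video file name, and the brightness threshold below are illustrative assumptions, not the authors' original configuration.

```python
import cv2

# Haar cascades shipped with OpenCV (Viola-Jones detectors); cascade choice
# and threshold are assumptions for illustration.
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def count_bright_pixels(roi_gray, threshold=200):
    """Count pixels above an intensity threshold inside a region of interest."""
    return int((roi_gray > threshold).sum())

cap = cv2.VideoCapture("athlete_video.mp4")  # hypothetical input file
blink_scores = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        face_roi = gray[y:y + h, x:x + w]
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(face_roi):
            eye_roi = face_roi[ey:ey + eh, ex:ex + ew]
            # A sudden drop in bright (sclera) pixels over time suggests a blink.
            blink_scores.append(count_bright_pixels(eye_roi))
cap.release()
```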

2018

Performance analysis of Microsoft's and Google's Emotion Recognition API using pose-invariant faces

Authors
Khanal, SR; Barroso, J; Lopes, N; Sampaio, J; Filipe, V;

Publication
PROCEEDINGS OF THE 8TH INTERNATIONAL CONFERENCE ON SOFTWARE DEVELOPMENT AND TECHNOLOGIES FOR ENHANCING ACCESSIBILITY AND FIGHTING INFO-EXCLUSION (DSAI 2018)

Abstract
Many cloud vision APIs are available on the internet to recognize emotions from facial images and video. The capacity to recognize emotions under various poses is a fundamental requirement in emotion recognition. In this paper, the performance of two widely used emotion recognition APIs is evaluated on facial images with various poses. The experiments were done with a public dataset containing 980 images for each of five poses (full left, half left, straight, half right, and full right), covering seven emotions (Anger, Afraid, Disgust, Happiness, Neutral, Sadness, Surprise). Overall recognition accuracy is best with Microsoft Azure for straight images, whereas face detection capability is better with Google. Microsoft failed to detect almost all of the images with full left and full right profiles, but Google detected almost all of them. The Microsoft API presents an average true positive rate of up to 60%, whereas Google's maximum true positive rate is 45.25%.
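
For context, a minimal sketch of querying Google's face-detection endpoint (Cloud Vision), one of the two services compared above, is shown below; the API key and file handling are placeholders, and the authors' actual evaluation harness is not described in the abstract. Note that this endpoint reports likelihoods only for joy, sorrow, anger, and surprise, not scores for all seven dataset emotions.

```python
import base64
import requests

API_KEY = "YOUR_API_KEY"  # placeholder credential
ENDPOINT = f"https://vision.googleapis.com/v1/images:annotate?key={API_KEY}"

def detect_emotions(image_path):
    """Send one face image to the Cloud Vision FACE_DETECTION feature and
    return the emotion likelihoods reported for the first detected face."""
    with open(image_path, "rb") as f:
        content = base64.b64encode(f.read()).decode("utf-8")
    body = {
        "requests": [{
            "image": {"content": content},
            "features": [{"type": "FACE_DETECTION", "maxResults": 1}],
        }]
    }
    response = requests.post(ENDPOINT, json=body).json()
    faces = response["responses"][0].get("faceAnnotations", [])
    if not faces:
        return None  # no face detected, e.g. an extreme profile pose
    face = faces[0]
    return {
        "joy": face["joyLikelihood"],
        "sorrow": face["sorrowLikelihood"],
        "anger": face["angerLikelihood"],
        "surprise": face["surpriseLikelihood"],
    }
```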

2019

Student concentration evaluation index in an E-learning context using facial emotion analysis

Authors
Sharma, P; Esengönül, M; Khanal, SR; Khanal, TT; Filipe, V; Reis, MJCS;

Publication
Communications in Computer and Information Science

Abstract
Analysis of student concentration can help enhance the learning process. Emotions are directly related to, and directly reflect, students' concentration. This task is particularly difficult in an e-learning environment, where the student sits alone in front of a computer. In this paper, a prototype system is proposed to estimate the concentration level in real time from the facial emotions expressed during a lesson. An experiment was performed to evaluate the prototype, which was implemented as a client-side C# application that uses the Microsoft Azure Emotion API. We found that the emotions expressed correlate with the students' concentration, and devised three distinct levels of concentration (high, medium, and low). © Springer Nature Switzerland AG 2019.
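
As a rough illustration of how per-frame emotion scores could be collapsed into the three concentration levels mentioned above, the sketch below applies a weighted sum followed by thresholds. The weights and thresholds are hypothetical assumptions for illustration; the abstract does not specify the mapping the authors used.

```python
# Hypothetical mapping from emotion scores to the three concentration levels;
# weights and thresholds are illustrative assumptions, not the authors' values.
EMOTION_WEIGHTS = {
    "neutral": 0.9,
    "happiness": 0.6,
    "surprise": 0.4,
    "sadness": 0.3,
    "anger": 0.2,
    "fear": 0.2,
    "disgust": 0.1,
}

def concentration_level(emotion_scores: dict) -> str:
    """Collapse emotion scores (summing to ~1) into high/medium/low concentration."""
    index = sum(EMOTION_WEIGHTS.get(e, 0.0) * s for e, s in emotion_scores.items())
    if index >= 0.7:
        return "high"
    if index >= 0.4:
        return "medium"
    return "low"

# Example: a mostly neutral face with a little happiness maps to "high".
print(concentration_level({"neutral": 0.8, "happiness": 0.15, "surprise": 0.05}))
```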

2019

Classification of Physical Exercise Intensity Based on Facial Expression Using Deep Neural Network

Authors
Khanal, SR; Sampaio, J; Barroso, J; Filipe, V;

Publication
Universal Access in Human-Computer Interaction. Multimodality and Assistive Environments - 13th International Conference, UAHCI 2019, Held as Part of the 21st HCI International Conference, HCII 2019, Orlando, FL, USA, July 26-31, 2019, Proceedings, Part II

Abstract
If done properly, physical exercise can help maintain fitness and health. The benefits of physical exercise could be increased with real-time monitoring by measuring physical exercise intensity, which refers to how hard it is for a person to perform a specific task. This parameter can be estimated using various sensors, including contactless technology. Physical exercise intensity is usually synchronous with heart rate; therefore, if we measure heart rate, we can define a particular level of physical exercise. In this paper, we propose a Convolutional Neural Network (CNN) to classify physical exercise intensity based on the analysis of facial images extracted from a video collected during sub-maximal exercise on a stationary bicycle, according to a standard protocol. The time slots of the video used to extract the frames were determined by heart rate. We tested different CNN models using the individual color components and grayscale images as input. The experiments were carried out separately with various numbers of classes, with the ground-truth level for each class defined by heart rate; the dataset was prepared to classify the physical exercise intensity into two, three, and four classes. For each color model a CNN was trained and tested, and the model performance was reported using a confusion matrix for each case. The most significant color channel in terms of accuracy was green. The average model accuracy was 100%, 99%, and 96% for two-, three-, and four-class classification, respectively. © 2019, Springer Nature Switzerland AG.
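
A minimal CNN sketch in Keras for classifying single-channel facial frames into intensity classes is shown below. The layer sizes, input resolution, and training settings are assumptions for illustration and do not reproduce the architecture reported in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 3            # the paper evaluates 2-, 3-, and 4-class variants
INPUT_SHAPE = (64, 64, 1)  # one color channel per model (e.g. the green channel)

# Small illustrative CNN: two conv/pool stages followed by a dense classifier.
model = models.Sequential([
    layers.Input(shape=INPUT_SHAPE),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(frames, labels, epochs=10)  # labels derived from heart-rate zones
```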

2019

"Express Your Feelings": An Interactive Application for Autistic Patients

Authors
Sharma, P; Upadhaya, MD; Twanabasu, A; Barroso, J; Khanal, SR; Paredes, H;

Publication
Universal Access in Human-Computer Interaction. Multimodality and Assistive Environments - 13th International Conference, UAHCI 2019, Held as Part of the 21st HCI International Conference, HCII 2019, Orlando, FL, USA, July 26-31, 2019, Proceedings, Part II

Abstract
Much effort is put into Information Technology (IT) to improve the efficiency and quality of communication between autistic children and their surroundings. This paper presents an application that aims to help autistic children interact with their loved ones and express their feelings to them in an easy manner. The major objective of the project is to connect autistic children with their family and friends by providing tools that make it easy to express their feelings and emotions. To accomplish this goal, an Android app has been developed through which an autistic child can express emotions using emoji; the child's emotions are shared by sending the emoji to their relatives. The project aims for a high impact within the autistic community by providing a mechanism to share emotions in an "emotionless world". The project was developed under Sustainable Development Goal (SDG) 3, good health and well-being, by making a meaningful impact on the lives of autistic children. © 2019, Springer Nature Switzerland AG.
