2022
Authors
da Silva, DEM; Filipe, V; Franco-Goncalo, P; Colaco, B; Alves-Pimenta, S; Ginja, M; Goncalves, L;
Publication
INTELLIGENT SYSTEMS DESIGN AND APPLICATIONS, ISDA 2021
Abstract
Hip dysplasia is a genetic disease that causes laxity of the hip joint and is one of the most common skeletal diseases found in dogs. Diagnosis is performed through X-ray analysis by a specialist, and the only way to reduce the incidence of this condition is through selective breeding. Thus, there is a need for an automated tool that can assist the specialist in diagnosis. In this article, our objective is to develop models that allow segmentation of the femur and acetabulum, serving as a foundation for future solutions for the automated detection of hip dysplasia. The studied models present state-of-the-art results, reaching Dice scores of 0.98 for the femur and 0.93 for the acetabulum.
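The Dice scores reported above compare a predicted segmentation mask with the ground-truth annotation. A minimal sketch of how such a score is computed (the toy masks below are illustrative, not taken from the study):

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Toy example: two overlapping 4x4 square masks on a 10x10 image
a = np.zeros((10, 10), dtype=bool); a[2:6, 2:6] = True   # 16 pixels
b = np.zeros((10, 10), dtype=bool); b[3:7, 3:7] = True   # 16 pixels, 9 overlap
print(round(dice_score(a, b), 4))  # → 0.5625
```

A Dice score of 0.98 thus means the predicted femur mask and the annotation overlap almost perfectly.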
2022
Authors
Filipe, V; Teixeira, P; Teixeira, A;
Publication
ALGORITHMS
Abstract
Diabetic foot is one of the main complications observed in diabetic patients; it is associated with the development of foot ulcers and can lead to amputation. In order to diagnose these complications, specialists have to analyze several factors. To aid their decisions and help prevent mistakes, the use of computer-assisted diagnostic systems based on artificial intelligence techniques is gradually increasing. In this paper, two different models for the classification of thermograms of the feet of diabetic and healthy individuals are proposed and compared. In both models, machine learning algorithms are used to detect and classify abnormal changes in plantar temperature. In the first model, the foot thermograms are classified into four classes: healthy and three categories for diabetics. The second model has two stages: in the first stage, the foot is classified as belonging to a diabetic or a healthy individual, while in the second stage a classification refinement is conducted, dividing the diabetic foot into three classes of progressive severity. The results show that both proposed models are effective, allowing a foot thermogram to be classified as belonging to a healthy or diabetic individual, with the diabetic ones divided into three classes; however, Model 2 outperforms Model 1, achieving better classification performance for the healthy category and the first class of diabetic individuals. These results demonstrate that the proposed methodology can be a tool to aid medical diagnosis.
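The two-stage structure of Model 2 can be sketched as a simple cascade. The mean-temperature feature and the thresholds below are purely illustrative stand-ins; the paper uses machine learning classifiers on thermogram features, not these heuristics:

```python
# Hypothetical two-stage cascade mirroring Model 2: stage 1 separates
# healthy from diabetic thermograms, stage 2 grades diabetic severity.

def stage1_is_diabetic(mean_temp: float, threshold: float = 27.0) -> bool:
    # Elevated plantar temperature serves as a stand-in feature here.
    return mean_temp >= threshold

def stage2_severity(mean_temp: float) -> int:
    # Map increasingly abnormal temperature to severity classes 1-3.
    if mean_temp < 29.0:
        return 1
    if mean_temp < 31.0:
        return 2
    return 3

def classify(mean_temp: float) -> str:
    if not stage1_is_diabetic(mean_temp):
        return "healthy"
    return f"diabetic-{stage2_severity(mean_temp)}"

for t in (25.5, 28.0, 30.2, 32.1):
    print(t, "->", classify(t))
```

The design point is that the second stage is only reached when the first stage has already ruled out the healthy class, which is what lets Model 2 refine the diabetic categories separately.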
2022
Authors
Khanal, SR; Sampaio, J; Exel, J; Barroso, J; Filipe, V;
Publication
JOURNAL OF IMAGING
Abstract
Current technological advances have pushed the quantification of exercise intensity into a new era of physical exercise sciences. Monitoring physical exercise is essential in the process of planning, applying, and controlling loads for performance optimization and health. Many research studies have applied statistical approaches to estimate various physiological indices; to our knowledge, however, no study has investigated the relationship between facial color changes and increasing exercise intensity. The aim of this study was to develop a non-contact method based on computer vision to determine the heart rate and, ultimately, the exercise intensity. The method was based on analyzing facial color changes during exercise using the RGB, HSV, YCbCr, Lab, and YUV color models. Nine university students participated in the study (mean age = 26.88 +/- 6.01 years, mean weight = 72.56 +/- 14.27 kg, mean height = 172.88 +/- 12.04 cm; six males and three females, all white Caucasian). The data analyses were carried out separately for each participant (personalized model) as well as for all participants at once (universal model). Multiple autoregression models and a multiple polynomial regression model were designed to predict the maximum heart rate percentage (maxHR%) from each color model. The results were analyzed and evaluated using the root mean square error (RMSE), F-values, and R-squared. The multiple polynomial regression using all participants exhibited the best accuracy, with an RMSE of 6.75 (R-squared = 0.78). Exercise prescription and monitoring can benefit from these methods, for example, to optimize online monitoring without the need for any other instrumentation.
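The core regression step, fitting a polynomial that maps a facial color feature to maxHR% and scoring it with RMSE and R-squared, can be sketched as follows. The data below are synthetic and the single red-channel feature is an assumption for illustration; the study fits multivariate models over several color spaces:

```python
import numpy as np

# Synthetic data: a color feature (mean red-channel intensity) with a
# quadratic relationship to maxHR%, plus Gaussian noise.
rng = np.random.default_rng(0)
red_mean = np.linspace(120, 180, 40)                       # color feature
max_hr_pct = 0.01 * (red_mean - 120) ** 2 + 40 + rng.normal(0, 2, 40)

coeffs = np.polyfit(red_mean, max_hr_pct, deg=2)           # polynomial fit
pred = np.polyval(coeffs, red_mean)

rmse = np.sqrt(np.mean((max_hr_pct - pred) ** 2))          # fit quality
ss_res = np.sum((max_hr_pct - pred) ** 2)
ss_tot = np.sum((max_hr_pct - max_hr_pct.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"RMSE={rmse:.2f}, R2={r2:.3f}")
```

An RMSE of 6.75 in maxHR% units, as reported for the universal model, means the predicted intensity is typically within about seven percentage points of the true value.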
2022
Authors
Franco Goncalo, P; da Silva, DM; Leite, P; Alves Pimenta, S; Colaco, B; Ferreira, M; Goncalves, L; Filipe, V; McEvoy, F; Ginja, M;
Publication
ANIMALS
Abstract
Simple Summary Radiographic diagnosis is essential for the genetic control of canine hip dysplasia (HD). The Federation Cynologique Internationale (FCI) HD scoring scheme is based on objective and qualitative radiographic criteria. Subjective interpretations can lead to errors in diagnosis and, consequently, to incorrect selective breeding, which in turn impacts the gene pool of dog breeds. The aim of this study was to use a computer method to calculate the Hip Congruency Index (HCI) to objectively estimate radiographic hip congruency, for future application in the development of computer vision models capable of classifying canine HD. The HCI measures the percentage of acetabular coverage that is occupied by the femoral head. Normal hips are associated with an even, parallel joint surface that translates into reduced acetabular free space, which increases with hip subluxation and becomes maximal in hip dislocation. We found statistically significant differences in mean HCI values among all five FCI categories. These results demonstrate that the HCI reliably reflects the different degrees of congruency associated with HD. Therefore, it is expected that, when used in conjunction with other HD evaluation parameters such as the Norberg angle and the assessment of osteoarthritic signs, it can improve the diagnosis by making it more accurate and unequivocal. Accurate radiographic screening evaluation is essential in the genetic control of canine HD; however, the qualitative assessment of hip congruency introduces some subjectivity, leading to excessive variability in scoring. The main objective of this work was to validate a method, the Hip Congruency Index (HCI), capable of objectively measuring the relationship between the acetabulum and the femoral head and associating it with the level of congruency proposed by the Federation Cynologique Internationale (FCI), with the aim of incorporating it into a computer vision model that classifies HD autonomously.
A total of 200 dogs (400 hips) were randomly selected for the study. All radiographs were scored in five categories by an experienced examiner according to FCI criteria. Two examiners performed HCI measurements on 25 hip radiographs to study intra- and inter-examiner reliability and agreement. Additionally, each examiner measured HCI on their half of the study sample (100 dogs), and the results were compared between FCI categories. The paired t-test and the intraclass correlation coefficient (ICC) showed no evidence of systematic bias, and there was excellent reliability both between the two examiners' measurements and between measurement sessions. Hips that were assigned an FCI grade of A (n = 120), B (n = 157), C (n = 68), D (n = 38) and E (n = 17) had a mean HCI of 0.739 +/- 0.044, 0.666 +/- 0.052, 0.605 +/- 0.055, 0.494 +/- 0.070 and 0.374 +/- 0.122, respectively (ANOVA, p < 0.01). Therefore, these results show that the HCI is a parameter capable of estimating hip congruency and has the potential to enrich conventional HD scoring criteria if incorporated into an artificial intelligence algorithm competent in diagnosing HD.
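The ANOVA comparison of mean HCI across the five FCI grades can be sketched with a hand-rolled one-way F statistic. The per-grade samples below are synthetic draws centred on the group means and standard deviations reported above, not the study's actual measurements:

```python
import numpy as np

# Synthetic HCI samples per FCI grade, matching the reported group
# means, standard deviations, and sample sizes.
rng = np.random.default_rng(1)
groups = {
    "A": rng.normal(0.739, 0.044, 120),
    "B": rng.normal(0.666, 0.052, 157),
    "C": rng.normal(0.605, 0.055, 68),
    "D": rng.normal(0.494, 0.070, 38),
    "E": rng.normal(0.374, 0.122, 17),
}

def one_way_anova_f(samples):
    """F statistic: between-group mean square over within-group mean square."""
    all_values = np.concatenate(samples)
    grand_mean = all_values.mean()
    k, n = len(samples), all_values.size
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in samples)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in samples)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

f_stat = one_way_anova_f(list(groups.values()))
print(f"F = {f_stat:.1f}")  # a large F means the grade means differ
```

With group means spread from 0.374 to 0.739 against within-group spreads of roughly 0.05-0.12, the F statistic is very large, which is consistent with the reported p < 0.01.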
2022
Authors
Capela, S; Pereira, V; Duque, J; Filipe, V;
Publication
Procedia Computer Science
Abstract
Nowadays, social networks are one of the main channels for sharing real-time information. These networks have several groups focused on sharing information about road incidents and other traffic events. The work presented here aims to create an AI model capable of identifying publications related to traffic events on a specific road, based on posts shared on social networks. A predictive model was obtained by training a deep learning model for the detection of publications related to road incidents, with an average accuracy of 95%. The model, deployed as a service, is already fully functional and operates 24/7 while awaiting final integration with the road management system of a company, where it will be used to support the Control Center team in decision making. © 2022 Elsevier B.V. All rights reserved.
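The task of flagging incident-related posts can be illustrated with a trivial keyword baseline. This heuristic, and the term list in it, are illustrative assumptions only; the paper trains a deep learning classifier, which is what achieves the reported 95% accuracy:

```python
# A minimal keyword baseline for flagging road-incident posts.
INCIDENT_TERMS = {"accident", "crash", "collision", "queue", "roadworks", "closed"}

def is_incident_post(text: str) -> bool:
    # Tokenize naively, strip punctuation, and check for incident vocabulary.
    tokens = {t.strip(".,!?").lower() for t in text.split()}
    return bool(tokens & INCIDENT_TERMS)

posts = [
    "Major crash on the A24 near the toll, both lanes closed",
    "Beautiful sunset over the bridge tonight",
]
print([is_incident_post(p) for p in posts])  # → [True, False]
```

A learned model replaces the fixed vocabulary with representations inferred from labelled posts, which is what lets it generalize to phrasings no keyword list anticipates.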
2022
Authors
da Silva, DQ; dos Santos, FN; Filipe, V; Sousa, AJ; Oliveira, PM;
Publication
ROBOTICS
Abstract
Object identification, such as tree trunk detection, is fundamental for forest robotics. Intelligent vision systems are of paramount importance for improving robotic perception, thus enhancing the autonomy of forest robots. To that end, this paper presents three contributions: an open dataset of 5325 annotated forest images; an Edge AI tree trunk detection benchmark of 13 deep learning models evaluated on four edge devices (CPU, TPU, GPU and VPU); and a tree trunk mapping experiment using an OAK-D as the sensing device. The results showed that YOLOR was the most reliable trunk detector, achieving a maximum F1 score of around 90% while maintaining high scores at different confidence levels; in terms of inference time, YOLOv4 Tiny was the fastest model, attaining 1.93 ms on the GPU. YOLOv7 Tiny presented the best trade-off between detection accuracy and speed, with average inference times under 4 ms on the GPU across different input resolutions, while achieving an F1 score similar to YOLOR. This work will enable the development of advanced artificial vision systems for robotics in forestry monitoring operations.
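The F1 score used to rank the trunk detectors combines precision and recall from detection counts. A short sketch, with made-up true-positive/false-positive/false-negative counts for illustration:

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """Harmonic mean of precision and recall from detection counts."""
    precision = tp / (tp + fp)  # fraction of detections that are real trunks
    recall = tp / (tp + fn)     # fraction of real trunks that were detected
    return 2 * precision * recall / (precision + recall)

# e.g. 450 trunks correctly found, 50 false alarms, 50 trunks missed
print(round(f1_score(450, 50, 50), 2))  # → 0.9
```

Because F1 penalizes both false alarms and misses, a detector like YOLOR that holds an F1 near 90% across confidence thresholds is reliable in both respects.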