2025
Authors
Aguiar, JM; da Silva, JM; Fonseca, C; Marinho, J;
Publication
SENSORS
Abstract
Trigeminal somatosensory-evoked potentials (TSEPs) provide valuable insight into neural responses to oral stimuli. This study investigates TSEP recording methods and their impact on the interpretation of results in clinical settings, with the goal of improving the development of neurostimulation-based therapies. The experiments and results presented here aim to identify appropriate stimulation characteristics for the design of an active dental prosthesis capable of helping restore the lost neurosensitive connection between the teeth and the brain. Two methods of TSEP acquisition, traditional and occluded, were used, each with a different volunteer. Traditional TSEP acquisition involves stimulation at different sites with varying parameters to establish a control baseline. In contrast, occluded TSEPs examine responses acquired under low- and high-force bite conditions to assess the influence of periodontal mechanoreceptors and muscle activation on the measurements. Traditional TSEPs demonstrated methodological feasibility with satisfactory results despite a limited subject pool. Occluded TSEPs, however, presented challenges in interpretation, with responses deviating from expected norms, particularly under high-force conditions, owing to the simultaneous occurrence of stimulation and dental occlusion. While traditional TSEPs highlight methodological feasibility, the occluded approach exposes complexities in outcome interpretation and urges caution in clinical application. Previously unreported results were obtained, underscoring the importance of further research with larger sample sizes and refined protocols to strengthen the reliability and validity of TSEP assessments.
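The abstract does not detail the signal-processing pipeline, but evoked potentials such as TSEPs are conventionally recovered by stimulus-locked epoch averaging of the EEG, which suppresses activity not phase-locked to the stimulus. The sketch below illustrates only that general idea; the function name, window lengths, sampling rate, and synthetic data are illustrative assumptions, not the authors' protocol.

```python
import numpy as np

def average_evoked_response(eeg, stim_onsets, fs, pre_s=0.05, post_s=0.3):
    """Stimulus-locked averaging: cut epochs around each stimulus onset,
    baseline-correct on the pre-stimulus window, and average."""
    pre, post = int(pre_s * fs), int(post_s * fs)
    epochs = []
    for onset in stim_onsets:
        if onset - pre < 0 or onset + post > len(eeg):
            continue  # skip epochs that run past the recording edges
        epoch = eeg[onset - pre : onset + post].astype(float)
        epoch -= epoch[:pre].mean()  # remove the pre-stimulus baseline
        epochs.append(epoch)
    return np.mean(epochs, axis=0), np.arange(-pre, post) / fs

# Usage: 2 s of synthetic single-channel EEG at 1 kHz, stimuli every 400 ms.
fs = 1000
eeg = np.random.randn(2 * fs) * 5.0
onsets = np.arange(200, 1800, 400)
evoked, t = average_evoked_response(eeg, onsets, fs)
```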
2025
Authors
Guimaraes, V; Sousa, I; Cunha, R; Magalhaes, R; Machado, A; Fernandes, V; Reis, S; Correia, MV;
Publication
COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE
Abstract
Background and Objectives: Early detection of cognitive impairment is crucial for timely clinical interventions aimed at delaying progression to dementia. However, existing screening tools are not ideal for wide population screening. This study explores the potential of combining machine learning, specifically one-class classification, with simpler and quicker motor-cognitive tasks to improve the early detection of cognitive impairment. Methods: We gathered data on gait, finger-tapping, cognitive, and dual tasks from older adults with mild cognitive impairment and from healthy controls. Using one-class classification, we modeled the behavior of the majority group (healthy controls), identifying deviations from this behavior as abnormal. To account for confounding effects, we integrated confound regression into the classification pipeline. We evaluated the performance of individual tasks, as well as the combination of features (early fusion) and of models (late fusion). Additionally, we compared the results with those from two-class classification and from a standard cognitive screening test. Results: We analyzed data from 37 healthy controls and 16 individuals with mild cognitive impairment. One-class classification had higher predictive accuracy for mild cognitive impairment, whereas two-class classification performed better at identifying healthy controls. Gait features yielded the best results for one-class classification. Combining individual models led to better performance than combining features from the different tasks. Notably, the one-class majority-voting approach exhibited a sensitivity of 87.5% and a specificity of 75.7%, suggesting it may serve as an alternative to the standard cognitive screening test. In contrast, two-class majority voting failed to improve the low sensitivities achieved by the individual models due to the underrepresentation of the impaired group. Conclusion: Our preliminary results support the use of one-class classification with confound control to detect abnormal patterns in gait, finger-tapping, cognitive, and dual tasks, improving the early detection of cognitive impairment. Further research is necessary to substantiate the method's effectiveness in broader clinical settings.
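As a rough illustration of the pipeline described above, the sketch below fits a confound-regression model and a one-class classifier on healthy controls only, then flags deviations in the full sample. The feature dimensions, the choice of age as confound, the OneClassSVM model, and all numbers are assumptions for illustration; the abstract does not specify the authors' exact models.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

# Synthetic stand-in data: X = task features, C = confound (age),
# y: 1 = healthy control (HC), 0 = mild cognitive impairment (MCI).
rng = np.random.default_rng(0)
n_hc, n_mci = 37, 16
C = np.vstack([rng.normal(70, 5, (n_hc, 1)), rng.normal(74, 5, (n_mci, 1))])
X = 0.1 * C + np.vstack([rng.normal(0.0, 1, (n_hc, 3)),
                         rng.normal(1.0, 1, (n_mci, 3))])
y = np.r_[np.ones(n_hc), np.zeros(n_mci)]

hc = y == 1
deconf = LinearRegression().fit(C[hc], X[hc])  # confound model, controls only
R = X - deconf.predict(C)                      # residualized features
scaler = StandardScaler().fit(R[hc])
clf = OneClassSVM(nu=0.2, gamma="scale").fit(scaler.transform(R[hc]))
pred = clf.predict(scaler.transform(R))        # +1 = control-like, -1 = abnormal
```

Fitting the confound regression on the majority class alone keeps the MCI group from leaking into the "normal" model, which is the point of the one-class setup.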
2025
Authors
Latif, I; Ashraf, MM; Haider, U; Reeves, G; Untaroiu, A; Coelho, F; Browne, D;
Publication
IEEE TRANSACTIONS ON CLOUD COMPUTING
Abstract
The growth of cloud computing, Big Data, AI, and high-performance computing (HPC) necessitates the deployment of additional data centers (DCs) with high energy demands. The unprecedented increase in the Thermal Design Power (TDP) of computing chips will require innovative cooling techniques. Furthermore, DCs are increasingly limited in their ability to add powerful GPU servers by power-capacity constraints. As cooling accounts for up to 40% of DC energy consumption, creative cooling solutions are urgently needed to allow the deployment of additional servers, enhance sustainability, and increase the energy efficiency of DCs. The information in this study is provided by Start Campus' Sines facility, supported by Alfa Laval for the heat-exchanger and CO2 emission calculations. The study evaluates the performance and sustainability impact of various data center cooling strategies, including an air-only deployment and a subsequent hybrid air/water cooling solution, all utilizing seawater as the cooling source. We evaluate scenarios from 3 MW to 15+1 MW of IT load in 3 MW increments, which correspond to the size of the heat exchangers used in Start Campus' modular system design. The study also compares the CO2 emissions of all presented scenarios against a conventional chiller system. Results indicate that the effective use of the seawater-cooled system, combined with liquid-cooled systems, improves the efficiency of the DC, plays a role in decreasing CO2 emissions, and supports the achievement of sustainability goals.
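The abstract reports CO2 comparisons against a conventional chiller system without giving the underlying figures. Purely as a back-of-the-envelope illustration of that kind of calculation, the sketch below compares annual cooling CO2 for a chiller-like overhead (up to 40% of DC energy, per the abstract) against an assumed low-overhead seawater loop; the grid carbon intensity and the 5% seawater overhead are invented assumptions, not Start Campus or Alfa Laval figures.

```python
# Illustrative only, not the paper's data.
HOURS_PER_YEAR = 8760
GRID_KGCO2_PER_KWH = 0.15  # assumed grid carbon intensity

def annual_cooling_co2(it_load_mw, cooling_overhead):
    """cooling_overhead: cooling kW per kW of IT load (e.g., 0.40 for a chiller).
    Returns tonnes of CO2 per year attributable to cooling energy."""
    cooling_mwh = it_load_mw * cooling_overhead * HOURS_PER_YEAR
    return cooling_mwh * 1000 * GRID_KGCO2_PER_KWH / 1000  # kWh -> kg -> tonnes

for it_mw in (3, 6, 9, 12, 15):
    chiller = annual_cooling_co2(it_mw, 0.40)   # chiller-like overhead
    seawater = annual_cooling_co2(it_mw, 0.05)  # assumed pump-only overhead
    print(f"{it_mw:2d} MW IT: chiller {chiller:8.0f} t/yr, "
          f"seawater {seawater:7.0f} t/yr, saved {chiller - seawater:7.0f} t/yr")
```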
2025
Authors
Costa, L; Barbosa, S; Cunha, J;
Publication
CoRR
Abstract
2025
Authors
Freitas, T; Novo, C; Dutra, I; Soares, J; Correia, ME; Shariati, B; Martins, R;
Publication
SOFTWARE-PRACTICE & EXPERIENCE
Abstract
Background: Intrusion Tolerant Systems (ITS) aim to maintain system security despite adversarial presence by limiting the impact of successful attacks. Current ITS risk managers rely heavily on public databases like NVD and Exploit DB, which suffer from long delays in vulnerability evaluation, reducing system responsiveness. Objective: This work extends the HAL 9000 Risk Manager to integrate additional real-time threat intelligence sources and employ machine learning techniques to automatically predict and reassess vulnerability risk scores, addressing limitations of existing solutions. Methods: A custom-built scraper collects diverse cybersecurity data from multiple Open Source Intelligence (OSINT) platforms, such as NVD, CVE, AlienVault OTX, and OSV. HAL 9000 uses machine learning models for CVE score prediction, vulnerability clustering through scalable algorithms, and reassessment incorporating exploit likelihood and patch availability to dynamically evaluate system configurations. Results: Integration of the newly scraped data significantly enhances the risk management capabilities, enabling faster detection and mitigation of emerging vulnerabilities with improved resilience and security. Experiments show that HAL 9000 provides lower-risk and more resilient configurations than prior methods while maintaining scalability and automation. Conclusions: The proposed enhancements position HAL 9000 as a next-generation autonomous Risk Manager capable of effectively incorporating diverse intelligence sources and machine learning to improve ITS security posture in dynamic threat environments. Future work includes expanding data sources, addressing misinformation risks, and real-world deployments.
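The abstract names machine-learning CVE score prediction without specifying the model. One common baseline for scoring vulnerabilities that lack an official evaluation is text regression over their descriptions; the toy sketch below (TF-IDF features plus ridge regression, with invented descriptions and scores) is only a plausible stand-in for that step, not HAL 9000's actual pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Toy training set: CVE descriptions paired with known CVSS base scores.
descriptions = [
    "remote code execution via crafted HTTP request",
    "stack buffer overflow allows arbitrary code execution",
    "cross-site scripting in admin panel input field",
    "information disclosure through verbose error messages",
]
cvss_scores = [9.8, 8.8, 6.1, 5.3]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge(alpha=1.0))
model.fit(descriptions, cvss_scores)

# Score a vulnerability that has no public evaluation yet.
predicted = model.predict(["heap overflow in parser enables remote code execution"])[0]
print(f"predicted CVSS: {predicted:.1f}")
```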
2025
Authors
Pires, PB; Santos, JD; Torres, AI;
Publication
Advances in Computational Intelligence and Robotics - Adapting Global Communication and Marketing Strategies to Generative AI
Abstract
This chapter examines how GenAI and predictive modelling strategies affect hyper-personalised marketing. Through a comprehensive literature review and case studies, it explores hyper-personalisation's theoretical frameworks, technical infrastructures, and ethical and governance issues. Large language models, generative adversarial networks, and diffusion models combined with advanced predictive analytics allow firms to scale real-time, highly individualised customer experiences. Effective implementation requires sophisticated data architectures, algorithmic transparency, and strong privacy protections. Integration complexity and ethical accountability are major barriers to consumer engagement and conversion, according to the research. Based on these findings, the chapter proposes an integrated framework that combines technological innovation with ethics and customer focus. This research advances marketing theory and provides practical advice for companies using AI-driven hyper-personalisation while maintaining consumer trust and regulatory compliance. © 2026, IGI Global Scientific Publishing. All rights reserved.