Publications

2024

ECP: Error-Aware, Cost-Effective and Proactive Network Slicing Framework

Authors
Aboeleneen, AE; Abdellatif, AA; Erbad, AM; Salem, AM;

Publication
IEEE Open Journal of the Communications Society

Abstract

2024

LiDAR-Based Unmanned Aerial Vehicle Offshore Wind Blade Inspection and Modeling

Authors
Oliveira, A; Dias, A; Santos, T; Rodrigues, P; Martins, A; Almeida, J;

Publication
DRONES

Abstract
The deployment of offshore wind turbines (WTs) has emerged as a pivotal strategy in the transition to renewable energy, offering significant potential for clean electricity generation. However, the operation and maintenance (O&M) of these structures present unique challenges due to their remote locations and harsh marine environments. For these reasons, it is fundamental to promote the development of autonomous solutions to monitor the health condition of the construction parts, preventing structural damage and accidents. This paper explores the application of Unmanned Aerial Vehicles (UAVs) in the inspection and maintenance of offshore wind turbines, introducing a new strategy for autonomous wind turbine inspection and a simulation environment for testing and training autonomous inspection techniques under a more realistic offshore scenario. Instead of relying on visual information to detect the WT parts during the inspection, this method proposes a three-dimensional (3D) light detection and ranging (LiDAR) approach that estimates the wind turbine pose (position, orientation, and blade configuration) and autonomously controls the UAV for a close inspection maneuver. The first tests were carried out mainly in a simulation framework, combining different WT poses (orientations, blade positions, and wind turbine movements), followed by a mixed-reality test in which a real vehicle performed a full inspection of a virtual wind turbine.

2024

Multibeam Multi-Frequency Characterization of Water Column Litter

Authors
Guedes, PA; Silva, H; Wang, S; Martins, A; Almeida, JM; Silva, E;

Publication
OCEANS 2024 - SINGAPORE

Abstract
This paper explores the potential use of acoustic imaging and of a multi-frequency multibeam echosounder (MBES) for monitoring marine litter in the water column. The main goal is to perform a test and validation setup, using both simulation and an actual experimental setup, to determine whether MBES data can detect marine litter in a water column image (WCI) and whether multi-frequency MBES data allows marine litter debris to be better distinguished and characterized in detection applications. Results using the simulated HoloOcean environment and actual marine litter data revealed the successful detection of objects commonly found in ocean litter hotspots at various ranges and frequencies, enabling the pursuit of novel means of automatic detection and classification in MBES WCI data using multi-frequency capabilities.

2024

Quality assessment of Low-cost retinal Videos for Glaucoma screening

Authors
Abay, SG; Lima, F; Geurts, L; Camara, J; Pedrosa, J; Cunha, A;

Publication
Procedia Computer Science

Abstract
Low-cost smartphone-compatible portable ophthalmoscopes can capture visuals of the patient's retina to screen for several ophthalmological diseases, such as glaucoma. The images captured have lower quality and resolution than those from standard retinography devices, but are sufficient for glaucoma screening. Short videos are captured to improve the chance of inspecting the eye properly; however, those videos may not always have enough quality for screening glaucoma, in which case the patient needs to repeat the examination later. In this paper, a method for automatic assessment of the quality of videos captured using the D-Eye lens is proposed and evaluated on a private dataset of 539 videos. Based on two methods developed for retina localization in the images/frames, the Circle Hough Transform method with a precision of 78.12% and the YOLOv7 method with a precision of 99.78%, the quality assessment method automatically decides on the quality of the video by measuring the number of good-quality frames in each video, according to the chosen threshold.
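The decision rule described in this abstract (accept a video when enough of its frames pass a per-frame quality check) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the per-frame retina localization (Circle Hough Transform or YOLOv7) is abstracted as a boolean predicate, and the function name and the fraction-based threshold semantics are assumptions.

```python
from typing import Callable, Iterable, Any

def video_quality_ok(frames: Iterable[Any],
                     frame_is_good: Callable[[Any], bool],
                     min_good_fraction: float = 0.5) -> bool:
    """Accept the video if the fraction of good-quality frames
    meets the chosen threshold.

    frames: the decoded video frames.
    frame_is_good: per-frame quality predicate (e.g. a retina
        detector such as Hough-circle or YOLOv7 localization).
    min_good_fraction: assumed threshold semantics; the paper only
        states that a threshold on good-quality frames is used.
    """
    frames = list(frames)
    if not frames:
        return False  # an empty video cannot be screened
    good = sum(1 for f in frames if frame_is_good(f))
    return good / len(frames) >= min_good_fraction
```

With a fast per-frame detector, this check can run on-device right after capture, so the patient can be asked to repeat the recording immediately rather than at a later appointment.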

2024

A Comprehensive Examination of User Experience in AI-Based Symptom Checker Chatbots

Authors
Ferreira, MC; Veloso, M; Tavares, JMRS;

Publication
DECISION SUPPORT SYSTEMS XIV: HUMAN-CENTRIC GROUP DECISION, NEGOTIATION AND DECISION SUPPORT SYSTEMS FOR SOCIETAL TRANSITIONS, ICDSST 2024

Abstract
Recent advancements in digital technology have significantly impacted healthcare, with the rise of chatbots as a promising avenue for healthcare services. These chatbots aim to provide prevention, diagnosis, and treatment services, thereby reducing the workload on medical professionals. Despite this trend, limited research has explored the variables influencing user experience in the design of healthcare chatbots. While the impact of visual representation within chatbot systems is recognized, existing studies have primarily focused on efficiency and accuracy, neglecting graphical interfaces and non-verbal visual communication tools. This research delves into user experience aspects of symptom checker chatbots, including identity design, interface layout, and visual communication mechanisms. Data was collected through a comprehensive questionnaire involving three distinct chatbots (Healthily, Mediktor, and Adele, a self-developed solution) and underwent meticulous analysis, yielding valuable insights to aid the decision process when designing effective chatbots for symptom checking.

2024

ODL: Opportunistic Distributed Learning for Intelligent IoT Systems

Authors
Abdellatif, AA; Khial, N; Helmy, M; Mohamed, A; Erbad, A; Shaban, K;

Publication
IEEE Internet of Things Magazine

Abstract
