2024
Authors
Queirós, R;
Publication
Communications in Computer and Information Science
Abstract
This paper introduces GERF, a Gamified Educational Virtual Escape Room Framework designed to enhance micro-learning and adaptive learning experiences in educational settings. The framework incorporates a user taxonomy based on the user type hexad, addressing the preferences and motivations of different learner profiles. GERF focuses on two key facets: interoperability and analytics. To ensure seamless integration of Escape Room (ER) platforms with Learning Management Systems (LMS), the Learning Tools Interoperability (LTI) specification is used. This enables smooth and efficient communication between ERs and LMS platforms. Additionally, GERF uses the xAPI specification to capture and transmit experiential data in the form of xAPI statements, which are then sent to a Learning Record Store (LRS). By leveraging these learning analytics, educators gain valuable insights into students' interactions within the ER, facilitating the adaptation of learning content to individual learning needs. Ultimately, GERF empowers educators to create personalized learning experiences within the ER environment, fostering student engagement and improving learning outcomes. © 2024, The Author(s), under exclusive license to Springer Nature Switzerland AG.
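As a rough illustration of the analytics facet described in the abstract, the sketch below builds a minimal xAPI statement for an escape-room interaction. The actor, activity, and puzzle identifiers are hypothetical; only the statement structure (actor/verb/object) and the `completed` verb URI follow the xAPI specification's standard vocabulary. A real deployment would POST this JSON to the LRS's statements endpoint.

```python
import json

def build_xapi_statement(learner_email, verb_id, verb_name, puzzle_id, puzzle_name):
    """Build a minimal xAPI statement describing an escape-room interaction."""
    return {
        "actor": {"mbox": f"mailto:{learner_email}", "objectType": "Agent"},
        "verb": {"id": verb_id, "display": {"en-US": verb_name}},
        "object": {
            "id": puzzle_id,  # hypothetical activity IRI
            "definition": {"name": {"en-US": puzzle_name}},
            "objectType": "Activity",
        },
    }

statement = build_xapi_statement(
    "learner@example.org",
    "http://adlnet.gov/expapi/verbs/completed", "completed",
    "https://example.org/er/puzzle/3", "Cipher Puzzle 3",
)
print(json.dumps(statement, indent=2))
```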
2024
Authors
Nakayama, LF; Matos, J; Quion, J; Novaes, F; Mitchell, WG; Mwavu, R; Hung, CJYJ; Santiago, APD; Phanphruk, W; Cardoso, JS; Celi, LA;
Publication
PLOS DIGITAL HEALTH
Abstract
Over the past two decades, exponential growth in data availability, computational power, and newly available modeling techniques has led to an expansion in interest, investment, and research in Artificial Intelligence (AI) applications. Ophthalmology is one of many fields that seek to benefit from AI given the advent of telemedicine screening programs and the use of ancillary imaging. However, before AI can be widely deployed, further work must be done to avoid the pitfalls within the AI lifecycle. This review article breaks down the AI lifecycle into seven steps (data collection; defining the model task; data preprocessing and labeling; model development; model evaluation and validation; deployment; and, finally, post-deployment evaluation, monitoring, and system recalibration) and delves into the risks for harm at each step and strategies for mitigating them.
2024
Authors
Santos, S; Saraiva, J; Ribeiro, F;
Publication
2024 ACM/IEEE INTERNATIONAL WORKSHOP ON AUTOMATED PROGRAM REPAIR, APR 2024
Abstract
This paper introduces a new method of Automated Program Repair that relies on a combination of the GPT-4 Large Language Model and automatic type checking of Haskell programs. This method identifies the source of a type error and asks GPT-4 to fix that specific portion of the program. Then, QuickCheck is used to automatically generate a large set of test cases to validate whether the generated repair behaves like the correct solution. Our publicly available experiments revealed a success rate of 88.5% under normal conditions. However, more detailed testing should be performed to evaluate this form of APR more accurately.
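The validation step of this pipeline rests on property-based testing: a candidate repair is accepted only if it agrees with expected behaviour on many randomly generated inputs. The paper uses Haskell's QuickCheck; the sketch below expresses the same idea in Python with a hand-rolled random tester, and the reference/candidate functions are toy stand-ins, not the paper's benchmarks.

```python
import random

def quickcheck_style_validate(candidate, reference, gen, trials=500):
    """Accept a candidate repair only if it matches the reference
    behaviour on many randomly generated inputs (the QuickCheck idea)."""
    for _ in range(trials):
        x = gen()
        if candidate(x) != reference(x):
            return False  # counterexample found: reject the repair
    return True

# Toy stand-ins: intended behaviour and two LLM-proposed repairs.
reference = lambda xs: sorted(xs)
candidate_ok = lambda xs: sorted(xs)            # behaviourally correct repair
candidate_bad = lambda xs: list(reversed(xs))   # type-checks, but wrong

gen = lambda: [random.randint(-50, 50) for _ in range(random.randint(0, 10))]
print(quickcheck_style_validate(candidate_ok, reference, gen))   # True
print(quickcheck_style_validate(candidate_bad, reference, gen))
```

Note that, as with QuickCheck itself, passing the random tests only gives probabilistic evidence of correctness; a buggy repair that coincides with the reference on the sampled inputs would slip through.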
2024
Authors
Coelho, F; Rodrigues, L; Mello, J; Villar, J; Bessa, R;
Publication
2024 20TH INTERNATIONAL CONFERENCE ON THE EUROPEAN ENERGY MARKET, EEM 2024
Abstract
This paper proposes an original framework for a flexibility-centric value chain and describes the pre-specification of the Grid Data and Business Network (GDBN), a digital platform to support the activities of the flexibility value chain. First, it outlines the structure of the value chain with the most important tasks and actors in each activity. Next, it describes the GDBN concept, including stakeholder engagement and the conceptual architecture. It presents the main GDBN services supporting the flexibility value chain, including matching consumers with assets and service providers; asset installation and operationalization to provide flexibility; services for energy communities; and services enabling consumers, aggregators, and distribution system operators to participate in flexibility markets. Finally, it details the workflow and life cycle management of this platform and discusses candidate business models that could support its implementation in real-life scenarios.
2024
Authors
Patricio, C; Teixeira, LF; Neves, JC;
Publication
IEEE INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING, ISBI 2024
Abstract
Concept-based models naturally lend themselves to the development of inherently interpretable skin lesion diagnosis, as medical experts make decisions based on a set of visual patterns of the lesion. Nevertheless, the development of these models depends on the existence of concept-annotated datasets, whose availability is scarce due to the specialized knowledge and expertise required in the annotation process. In this work, we show that vision-language models can be used to alleviate the dependence on a large number of concept-annotated samples. In particular, we propose an embedding learning strategy to adapt CLIP to the downstream task of skin lesion classification using concept-based descriptions as textual embeddings. Our experiments reveal that vision-language models not only attain better accuracy when using concepts as textual embeddings, but also require fewer concept-annotated samples to achieve performance comparable to approaches specifically devised for automatic concept generation.
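The core scoring mechanism in a CLIP-style concept approach is cosine similarity between an image embedding and the text embeddings of concept descriptions. The sketch below shows only that ranking step with tiny hand-made vectors standing in for CLIP's encoder outputs; the concept names and embedding values are illustrative, not from the paper.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def rank_concepts(image_emb, concept_embs):
    """Score each diagnostic concept against the image embedding and
    return concepts ranked by similarity (CLIP-style zero-shot scoring)."""
    scores = {name: cosine(image_emb, emb) for name, emb in concept_embs.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy stand-in embeddings; a real pipeline would use CLIP's image/text encoders.
image_emb = [0.9, 0.1, 0.3]
concept_embs = {
    "irregular pigment network": [0.8, 0.2, 0.4],
    "regular streaks": [0.1, 0.9, 0.2],
}
ranking = rank_concepts(image_emb, concept_embs)
print(ranking[0][0])  # most similar concept
```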
2024
Authors
Klein, LC; Chellal, AA; Grilo, V; Braun, J; Gonçalves, J; Pacheco, MF; Fernandes, FP; Monteiro, FC; Lima, J;
Publication
SENSORS
Abstract
The accurate measurement of joint angles during patient rehabilitation is crucial for informed decision making by physiotherapists. Presently, visual inspection is one of the prevalent methods for angle assessment. Although it may appear the most straightforward way to assess the angles, it is highly susceptible to estimation error. In light of this, this study investigates a new approach to angle calculation: a hybrid approach leveraging both a camera and LiDAR technology, merging image data with point cloud information. This method employs AI-driven techniques to identify the individual and their joints, using the point-cloud data for angle computation. The tests, considering different exercises with different perspectives and distances, showed a slight improvement compared to using YOLO v7 for angle calculation. However, the improvement comes with higher system costs than other image-based approaches, due to the need for equipment such as LiDAR, and with a loss of fluidity during exercise performance. Therefore, the cost-benefit of the proposed approach could be questionable. Nonetheless, the results hint at a promising field for further exploration and the potential viability of the proposed methodology.
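Once the joints are localized in 3D (here, from keypoints lifted into the point cloud), the joint angle itself is elementary geometry: the angle at the middle joint between the two limb-segment vectors. The sketch below shows that computation; the joint coordinates are made-up examples, not measurements from the study.

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b, in degrees, given 3D positions of three joints
    a-b-c (e.g. hip-knee-ankle), computed from the segment vectors b->a
    and b->c."""
    u = [a[i] - b[i] for i in range(3)]
    v = [c[i] - b[i] for i in range(3)]
    dot = sum(ui * vi for ui, vi in zip(u, v))
    nu = math.sqrt(sum(ui * ui for ui in u))
    nv = math.sqrt(sum(vi * vi for vi in v))
    # Clamp to [-1, 1] to guard against floating-point drift before acos.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (nu * nv)))))

# Collinear hip-knee-ankle (fully extended leg) gives 180 degrees.
print(joint_angle((0.0, 1.0, 2.0), (0.0, 0.5, 2.0), (0.0, 0.0, 2.0)))
# Perpendicular segments give 90 degrees.
print(joint_angle((1.0, 0.0, 0.0), (0.0, 0.0, 0.0), (0.0, 1.0, 0.0)))
```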