2023
Authors
De Araujo Pistono, AMA; Santos, AMP; Baptista, RJV;
Publication
Iberian Conference on Information Systems and Technologies, CISTI
Abstract
Games with purposes beyond entertainment, the so-called serious games, have proven useful in professional training, especially for engaging participants. However, their evaluation, as well as their adaptability to different scenarios, audiences and contexts, remains a challenge. This paper examines the application of serious games in professional training, the results they achieve and the ways they can be adapted to specific goals. Using the Design Science Research (DSR) methodology, a framework was built to develop and evaluate serious games so as to improve user experience, learning outcomes, knowledge transfer to work situations, and the application of the skills practised in the game in real professional settings. At this stage, the investigation presents a framework derived from the triangulation of data collected through a systematic literature review, focus groups and interviews. Following the DSR methodology, the next steps of this investigation, listed at the end of the paper, are the demonstration of the framework in serious game development and the evaluation and validation of the resulting artefact. © 2023 ITMA.
2023
Authors
Costa, C; Ferreira, CA;
Publication
Intelligent Data Engineering and Automated Learning - IDEAL 2023 - 24th International Conference, Évora, Portugal, November 22-24, 2023, Proceedings
Abstract
Paint bases are the essence of the color palette, allowing a wide range of tones to be created by combining them in different proportions. In this paper, an Artificial Neural Network incorporating a pre-trained Decoder is developed to predict the proportion of each paint base in an ink mixture needed to achieve a desired color. Color coordinates in the CIELAB space and the final finish are used as input parameters. The proposed model is compared with commonly used models such as Linear Regression, Random Forest and a plain Artificial Neural Network; the latter was implemented with the same architecture as the proposed model but without the pre-trained Decoder. Experimental results demonstrate that the Artificial Neural Network with a pre-trained Decoder consistently outperforms the other models in predicting the proportions of paint bases for color tuning. This model exhibits lower Mean Absolute Error and Root Mean Square Error values across multiple objectives, indicating its superior accuracy in capturing the complexities of color relationships. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023.
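A minimal sketch of the idea described in this abstract, under assumed details (the abstract gives neither layer sizes, the number of paint bases, nor how the Decoder is pre-trained, so all of those are illustrative): an encoder maps CIELAB coordinates plus a finish indicator to a latent code, and a frozen, pre-trained decoder maps that code to base proportions.

```python
# Hypothetical sketch: layer sizes, latent width and N_BASES are assumptions;
# the paper's actual architecture is not specified in the abstract.
import torch
import torch.nn as nn

N_BASES = 4  # assumed number of paint bases in the mixture

decoder = nn.Sequential(            # pre-trained separately (e.g. inside an
    nn.Linear(8, 32), nn.ReLU(),    # autoencoder over proportion vectors),
    nn.Linear(32, N_BASES),         # then frozen before being reused here
    nn.Softmax(dim=-1),             # proportions are >= 0 and sum to 1
)
for p in decoder.parameters():
    p.requires_grad = False         # keep the pre-trained weights fixed

encoder = nn.Sequential(            # trained to map inputs to the latent code
    nn.Linear(4, 32), nn.ReLU(),    # inputs: L*, a*, b* and a finish flag
    nn.Linear(32, 8),
)

model = nn.Sequential(encoder, decoder)
x = torch.tensor([[55.0, 12.0, -8.0, 1.0]])  # illustrative CIELAB + glossy
print(model(x))                              # predicted base proportions
```

The softmax output layer is one natural way to enforce that predicted proportions are non-negative and sum to one, though the paper may handle this constraint differently.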
2023
Authors
Teixeira, B; Carvalhais, L; Pinto, T; Vale, Z;
Publication
2023 IEEE CONFERENCE ON ARTIFICIAL INTELLIGENCE, CAI
Abstract
The structural changes in the energy sector caused by renewable sources and digitization have resulted in an increased use of Artificial Intelligence (AI), including Machine Learning (ML) models. However, these models' black-box nature and complexity can create issues with transparency and trust, thereby hindering their interpretability. The use of Explainable AI (XAI) can offer a solution to these challenges. This paper explores the application of an XAI-based framework to analyze and evaluate a photovoltaic energy generation forecasting problem and contribute to the trustworthiness of ML solutions.
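The abstract does not detail the framework itself; as a sketch of one common XAI pattern for this kind of forecasting problem, the snippet below trains a tree-based photovoltaic-generation regressor on synthetic weather features and explains its predictions with SHAP values. The feature set, the data, and the choice of SHAP are all assumptions, not the paper's method.

```python
# Illustrative only: synthetic data, assumed features, SHAP as a stand-in
# explainer; the paper's own XAI framework may differ.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# toy weather features: irradiance (W/m^2), ambient temperature (C), hour
X = np.column_stack([
    rng.uniform(0, 1000, 500),
    rng.uniform(-5, 35, 500),
    rng.integers(0, 24, 500),
])
# toy PV output: roughly proportional to irradiance, plus noise
y = 0.005 * X[:, 0] + rng.normal(0, 0.2, 500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)        # tree-specific SHAP explainer
shap_values = explainer.shap_values(X[:5])   # per-feature contributions
print(shap_values)  # one row per sample: how each feature moved the forecast
```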
2023
Authors
Palumbo, G; Guimaraes, M; Carneiro, D; Novais, P; Alves, V;
Publication
AMBIENT INTELLIGENCE-SOFTWARE AND APPLICATIONS-13TH INTERNATIONAL SYMPOSIUM ON AMBIENT INTELLIGENCE
Abstract
As the field of Machine Learning evolves, the number of available learning algorithms and their parameters continues to grow. On the one hand, this is positive, as it allows potentially more accurate models to be found. On the other hand, it also makes the process of finding the right model more complex, given the number of possible configurations. Traditionally, data scientists rely on trial-and-error or brute-force procedures, which are costly, or on their own intuition or expertise, which is hard to acquire. In this paper we propose an approach for algorithm recommendation based on meta-learning. The approach can be used in real time to predict the best n algorithms (based on a selected performance metric) and their configurations for a given ML problem. We evaluate it through cross-validation and by comparing it against an AutoML approach in terms of accuracy and time. Results show that the proposed approach recommends algorithms similar to those of traditional approaches, in terms of performance, in just a fraction of the time.
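As a rough illustration of meta-learning-based algorithm recommendation (the paper's actual meta-features, meta-model and performance metric are not given in the abstract, so everything named below is assumed): each past dataset is summarised by simple meta-features and labelled with the algorithm that performed best on it, and a classifier learns that mapping to recommend an algorithm for a new dataset.

```python
# Toy sketch: meta-features, labels and the 1-NN meta-model are all assumed.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# meta-features per past dataset: [n_rows, n_features, class_entropy]
meta_X = np.array([
    [1000, 10, 0.90],
    [200, 50, 0.40],
    [50000, 5, 0.99],
    [300, 8, 0.20],
])
# algorithm observed to perform best on each dataset (illustrative labels)
meta_y = np.array(["random_forest", "svm", "gradient_boosting", "naive_bayes"])

# nearest-neighbour meta-model: recommend what worked on the most similar
# past dataset; any classifier could play this role
recommender = KNeighborsClassifier(n_neighbors=1).fit(meta_X, meta_y)

new_dataset_meta = np.array([[800, 12, 0.85]])
print(recommender.predict(new_dataset_meta))  # e.g. ['random_forest']
```

In practice the meta-features would be scaled before the nearest-neighbour lookup, and the meta-model would also output the recommended configurations, as the abstract describes.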
2023
Authors
Graziani, M; Dutkiewicz, L; Calvaresi, D; Amorim, JP; Yordanova, K; Vered, M; Nair, R; Abreu, PH; Blanke, T; Pulignano, V; Prior, JO; Lauwaert, L; Reijers, W; Depeursinge, A; Andrearczyk, V; Müller, H;
Publication
ARTIFICIAL INTELLIGENCE REVIEW
Abstract
Since its emergence in the 1960s, Artificial Intelligence (AI) has grown to conquer many technology products and their fields of application. Machine learning, as a major part of the current AI solutions, can learn from the data and through experience to reach high performance on various tasks. This growing success of AI algorithms has led to a need for interpretability to understand opaque models such as deep neural networks. Various requirements have been raised from different domains, together with numerous tools to debug, justify outcomes, and establish the safety, fairness and reliability of the models. This variety of tasks has led to inconsistencies in the terminology with, for instance, terms such as interpretable, explainable and transparent being often used interchangeably in methodology papers. These words, however, convey different meanings and are weighted differently across domains, for example in the technical and social sciences. In this paper, we propose an overarching terminology of interpretability of AI systems that can be referred to by the technical developers as much as by the social sciences community to pursue clarity and efficiency in the definition of regulations for ethical and reliable AI development. We show how our taxonomy and definition of interpretable AI differ from the ones in previous research and how they apply with high versatility to several domains and use cases, proposing a highly needed standard for the communication among interdisciplinary areas of AI.
2023
Authors
Rodezno, DAQ; Vahid-Ghavidel, M; Javadi, MS; Feltrin, AP; Catalao, J;
Publication
2023 IEEE POWER & ENERGY SOCIETY INNOVATIVE SMART GRID TECHNOLOGIES CONFERENCE, ISGT