Publications

Publications by HumanISE

2026

An Explosion of the Uses of Immersive Learning Environments: A Mapping of Reviews Update

Authors
Beck, E; Morgado, LC; O’Shea, M;

Publication
Communications in Computer and Information Science

Abstract
Since the publication of the 2020 paper, “Finding the Gaps About Uses of Immersive Learning Environments: A Survey of Surveys,” the landscape of immersive learning environments (ILEs) has continued to evolve rapidly. This update aims to revisit the gaps identified in that previous research and explore emerging trends. We conducted an extensive review of new surveys published after that paper’s cut-off date. Our findings reveal a significant number of newly published reviews (n = 64), more than doubling the original corpus (n = 47). The results highlighted novel themes in the usage of immersive environments, helping to bridge some of the 2020 research gaps. This paper discusses those developments and presents a consolidated perspective on the uses of immersive learning environments. © 2025 Elsevier B.V., All rights reserved.

2026

A framework for supporting the reproducibility of computational experiments in multiple scientific domains

Authors
Costa, L; Barbosa, S; Cunha, J;

Publication
Future Generation Computer Systems

Abstract
In recent years, not only the research community but also the general public has raised serious questions about the reproducibility and replicability of scientific work. Since many studies include some kind of computational work, these issues are also a technological challenge, not only in computer science but in most research domains. Computational replicability and reproducibility are not easy to achieve, due to the variety of computational environments that can be used: it is challenging to recreate the same environment with the same frameworks, code, programming languages, dependencies, and so on. We propose a framework, known as SciRep, that supports the configuration, execution, and packaging of computational experiments by defining their code, data, programming languages, dependencies, databases, and commands to be executed. After the initial configuration, the experiments can be executed any number of times, always producing exactly the same results. Our approach allows the creation of a reproducibility package for experiments from multiple scientific fields, from medicine to computer science, which can be re-executed on any computer. The produced package acts as a capsule, holding everything necessary to re-execute the experiment. To evaluate our framework, we compare it with three state-of-the-art tools and use it to reproduce 18 experiments extracted from published scientific articles. With our approach, we were able to execute 16 (89%) of those experiments, while the other tools reached only 61%, showing that our approach is effective. Moreover, all the experiments that were executed produced the results presented in the original publication. Thus, SciRep was able to reproduce 100% of the experiments it could run. © 2025 The Authors
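
The abstract describes packaging an experiment's code, data, dependencies, and commands into a self-contained, re-executable "capsule". SciRep's actual configuration format and API are not shown in the abstract; the sketch below is only a hypothetical Python illustration of that general idea, with made-up field names and file names.

```python
# Hypothetical illustration of a reproducibility "capsule"; this is not
# SciRep's real format or API, just a sketch of the idea in the abstract.
import json
import tarfile
from pathlib import Path

# Declarative description of the experiment: code, data, dependencies, commands.
experiment = {
    "name": "example-experiment",
    "language": "python3.11",
    "dependencies": ["numpy==1.26.4", "pandas==2.2.2"],
    "code": ["analysis.py"],
    "data": ["dataset.csv"],
    "commands": ["python analysis.py dataset.csv results.json"],
}

def build_capsule(config: dict, out_path: str = "capsule.tar.gz") -> str:
    """Pack the configuration plus all referenced files into one archive."""
    manifest = Path("experiment.json")
    manifest.write_text(json.dumps(config, indent=2))
    with tarfile.open(out_path, "w:gz") as tar:
        tar.add(manifest)
        for f in config["code"] + config["data"]:
            tar.add(f)  # referenced files must exist next to this script
    return out_path

if __name__ == "__main__":
    print("Wrote", build_capsule(experiment))
```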

2025

LLM Prompt Engineering for Automated White-Box Integration Test Generation in REST APIs

Authors
Rincon, AM; Vincenzi, AMR; Faria, JP;

Publication
2025 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW)

Abstract
This study explores prompt engineering for automated white-box integration testing of RESTful APIs using Large Language Models (LLMs). Four versions of prompts were designed and tested across three OpenAI models (GPT-3.5 Turbo, GPT-4 Turbo, and GPT-4o) to assess their impact on code coverage, token consumption, execution time, and financial cost. The results indicate that the different prompt versions, especially with the more advanced models, achieved up to 90% coverage, although at higher cost. Additionally, combining test sets from different models increased coverage, reaching 96% in some cases. We also compared the results with EvoMaster, a specialized tool for generating tests for REST APIs, and found that LLM-generated tests achieved comparable or higher coverage on the benchmark projects. Despite higher execution costs, LLMs demonstrated superior adaptability and flexibility in test generation.
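
As a rough illustration of the workflow described above, the sketch below asks an OpenAI chat model to produce white-box integration tests from an endpoint's source code. The prompt wording, helper names, and the example file name are assumptions for illustration only; they are not the prompt versions evaluated in the paper.

```python
# Hypothetical sketch of LLM-driven white-box test generation for a REST API.
# The prompt text and parameters are illustrative, not the paper's prompts.
from openai import OpenAI  # pip install openai; requires OPENAI_API_KEY

client = OpenAI()

def generate_tests(endpoint_source: str, model: str = "gpt-4o") -> str:
    """Ask the model for integration tests that cover the given endpoint code."""
    prompt = (
        "You are given the implementation of a REST API endpoint.\n"
        "Write white-box integration tests (pytest + requests) that exercise "
        "all branches of the code below:\n\n" + endpoint_source
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    source = open("users_endpoint.py").read()  # hypothetical endpoint file
    print(generate_tests(source))
```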

2025

Automated Social Media Feedback Analysis for Software Requirements Elicitation: A Case Study in the Streaming Industry

Authors
Silva, M; Faria, JP;

Publication
Proceedings of the 20th International Conference on Evaluation of Novel Approaches to Software Engineering, ENASE 2025, Porto, Portugal, April 4-6, 2025.

Abstract

2025

Automatic Generation of Loop Invariants in Dafny with Large Language Models

Authors
Faria, JP; Trigo, E; Abreu, R;

Publication
Fundamentals of Software Engineering, FSEN 2025

Abstract
Recent verification tools aim to make formal verification more accessible for software engineers by automating most of the verification process. However, the manual work and expertise required to write verification helper code, such as loop invariants and auxiliary lemmas and assertions, remains a barrier. This paper explores the use of Large Language Models (LLMs) to automate the generation of loop invariants for programs in Dafny. We tested the approach on a curated dataset of 100 programs in Dafny involving arrays, strings, and numeric types. Using a multi-model approach that combines GPT-4o and Claude 3.5 Sonnet, correct loop invariants (passing the Dafny verifier) were generated at the first attempt for 92% of the programs, and in at most five attempts for 95% of the programs. Additionally, we developed an extension to the Dafny plugin for Visual Studio Code to incorporate automatic loop invariant generation into the IDE. Our work stands out from related approaches by handling a broader class of problems and offering IDE integration.
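
The generate-and-verify loop described above (propose invariants, check with the Dafny verifier, retry up to five times) can be pictured roughly as in the sketch below. The prompt, model call, and file handling are assumptions for illustration, not the paper's implementation; only the `dafny verify` command is a real CLI invocation.

```python
# Hypothetical sketch of a generate/verify/retry loop for Dafny loop invariants.
import subprocess
from openai import OpenAI  # pip install openai; requires OPENAI_API_KEY

client = OpenAI()

def propose_invariants(dafny_source: str) -> str:
    """Ask an LLM to insert loop invariants into a Dafny program (illustrative prompt)."""
    prompt = (
        "Add the loop invariants needed for the following Dafny program to verify. "
        "Return the complete annotated program only:\n\n" + dafny_source
    )
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def verify(path: str) -> bool:
    """Run the Dafny verifier on the annotated file."""
    result = subprocess.run(["dafny", "verify", path], capture_output=True, text=True)
    return result.returncode == 0

def generate_with_retries(src_path: str, max_attempts: int = 5) -> bool:
    source = open(src_path).read()
    for attempt in range(max_attempts):
        annotated = propose_invariants(source)
        out_path = f"attempt_{attempt}.dfy"
        open(out_path, "w").write(annotated)
        if verify(out_path):
            return True
        source = annotated  # feed the failed attempt back for refinement
    return False
```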

2025

Agile Processes in Software Engineering and Extreme Programming - Workshops - XP 2024 Workshops, Bozen-Bolzano, Italy, June 4-7, 2024, Revised Selected Papers

Authors
Marchesi, L; Goldman, A; Lunesu, MI; Przybylek, A; Aguiar, A; Morgan, L; Wang, X; Pinna, A;

Publication
XP Workshops

Abstract
