About

João Pascoal Faria holds a PhD in Electrical and Computer Engineering (1999) from the Faculty of Engineering of the University of Porto (FEUP), where he is currently Associate Professor at the Department of Informatics Engineering and Director of the Integrated Master in Informatics and Computing Engineering (MIEIC). He is a member of the Software Engineering Research Group (softeng.fe.up.pt) and a researcher at INESC TEC, where he coordinates the Software Engineering area. He represents FEUP and INESC TEC in the Technical Commission for Health Informatics (CT 199) and FEUP as President of the Sectorial Commission for the Quality of Information and Communications Technology (CS/03), within the scope of the Portuguese Quality Institute (IPQ). In the past, he worked with several software companies (Novabase Saúde, Sidereus, Medidata) and co-founded two others (Qualisoft and Strongstep). He has more than 25 years of experience in education, research, development and consultancy in several software engineering areas. He is the main author of a rapid application development tool (SAGA), based on domain-specific languages, with more than 25 years of market presence and evolution (1989-present). He is currently involved in research projects, supervisions and consulting activities in the areas of model-based testing, software process improvement and model-driven development.


Details

  • Name: João Pascoal Faria
  • Role: Research Coordinator
  • Since: 14th October 1985
Publications

2025

LLM Prompt Engineering for Automated White-Box Integration Test Generation in REST APIs

Authors
Rincon, AM; Vincenzi, AMR; Faria, JP;

Publication
2025 IEEE INTERNATIONAL CONFERENCE ON SOFTWARE TESTING, VERIFICATION AND VALIDATION WORKSHOPS, ICSTW

Abstract
This study explores prompt engineering for automated white-box integration testing of RESTful APIs using Large Language Models (LLMs). Four versions of prompts were designed and tested across three OpenAI models (GPT-3.5 Turbo, GPT-4 Turbo, and GPT-4o) to assess their impact on code coverage, token consumption, execution time, and financial cost. The results indicate that different prompt versions, especially with more advanced models, achieved up to 90% coverage, although at higher costs. Additionally, combining test sets from different models increased coverage, reaching 96% in some cases. We also compared the results with EvoMaster, a specialized tool for generating tests for REST APIs, where LLM-generated tests achieved comparable or higher coverage in the benchmark projects. Despite higher execution costs, LLMs demonstrated superior adaptability and flexibility in test generation.
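As a rough illustration of the prompt-based test generation described above (not the paper's actual prompt versions or tooling), the following Python sketch sends an endpoint's source code to an OpenAI chat model and asks for white-box integration tests. It assumes the openai Python package and an OPENAI_API_KEY in the environment; the prompt wording, the pytest target and the function name generate_integration_tests are hypothetical.

from openai import OpenAI  # assumes the openai Python package is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical prompt; the paper's actual prompt versions are not reproduced here.
PROMPT_TEMPLATE = """You are a test engineer. Given the source code of a REST API endpoint,
write pytest integration tests that exercise all branches of the handler.

Endpoint source code:
{source_code}

Return only runnable Python test code."""

def generate_integration_tests(source_code: str, model: str = "gpt-4o") -> str:
    """Ask an LLM for white-box integration tests for one endpoint."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT_TEMPLATE.format(source_code=source_code)}],
        temperature=0,
    )
    return response.choices[0].message.content

Swapping the model argument (e.g. "gpt-3.5-turbo", "gpt-4-turbo", "gpt-4o") and the prompt text is the kind of variation whose effect on coverage, token consumption and cost the study measures.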

2025

Automated Social Media Feedback Analysis for Software Requirements Elicitation: A Case Study in the Streaming Industry

Authors
Silva, M; Faria, JP;

Publication
Proceedings of the 20th International Conference on Evaluation of Novel Approaches to Software Engineering, ENASE 2025, Porto, Portugal, April 4-6, 2025.

2025

Automatic Generation of Loop Invariants in Dafny with Large Language Models

Authors
Faria, JP; Trigo, E; Abreu, R;

Publication
FUNDAMENTALS OF SOFTWARE ENGINEERING, FSEN 2025

Abstract
Recent verification tools aim to make formal verification more accessible for software engineers by automating most of the verification process. However, the manual work and expertise required to write verification helper code, such as loop invariants and auxiliary lemmas and assertions, remains a barrier. This paper explores the use of Large Language Models (LLMs) to automate the generation of loop invariants for programs in Dafny. We tested the approach on a curated dataset of 100 programs in Dafny involving arrays, strings, and numeric types. Using a multimodel approach that combines GPT-4o and Claude 3.5 Sonnet, correct loop invariants (passing the Dafny verifier) were generated at the first attempt for 92% of the programs, and in at most five attempts for 95% of the programs. Additionally, we developed an extension to the Dafny plugin for Visual Studio Code to incorporate automatic loop invariant generation into the IDE. Our work stands out from related approaches by handling a broader class of problems and offering IDE integration.
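The generate-and-check loop behind this approach can be pictured with the Python sketch below. It is only an outline under stated assumptions, not the paper's implementation: it uses a single model rather than the GPT-4o/Claude 3.5 Sonnet combination, the prompt wording and function names are hypothetical, and it assumes a recent Dafny CLI that accepts "dafny verify".

import subprocess
from openai import OpenAI

client = OpenAI()

def verify(dafny_file: str) -> bool:
    """Run the Dafny verifier on a file (assumes 'dafny verify' is available)."""
    result = subprocess.run(["dafny", "verify", dafny_file], capture_output=True, text=True)
    return result.returncode == 0

def generate_invariants(program: str, max_attempts: int = 5) -> str | None:
    """Hypothetical loop: ask an LLM for loop invariants, check with the Dafny
    verifier, and retry (up to max_attempts) on failure."""
    feedback = ""
    for _ in range(max_attempts):
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content":
                "Insert loop invariants so this Dafny program verifies. "
                "Return the complete annotated program.\n\n" + program + feedback}],
            temperature=0,
        )
        candidate = response.choices[0].message.content
        with open("candidate.dfy", "w") as f:
            f.write(candidate)
        if verify("candidate.dfy"):
            return candidate
        feedback = "\n\nThe previous attempt failed verification; please revise the invariants."
    return None

The retry bound mirrors the "at most five attempts" result reported above; the paper's IDE integration in the Dafny plugin for Visual Studio Code is not shown.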

2025

Acceptance Test Generation with Large Language Models: An Industrial Case Study

Authors
Ferreira, M; Viegas, L; Faria, JP; Lima, B;

Publication
2025 IEEE/ACM INTERNATIONAL CONFERENCE ON AUTOMATION OF SOFTWARE TEST, AST

Abstract
Large language model (LLM)-powered assistants are increasingly used for generating program code and unit tests, but their application in acceptance testing remains underexplored. To help address this gap, this paper explores the use of LLMs for generating executable acceptance tests for web applications through a two-step process: (i) generating acceptance test scenarios in natural language (in Gherkin) from user stories, and (ii) converting these scenarios into executable test scripts (in Cypress), knowing the HTML code of the pages under test. This two-step approach supports acceptance test-driven development, enhances tester control, and improves test quality. The two steps were implemented in the AutoUAT and Test Flow tools, respectively, powered by GPT-4 Turbo, and integrated into a partner company's workflow and evaluated on real-world projects. The users found the acceptance test scenarios generated by AutoUAT helpful 95% of the time, even revealing previously overlooked cases. Regarding Test Flow, 92% of the acceptance test cases generated by Test Flow were considered helpful: 60% were usable as generated, 8% required minor fixes, and 24% needed to be regenerated with additional inputs; the remaining 8% were discarded due to major issues. These results suggest that LLMs can, in fact, help improve the acceptance test process, with appropriate tooling and supervision.
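The two-step process can be pictured with the following Python sketch. It is an outline of the idea only, not the implementation of AutoUAT or Test Flow; the prompt wording and function names are hypothetical, and it assumes the openai Python package with GPT-4 Turbo (the model named in the abstract).

from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4-turbo"  # the paper's tools are powered by GPT-4 Turbo

def story_to_gherkin(user_story: str) -> str:
    """Step 1 (role of AutoUAT, sketched): derive Gherkin scenarios from a user story."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content":
            "Write Gherkin acceptance test scenarios (Given/When/Then) for this user story:\n\n"
            + user_story}],
        temperature=0,
    )
    return response.choices[0].message.content

def gherkin_to_cypress(gherkin: str, page_html: str) -> str:
    """Step 2 (role of Test Flow, sketched): turn scenarios into an executable Cypress
    script, using the HTML of the page under test to choose selectors."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content":
            "Convert these Gherkin scenarios into a Cypress test script. "
            "Use selectors taken from the page HTML below.\n\nScenarios:\n" + gherkin
            + "\n\nPage HTML:\n" + page_html}],
        temperature=0,
    )
    return response.choices[0].message.content

Keeping the natural-language scenarios as an intermediate artifact is what gives testers the control point mentioned above: scenarios can be reviewed or corrected before any executable script is generated.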

2024

Report from the 14th International Workshop on Automating Test Case Design, Selection, and Evaluation (A-TEST 2023)

Authors
Faria, JP; Verbeek, F; Fasolino, AR;

Publication
ACM SIGSOFT Softw. Eng. Notes

Supervised Theses

2023

Task Prediction and Planning Tool for Complex Engineering Tasks

Author
Afonso Maria Rebordão Caiado de Sousa

Institution
UP-FEUP

2023

Assessing Accuracy of Low Cost Sensors in Sign Language Recognition

Author
Daniel Lima Fernandes Vieira

Institution
UP-FEUP

2023

Adoption of a BDD Framework and its Guidelines

Author
João Renato da Costa Pinto

Institution
UP-FEUP

2022

Integration of Fraud Detection Services in Payment Processing Systems

Author
Ana Margarida Ruivo Loureiro

Institution
UP-FEUP

2022

A Pattern-based Test Platform for IoT and eHealth

Author
Pedro José Brandão Almeida

Institution
UP-FEUP