About

Ana Paiva (who publishes as Ana C. R. Paiva) is an Assistant Professor at the Informatics Engineering Department of the Faculty of Engineering of the University of Porto (FEUP), where she has worked since 1999. She is a researcher at INESC TEC in the Software Engineering area and a member of the Software Engineering research group, which gathers researchers and postgraduate students with common interests in software engineering. She teaches subjects such as Software Testing, Formal Methods, and Software Engineering, among others. She holds a PhD in Electrical and Computer Engineering from FEUP, with a thesis titled "Automated Specification Based Testing of Graphical User Interfaces". Her expertise is in the implementation and automation of the model-based testing process. She has carried out research work in collaboration with the Foundations of Software Engineering research group at Microsoft Research, where she had the opportunity to extend Microsoft's model-based testing tool, Spec Explorer, for GUI testing. She is the PI of a National Science Foundation-funded project on Pattern-Based GUI Testing (PBGT). She is a member of the PSTQB (Portuguese Software Testing Qualification Board) board general assembly; a member of the TBok, Glossary, and MBT Examination Working Groups of the ISTQB (International Software Testing Qualifications Board); a member of the Council of the Department of Informatics Engineering; and a member of the Executive Committee of the Department of Informatics Engineering.

Details

  • Name

    Ana Cristina Paiva
  • Cluster

    Computer Science
  • Role

    Senior Researcher
  • Since

    1st February 2014
Publications

2020

Teaching software engineering topics through pedagogical game design patterns: An empirical study

Authors
Flores, N; Paiva, ACR; Cruz, N;

Publication
Information (Switzerland)

Abstract
Teaching software engineering in its many different forms using traditional teaching methods is difficult. Serious games can help overcome these challenges because they allow real situations to be simulated. However, the development of serious games is not easy and, although there are good practices for relating game design patterns to teaching techniques, there is no methodology to support their use in a specific context such as software engineering. This article presents a case study to validate a methodology that links the Learning and Teaching Functions (LTF) to the Game Design Patterns (PIB) in the context of Software Engineering Education. A serious game was developed from scratch using this methodology to teach software estimation (a specific topic of software engineering). An experiment was carried out to validate the effectiveness of the game by comparing the results of two different groups of students. The results indicate that the methodology can help to develop effective educational games on specific learning topics. © 2020 by the authors.

2020

Test case generation based on mutations over user execution traces

Authors
Paiva, ACR; Restivo, A; Almeida, S;

Publication
Software Quality Journal

2020

Experiences on Teaching Alloy with an Automated Assessment Platform

Authors
Macedo, N; Cunha, A; Pereira, J; Carvalho, R; Silva, R; Paiva, ACR; Ramalho, MS; Silva, DC;

Publication
Rigorous State-Based Methods - 7th International Conference, ABZ 2020, Ulm, Germany, May 27-29, 2020, Proceedings

2019

Testing when mobile apps go to background and come back to foreground

Authors
Paiva, ACR; Gouveia, JMEP; Elizabeth, JD; Delamaro, ME;

Publication
Proceedings - 2019 IEEE 12th International Conference on Software Testing, Verification and Validation Workshops, ICSTW 2019

Abstract
Mobile applications have some specific characteristics not found in web and desktop applications. The mobile testing tools available may not be prepared to detect problems related to those specificities. So, it is important to assess the quality of the test cases generated/executed by mobile testing tools in order to check whether they are able to find those specific problems. One way to assess the quality of a test suite is through mutation testing. This paper presents new mutation operators created to inject faults leading to known failures related to the non-preservation of users' transient UI state when mobile applications go to background and then come back to foreground. A set of mutation operators is presented and the rationale behind their construction is explained. A case study illustrates the approach to evaluate a mobile testing tool. In this study, the tool used is the iMPAcT tool; however, any other mobile testing tool could be used. The experiments are performed over mobile applications publicly available on the Google Play store. The results are presented and discussed. Finally, some improvements are suggested for the iMPAcT tool so that it can generate test cases that kill more mutants and thus, hopefully, detect more failures in the future. © 2019 IEEE.
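The core idea of the abstract above can be sketched in a few lines: a test suite "kills" a mutant when at least one test fails on it, and the mutation score is the fraction of mutants killed. This is a minimal, hypothetical illustration of that metric, not the paper's implementation; all names below are invented for the example.

```python
def mutation_score(mutants, test_suite):
    """Return the fraction of mutants killed by the test suite.

    A mutant is "killed" when at least one test fails (returns False)
    when run against it.
    """
    if not mutants:
        return 0.0
    killed = sum(
        1 for mutant in mutants
        if any(not test(mutant) for test in test_suite)
    )
    return killed / len(mutants)

# Toy example: each "mutant" is an app variant marked by whether it
# preserves transient UI state across background/foreground; the single
# test checks exactly that property.
mutants = [
    {"preserves_ui_state": False},  # faulty variant: loses UI state
    {"preserves_ui_state": True},   # variant the test cannot distinguish
]
tests = [lambda app: app["preserves_ui_state"]]
print(mutation_score(mutants, tests))  # 0.5 (one of two mutants killed)
```

A low score on mutants like these would suggest, as the paper argues, that a testing tool is not exercising background/foreground behavior.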

2019

Testing android incoming calls

Authors
Paiva, ACR; Goncalves, MA; Barros, AR;

Publication
Proceedings - 2019 IEEE 12th International Conference on Software Testing, Verification and Validation, ICST 2019

Abstract
Mobile applications are increasingly present in our daily lives. Being increasingly dependent on apps, we all want to make sure apps work as expected. One way to increase confidence in and quality of software is through testing. However, the existing approaches and tools still do not provide sufficient solutions for testing mobile apps with features different from the ones found in desktop or web applications. In particular, there are guidelines that mobile developers should follow and that may be tested automatically but, as far as we know, there are no tools that are able to do it. The iMPAcT tool combines exploration, reverse engineering and testing to check if mobile apps follow best practices to implement specific behavior called UI Patterns. Examples of UI Patterns within this catalog are: orientation, background-foreground, side drawer, and tab-scroll, among others. For each of these behaviors (UI Patterns), the iMPAcT tool has a corresponding Test Pattern that checks if the UI Pattern implementation follows the guidelines. This paper presents an extension to the iMPAcT tool. It makes it possible to test whether Android apps work properly after receiving an incoming call, i.e., whether the state of the screen after the call is the same as before getting the call. It formalizes the problem, describes the overall approach, describes the architecture of the tool and reports an experiment performed over 61 public mobile apps. © 2019 IEEE.
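The check described above, comparing screen state before and after an interruption, can be sketched as follows. This is an illustrative stand-in, not the iMPAcT tool's actual API: `capture_state` and `simulate_incoming_call` are hypothetical methods of a UI-automation driver, and `FakeDriver` fakes an app for demonstration.

```python
def survives_incoming_call(driver):
    """Return True iff the app's screen state is preserved across a call."""
    before = driver.capture_state()   # e.g. widget tree + field contents
    driver.simulate_incoming_call()   # interrupt the app, then return to it
    after = driver.capture_state()
    return before == after

class FakeDriver:
    """Minimal stand-in for a UI-automation driver (hypothetical)."""
    def __init__(self, loses_state):
        self.state = {"text_field": "draft message"}
        self.loses_state = loses_state
    def capture_state(self):
        return dict(self.state)       # snapshot of the current UI state
    def simulate_incoming_call(self):
        if self.loses_state:
            self.state["text_field"] = ""  # transient input is lost

print(survives_incoming_call(FakeDriver(loses_state=True)))   # False
print(survives_incoming_call(FakeDriver(loses_state=False)))  # True
```

A real driver would capture state via a UI-automation framework; the essential design choice, per the abstract, is treating "state before == state after" as the test oracle.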

Supervised Theses

2019

Análise de Impacto das Alterações a Processos Descritos em BPMN (Impact Analysis of Changes to Processes Described in BPMN)

Author
José Pedro Teles da Silva Pereira

Institution
UP-FEUP

2019

Mutation-based Web Test Case Generation

Author
Sérgio Miguel Almeida Ferreira

Institution
UP-FEUP

2019

Android Crawler

Author
Marco António Fernandes Gonçalves

Institution
UP-FEUP

2019

Model Based Testing - From requirements to tests

Author
Daniel Ademar Magalhães Maciel

Institution
UP-FEUP

2019

Fault Injection in Android Applications

Author
Adélia Helena Valentim Gonçalves

Institution
UP-FEUP