Publications by HumanISE

2016

Usage-Driven Dublin Core Descriptor Selection: A Case Study Using the Dendro Platform for Research Dataset Description

Authors
da Silva, JR; Ribeiro, C; Lopes, JC;

Publication
RESEARCH AND ADVANCED TECHNOLOGY FOR DIGITAL LIBRARIES, TPDL 2016

Abstract
Dublin Core schemas are the core metadata models of most repositories, including recent repositories dedicated to datasets. DC descriptors are generic and are being adapted to the needs of different communities through so-called Dublin Core Application Profiles. DCAPs rely on agreement within user communities, in a process mainly driven by their evolving needs. In this paper, we propose a complementary automated process, designed to help curators and users discover the descriptors that best suit the needs of a specific research group. We target the description of datasets and test our approach using Dendro, a prototype research data management platform, where an experimental method is used to rank and present DC Terms descriptors to the users based on their usage patterns. In a controlled experiment, we gathered the interactions of two groups as they used Dendro to describe datasets from selected sources. One of the groups had descriptor ranking on, while the other had the same list of descriptors throughout the whole experiment. Preliminary results show that (1) some DC Terms are filled in more often than others, with different distributions in the two groups; (2) selected descriptors were increasingly accepted by users, to the detriment of manual selection; and (3) users were satisfied with the performance of the platform, as demonstrated by a post-study survey.

2016

A model for analyzing performance problems and root causes in the personal software process

Authors
Raza, M; Faria, JP;

Publication
JOURNAL OF SOFTWARE-EVOLUTION AND PROCESS

Abstract
High-maturity software development processes, such as the Team Software Process and the accompanying Personal Software Process (PSP), can generate significant amounts of data that can be periodically analyzed to identify performance problems, determine their root causes, and devise improvement actions. However, there is a lack of tool support for automating that type of analysis and thereby reducing the manual effort and expert knowledge required. Hence, in this paper we propose a comprehensive performance model, addressing time estimation accuracy, quality, and productivity, to enable the automated (tool-based) analysis of performance data produced by PSP developers, namely, to identify and rank performance problems and their root causes. A PSP data set referring to more than 30,000 projects was used to validate and calibrate the model. Copyright (c) 2015 John Wiley & Sons, Ltd.

2016

A Model-Based Approach for Product Testing and Certification in Digital Ecosystems

Authors
Lima, B; Faria, JP;

Publication
2016 IEEE NINTH INTERNATIONAL CONFERENCE ON SOFTWARE TESTING, VERIFICATION AND VALIDATION WORKSHOPS (ICSTW)

Abstract
In a growing number of domains, such as ambient-assisted living (AAL) and e-health, the provisioning of end-to-end services to the users depends on the proper interoperation of multiple products from different vendors, forming a digital ecosystem. To ensure interoperability and the integrity of the ecosystem, it is important that candidate products are independently tested and certified against applicable interoperability requirements. Based on the experience acquired in the AAL4ALL project, we propose in this paper a model-based approach to systematize, automate and increase the assurance of such testing and certification activities. The approach encompasses the construction of several models: a feature model, an interface model, a product model, and unit and integration test models. The abstract syntax and consistency rules of these models are specified by means of metamodels written in UML and Alloy and automatically checked with Alloy Analyzer. Using the model finding capabilities of Alloy Analyzer, integration tests can be automatically generated from the remaining models, through the composition and instantiation of unit tests. Examples of concrete models from the AAL4ALL project are also presented.

2016

Automated Testing of Distributed and Heterogeneous Systems Based on UML Sequence Diagrams

Authors
Lima, B; Faria, JP;

Publication
SOFTWARE TECHNOLOGIES (ICSOFT 2015)

Abstract
The growing dependence of our society on increasingly complex software systems makes software testing ever more important and challenging. In many domains, several independent systems, forming a distributed and heterogeneous system of systems, are involved in the provisioning of end-to-end services to users. However, existing test automation techniques provide little tool support for properly testing such systems. Hence, we propose an approach and toolset architecture for automating the testing of end-to-end services in distributed and heterogeneous systems, comprising a visual modeling environment, a test execution engine, and a distributed test monitoring and control infrastructure. The only manual activity required is the description of the participants and behavior of the services under test with UML sequence diagrams, which are translated to extended Petri nets for efficient test input generation and test output checking at runtime. A real-world example from the Ambient Assisted Living domain illustrates the approach.

2016

Empirical Evaluation of the ProcessPAIR Tool for Automated Performance Analysis

Authors
Raza, M; Faria, JP; Salazar, R;

Publication
SEKE

Abstract
Software development processes can generate significant amounts of data that can be periodically analyzed to identify performance problems, determine their root causes and devise improvement actions. However, conducting that analysis manually is challenging because of the potentially large amount of data to analyze and the effort and expertise required. ProcessPAIR is a novel tool designed to help developers analyze their performance data with less effort, by automatically identifying and ranking performance problems and potential root causes. The analysis is based on performance models derived from the performance data of a large community of developers. In this paper, we present the results of an experiment conducted in the context of Personal Software Process (PSP) training, to show that ProcessPAIR is able to accurately identify and rank performance problems and potential root causes of individual developers so that subsequent manual analysis for the identification of deeper causes and improvement actions can be properly focused.

2016

Towards the Online Testing of Distributed and Heterogeneous Systems with Extended Petri Nets

Authors
Lima, B; Faria, JP;

Publication
PROCEEDINGS 2016 10TH INTERNATIONAL CONFERENCE ON THE QUALITY OF INFORMATION AND COMMUNICATIONS TECHNOLOGY (QUATIC)

Abstract
The growing dependence of our society on increasingly complex software systems makes software testing ever more important and challenging. In many domains, such as healthcare and transportation, several independent systems, forming a heterogeneous and distributed system of systems, are involved in the provisioning of end-to-end services to users. However, existing testing techniques, namely in the model-based testing field, provide little support for properly testing such systems. To bridge the gaps identified in the state of the art, we intend to develop research whose main goal is to significantly reduce the cost of testing distributed and heterogeneous systems, in terms of the time, resources, and expertise required, as compared to existing approaches. To that end, we propose a preliminary approach and a toolset architecture for automating the testing of end-to-end services in distributed and heterogeneous systems. The tester interacts with a visual modeling frontend to describe key behavioral scenarios, invoke test generation and execution, and visualize test results and coverage information back in the model. The visual modeling notation is converted to a formal notation amenable to runtime interpretation in the backend. A distributed test monitoring and control infrastructure is responsible for interacting with the components of the system under test, acting as test driver, monitor, and stub. At the core of the toolset, a test execution engine coordinates test execution and checks the conformance of the observed execution trace with the expectations derived from the visual model. A real-world example from the Ambient Assisted Living domain is presented to illustrate the approach. As future work, we intend to develop distributed and incremental algorithms for online testing of distributed and heterogeneous systems based on extended Petri nets at runtime, and to validate them in real-world case studies.
