2017
Authors
Almohammad, A; Ferreira, JF; Mendes, A; White, P;
Publication
2017 IEEE 25TH INTERNATIONAL REQUIREMENTS ENGINEERING CONFERENCE WORKSHOPS (REW)
Abstract
This paper presents REQCAP, an implementation of a new method that articulates hierarchical requirements modeling and test generation to assist in the process of capturing requirements for PLC-based control systems. REQCAP is based on a semi-formal graphical model that supports hierarchical modeling, thus enabling compositional specifications. The tool supports automated generation of test cases according to different coverage criteria. It can also import requirements directly from REQIF files and automatically generate Sequential Function Charts (SFCs). We use a real-world case study to show how REQCAP can be used to model realistic system requirements. We show how the automated generation of SFCs and test cases can support engineers (and clients) in visualizing and reviewing requirements. Moreover, all the tests listed in the original test document of the case study are also generated automatically by REQCAP, demonstrating that the tool can be used to effectively capture requirements and generate valid and useful test cases.
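The generation of test cases from a hierarchical requirements model can be illustrated with a toy sketch (this is an assumption about the general technique, not REQCAP's actual implementation): requirements form a tree, and a simple coverage criterion emits one test-case stub per leaf requirement.

```python
# Toy sketch of test generation from a hierarchical requirements model
# (hypothetical structure; not the REQCAP tool itself).
from dataclasses import dataclass, field
from typing import List

@dataclass
class Requirement:
    rid: str                                  # requirement identifier, e.g. "R1.1"
    text: str                                 # natural-language description
    children: List["Requirement"] = field(default_factory=list)

def leaf_coverage_tests(req: Requirement) -> List[str]:
    """Emit one test-case stub per leaf requirement (a 'leaf coverage' criterion)."""
    if not req.children:
        return [f"TEST {req.rid}: verify '{req.text}'"]
    tests: List[str] = []
    for child in req.children:
        tests.extend(leaf_coverage_tests(child))
    return tests

root = Requirement("R1", "pump control", [
    Requirement("R1.1", "start pump when level > high"),
    Requirement("R1.2", "stop pump when level < low"),
])
for t in leaf_coverage_tests(root):
    print(t)
```

Because the model is compositional, richer coverage criteria (e.g. covering internal nodes or combinations of sibling requirements) can be implemented as alternative tree traversals over the same structure.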
2017
Authors
Peixoto, C; Brito, C; Fontainhas, M; Peixoto, H; Machado, J; Abelha, A;
Publication
2017 5TH INTERNATIONAL CONFERENCE ON FUTURE INTERNET OF THINGS AND CLOUD WORKSHOPS (FICLOUDW)
Abstract
Continuous Ambulatory Peritoneal Dialysis (CAPD) is one of several treatments for patients with advanced kidney disease. It requires regular monitoring and interpretation of each patient's blood and urine sample results to determine whether the treatment is progressing well. This article explores data from patients undergoing the CAPD procedure. These data help illustrate how interoperability works in a Health Information System: they contain patients' personal information as well as their blood and urine sample results, meaning all the services must be connected. In this work, a Business Intelligence process is used to show that the available information can support understanding of the above-mentioned treatment, and to study, through indicators, how several factors may or may not influence the number of patients experiencing kidney failure and undergoing CAPD.
2017
Authors
Bahmani, R; Barbosa, M; Brasser, F; Portela, B; Sadeghi, AR; Scerri, G; Warinschi, B;
Publication
Financial Cryptography and Data Security - 21st International Conference, FC 2017, Sliema, Malta, April 3-7, 2017, Revised Selected Papers
Abstract
In this paper we show how Isolated Execution Environments (IEE) offered by novel commodity hardware such as Intel’s SGX provide a new path to constructing general secure multiparty computation (MPC) protocols. Our protocol is intuitive and elegant: it uses code within an IEE to play the role of a trusted third party (TTP), and the attestation guarantees of SGX to bootstrap secure communications between participants and the TTP. The load of communications and computations on participants only depends on the size of each party’s inputs and outputs and is thus small and independent from the intricacies of the functionality to be computed. The remaining computational load – essentially that of computing the functionality – is moved to an untrusted party running an IEE-enabled machine, an attractive feature for Cloud-based scenarios. Our rigorous modular security analysis relies on the novel notion of labeled attested computation which we put forth in this paper. This notion is a convenient abstraction of the kind of attestation guarantees one can obtain from trusted hardware in multi-user scenarios. Finally, we present an extensive experimental evaluation of our solution on SGX-enabled hardware. Our implementation is open-source and it is functionality agnostic: it can be used to securely outsource to the Cloud arbitrary off-the-shelf collaborative software, such as the one employed on financial data applications, enabling secure collaborative execution over private inputs provided by multiple parties. © 2017, International Financial Cryptography Association.
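The high-level flow described above can be sketched as follows (an illustrative toy, not the paper's protocol or its security machinery): code inside an IEE plays the trusted third party, and each participant contributes its private input over a channel that, in the real system, would be bootstrapped via SGX remote attestation.

```python
# Illustrative sketch of IEE-as-trusted-third-party MPC (mock, insecure;
# class and method names are hypothetical).
class IEETrustedParty:
    """Stand-in for functionality-agnostic code running inside an enclave."""

    def __init__(self, functionality):
        self.functionality = functionality    # e.g. a joint statistic
        self.inputs = {}

    def receive_input(self, party_id, private_input):
        # In the real protocol this arrives encrypted under a key derived
        # during attestation, so the untrusted host never sees it.
        self.inputs[party_id] = private_input

    def compute(self):
        # Only the result leaves the enclave; the load on participants
        # depends only on input/output size, not on the functionality.
        return self.functionality(list(self.inputs.values()))

ttp = IEETrustedParty(lambda xs: sum(xs) / len(xs))   # joint average
ttp.receive_input("alice", 40)
ttp.receive_input("bob", 60)
print(ttp.compute())   # each party learns only the average: 50.0
```

Note how the heavy step (`compute`) runs entirely on the untrusted IEE-enabled machine, which is what makes the approach attractive for Cloud outsourcing.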
2017
Authors
Barbosa, M; Catalano, D; Fiore, D;
Publication
Computer Security - ESORICS 2017 - 22nd European Symposium on Research in Computer Security, Oslo, Norway, September 11-15, 2017, Proceedings, Part I
Abstract
In privacy-preserving processing of outsourced data a Cloud server stores data provided by one or multiple data providers and then is asked to compute several functions over it. We propose an efficient methodology that solves this problem with the guarantee that an honest-but-curious Cloud learns no information about the data and the receiver learns nothing more than the results. Our main contribution is the proposal and efficient instantiation of a new cryptographic primitive called Labeled Homomorphic Encryption (labHE). The fundamental insight underlying this new primitive is that homomorphic computation can be significantly accelerated whenever the program that is being computed over the encrypted data is known to the decrypter and is not secret; previous approaches to homomorphic encryption do not allow for such a trade-off. Our realization and implementation of labHE targets computations that can be described by degree-two multivariate polynomials. As an application, we consider privacy preserving Genetic Association Studies (GAS), which require computing risk estimates from features in the human genome. Our approach allows performing GAS efficiently, non-interactively and without compromising either the privacy of patients or the potential intellectual property of test laboratories. © 2017, Springer International Publishing AG.
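The core insight — that a decrypter who knows the program can recompute label-derived masks — can be sketched with mock arithmetic (this is a toy illustration of the labeling idea, NOT a secure implementation: real labHE ciphertexts pair the masked value with a linearly homomorphic encryption of the mask, which is omitted here).

```python
# Toy sketch of the labHE labeling idea for a degree-two product
# (mock crypto; masks are passed in the clear purely for illustration).
import hashlib
import hmac

P = 2**61 - 1   # toy prime modulus

def prf(key: bytes, label: str) -> int:
    """Pseudorandom mask derived from a label; the decrypter can recompute it."""
    digest = hmac.new(key, label.encode(), hashlib.sha256).digest()
    return int.from_bytes(digest, "big") % P

def enc(key: bytes, label: str, m: int) -> int:
    # Real labHE ciphertexts are (m - b, LHE.Enc(b)); we keep only the masked part.
    return (m - prf(key, label)) % P

def eval_product(a1: int, a2: int, b1: int, b2: int) -> int:
    # (a1+b1)(a2+b2) = a1*a2 + a1*b2 + a2*b1 + b1*b2: the server computes the
    # first three terms (the b-terms homomorphically in the real scheme).
    return (a1 * a2 + a1 * b2 + a2 * b1) % P

key = b"demo-key"
m1, m2 = 7, 11
a1, a2 = enc(key, "x1", m1), enc(key, "x2", m2)
b1, b2 = prf(key, "x1"), prf(key, "x2")
partial = eval_product(a1, a2, b1, b2)   # done by the untrusted server
result = (partial + b1 * b2) % P         # decrypter knows the labels, hence b1*b2
print(result)                            # 77 == m1 * m2
```

The speedup comes from the final step: because the program (and thus the labels) is known to the decrypter, the `b1 * b2` term never has to be computed homomorphically.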
2017
Authors
Barbosa, Manuel; Catalano, Dario; Fiore, Dario;
Publication
IACR Cryptology ePrint Archive
Abstract
2016
Authors
Machado, N; Maia, F; Matos, M; Oliveira, R;
Publication
2016 SEVENTH LATIN-AMERICAN SYMPOSIUM ON DEPENDABLE COMPUTING (LADC)
Abstract
A distributed system is often built on top of an overlay network. Overlay networks enable network topology transparency while, at the same time, can be designed to provide efficient data dissemination, load balancing, and even fault tolerance. They are constructed by defining logical links between nodes, creating a node graph. In practice, this is materialized by a Peer Sampling Service (PSS) that provides references to other nodes to communicate with. Depending on the configuration of the PSS, the characteristics of the overlay can be adjusted to cope with application requirements and performance concerns. Unfortunately, overlay efficiency comes at the expense of dependability. To overcome this, one often deploys an application overlay focused on efficiency, along with a safety-net overlay to ensure dependability. However, this approach results in significant resource waste since safety-net overlays are seldom used. In this paper, we focus on safety-net overlay networks and propose an adaptable mechanism to minimize resource usage while maintaining dependability guarantees. In detail, we consider a random overlay network, known to be highly dependable, and propose BUZZPSS, a new Peer Sampling Service that is able to autonomously fine-tune its resource consumption according to the observed system stability. When the system is stable and connectivity is not at risk, BUZZPSS autonomously changes its behavior to save resources. Alongside, it is also able to detect system instability and act accordingly to guarantee that the overlay remains operational. Through an experimental evaluation, we show that BUZZPSS is able to autonomously adapt to the system stability levels, consuming up to 6x fewer resources than a static approach.
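The adaptation loop described above can be sketched as follows (assumed mechanics for illustration, not the actual BUZZPSS algorithm or its parameters): a peer sampling service that lowers its gossip rate while the observed view is stable, and restores an aggressive rate when churn is detected.

```python
# Toy sketch of a stability-adaptive Peer Sampling Service
# (hypothetical class and thresholds; not the BUZZPSS implementation).
class AdaptivePSS:
    def __init__(self, peers, stable_period=10, unstable_period=1, threshold=3):
        self.view = set(peers[:5])           # partial view of the overlay
        self.stable_period = stable_period   # slow gossip interval (rounds)
        self.unstable_period = unstable_period
        self.threshold = threshold           # max view losses still deemed "stable"
        self.period = unstable_period

    def observe_round(self, departed):
        """Called each round with the set of peers observed to have failed."""
        churn = len(self.view & departed)
        self.view -= departed
        if churn > self.threshold:
            self.period = self.unstable_period   # instability: gossip aggressively
        else:
            self.period = self.stable_period     # stable: save resources
        return self.period

pss = AdaptivePSS([f"n{i}" for i in range(20)])
print(pss.observe_round(set()))                       # quiet round -> slow gossip
print(pss.observe_round({"n0", "n1", "n2", "n3"}))    # churn -> fast gossip
```

A static safety-net PSS would pay the aggressive gossip cost in every round; the adaptive version pays it only while instability is actually observed, which is the source of the resource savings reported in the paper.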