Publications

2019

Skin temperature of the foot: comparing transthyretin Familial Amyloid Polyneuropathy and Diabetic Foot patients

Authors
Seixas, A; Vilas Boas, MD; Carvalho, R; Coelho, T; Ammer, K; Vilas Boas, JP; Mendes, J; Cunha, JPS; Vardasca, R;

Publication
COMPUTER METHODS IN BIOMECHANICS AND BIOMEDICAL ENGINEERING-IMAGING AND VISUALIZATION

Abstract
Skin temperature regulation depends on autonomic nervous system function, which may be impaired in patients with neuropathy. Literature reporting thermographic assessment of patients with an established diagnosis of Diabetic Foot (DF) is scarce, and such information is completely absent for patients suffering from Transthyretin Familial Amyloid Polyneuropathy (TTR-FAP). The aim of this study is to compare skin temperature distribution in patients with DF and TTR-FAP. Thermograms of the dorsal and plantar surfaces of twelve neuropathic patients, six with DF and six with TTR-FAP, were assessed and compared. Skin temperature was significantly higher in the DF group in both regions of interest. Thermal symmetry values were high and similar in both groups. The bias between the right and left foot was smaller, with narrower limits of agreement, in TTR-FAP patients, suggesting lower agreement between right- and left-foot temperatures in DF patients.
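The left-right comparison described above (bias and limits of agreement between paired foot temperatures) is a standard Bland-Altman analysis. The sketch below illustrates the computation on made-up ROI mean temperatures; the values are purely illustrative and not taken from the study.

```python
import statistics

def bland_altman(left, right):
    """Bland-Altman bias and 95% limits of agreement between paired
    left- and right-foot ROI mean temperatures (degrees Celsius)."""
    diffs = [l - r for l, r in zip(left, right)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    # 95% limits of agreement: bias +/- 1.96 * SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Illustrative (not study) data: plantar ROI means for six patients
left = [30.1, 29.8, 31.0, 30.5, 29.9, 30.7]
right = [30.0, 29.9, 31.2, 30.4, 29.8, 30.9]
bias, (lo, hi) = bland_altman(left, right)
```

A small bias with narrow limits, as reported for the TTR-FAP group, indicates that the two feet track each other closely.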

2019

YouTube Timed Metadata Enrichment Using a Collaborative Approach

Authors
Pinto, JP; Viana, P;

Publication
MULTIMEDIA AND NETWORK INFORMATION SYSTEMS

Abstract
Although video content in online platforms has been growing for some time, searching and browsing these assets is still very inefficient, as rich contextual data describing the content is still not available. Furthermore, any available descriptions are usually not linked to timed moments of the content. In this paper, we present an approach for making social web videos available on YouTube more accessible, searchable and navigable. By using crowdsourcing to collect the metadata, our proposal can help enhance content uploaded to the YouTube platform. Metadata, collected through a collaborative annotation game, is added to the content as time-based information in the form of descriptions and captions using the YouTube API. This contributes to enriching video content and enables navigation through temporal links.
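Timed captions of the kind described here are commonly exchanged as WebVTT files, which caption-upload endpoints accept. The sketch below, a simplification not drawn from the paper, serialises hypothetical crowdsourced annotations (start, end, text) into WebVTT cues.

```python
def to_webvtt(annotations):
    """Serialise crowdsourced timed annotations into a WebVTT caption
    file suitable for upload alongside a video.

    `annotations` is a list of (start_seconds, end_seconds, text) tuples.
    """
    def ts(sec):
        # WebVTT timestamp: HH:MM:SS.mmm
        h, rem = divmod(int(sec), 3600)
        m, s = divmod(rem, 60)
        ms = int(round((sec - int(sec)) * 1000))
        return f"{h:02d}:{m:02d}:{s:02d}.{ms:03d}"

    cues = ["WEBVTT", ""]
    for start, end, text in sorted(annotations):  # cues in temporal order
        cues.append(f"{ts(start)} --> {ts(end)}")
        cues.append(text)
        cues.append("")
    return "\n".join(cues)

vtt = to_webvtt([(12.5, 15.0, "Goal by the home team"),
                 (3.0, 6.25, "Kick-off")])
```

Each cue's timestamp pair is what makes the annotation navigable as a temporal link into the video.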

2019

Conceptual framework for blockchain-based metering systems

Authors
Zanghi, E; Do Coutto, MB; de Souza, JCS;

Publication
MULTIAGENT AND GRID SYSTEMS

Abstract
The smart grid environment requires the enhancement of various computational tools, especially for routine tasks of data acquisition and system monitoring. This paper presents the building blocks of a conceptual framework to be used as the basis for novel distributed remote metering systems built on cutting-edge blockchain technology. The proposed methodology is suitable for processing the large volumes of data involved in monitoring modern electric power distribution grids. As a proof of concept, a collaborative metering system based on blockchain is conceived, primarily capable of: dealing with the entirety of the collected data (conveniently stored and filtered); assuring data integrity by means of cryptography; and optimizing the implementation and operation costs of the telecommunication services involved. Simulation results concerning the reliability and performance of the designed distributed remote metering system are presented.
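The integrity guarantee mentioned above comes from chaining cryptographic hashes: each block commits to the previous block's hash, so altering any stored reading invalidates everything after it. A minimal toy sketch (not the paper's system) of a hash-chained metering ledger:

```python
import hashlib
import json

def make_block(reading, prev_hash):
    """One block of a toy hash-chained metering ledger: the reading
    plus the previous block's hash, sealed with SHA-256."""
    payload = json.dumps({"reading": reading, "prev": prev_hash},
                         sort_keys=True).encode()
    return {"reading": reading, "prev": prev_hash,
            "hash": hashlib.sha256(payload).hexdigest()}

def verify(chain):
    """Recompute every hash; any tampered reading breaks the chain."""
    prev = "0" * 64  # genesis sentinel
    for block in chain:
        expected = make_block(block["reading"], prev)["hash"]
        if block["prev"] != prev or block["hash"] != expected:
            return False
        prev = block["hash"]
    return True

chain = []
prev = "0" * 64
for kwh in [1.2, 1.5, 0.9]:  # illustrative hourly consumption readings
    block = make_block(kwh, prev)
    chain.append(block)
    prev = block["hash"]
```

A production system would add consensus among metering nodes and signed transactions; the chaining alone only detects tampering, it does not prevent it.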

2019

Breaking MPC implementations through compression

Authors
Resende, JS; Sousa, PR; Martins, R; Antunes, L;

Publication
INTERNATIONAL JOURNAL OF INFORMATION SECURITY

Abstract
There are many cryptographic protocols in the literature that are scientifically and mathematically sound. By extension, cryptography today seeks to respond to numerous properties of the communication process beyond confidentiality (secrecy), such as integrity, authenticity, and anonymity. In addition to the theoretical evidence, implementations must be equally secure. Due to the ever-increasing intrusion from governments and other groups, citizens are now seeking alternative ways of communication that do not leak information. In this paper, we analyze multiparty computation (MPC), a sub-field of cryptography whose goal is to create methods for parties to jointly compute a function over their inputs while keeping those inputs private. This is a very useful method that can be used, for example, to carry out computations on anonymous data without having to leak that data. Thus, given the importance of confidentiality in this type of technique, we analyze active and passive attacks using complexity measures (compression and entropy). We start by obtaining network traces and syscalls, then analyze them using compression and entropy techniques. Finally, we cluster the traces and syscalls using standard clustering techniques. This approach does not require deep, implementation-specific knowledge of the frameworks being analyzed. This paper presents a security analysis of four MPC frameworks, three of which were identified as insecure. These insecure libraries leak information about the inputs provided by each party in the communication. Additionally, through a careful analysis of its source code, we detected that SPDZ-2's secret sharing schema always produces the same results.
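The intuition behind the compression-and-entropy measures used here is that a trace whose bytes depend on secret inputs in a structured way compresses well and has low entropy, while a properly randomized trace looks incompressible. A small sketch with synthetic byte strings (illustrative only, not the paper's captured traces):

```python
import hashlib
import math
import zlib
from collections import Counter

def compression_ratio(data: bytes) -> float:
    """Compressed size over original size; structured, leaky traces
    compress much further than random-looking ones."""
    return len(zlib.compress(data)) / len(data)

def shannon_entropy(data: bytes) -> float:
    """Empirical entropy in bits per byte (max 8.0 for uniform bytes)."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Illustrative traces: a repetitive, input-dependent trace versus a
# pseudorandom one standing in for a well-randomized protocol run.
leaky = b"share=42;" * 200
opaque = b"".join(hashlib.sha256(bytes([i])).digest() for i in range(60))

features = [(compression_ratio(t), shannon_entropy(t))
            for t in (leaky, opaque)]
```

Feeding such (ratio, entropy) pairs to an off-the-shelf clustering algorithm separates suspicious traces from opaque ones without needing any knowledge of the framework internals, which is the black-box property the abstract highlights.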

2019

Simplifying the Algorithm Selection Using Reduction of Rankings of Classification Algorithms

Authors
Abdulrahman, SM; Brazdil, P; Zainon, WMNW; Adamu, A;

Publication
2019 8TH INTERNATIONAL CONFERENCE ON SOFTWARE AND COMPUTER APPLICATIONS (ICSCA 2019)

Abstract
The average ranking method (AR) is one of the simplest and most effective algorithm selection methods. This method uses metadata in the form of test results of a given set of algorithms on a given set of datasets and calculates an average rank for each algorithm. The ranks are used to construct the average ranking. In this paper we investigate how the ranking can be reduced by removing non-competitive and redundant algorithms, thereby reducing the number of tests a user needs to conduct on a new dataset to identify the most suitable algorithm. The proposed method involves two phases. In the first, the aim is to identify the most competitive algorithms for each dataset used in the past. This is done with recourse to a statistical test. The second phase involves a covering method whose aim is to reduce the ranking by eliminating redundant variants. The proposed method differs from an earlier proposal in several respects; one important difference is that it takes both accuracy and runtime into consideration. The proposed method was compared to a baseline strategy consisting of executing all algorithms in the ranking. It is shown that the proposed method leads to much better performance than the baseline.
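The core AR computation described above (rank algorithms per dataset, then average the ranks) can be sketched in a few lines. The metadata below is hypothetical and much smaller than real AR metadata; ties are ignored for simplicity.

```python
from statistics import mean

# Hypothetical metadata: error rates of four algorithms on three
# datasets (lower is better). Names and values are made up.
results = {
    "d1": {"rf": 0.10, "svm": 0.12, "knn": 0.20, "nb": 0.25},
    "d2": {"rf": 0.15, "svm": 0.11, "knn": 0.22, "nb": 0.30},
    "d3": {"rf": 0.09, "svm": 0.14, "knn": 0.18, "nb": 0.21},
}

def average_ranking(results):
    """Rank algorithms on each dataset, average the ranks, and return
    the algorithms ordered by average rank (best first)."""
    ranks = {algo: [] for algo in next(iter(results.values()))}
    for scores in results.values():
        ordered = sorted(scores, key=scores.get)  # lowest error first
        for pos, algo in enumerate(ordered, start=1):
            ranks[algo].append(pos)
    avg = {algo: mean(r) for algo, r in ranks.items()}
    return sorted(avg, key=avg.get)

ranking = average_ranking(results)
```

The paper's contribution then prunes this ranking: a statistical test drops non-competitive algorithms per dataset, and a covering step removes redundant variants, so a user tests fewer algorithms on a new dataset.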

2019

Generating Synthetic Missing Data: A Review by Missing Mechanism

Authors
Santos, MS; Pereira, RC; Costa, AF; Soares, JP; Santos, J; Abreu, PH;

Publication
IEEE ACCESS

Abstract
The performance evaluation of imputation algorithms often involves the generation of missing values. Missing values can be inserted in only one feature (univariate configuration) or in several features (multivariate configuration) at different percentages (missing rates) and according to distinct missing mechanisms, namely, missing completely at random, missing at random, and missing not at random. Since the missing data generation process defines the basis for the imputation experiments (configuration, missing rate, and missing mechanism), it is essential that it is appropriately applied; otherwise, conclusions derived from ill-defined setups may be invalid. The goal of this paper is to review the different approaches to synthetic missing data generation found in the literature and discuss their practical details, elaborating on their strengths and weaknesses. Our analysis revealed that creating missing at random and missing not at random scenarios in datasets comprising qualitative features is the most challenging issue in the related work and, therefore, should be the focus of future work in the field.
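Two of the mechanisms surveyed here can be made concrete with a short univariate amputation sketch: under MCAR every value is equally likely to go missing, while under MAR missingness in the target feature depends only on another, observed feature. The helper names and data are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))  # two synthetic numeric features

def amputate_mcar(X, col, rate, rng):
    """MCAR: every value of feature `col` is equally likely to be removed."""
    Xm = X.copy()
    mask = rng.random(len(X)) < rate
    Xm[mask, col] = np.nan
    return Xm

def amputate_mar(X, target, driver, rate):
    """MAR: missingness in `target` depends on the *observed* feature
    `driver` -- here, the rows with the highest driver values."""
    Xm = X.copy()
    k = int(rate * len(X))
    idx = np.argsort(X[:, driver])[-k:]  # top-k rows by the driver feature
    Xm[idx, target] = np.nan
    return Xm

X_mcar = amputate_mcar(X, col=0, rate=0.2, rng=rng)
X_mar = amputate_mar(X, target=0, driver=1, rate=0.2)
```

MNAR, where missingness depends on the unobserved values themselves, is deliberately omitted: as the review notes, generating it credibly (especially for qualitative features) is the hard open problem.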
