
Publications by Luís Filipe Antunes

2013

Entropy and compression: two measures of complexity

Authors
Henriques, T; Goncalves, H; Antunes, L; Matias, M; Bernardes, J; Costa Santos, C;

Publication
JOURNAL OF EVALUATION IN CLINICAL PRACTICE

Abstract
Rationale, aims and objectives: Traditional complexity measures are used to capture the amount of structured information present in a certain phenomenon. Several approaches developed to facilitate the characterization of complexity have been described in the related literature. Fetal heart rate (FHR) monitoring has been used and improved over the last decades. The relevance of these studies lies in the attempt to predict fetal outcome, but complexity measures are not yet established in clinical practice. In this study, we focused on two conceptually different measures: Shannon entropy, a probabilistic approach, and Kolmogorov complexity, an algorithmic approach. The main aim of the current investigation was to show that approximating Kolmogorov complexity through different compressors, although applied to a lesser extent, may be as useful as approximating Shannon entropy through different entropy estimators, which has been successfully applied in several scientific areas.

Methods: To illustrate the applicability of both approaches, two entropy measures, approximate and sample entropy, and two compressors, paq8l and bzip2, were considered. These indices were applied to FHR tracings from a dataset composed of 48 delivered fetuses with umbilical artery blood (UAB) pH in the normal range (pH ≥ 7.20), 10 mildly acidemic fetuses and 10 moderately-to-severely acidemic fetuses. The complexity indices were computed on the initial and final segments of the last hour of labour, considering 5- and 10-minute segments.

Results: In our sample set, both entropies and compressors successfully distinguished fetuses at risk of hypoxia from healthy ones. Fetuses with lower UAB pH presented significantly lower entropy and compression indices, more markedly in the final segments.

Conclusions: The combination of these conceptually different measures appears to improve the characterization of different pathophysiological states, reinforcing the theory that entropies and compressors measure different complexity features. In view of these findings, we recommend a combination of the two approaches.
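To illustrate the two approaches compared in this paper, the sketch below computes a sample-entropy estimate and a bzip2-based compression ratio for a numeric series. It is a minimal Python sketch assuming NumPy and the standard-library bz2 module; the parameter choices (m = 2, r = 0.2·SD) and the synthetic trace are illustrative only and are not taken from the study, which also used the paq8l compressor on real FHR tracings.

```python
import bz2
import numpy as np

def sample_entropy(series, m=2, r_factor=0.2):
    """Sample entropy SampEn(m, r) with r = r_factor * std(series)."""
    x = np.asarray(series, dtype=float)
    r = r_factor * x.std()
    n = len(x)

    def matches(length):
        # Overlapping templates of the given length.
        t = np.array([x[i:i + length] for i in range(n - length)])
        total = 0
        for i in range(len(t) - 1):
            # Chebyshev distance to all later templates (self-matches excluded).
            total += np.sum(np.max(np.abs(t[i + 1:] - t[i]), axis=1) <= r)
        return total

    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")

def compression_index(series):
    """Rough Kolmogorov-complexity proxy: bzip2-compressed size over raw size."""
    raw = np.asarray(series, dtype=np.float32).tobytes()
    return len(bz2.compress(raw)) / len(raw)

# Illustrative use on a synthetic trace (beats per minute); not the study's data.
rng = np.random.default_rng(0)
trace = 140 + 5 * np.sin(np.linspace(0, 20, 600)) + rng.normal(0, 2, 600)
print(sample_entropy(trace), compression_index(trace))
```

Lower values of either index indicate a more regular, more compressible trace, which is the direction the study associates with lower UAB pH.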

2017

Entropy and Compression Capture Different Complexity Features: The Case of Fetal Heart Rate

Authors
Monteiro Santos, J; Goncalves, H; Bernardes, J; Antunes, L; Nozari, M; Costa Santos, C;

Publication
ENTROPY

Abstract
Entropy and compression have been used to distinguish fetuses at risk of hypoxia from their healthy counterparts through the analysis of the Fetal Heart Rate (FHR). The low correlation observed between these two approaches suggests that they capture different complexity features. This study aims at characterizing the complexity of FHR features captured by entropy and compression, using international guidelines as reference. Single- and multi-scale approaches were considered in the computation of entropy and compression. The following physiology-based features were considered: FHR baseline; percentages of abnormal long-term (%abLTV) and short-term (%abSTV) variability; average short-term variability; and number of accelerations and decelerations. All of the features were computed on a set of 68 intrapartum FHR tracings, divided into normal, mildly acidemic, and moderately-to-severely acidemic newborns. The correlation between the entropy/compression features and the physiology-based features was assessed. Compression measures were correlated with accelerations and decelerations, but neither accelerations nor decelerations were significantly correlated with entropies. The %abSTV was significantly correlated with entropies (correlations ranging between 0.54 and 0.62), and to a higher extent with compression (ranging between 0.80 and 0.94). Distinction between groups was clearer in the lower scales using entropy and in the higher scales using compression. Entropy and compression are complementary complexity measures.
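The multi-scale analyses mentioned above start by coarse-graining the series at successive scales and then applying the single-scale indices to each coarse-grained series. Below is a minimal sketch of that first step, assuming NumPy; the helpers sample_entropy and compression_index refer to the hypothetical functions sketched earlier, and the scale range is illustrative.

```python
import numpy as np

def coarse_grain(series, scale):
    """Average non-overlapping windows of `scale` points (multiscale first step)."""
    x = np.asarray(series, dtype=float)
    n = (len(x) // scale) * scale
    return x[:n].reshape(-1, scale).mean(axis=1)

# Hypothetical single- and multi-scale complexity profile of a series `trace`:
# profile = {s: (sample_entropy(coarse_grain(trace, s)),
#                compression_index(coarse_grain(trace, s))) for s in range(1, 6)}
```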

2017

Sophistication vs Logical Depth

Authors
Antunes, L; Bauwens, B; Souto, A; Teixeira, A;

Publication
THEORY OF COMPUTING SYSTEMS

Abstract
Sophistication and logical depth are two measures that express how complicated the structure in a string is. Sophistication is defined as the minimal complexity of a computable function that defines a two-part description for the string that is shortest within some precision; logical depth is defined as the minimal computation time of a program that is shortest within some precision. We show that the Busy Beaver function of the sophistication of a string exceeds its logical depth with logarithmically larger precision, and that logical depth exceeds the Busy Beaver function of sophistication with logarithmically larger precision. We also show that sophistication is unstable in its precision: constant variations can change its value by a term linear in the length of the string.
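For reference, here is a hedged sketch of standard formulations of the two notions; the notation and exact precision conventions are illustrative and not taken from the paper. They are stated relative to a universal machine U with prefix complexity K, for sophistication with precision c and logical depth with precision b of a string x.

```latex
% Hedged sketch of standard definitions; notation is illustrative, not the paper's.
\[
  \operatorname{soph}_{c}(x) \;=\; \min\bigl\{\, K(f) \;:\; f \text{ total computable},\;
      f(d) = x \text{ for some } d,\; K(f) + |d| \le K(x) + c \,\bigr\}
\]
\[
  \operatorname{depth}_{b}(x) \;=\; \min\bigl\{\, t(p) \;:\; U(p) = x \text{ in } t(p) \text{ steps},\;
      |p| \le K(x) + b \,\bigr\}
\]
```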

2016

Distinguishing Two Probability Ensembles with One Sample from each Ensemble

Authors
Antunes, L; Buhrman, H; Matos, A; Souto, A; Teixeira, A;

Publication
THEORY OF COMPUTING SYSTEMS

Abstract
We introduce a new method for distinguishing two probability ensembles, called the one-from-each method, in which the distinguisher receives as input two samples, one from each ensemble. We compare this new method with the multi-sample-from-the-same method already existing in the literature and prove that there are ensembles distinguishable by the new method but indistinguishable by the multi-sample-from-the-same method. To evaluate the power of the proposed method, we also show that if non-uniform distinguishers (probabilistic circuits) are used, the one-from-each method is no more powerful than the classical one, in the sense that it does not distinguish more probability ensembles. Moreover, we show that there are classes of ensembles such that any two members of the class are easily distinguishable (a definition introduced in this paper) using one sample from each ensemble, while there are pairs of ensembles in the same class that are indistinguishable by the multi-sample-from-the-same method.
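For context, the classical ("multi-sample from the same") notion against which the new method is compared requires a distinguisher D and a polynomial p such that the acceptance probabilities on the two ensembles differ noticeably for infinitely many n. The formalization below is a hedged sketch and may differ in detail from the paper's definitions.

```latex
% Classical distinguishability of ensembles {X_n} and {Y_n} (hedged sketch):
\[
  \Bigl|\; \Pr_{x \leftarrow X_n}\bigl[D(x) = 1\bigr]
  \;-\; \Pr_{y \leftarrow Y_n}\bigl[D(y) = 1\bigr] \;\Bigr|
  \;\ge\; \frac{1}{p(n)} \quad \text{for infinitely many } n.
\]
% In the one-from-each setting, by contrast, D receives a pair (x, y) with x drawn
% from X_n and y drawn from Y_n, and must decide which component came from which ensemble.
```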

2013

Proposal of a Secure Electronic Prescription System

Authors
Rodrigues, HAM; Antunes, L; Correia, ME;

Publication
INTERNATIONAL CONFERENCE ON INFORMATION SOCIETY (I-SOCIETY 2013)

Abstract
Since 2011, prescribing through an electronic system has been mandatory in Portugal. Several third-party companies started to develop prescribing software/interfaces that act as gateways to transmit prescription data from practitioners to the Health Ministry. The involvement of these companies in the circuit weakens the security of the prescription system and compromises the confidentiality and privacy of doctors' and patients' personal data. Aim: The main aim is to propose a more secure prescribing system that allows prescriptions to be authenticated and protects patient data, keeping patients' identities confidential. Results: By addressing several system flaws, this proposal greatly increases the security of the prescription system, protects patient data, and prevents its collection by third-party companies. In addition, the physical model of the electronic prescription appears to meet all the security and applicability requirements needed to function during a communication network outage.
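As a generic illustration of the two properties the proposal targets, prescription authenticity and patient-identity confidentiality, the sketch below signs the prescription content and encrypts the patient identifier. It assumes the third-party `cryptography` package; it is not the architecture described in the paper, and the key handling, field names, and identifiers are placeholders.

```python
# pip install cryptography  -- generic sign-and-encrypt sketch, not the paper's system.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Prescriber's signing key pair (in a real deployment this would live on a
# professional smart card or similar token, not in process memory).
signing_key = Ed25519PrivateKey.generate()
verify_key = signing_key.public_key()

# Symmetric key protecting the patient-identifying fields.
fernet = Fernet(Fernet.generate_key())

prescription = b"drug=amoxicillin;dose=500mg;schedule=tid;duration=7d"  # placeholder fields
patient_id = b"patient-identifier-placeholder"

signature = signing_key.sign(prescription)   # authenticates the prescription content
protected_id = fernet.encrypt(patient_id)    # keeps the patient's identity confidential

# A relying party (e.g. the dispensing pharmacy) can check authenticity without
# access to the raw identifier; verify() raises InvalidSignature on tampering.
verify_key.verify(signature, prescription)
```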

2017

On the rate of decrease in logical depth

Authors
Antunes, LF; Souto, A; Vitanyi, PMB;

Publication
THEORETICAL COMPUTER SCIENCE

Abstract
The logical depth with significance b of a string x is the shortest running time of a program for x that can be compressed by at most b bits. Another definition is based on algorithmic probability. We give a simple new proof of the known relation between the two definitions. We also prove the following: given a string, we can consider the maximal decrease in logical depth when the significance parameter increases by 1. There exists a sequence of strings of lengths n = 1, 2, ..., such that this maximal decrease, as a function of n, rises faster than any computable function but not as fast as the Busy Beaver function. This also holds for the computation times of the shortest programs of these strings.
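A hedged sketch of the two formulations the abstract refers to is given below; the notation is illustrative and not taken from the paper. Here Q_t(x) denotes the algorithmic probability of x restricted to programs halting within t steps, and Q(x) the unrestricted algorithmic probability.

```latex
% Hedged sketch; notation is illustrative, not the paper's.
\[
  \operatorname{depth}_{b}(x) \;=\; \min\bigl\{\, t(p) \;:\; U(p) = x,\; |p| - K(p) \le b \,\bigr\}
\]
\[
  \operatorname{depth}^{Q}_{b}(x) \;=\; \min\Bigl\{\, t \;:\; \tfrac{Q_{t}(x)}{Q(x)} \ge 2^{-b} \Bigr\}
\]
```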
