2015
Authors
Teixeira, AAC; Guimaraes, L;
Publication
JOURNAL OF AFRICAN BUSINESS
Abstract
The relationship between FDI and corruption/institutional quality in host countries has been widely analyzed. However, the use of distinct samples and indicators for corruption tends to hinder the interpretation and comparison of econometric assessments. The aims of this paper are to assess the extent to which the use of distinct proxies for corruption provides diverse evidence regarding the relationship between corruption and FDI, and to assess whether controlling for other indicators of institutional quality reinforces the effect of corruption indicators on FDI inflows. To accomplish these goals, we estimate a set of multivariate logistic models for 96 countries over the period 2000 to 2010. The results show that using distinct proxies for corruption, as well as controlling for other types of institutional quality, generates distinct outcomes. In isolation, a country's transparency and its citizens' corruption perceptions fail to impact FDI, whereas a bribe-free environment is conducive to FDI inflows. When we control for the human, social and economic development of the countries, the impact of a transparent and bribe-free context on FDI attraction is enhanced. Overall, it is clear that in order to become a large recipient of FDI, a country has to guarantee a transparent and bribe-free environment, characterized by low income taxes, high literacy rates and generalized economic freedom (citizens' control over their own labor and property).
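To make the estimation strategy concrete, the following is a minimal sketch of the kind of multivariate logistic specification the abstract describes. The variable names, the synthetic data, and the statsmodels-based formulation are illustrative assumptions only, not the authors' dataset or exact model.

```python
# Sketch: a multivariate logistic model relating a binary "large FDI recipient"
# indicator to corruption proxies and institutional-quality controls.
# All regressors and the synthetic data are hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 96 * 11  # 96 countries x 11 years (2000-2010), used here only for shape

# Hypothetical regressors: corruption proxies plus development controls.
X = np.column_stack([
    rng.normal(size=n),  # transparency / corruption-perception proxy
    rng.normal(size=n),  # bribery-incidence proxy
    rng.normal(size=n),  # literacy rate (human development control)
    rng.normal(size=n),  # income tax burden (economic control)
    rng.normal(size=n),  # economic freedom index
])
X = sm.add_constant(X)

# Hypothetical outcome: 1 if the country is a large FDI recipient in that year.
y = (X @ np.array([0.1, 0.0, -0.8, 0.5, -0.4, 0.6]) + rng.logistic(size=n)) > 0

model = sm.Logit(y.astype(int), X).fit(disp=False)
print(model.summary())  # coefficient signs indicate each proxy's association with FDI
```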
2015
Authors
Rodrigues, V; Akesson, B; Florido, M; de Sousa, SM; Pedroso, JP; Vasconcelos, P;
Publication
SCIENCE OF COMPUTER PROGRAMMING
Abstract
This article presents a semantics-based program verification framework for critical embedded real-time systems using the worst-case execution time (WCET) as the safety parameter. The verification algorithm is designed to run on devices with limited computational resources, where efficient resource usage is a requirement. For this purpose, the framework of abstraction-carrying code (ACC) is extended with an additional verification mechanism for linear programming (LP), applying the certifying properties of duality theory to check the optimality of WCET estimates. Further, the WCET verification approach preserves feasibility and scalability when applied to multicore architectural models. The certifying WCET algorithm targets architectural models based on the ARM instruction set and is presented as a particular instantiation of a compositional data-flow framework supported by the theoretical foundations of denotational semantics and abstract interpretation. The data-flow framework has algebraic properties that provide algorithmic transformations to increase verification efficiency, mainly in terms of verification time. The WCET analysis/verification on multicore architectures applies the formalism of latency-rate (LR) servers, and proves its correctness in the context of abstract interpretation, in order to ease WCET estimation of programs sharing resources.
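The duality-based check mentioned above can be illustrated with a small, self-contained sketch: a primal solution together with a dual certificate is accepted only if both are feasible and their objective values coincide. The toy LP, tolerances, and function name below are illustrative assumptions, not an actual IPET/WCET instance from the paper.

```python
# Sketch of an LP duality certificate check:
#   primal  max c^T x  s.t.  A x <= b, x >= 0
#   dual    min b^T y  s.t.  A^T y >= c, y >= 0
# Matching objective values of feasible points certify optimality.
import numpy as np

def certify_optimal(A, b, c, x, y, tol=1e-9):
    """Return True iff x (primal) and y (dual) jointly certify optimality."""
    primal_feasible = np.all(A @ x <= b + tol) and np.all(x >= -tol)
    dual_feasible = np.all(A.T @ y >= c - tol) and np.all(y >= -tol)
    gap = abs(c @ x - b @ y)  # duality gap; zero means provably optimal
    return primal_feasible and dual_feasible and gap <= tol

# Toy instance: max 3*x1 + 2*x2  s.t.  x1 + x2 <= 4, x1 <= 2, x >= 0.
A = np.array([[1.0, 1.0], [1.0, 0.0]])
b = np.array([4.0, 2.0])
c = np.array([3.0, 2.0])
x_opt = np.array([2.0, 2.0])  # candidate solution from an untrusted solver
y_opt = np.array([2.0, 1.0])  # dual certificate shipped alongside it
print(certify_optimal(A, b, c, x_opt, y_opt))  # True: the estimate is optimal
```

The appeal of this scheme on resource-constrained devices is that checking the certificate only needs a few matrix-vector products, rather than re-solving the LP.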
2015
Authors
Velikova, M; Dutra, I; Burnside, ES;
Publication
Foundations of Biomedical Knowledge Representation - Methods and Applications
Abstract
The development and use of computerized decision-support systems in the domain of breast cancer has the potential to facilitate the early detection of disease as well as spare healthy women unnecessary interventions. Despite encouraging trends, there is much room for improvement in the capabilities of such systems to further alleviate the burden of breast cancer. One of the main challenges that current systems face is integrating and translating multi-scale variables like patient risk factors and imaging features into complex management recommendations that would supplement and/or generalize similar activities currently provided by subspecialty-trained clinicians. In this chapter, we discuss the main types of knowledge (object-attribute, spatial, temporal and hierarchical) present in the domain of breast image analysis and their formal representation using two popular techniques from artificial intelligence: Bayesian networks and first-order logic. In particular, we demonstrate (i) the explicit representation of uncertain relationships between low-level image features and high-level image findings (e.g., mass, microcalcifications) by probability distributions in Bayesian networks, and (ii) the expressive power of logic to represent, in a general way, the dynamic number of objects in the domain. Through concrete examples with patient data we show the practical application of both formalisms and their potential for use in decision-support systems.
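The Bayesian-network side of this idea can be sketched in a few lines: uncertain links between low-level image features and a high-level finding are encoded as conditional probability tables and queried by enumeration. The network structure and every probability below are illustrative assumptions, not values from the chapter.

```python
# Sketch: a tiny Bayesian network relating image features to a finding.
# Prior over the finding (hypothetical values).
p_finding = {"benign": 0.9, "malignant": 0.1}

# CPTs P(feature | finding), again purely illustrative.
p_shape = {   # mass shape
    "benign":    {"oval": 0.8, "irregular": 0.2},
    "malignant": {"oval": 0.2, "irregular": 0.8},
}
p_margin = {  # mass margin
    "benign":    {"circumscribed": 0.85, "spiculated": 0.15},
    "malignant": {"circumscribed": 0.25, "spiculated": 0.75},
}

def posterior(shape, margin):
    """P(finding | shape, margin), assuming features independent given the finding."""
    joint = {f: p_finding[f] * p_shape[f][shape] * p_margin[f][margin]
             for f in p_finding}
    z = sum(joint.values())
    return {f: v / z for f, v in joint.items()}

print(posterior("irregular", "spiculated"))  # probability mass shifts toward "malignant"
```

The first-order-logic part of the chapter addresses what this fixed table cannot: a patient may have any number of masses or calcification clusters, which logic variables and quantifiers represent naturally.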
2015
Authors
Borel, C; Ferreira, PG; Santoni, F; Delaneau, O; Fort, A; Popadin, KY; Garieri, M; Falconnet, E; Ribaux, P; Guipponi, M; Padioleau, I; Carninci, P; Dermitzakis, ET; Antonarakis, SE;
Publication
American Journal of Human Genetics
Abstract
The study of gene expression in mammalian single cells via genomic technologies now provides the possibility to investigate the patterns of allelic gene expression. We used single-cell RNA sequencing to detect the allele-specific mRNA level in 203 single human primary fibroblasts over 133,633 unique heterozygous single-nucleotide variants (hetSNVs). We observed that, at the snapshot of analysis, each cell contained mostly transcripts from one allele for the majority of genes; indeed, 76.4% of the hetSNVs displayed stochastic monoallelic expression in single cells. Remarkably, adjacent hetSNVs exhibited a haplotype-consistent allelic ratio; in contrast, distant sites located in two different genes were independent of the haplotype structure. Moreover, the allele-specific expression in single cells correlated with the abundance of the cellular transcript. We observed that genes expressing both alleles in the majority of the single cells at a given time point were rare and enriched for highly expressed genes. The relative abundance of each allele in a cell appears to be controlled by regulatory mechanisms, given that we observed related single-cell allelic profiles according to genes. Overall, these results have direct implications for cellular phenotypic variability. © 2015 The American Society of Human Genetics.
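The core per-cell measurement described here is an allelic ratio computed from allele-specific read counts at each hetSNV. The sketch below shows one plausible way to classify such a site as monoallelic or biallelic in a single cell; the counts, the read-depth cutoff and the 98% threshold are illustrative assumptions, not the study's actual pipeline.

```python
# Sketch: classify allele-specific expression of one hetSNV in single cells.
def allelic_call(ref_reads, alt_reads, min_reads=8, mono_frac=0.98):
    """Call a hetSNV in a single cell from allele-specific read counts."""
    total = ref_reads + alt_reads
    if total < min_reads:
        return "no_call"                    # too few reads to decide
    ref_ratio = ref_reads / total           # allelic ratio
    if ref_ratio >= mono_frac:
        return "monoallelic_ref"
    if ref_ratio <= 1 - mono_frac:
        return "monoallelic_alt"
    return "biallelic"

# Hypothetical (ref, alt) counts for one hetSNV across a few single cells.
cells = [(42, 0), (0, 17), (25, 23), (3, 1)]
calls = [allelic_call(r, a) for r, a in cells]
print(calls)
mono = sum(c.startswith("monoallelic") for c in calls)
called = sum(c != "no_call" for c in calls)
print(f"monoallelic fraction among called cells: {mono / called:.2f}")
```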
2015
Authors
Oliveira, J; Boaventura Cunha, J; Oliveira, PM; Freire, HF;
Publication
CONTROLO'2014 - PROCEEDINGS OF THE 11TH PORTUGUESE CONFERENCE ON AUTOMATIC CONTROL
Abstract
This work presents a new approach to tuning the parameters of the discontinuous component of the Sliding Mode Generalized Predictive Controller (SMGPC) subject to constraints. The strategy employs Particle Swarm Optimization (PSO) to minimize a second aggregated cost function. The continuous component is obtained by the standard procedure, using Sequential Quadratic Programming (SQP), thus yielding a dual optimization scheme. Simulations and performance indexes for a non-minimum phase linear model show better performance, with improved robustness and tracking accuracy.
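The PSO step of this dual scheme can be sketched generically as minimizing an aggregated cost over a small parameter vector. The quadratic cost below is a stand-in for the SMGPC aggregated cost, and the bounds and hyperparameters are illustrative assumptions.

```python
# Sketch: particle swarm optimization tuning two gains of the discontinuous
# component by minimizing an aggregated cost (placeholder function here).
import numpy as np

def aggregated_cost(params):
    """Hypothetical stand-in for the second aggregated cost function."""
    k_s, phi = params
    return (k_s - 1.5) ** 2 + (phi - 0.3) ** 2  # minimum at (1.5, 0.3)

def pso(cost, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))  # particle positions
    v = np.zeros_like(x)                                  # velocities
    pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[pbest_cost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        costs = np.array([cost(p) for p in x])
        better = costs < pbest_cost
        pbest[better], pbest_cost[better] = x[better], costs[better]
        gbest = pbest[pbest_cost.argmin()].copy()
    return gbest, pbest_cost.min()

best, best_cost = pso(aggregated_cost, bounds=[(0.0, 5.0), (0.0, 1.0)])
print(best, best_cost)  # converges toward (1.5, 0.3)
```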
2015
Authors
Nobre, R; Martins, LGA; Cardoso, JMP;
Publication
Proceedings of the 18th International Workshop on Software and Compilers for Embedded Systems, SCOPES 2015
Abstract
This paper presents a new approach to efficiently searching for suitable compiler pass sequences, a challenge known as phase ordering. Our approach relies on information about the relative positions of compiler passes in pass sequences previously generated for a set of functions when compiling for a specific processor. We enhanced two iterative compiler pass exploration schemes, one relying on simple sequential compiler pass insertion and the other implementing an auto-tuned simulated annealing process, with a data structure that holds information about the relative positions of passes in compiler sequences. This reduces the set of compiler passes considered for insertion at a given position of a candidate pass sequence to only the passes that have a higher probability of performing well at that relative position, speeding up exploration as a result. We tested our approach with two different compilers and two different targets: the ReflectC and LLVM compilers, targeting a MicroBlaze processor and a LEON3 processor, respectively. The experimental results show that we can reduce the number of algorithm iterations by up to more than an order of magnitude when targeting the MicroBlaze or the LEON3, while finding compiler sequences that result in binaries that, when executed on the target processor/simulator, outperform (i.e., use fewer CPU cycles than) all the standard optimization levels (i.e., we compare against the best-performing optimization level flag on each kernel, e.g., -O1, -O2 or -O3 in the case of LLVM) by a geometric mean performance improvement of 1.23x and 1.20x when targeting the MicroBlaze processor, and 1.94x and 2.65x when targeting the LEON3 processor, for each of the two exploration algorithms and two kernel sets considered. © 2015 ACM.
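The position-aware filtering idea can be sketched as follows: from previously generated good sequences, record how often each pass appears in each relative-position bucket, then only propose passes frequently seen near the position currently being filled. The pass names, bucket count and threshold below are illustrative assumptions, not the paper's configuration.

```python
# Sketch: a relative-position table that filters candidate passes per position.
from collections import defaultdict

N_BUCKETS = 4  # split each sequence into quarters (relative positions)

def learn_position_table(good_sequences):
    """Count pass occurrences per relative-position bucket."""
    table = defaultdict(lambda: defaultdict(int))
    for seq in good_sequences:
        for i, p in enumerate(seq):
            bucket = min(N_BUCKETS - 1, i * N_BUCKETS // len(seq))
            table[bucket][p] += 1
    return table

def candidate_passes(table, position, length, all_passes, min_count=2):
    """Passes considered for insertion at `position` of a sequence of `length`."""
    bucket = min(N_BUCKETS - 1, position * N_BUCKETS // length)
    good = {p for p, n in table[bucket].items() if n >= min_count}
    return [p for p in all_passes if p in good] or list(all_passes)  # fall back to all

# Hypothetical training data: pass sequences that performed well previously.
history = [
    ["mem2reg", "instcombine", "licm", "gvn"],
    ["mem2reg", "sroa", "licm", "simplifycfg"],
    ["mem2reg", "instcombine", "gvn", "simplifycfg"],
]
table = learn_position_table(history)
print(candidate_passes(table, 0, 4, ["mem2reg", "licm", "gvn", "sroa"]))
# -> ['mem2reg']: only passes commonly seen at the start are proposed first
```

Both exploration schemes (sequential insertion and simulated annealing) would consult such a table before each insertion, which is what shrinks the search space and hence the iteration count.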