Publications

2020

Interconnect bypass fraud detection: a case study

Authors
Veloso, B; Tabassum, S; Martins, C; Espanha, R; Azevedo, R; Gama, J;

Publication
ANNALS OF TELECOMMUNICATIONS

Abstract
The high asymmetry of international termination rates is fertile ground for the appearance of fraud in telecom companies. International calls have higher values when compared with national ones, which attracts the attention of fraudsters. In this paper, we present a solution for a real problem called interconnect bypass fraud, more specifically, a newly identified distributed pattern that crosses different countries and keeps fraudsters from being tracked by almost all fraud detection techniques. This problem is one of the most significant in the telecommunications domain, and it exhibits abnormal behaviours such as bursts of calls from specific numbers. Based on this assumption, we propose the adoption of a new fast forgetting technique that works together with the Lossy Counting algorithm. We apply frequent set mining to capture distributed patterns from different countries. Our goal is to detect, as soon as possible, items with abnormal behaviours, e.g., bursts of calls, repetitions, mirrors, distributed behaviours and a small number of calls spread over a vast set of destination numbers. The results show that the application of different techniques improves the detection ratio and not only complements the techniques used by the telecom company but also improves the performance of the Lossy Counting algorithm in terms of run-time, memory usage and sensitivity in detecting the abnormal behaviours. Additionally, the application of frequent set mining allows us to capture distributed fraud patterns.
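To make the streaming technique concrete, the following is a minimal sketch of the Lossy Counting algorithm extended with an exponential forgetting step at each bucket boundary. It is an illustration of the general idea only: the `alpha` decay knob and the class interface are assumptions, not the paper's exact fast-forgetting rule.

```python
import math

class ForgettingLossyCounter:
    """Lossy Counting over a stream, with an exponential forgetting
    factor applied at every bucket boundary so that stale patterns
    fade quickly (a sketch; alpha is an assumed knob, not the paper's
    exact fast-forgetting rule)."""

    def __init__(self, epsilon=0.01, alpha=0.9):
        self.width = math.ceil(1.0 / epsilon)  # bucket width
        self.alpha = alpha                      # forgetting factor
        self.bucket = 1                         # current bucket id
        self.n = 0                              # items seen so far
        self.counts = {}                        # item -> [count, delta]

    def add(self, item):
        self.n += 1
        if item in self.counts:
            self.counts[item][0] += 1
        else:
            self.counts[item] = [1, self.bucket - 1]
        if self.n % self.width == 0:            # bucket boundary
            for key in list(self.counts):
                c, d = self.counts[key]
                c *= self.alpha                 # decay old evidence
                if c + d <= self.bucket:        # prune infrequent items
                    del self.counts[key]
                else:
                    self.counts[key][0] = c
            self.bucket += 1

    def heavy_hitters(self, support):
        """Items whose (decayed) frequency may exceed support * n;
        with decay the usual Lossy Counting bound is only approximate."""
        eps = 1.0 / self.width
        return {k: c for k, (c, d) in self.counts.items()
                if c >= (support - eps) * self.n}
```

Feeding the counter the origin number of each call keeps only numbers whose recent call volume is high, so a sudden burst from a specific number surfaces quickly while old counts decay away.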

2020

Performance limits of adaptive-optics/high-contrast imagers with pyramid wavefront sensors

Authors
Correia, CM; Fauvarque, O; Bond, CZ; Chambouleyron, V; Sauvage, JF; Fusco, T;

Publication
MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY

Abstract
Advanced adaptive-optics (AO) systems will likely utilize pyramid wavefront sensors (PWFSs) over the traditional Shack-Hartmann sensor in the quest for increased sensitivity, peak performance and ultimate contrast. Here, we explain and quantify the PWFS theoretical limits as a means to highlight its properties and applications. We explore forward models for the PWFS in the spatial-frequency domain: these prove useful because (i) they emanate directly from physical-optics (Fourier) diffraction theory; (ii) they provide a straightforward path to meaningful error breakdowns; (iii) they allow for reconstruction algorithms with O(n log(n)) complexity for large-scale systems; and (iv) they tie in seamlessly with decoupled (distributed) optimal predictive dynamic control for performance and contrast optimization. All these aspects are dealt with here. We focus on recent analytical PWFS developments and demonstrate the performance using both analytic and end-to-end simulations. We anchor our estimates on observed on-sky contrast on existing systems, and then show very good agreement between analytical and Monte Carlo performance estimates on AO systems featuring the PWFS. For a potential upgrade of existing high-contrast imagers on 10-m-class telescopes with visible or near-infrared PWFSs, we show, under median conditions at Paranal, a contrast improvement (limited by chromatic and scintillation effects) of 2×-5× when just replacing the wavefront sensor at large separations close to the AO control radius where aliasing dominates, and of factors in excess of 10× by coupling distributed control with the PWFS over most of the AO control region, from small separations starting with an inner working angle of typically 1-2 λ/D to the AO correction edge (here 20 λ/D).
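For context, the following is a minimal numpy sketch of a spatial-frequency (Fourier-domain) wavefront reconstructor with O(n log n) cost, of the kind the abstract alludes to. It implements the classical least-squares gradient filter, not the paper's PWFS-specific forward model; a PWFS reconstructor would replace the pure-gradient transfer function with the sensor's own.

```python
import numpy as np

def fourier_reconstruct(sx, sy, pixel_size=1.0):
    """Least-squares wavefront reconstruction from x/y slope maps in
    the spatial-frequency domain, O(n log n) via the FFT. A sketch of
    the classic Fourier reconstructor, not the paper's PWFS model."""
    n = sx.shape[0]
    f = np.fft.fftfreq(n, d=pixel_size)
    kx, ky = np.meshgrid(2 * np.pi * f, 2 * np.pi * f, indexing="xy")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                       # avoid division by zero (piston)
    Sx, Sy = np.fft.fft2(sx), np.fft.fft2(sy)
    # slopes are gradients: Sx = i*kx*W, Sy = i*ky*W, solved in least squares
    W = (-1j * kx * Sx - 1j * ky * Sy) / k2
    W[0, 0] = 0.0                        # piston is unsensed; zero it out
    return np.real(np.fft.ifft2(W))
```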

2020

A Survey and Classification of Software-Defined Storage Systems

Authors
Macedo, R; Paulo, J; Pereira, J; Bessani, A;

Publication
ACM COMPUTING SURVEYS

Abstract
The exponential growth of digital information is imposing increasing scale and efficiency demands on modern storage infrastructures. As infrastructure complexity increases, so does the difficulty in ensuring quality of service, maintainability, and resource fairness, raising unprecedented performance, scalability, and programmability challenges. Software-Defined Storage (SDS) addresses these challenges by cleanly disentangling control and data flows, easing management, and improving control functionality of conventional storage systems. Despite its momentum in the research community, many aspects of the paradigm are still unclear, undefined, and unexplored, leading to misunderstandings that hamper the research and development of novel SDS technologies. In this article, we present an in-depth study of SDS systems, providing a thorough description and categorization of each plane of functionality. Further, we propose a taxonomy and classification of existing SDS solutions according to different criteria. Finally, we provide key insights about the paradigm and discuss potential future research directions for the field.

2020

InDubio: A Combinator Library to Disambiguate Ambiguous Grammars

Authors
Macedo, JN; Saraiva, J;

Publication
COMPUTATIONAL SCIENCE AND ITS APPLICATIONS, ICCSA 2020, PART IV

Abstract
Inferring an abstract model from source code is one of the main tasks of most software quality analysis methods. Such an abstract model is called an Abstract Syntax Tree, and the inference task is called parsing. A parser is usually generated from a grammar specification of a (programming) language, and it converts source code of that language into said abstract tree representation. Several techniques then traverse this tree to assess the quality of the code (for example, by computing source code metrics) or build new data structures (e.g., flow graphs) to perform further analyses (such as detecting code clones or dead code). Parsing is a well-established technique. Modern languages, however, are often inherently ambiguous and can only be fully handled by ambiguous grammars. In this setting, disambiguation rules, which are usually included as part of the grammar specification of the ambiguous language, need to be defined. This approach has a severe limitation: disambiguation rules are not first-class citizens. Parser generators offer a small set of rules that cannot be extended or changed. Thus, grammar writers can neither manipulate existing rules nor define a new, specific rule that the language they are considering requires. In this paper we present a tool, named InDubio, that consists of an extensible combinator library of disambiguation filters together with a generalized parser generator for ambiguous grammars. InDubio defines a set of basic disambiguation rules as abstract syntax tree filters that can be combined into more powerful rules. Moreover, the filters are independent of the parser generator and parsing technology and, consequently, can be easily extended and manipulated. This paper presents InDubio in detail and reports our first experimental results.
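To illustrate the core idea of disambiguation filters as first-class, composable values, here is a small Python sketch (InDubio itself is a combinator library in a functional setting). All names and the two example rules are illustrative, not InDubio's actual API.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Node:
    op: str
    children: List["Node"] = field(default_factory=list)

# A disambiguation filter is a first-class value: it maps the set of
# candidate parse trees (the "forest") to the subset it accepts.
Filter = Callable[[List[Node]], List[Node]]

def reject(bad: Callable[[Node], bool]) -> Filter:
    """Lift a 'forbidden subtree' predicate into a filter over forests."""
    def contains(t: Node) -> bool:
        return bad(t) or any(contains(c) for c in t.children)
    return lambda forest: [t for t in forest if not contains(t)]

def compose(*filters: Filter) -> Filter:
    """Combine filters into a more powerful rule (applied left to right)."""
    def run(forest: List[Node]) -> List[Node]:
        for f in filters:
            forest = f(forest)
        return forest
    return run

# Example rules: '*' binds tighter than '+', and '+' is left-associative.
priority = reject(lambda t: t.op == "*" and
                  any(c.op == "+" for c in t.children))
left_assoc = reject(lambda t: t.op == "+" and
                    len(t.children) == 2 and t.children[1].op == "+")

disambiguate = compose(priority, left_assoc)
```

Because filters are plain values, a grammar writer can define a new rule for a quirk of their language and compose it with the built-in ones, which is exactly what fixed rule sets in parser generators disallow.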

2020

Determining the prevalence of palliative needs and exploring screening accuracy of depression and anxiety items of the integrated palliative care outcome scale - a multi-centre study

Authors
Antunes, B; Rodrigues, PP; Higginson, IJ; Ferreira, PL;

Publication
BMC PALLIATIVE CARE

Abstract
Background: Patients with palliative needs often experience a high symptom burden, which causes suffering to themselves and their families. Depression and psychological distress should not be considered a "normal event" in advanced disease patients and should be screened for, diagnosed, acted on and followed up. Psychological distress has been associated with greater physical symptom severity, suffering, and mortality in cancer patients. A holistic but short measure should be used for the assessment of physical and non-physical needs. The Integrated Palliative care Outcome Scale is one such measure. This work aims to determine the palliative needs of patients and explore the screening accuracy of two items pertaining to psychological needs. Methods: Multi-centred observational study using convenience sampling. Data were collected in 9 Portuguese centres. Inclusion criteria: >= 18 years, mentally fit to give consent, diagnosed with an incurable, potentially life-threatening illness. Exclusion criteria: patient in distress ("unable to converse for a period of time"), cognitively impaired. Descriptive statistics were used for demographics; Receiver Operating Characteristic curves and the Area Under the Curve were used to assess the discriminant properties of the anxiety and depression items against the Hospital Anxiety and Depression Scale. Results: 1703 individuals were screened between July 1st, 2015 and February 2016. A total of 135 (7.9%) were included. The main reason for exclusion was being healthy (75.2%). The primary care centre screened the most individuals, as it has the highest rate of daily patients, the majority of whom are healthy. Mean age was 66.8 years (SD 12.7); 58 (43%) were female. Most patients, 109 (80.7%), had a cancer diagnosis. Items scoring highest (= 4) were: family or friends anxious or worried (36.3%); feeling anxious or worried about illness (13.3%); feeling depressed (9.6%). Using a cut-off score of 2/3, the Area Under the Curve for the depression and anxiety items was above 70%. Conclusions: The main palliative needs were psychological, family related and spiritual. This suggests that clinical teams may manage physical issues better and that there is room for improvement regarding non-physical needs. Using the Integrated Palliative care Outcome Scale systematically could aid clinical teams in screening patients for distressing needs and tracking their progress in assisting patients and families with those issues.
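As a concrete illustration of the screening-accuracy analysis, the sketch below treats an IPOS item score (0-4) as a classifier score against a binary HADS-based reference standard and computes the ROC curve, the AUC, and sensitivity/specificity at the 2/3 cut-off. The data arrays and column meanings are placeholders, not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Illustrative data only: IPOS depression item scores (0-4) and a
# binary HADS-based reference standard (1 = probable case).
ipos_item = np.array([0, 1, 3, 4, 2, 0, 3, 4, 1, 2])
hads_case = np.array([0, 0, 1, 1, 0, 0, 1, 1, 0, 1])

auc = roc_auc_score(hads_case, ipos_item)
fpr, tpr, thresholds = roc_curve(hads_case, ipos_item)

# Sensitivity/specificity at the 2/3 cut-off used in the abstract:
# an item score >= 3 counts as screen-positive.
pred = ipos_item >= 3
sens = (pred & (hads_case == 1)).sum() / (hads_case == 1).sum()
spec = (~pred & (hads_case == 0)).sum() / (hads_case == 0).sum()
print(f"AUC={auc:.2f}  sensitivity={sens:.2f}  specificity={spec:.2f}")
```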

2020

Cross-Sensor Quality Assurance for Marine Observatories

Authors
Diamant, R; Shachar, I; Makovsky, Y; Ferreira, BM; Cruz, NA;

Publication
REMOTE SENSING

Abstract
Measuring and forecasting changes in coastal and deep-water ecosystems and climates requires sustained long-term measurements from marine observation systems. One of the key considerations in analyzing data from marine observatories is quality assurance (QA). The data acquired by these infrastructures accumulates into giga- and terabytes per year, necessitating accurate automatic identification of false samples. A particular challenge in the QA of oceanographic datasets is avoiding the disqualification of data samples that, while appearing to be outliers, actually represent real short-term phenomena of importance. In this paper, we present a novel cross-sensor QA approach that validates the disqualification decision for a data sample from an examined dataset by comparing it to samples from related datasets. This group of related datasets is chosen so as to reflect the same oceanographic phenomena and thereby enable some prediction of the examined dataset. In our approach, a disqualification is validated if the detected anomaly is present only in the examined dataset, but not in its related datasets. Results for a surface water temperature dataset recorded by the Texas A&M-Haifa Eastern Mediterranean Marine Observatory (THEMO) over a period of 7 months show an improved trade-off between accurate and false disqualification rates when compared to two standard benchmark schemes.
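The validation rule described above can be sketched as follows: an outlier in the examined dataset is disqualified only when none of its related datasets shows an anomaly in the same time window, since a co-occurring anomaly points to a real short-term event. The robust z-score detector and the window size below are illustrative assumptions, not THEMO's actual pipeline.

```python
import numpy as np

def zscore_outliers(x, k=4.0):
    """Flag samples more than k robust standard deviations from the
    median (median absolute deviation scaled to a normal sigma)."""
    med = np.median(x)
    mad = np.median(np.abs(x - med)) + 1e-12
    return np.abs(x - med) / (1.4826 * mad) > k

def cross_sensor_disqualify(examined, related, window=5, k=4.0):
    """Disqualify an outlier in `examined` only when no related dataset
    shows an anomaly within +/- `window` samples of it: a co-occurring
    anomaly suggests a real event rather than a faulty sample."""
    flags = zscore_outliers(examined, k)
    related_flags = [zscore_outliers(r, k) for r in related]
    disqualified = np.zeros_like(flags)
    for i in np.flatnonzero(flags):
        lo, hi = max(0, i - window), i + window + 1
        supported = any(rf[lo:hi].any() for rf in related_flags)
        disqualified[i] = not supported
    return disqualified
```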
