2020
Authors
Veloso, B; Martins, C; Espanha, R; Azevedo, R; Gama, J;
Publication
PROCEEDINGS OF THE 35TH ANNUAL ACM SYMPOSIUM ON APPLIED COMPUTING (SAC'20)
Abstract
The high asymmetry of international termination rates, where calls are charged at higher values, is fertile ground for the appearance of fraud in telecom companies. In this paper, we present three different and complementary solutions for a real problem called Interconnect Bypass Fraud. This problem is one of the most common in the telecommunications domain and can be detected through the occurrence of abnormal behaviours from specific numbers. Our goal is to detect, as soon as possible, numbers with abnormal behaviours, e.g. bursts of calls, repetitions and mirror behaviours. Based on this assumption, we propose: (i) the adoption of a new fast forgetting technique that works together with the Lossy Counting algorithm; (ii) a single-pass hierarchical heavy hitters algorithm that also contains a forgetting technique; and (iii) the application of HyperLogLog sketches to each phone number. We use the heavy hitters to detect abnormal behaviours, e.g. bursts of calls, repetitions and mirrors. The hierarchical heavy hitters algorithm is used to detect numbers that place calls to a huge set of destinations, and destination numbers that receive a huge set of calls intended to provoke a denial of service. Additionally, to estimate the cardinality of destination numbers for each origin number, we use the HyperLogLog algorithm. The results show that these three approaches combined complement the techniques used by the telecom company and make the fraudsters' task more difficult.
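To make the first ingredient concrete, here is a minimal sketch of the Lossy Counting algorithm extended with a forgetting mechanism, in the spirit of point (i) above. The exponential decay factor `fade` is an illustrative assumption, not the paper's exact forgetting technique:

```python
from math import floor

class LossyCounter:
    """Approximate heavy-hitter counting (Lossy Counting), extended with a
    simple exponential forgetting factor so stale numbers fade out quickly.
    The decay scheme is an assumption for illustration only."""

    def __init__(self, epsilon=0.01, fade=1.0):
        self.epsilon = epsilon            # error bound; bucket width is 1/epsilon
        self.fade = fade                  # per-item decay on all counts (assumed)
        self.width = int(1 / epsilon)
        self.n = 0                        # items seen so far
        self.counts = {}                  # item -> (count, max undercount delta)

    def add(self, item):
        self.n += 1
        if self.fade < 1.0:               # forget: decay every tracked count
            self.counts = {k: (c * self.fade, d)
                           for k, (c, d) in self.counts.items()}
        bucket = floor(self.n / self.width)
        if item in self.counts:
            c, d = self.counts[item]
            self.counts[item] = (c + 1, d)
        else:
            self.counts[item] = (1, bucket - 1)
        if self.n % self.width == 0:      # prune at bucket boundaries
            self.counts = {k: (c, d) for k, (c, d) in self.counts.items()
                           if c + d > bucket}

    def heavy_hitters(self, support=0.05):
        """Items whose estimated frequency exceeds (support - epsilon) * n."""
        threshold = (support - self.epsilon) * self.n
        return {k for k, (c, d) in self.counts.items() if c >= threshold}
```

Feeding each call's origin number into `add` lets a burst of calls from one number surface via `heavy_hitters`, which is the kind of abnormal behaviour the abstract targets.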
2020
Authors
Nogueira, AR; Gama, J; Ferreira, CA;
Publication
ADVANCES IN INTELLIGENT DATA ANALYSIS XVIII, IDA 2020
Abstract
Feature engineering is commonly applied in classification problems as a means to increase the performance of classification algorithms. Many methods already exist for constructing features based on combinations of attributes but, to the best of our knowledge, none of these methods takes into account a particular characteristic found in many problems: causality. In many observational data sets, causal relationships can be found between the variables, meaning that it is possible to extract those relations from the data and use them to create new features. The main goal of this paper is to propose a framework for the creation of new, presumably causal, probabilistic features that encode the inferred causal relationships between the target and the other variables. With these features, an improvement in performance was achieved when they were used with the Random Forest algorithm.
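A highly simplified sketch of the probabilistic-feature idea: for each attribute, add a feature encoding the empirical conditional probability P(target = 1 | attribute value). Note this treats every attribute as a candidate cause; the paper's framework first infers a causal graph and only encodes the inferred relationships, which is not reproduced here:

```python
def probabilistic_features(rows, target):
    """Augment each row with p_<attr> = empirical P(target=1 | attr value).
    A simplified stand-in for causal probabilistic features: no causal
    discovery step is performed here."""
    attrs = [k for k in rows[0] if k != target]
    tables = {}
    for a in attrs:
        totals, positives = {}, {}
        for r in rows:
            v = r[a]
            totals[v] = totals.get(v, 0) + 1
            positives[v] = positives.get(v, 0) + (1 if r[target] == 1 else 0)
        tables[a] = {v: positives[v] / totals[v] for v in totals}
    out = []
    for r in rows:
        new = dict(r)
        for a in attrs:
            new[f"p_{a}"] = tables[a][r[a]]
        out.append(new)
    return out
```

The augmented rows can then be fed to any downstream classifier, e.g. a Random Forest as in the paper.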
2019
Authors
Li, G; Gama, J; Yang, J;
Publication
Data Sci. Eng.
Abstract
2020
Authors
Cancela, B; Bolón Canedo, V; Alonso Betanzos, A; Gama, J;
Publication
KNOWLEDGE-BASED SYSTEMS
Abstract
Classic feature selection techniques remove irrelevant or redundant features to obtain a subset of relevant features, yielding compact models that are easier to interpret and thus improving knowledge extraction. Most such techniques operate on the whole dataset but are unable to provide the user with useful information when only instance-level information is required; in other words, classic feature selection algorithms do not identify the most relevant information in a sample. We have developed a novel feature selection method, called saliency-based feature selection (SFS), based on deep-learning saliency techniques. Our algorithm works with any architecture trained by gradient descent techniques (neural networks, SVMs, ...) and can be used for classification or regression problems. Experimental results show that our algorithm is robust, as it allows the feature ranking to be transferred between different architectures, achieving remarkable results. The versatility of our algorithm has also been demonstrated, as it works both in big data environments and with small datasets.
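The core saliency idea can be sketched with the simplest gradient-trained model, logistic regression: rank features by the mean absolute gradient of the model output with respect to each input. This is only a minimal illustration of input-gradient saliency, not the SFS method itself, and the weights are assumed to be already trained:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def saliency_ranking(X, w, b=0.0):
    """Rank features by mean |d(output)/d(x_j)| under a logistic model with
    fixed (assumed pre-trained) weights w. For a logistic unit,
    d(output)/d(x_j) = sigmoid'(logit) * w_j."""
    n_feat = len(w)
    sal = [0.0] * n_feat
    for x in X:
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
        g = p * (1 - p)                 # derivative of sigmoid at the logit
        for j in range(n_feat):
            sal[j] += abs(g * w[j])
    sal = [s / len(X) for s in sal]
    return sorted(range(n_feat), key=lambda j: -sal[j])  # most salient first
```

Averaging per-sample saliencies gives a dataset-level ranking; keeping the per-sample values instead gives the instance-level view the abstract emphasizes.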
2020
Authors
Saadallah, A; Moreira Matias, L; Sousa, R; Khiari, J; Jenelius, E; Gama, J;
Publication
IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING
Abstract
Massive data broadcast by GPS-equipped vehicles provide unprecedented opportunities. One of the main tasks for optimizing our transportation networks is to build data-driven, real-time decision support systems. However, the dynamic environments where the networks operate disallow the traditional assumptions required to put many off-the-shelf supervised learning algorithms into practice, such as finite training sets or stationary distributions. In this paper, we propose BRIGHT: a drift-aware supervised learning framework to predict demand quantities. BRIGHT aims to provide accurate predictions for short-term horizons through a creative ensemble of time series analysis methods that handles distinct types of concept drift. By selecting neighborhoods dynamically, BRIGHT reduces the likelihood of overfitting. By ensuring diversity among the base learners, BRIGHT achieves a high reduction of variance while keeping bias stable. Experiments were conducted using three large-scale heterogeneous real-world transportation networks in Porto (Portugal), Shanghai (China), and Stockholm (Sweden), as well as controlled experiments using synthetic data where multiple distinct drifts were artificially induced. The obtained results illustrate the advantages of BRIGHT in relation to state-of-the-art methods for this task.
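One way a drift-aware ensemble can downweight degrading base learners is to weight each forecaster by the inverse of its error over a recent sliding window. The sketch below illustrates only that weighting idea; the window size and the inverse-error scheme are assumptions, not BRIGHT's actual design:

```python
from collections import deque

class DriftAwareEnsemble:
    """Weighted ensemble of forecasters where weights are inversely
    proportional to each model's recent mean squared error, so models that
    degrade under concept drift are downweighted quickly. Illustrative
    sketch only; not the BRIGHT framework itself."""

    def __init__(self, models, window=10):
        self.models = models
        self.errors = [deque(maxlen=window) for _ in models]  # recent SEs

    def predict(self, x):
        preds = [m(x) for m in self.models]
        weights = []
        for errs in self.errors:
            mse = sum(errs) / len(errs) if errs else 1.0  # uniform at start
            weights.append(1.0 / (mse + 1e-9))
        total = sum(weights)
        combined = sum(w * p for w, p in zip(weights, preds)) / total
        return combined, preds

    def update(self, preds, truth):
        """Record each base model's squared error once the truth arrives."""
        for errs, p in zip(self.errors, preds):
            errs.append((p - truth) ** 2)
```

Because the error window is short, a model that was accurate before a drift but inaccurate after it loses its weight within a few observations.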
2019
Authors
Conceição, A; Gama, J;
Publication
DISCOVERY SCIENCE (DS 2019)
Abstract
Email marketing is one of the most important traffic sources in digital marketing. It yields a high return on investment for the company and offers a cheap and fast way to reach existing or potential clients. Getting the recipients to open the email is the first step for a successful campaign, so it is important to understand how marketers can improve the open rate of a marketing campaign. In this work, we analyze the main factors driving the open rate of financial email marketing campaigns. For that purpose, we develop a classification algorithm that can accurately predict whether a campaign will be labeled as Successful or Failure. A campaign is classified as Successful if it has an open rate higher than the average; otherwise it is labeled as Failure. To achieve this, we employed and evaluated three different classifiers. Our results showed that it is possible to predict the performance of a campaign with approximately 82% accuracy, using the Random Forest algorithm and the redundant filter selection technique. With this model, marketers have the chance to correct, sooner, potential problems in a campaign that could strongly impact its revenue. Additionally, a text analysis of the subject line and preheader was performed to discover which keywords and keyword combinations trigger a higher open rate. The results obtained were then validated in a real setting through A/B testing.
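The target definition used here is simple enough to state directly in code. The sketch below mirrors the labeling rule from the abstract (open rate above the campaign average means Successful); the function name is ours:

```python
def label_campaigns(open_rates):
    """Label each campaign Successful if its open rate is strictly above the
    average open rate across campaigns, otherwise Failure, as in the paper's
    target definition."""
    avg = sum(open_rates) / len(open_rates)
    return ["Successful" if r > avg else "Failure" for r in open_rates]
```

These labels would then serve as the binary target for the classifiers compared in the study.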