
Publications by CTM

2022

An intelligent energy-efficient approach for managing IoE tasks in cloud platforms

Authors
Javadpour, A; Nafei, AH; Ja’fari, F; Pinto, P; Zhang, W; Sangaiah, AK;

Publication
Journal of Ambient Intelligence and Humanized Computing

Abstract
Today, cloud platforms for the Internet of Everything (IoE) are facilitating organizational and industrial growth, and have different requirements based on their different purposes. Usual task scheduling algorithms for distributed environments such as groups of clusters, networks, and clouds focus only on the shortest execution time, regardless of power consumption. Network energy can be optimized if tasks are properly scheduled to run on virtual machines, thus achieving green computing. In this research, Dynamic Voltage and Frequency Scaling (DVFS) is used in two different ways to select a suitable candidate for scheduling the tasks with the help of an Artificial Intelligence (AI) approach. First, the GIoTDVFS_SFB method, based on sorting processor elements in the cloud, is considered to handle the task scheduling problem in the cloud system. Alternatively, the GIoTDVFS_mGA micro-genetic method is used to select suitable candidates. The proposed mGA and SFB methods are compared with an SLA-based method suggested for cloud environments, and it is shown that the makespan and gain in the 512 and 1024 benchmarks are optimized in the proposed method. In addition, the Energy Consumption (EC) of Real Physical Machines (RPMs) against the number of tasks has been compared with that of the PAFogIoTDVFS and EnergyAwareDVFS methods in this area. © 2022, The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature.
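The DVFS trade-off underlying this abstract can be sketched with a toy model: dynamic power grows roughly with the cube of the clock frequency (P ∝ CV²f, with V roughly proportional to f), while execution time shrinks as 1/f, so the lowest frequency that still meets a task's deadline minimizes energy. This is an illustrative model only, not the paper's GIoTDVFS formulation; all names and constants below are hypothetical.

```python
# Illustrative DVFS energy/time trade-off for a single task.
# Energy ~ power(f) * time, with power(f) modeled as f**3 (unit constant dropped).

def pick_frequency(cycles, freqs, deadline):
    """Return the (frequency, energy) pair with the lowest energy that still
    meets the deadline, or None if no available frequency level is fast enough.

    cycles:   task length in CPU cycles
    freqs:    available DVFS frequency levels (Hz)
    deadline: latest allowed finish time (s)
    """
    best = None
    for f in freqs:
        t = cycles / f            # execution time at frequency f
        energy = (f ** 3) * t     # toy dynamic-energy model
        if t <= deadline and (best is None or energy < best[1]):
            best = (f, energy)
    return best

# Slower frequencies save energy whenever the deadline allows them.
choice = pick_frequency(cycles=2e9, freqs=[1e9, 2e9, 3e9], deadline=2.5)
```

Under this model the scheduler's job reduces to picking, per task, the slowest frequency (and hence lowest-energy virtual machine configuration) compatible with its timing constraint.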

2022

Mapping and embedding infrastructure resource management in software defined networks

Authors
Javadpour, A; Ja'fari, F; Pinto, P; Zhang, WZ;

Publication
CLUSTER COMPUTING-THE JOURNAL OF NETWORKS SOFTWARE TOOLS AND APPLICATIONS

Abstract
Software-Defined Networking (SDN) is one of the promising and effective approaches to establishing network virtualization by providing a central controller to monitor network bandwidth and transmission devices. This paper studies resource allocation in SDN by mapping virtual networks onto the infrastructure network. Considering mapping as a way to distribute tasks through the network, proper mapping methodologies directly influence the efficiency of infrastructure resource management. Our proposed method, called Effective Initial Mapping in SDN (EIMSDN), adds a module to the OpenFlow controller that initializes the mapping upon the arrival of a new request, provided a sufficient number of resources are available. This prevents rewriting the rules on the switches when remapping becomes necessary within an n-time window, and optimizes resource allocation in network virtualization through dynamic infrastructure resource management. EIMSDN is compared with SDN-nR, SSPSM, and SDN-VN on criteria such as acceptance rate, cost, average switch resource utilization, and average link resource utilization.
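The core admission idea described above can be sketched as a greedy first-fit embedding: a virtual-network request is accepted, and its nodes mapped onto substrate nodes, only if the substrate currently has enough spare capacity, so no switch rules need rewriting later. This is a simplified illustration, not the EIMSDN module itself; the data structures and first-fit policy are assumptions.

```python
# Illustrative first-fit initial mapping of a virtual-network request.

def try_initial_mapping(request_cpu, substrate_cpu):
    """Map each virtual node's CPU demand onto a substrate node (first fit).

    request_cpu:   list of CPU demands, one per virtual node
    substrate_cpu: dict {substrate_node: free CPU}; mutated only on success
    Returns a {virtual_node_index: substrate_node} mapping, or None to reject.
    """
    free = dict(substrate_cpu)        # work on a copy so rejection has no effect
    mapping = {}
    for i, demand in enumerate(request_cpu):
        node = next((n for n, c in free.items() if c >= demand), None)
        if node is None:
            return None               # insufficient resources: reject the request
        free[node] -= demand
        mapping[i] = node
    substrate_cpu.update(free)        # commit only once the whole request fits
    return mapping
```

Committing the allocation only when the entire request fits mirrors the rationale in the abstract: an all-or-nothing initial mapping avoids partial embeddings that would later force remapping.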

2022

A hybrid heuristics artificial intelligence feature selection for intrusion detection classifiers in cloud of things

Authors
Sangaiah, AK; Javadpour, A; Ja'fari, F; Pinto, P; Zhang, WZ; Balasubramanian, S;

Publication
CLUSTER COMPUTING-THE JOURNAL OF NETWORKS SOFTWARE TOOLS AND APPLICATIONS

Abstract
Cloud computing environments provide users with Internet-based services, and one of their main challenges is security. Hence, using Intrusion Detection Systems (IDSs) as a defensive strategy in such environments is essential. Multiple parameters are used to evaluate IDSs, the most important of which is the feature selection method used for classifying malicious and legitimate activities. This research aims to determine an effective feature selection method to increase the accuracy of classifiers in detecting intrusions. A Hybrid Ant-Bee Colony Optimization (HABCO) method is proposed to convert the feature selection problem into an optimization problem. We examined the accuracy of HABCO against BHSVM, IDSML, DLIDS, HCRNNIDS, SVMTHIDS, ANNIDS, and GAPSAIDS, and show that HABCO achieves higher accuracy than these methods.
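Casting feature selection as an optimization problem, as the abstract describes, amounts to scoring candidate feature subsets by the accuracy a classifier reaches using only those features. The sketch below shows such a fitness function over a bitmask representation; the colony search itself is omitted, and the toy data and classifier are purely illustrative.

```python
# Illustrative fitness function for feature-subset optimization.

def subset_fitness(mask, rows, labels, classify):
    """Score a feature subset: accuracy of `classify` on the masked rows.

    mask:     0/1 list selecting which feature columns to keep
    rows:     iterable of feature tuples
    labels:   ground-truth class per row
    classify: callable taking the kept features, returning a predicted class
    """
    selected = [[v for v, keep in zip(row, mask) if keep] for row in rows]
    correct = sum(1 for feats, y in zip(selected, labels) if classify(feats) == y)
    return correct / len(labels)

# Toy example: keeping only the second feature makes a trivial sign-based
# classifier perfectly accurate on this data.
rows = [(-1, 5), (2, -3), (3, 7), (-4, -8)]
labels = [1, 0, 1, 0]
acc = subset_fitness([0, 1], rows, labels, lambda f: 1 if f[0] > 0 else 0)
```

A metaheuristic such as HABCO would then search the space of masks for the one maximizing this fitness, possibly penalized by subset size.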

2022

GSAGA: A hybrid algorithm for task scheduling in cloud infrastructure

Authors
Pirozmand, P; Javadpour, A; Nazarian, H; Pinto, P; Mirkamali, S; Ja'fari, F;

Publication
JOURNAL OF SUPERCOMPUTING

Abstract
Cloud computing is becoming a very popular form of distributed computing, in which digital resources are shared via the Internet. The user is provided with an overview of many available resources. Cloud providers want to get the most out of their resources, and users are inclined to pay less for better performance. Task scheduling is one of the most important aspects of cloud computing. In order to achieve high performance from cloud computing systems, tasks need to be scheduled for processing by appropriate computing resources. The large search space makes this an NP-hard problem, so randomized search methods are required to solve it. Several algorithms have been proposed to address it. This paper presents a hybrid algorithm called GSAGA to solve the Task Scheduling Problem (TSP) in cloud computing. Although the Genetic Algorithm (GA) has a high ability to search the problem space, it performs poorly in terms of stability and local search. A stable algorithm can therefore be created by combining the global search capabilities of the GA with the Gravitational Search Algorithm (GSA). Our experimental results indicate that the proposed algorithm solves the problem more efficiently than the state of the art.
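The objective such a hybrid scheduler evaluates can be sketched as a makespan fitness function: a candidate solution assigns each task to a virtual machine, and its cost is the finish time of the most loaded VM. Only the fitness function is shown; the GA/GSA search itself is omitted, and the encoding is an assumption.

```python
# Illustrative makespan objective for cloud task scheduling.

def makespan(assignment, task_len, vm_speed):
    """Return the makespan of a candidate schedule.

    assignment: assignment[i] = index of the VM that runs task i
    task_len:   task lengths (e.g. million instructions)
    vm_speed:   VM processing speeds (same units per second)
    """
    finish = [0.0] * len(vm_speed)
    for task, vm in enumerate(assignment):
        finish[vm] += task_len[task] / vm_speed[vm]
    return max(finish)          # schedule cost = heaviest VM's total run time

# Two VMs, three tasks: spreading load across VMs lowers the makespan.
balanced = makespan([0, 1, 0], task_len=[4, 6, 2], vm_speed=[2, 3])
```

A GA/GSA hybrid would evolve a population of such assignment vectors, using this makespan (possibly combined with an energy or cost term) as the fitness to minimize.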

2022

DMAIDPS: a distributed multi-agent intrusion detection and prevention system for cloud IoT environments

Authors
Javadpour, A; Pinto, P; Ja'fari, F; Zhang, WZ;

Publication
CLUSTER COMPUTING-THE JOURNAL OF NETWORKS SOFTWARE TOOLS AND APPLICATIONS

Abstract
Cloud Internet of Things (CIoT) environments, as the essential basis for computing services, have been subject to abuses and cyber threats. Adversaries constantly search for vulnerable areas in such computing environments to inflict damage and create complex challenges. Hence, using intrusion detection and prevention systems (IDPSs) is almost mandatory for securing CIoT environments. However, the existing IDPSs in this area suffer from some limitations, such as the inability to detect unknown attacks and having a single point of failure. In this paper, we propose a novel distributed multi-agent IDPS (DMAIDPS) that overcomes these limitations. The learning agents in DMAIDPS perform a six-step detection process to classify the network behavior as normal or under attack. We have tested the proposed DMAIDPS with the KDD Cup 99 and NSL-KDD datasets. The experimental results have been compared with other methods in the field based on Recall, Accuracy, and F-Score metrics. The proposed system has improved the Recall, Accuracy, and F-Score metrics by an average of 16.81%, 16.05%, and 18.12%, respectively.
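The three evaluation metrics reported above are standard functions of a detector's confusion matrix. As a reference, the sketch below computes them from the true/false positive and negative counts (the example numbers are invented for illustration, not results from the paper).

```python
# Standard IDS evaluation metrics from a confusion matrix.

def ids_metrics(tp, fp, tn, fn):
    """Return (recall, accuracy, f_score) for the given confusion counts.

    tp: attacks correctly flagged     fp: normal traffic wrongly flagged
    tn: normal traffic passed         fn: attacks missed
    """
    recall = tp / (tp + fn)                       # fraction of attacks caught
    accuracy = (tp + tn) / (tp + fp + tn + fn)    # overall correctness
    precision = tp / (tp + fp)                    # flagged traffic that was attack
    f_score = 2 * precision * recall / (precision + recall)
    return recall, accuracy, f_score

# Hypothetical detector output over 200 flows.
r, a, f = ids_metrics(tp=80, fp=10, tn=90, fn=20)
```

Recall is usually the metric intrusion detection emphasizes, since a missed attack (false negative) is typically costlier than a false alarm.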

2022

Machine Learning Based Propagation Loss Module for Enabling Digital Twins of Wireless Networks in ns-3

Authors
Almeida, EN; Rushad, M; Kota, SR; Nambiar, A; Harti, HL; Gupta, C; Waseem, D; Santos, G; Fontes, H; Campos, R; Tahiliani, MP;

Publication
PROCEEDING OF THE 2022 WORKSHOP ON NS-3, WNS3 2022

Abstract
The creation of digital twins of experimental testbeds allows the validation of novel wireless networking solutions and the evaluation of their performance in realistic conditions, without the cost, complexity and limited availability of experimental testbeds. Current trace-based simulation approaches for ns-3 enable the repetition and reproduction of the same exact conditions observed in past experiments. However, they are limited by the fact that the simulation setup must exactly match the original experimental setup, including the network topology, the mobility patterns and the number of network nodes. In this paper, we propose the Machine Learning based Propagation Loss (MLPL) module for ns-3. Based on network traces collected in an experimental testbed, the MLPL module estimates the propagation loss as the sum of a deterministic path loss and a stochastic fast-fading loss. The MLPL module is validated with unit tests. Moreover, we test the MLPL module with real network traces, and compare the results obtained with existing propagation loss models in ns-3 and real experimental results. The results obtained show that the MLPL module can accurately predict the propagation loss observed in a real environment and reproduce the experimental conditions of a given testbed, enabling the creation of digital twins of wireless network environments in ns-3.
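The decomposition the abstract describes, total propagation loss as a deterministic path-loss term plus a stochastic fast-fading term, can be sketched with stand-in models: a log-distance path-loss fit and Gaussian fading. Both stand-ins, and all parameter values, are illustrative assumptions; MLPL learns its deterministic and stochastic components from testbed traces rather than using these closed forms.

```python
# Illustrative loss = deterministic path loss + stochastic fast fading.
import math
import random

def propagation_loss_db(distance_m, rng, pl0_db=40.0, exponent=3.0, sigma_db=4.0):
    """Total loss (dB) at `distance_m`: log-distance path loss plus a
    zero-mean Gaussian fading sample drawn from `rng`."""
    path_loss = pl0_db + 10 * exponent * math.log10(distance_m)  # deterministic term
    fast_fading = rng.gauss(0.0, sigma_db)   # stand-in for the learned distribution
    return path_loss + fast_fading

rng = random.Random(42)                      # seeded for reproducible simulation runs
loss = propagation_loss_db(100.0, rng)
```

Separating the two terms is what makes trace-driven digital twins practical: the deterministic part generalizes to new distances, while the stochastic part reproduces the variability observed in the original testbed.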
