
About

Coordinator Professor at the Computer Engineering Department, ESTG-Leiria (Polytechnic of Leiria), and researcher at CRACS.

Holds a PhD in Computer Science from Universidade do Porto; an MSc in Informatics, systems and networks branch, also from Universidade do Porto; and a degree in Computer Engineering from Instituto Superior de Engenharia do Porto (Polytechnic of Porto).

Coordinates the MSc programme in Cybersecurity and Digital Forensics at the Polytechnic of Leiria and is responsible for courses on networking, systems administration, cloud technology, network security, and datacenter infrastructures.

Main research areas include immune-inspired algorithms for automatic anomaly detection, ensemble-based algorithms for classification and anomaly detection, and learning on dynamic systems over time.

Previously, he was also an ICT project manager and systems administrator in industry.

Details

  • Name

    Mário João Antunes
  • Role

    Senior Researcher
  • Since

    1st January 2009
Publications

2024

Uncovering Manipulated Files Using Mathematical Natural Laws

Authors
Fernandes, P; Ciardhuáin, SO; Antunes, M;

Publication
PROGRESS IN PATTERN RECOGNITION, IMAGE ANALYSIS, COMPUTER VISION, AND APPLICATIONS, CIARP 2023, PT I

Abstract
The data exchange between different sectors of society has led to the development of electronic documents supported by different reading formats, namely portable PDF format. These documents have characteristics similar to those used in programming languages, allowing the incorporation of potentially malicious code, which makes them a vector for cyberattacks. Thus, detecting anomalies in digital documents, such as PDF files, has become crucial in several domains, such as finance, digital forensic analysis and law enforcement. Currently, detection methods are mostly based on machine learning and are characterised by being complex, slow and mainly inefficient in detecting zero-day attacks. This paper aims to propose a Benford Law (BL) based model to uncover manipulated PDF documents by analysing potential anomalies in the first digit extracted from the PDF document's characteristics. The proposed model was evaluated using the CIC Evasive PDFMAL-2022 dataset, consisting of 1191 documents (278 benign and 918 malicious). To classify the PDF documents, based on BL, into malicious or benign documents, three statistical models were used in conjunction with the mean absolute deviation: the parametric Pearson and the non-parametric Spearman and Cramer-Von Mises models. The results show a maximum F1 score of 87.63% in detecting malicious documents using Pearson's model, demonstrating the suitability and effectiveness of applying Benford's Law in detecting anomalies in digital documents to maintain the accuracy and integrity of information and promoting trust in systems and institutions.
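The first-digit test described above can be sketched in a few lines: count the leading digits of the numeric characteristics extracted from a PDF, compare their frequencies with the distribution predicted by Benford's Law, and flag documents that deviate too much. The Python snippet below is a minimal illustration of that idea only; the thresholds, helper names, and the notion of "feature values" are assumptions for the example, not the exact parameters used in the paper.

import math

# Expected Benford probability of each leading digit d = 1..9
BENFORD = [math.log10(1 + 1 / d) for d in range(1, 10)]

def first_digit_frequencies(values):
    # Relative frequency of leading digits 1..9 in a list of numbers.
    counts = [0] * 9
    for v in values:
        s = str(abs(v)).lstrip("0.")
        if s and s[0].isdigit() and s[0] != "0":
            counts[int(s[0]) - 1] += 1
    total = sum(counts) or 1
    return [c / total for c in counts]

def pearson(x, y):
    # Plain Pearson correlation coefficient between two equal-length lists.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx and sy else 0.0

def looks_manipulated(feature_values, corr_threshold=0.9, mad_threshold=0.015):
    # Flag a document whose first-digit distribution departs from Benford's Law.
    # Both thresholds are illustrative, not values tuned in the paper.
    observed = first_digit_frequencies(feature_values)
    r = pearson(observed, BENFORD)
    mad = sum(abs(o - e) for o, e in zip(observed, BENFORD)) / 9
    return r < corr_threshold or mad > mad_threshold

For instance, calling looks_manipulated on the numeric characteristics of a single PDF (object counts, stream sizes, and similar values) returns True when the leading digits drift away from the logarithmic Benford curve, which is the anomaly signal the paper exploits.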

2023

Benford's law applied to digital forensic analysis

Authors
Fernandes, P; Antunes, M;

Publication
FORENSIC SCIENCE INTERNATIONAL-DIGITAL INVESTIGATION

Abstract
Tampered digital multimedia content has been increasingly used in a wide set of cyberattacks, challenging criminal investigations and law enforcement authorities. The motivations are immense and range from the attempt to manipulate public opinion by disseminating fake news to digital kidnapping and ransomware, to mention a few cybercrimes that use this medium as a means of propagation. Digital forensics has recently incorporated a set of computational learning-based tools to automatically detect manipulations in digital multimedia content. Despite the promising results attained by machine learning and deep learning methods, these techniques require demanding computational resources and make digital forensic analysis and investigation expensive. Applied statistics techniques have also been applied to automatically detect anomalies and manipulations in digital multimedia content by statistically analysing the patterns and features. These techniques are computationally faster and have been applied in isolation or as a member of a classifier committee to boost the overall artefact classification. This paper describes a statistical model based on Benford's Law and the results obtained with a dataset of 18,000 photos, of which 9,000 are authentic and the remaining manipulated. Benford's Law dates from the 18th century and has been successfully adopted in digital forensics, namely in fraud detection. In the present investigation, Benford's Law was applied to a set of features (colours, textures) extracted from digital images. After extracting the first digits, the frequency with which they occurred in the set of values obtained from that extraction was calculated. This process allowed focusing the investigation on the behaviour with which the frequency of each digit occurred in comparison with the frequency expected by Benford's Law. The method proposed in this paper for applying Benford's Law uses Pearson's and Spearman's correlations and the Cramer-Von Mises (CVM) fitting model, applied to the first digit of a number consisting of several digits, obtained by extracting digital photo features through the Fast Fourier Transform (FFT) method. The overall results obtained, although not exceeding those attained by machine learning approaches, namely Support Vector Machines (SVM) and Convolutional Neural Networks (CNN), are promising, reaching an average F1-score of 90.47% when using Pearson correlation. With non-parametric approaches, namely Spearman correlation and the CVM fitting model, F1-scores of 56.55% and 76.61% were obtained respectively. Furthermore, Pearson's model showed the highest homogeneity compared to the Spearman and CVM models in detecting manipulated images, 8526, and authentic ones, 7662, due to the strong correlation between the frequencies of each digit and the frequency expected by Benford's Law. The results were obtained with different feature set lengths, ranging from 3000 features to the totality of the features available in the digital image. However, the investigation focused on extracting 1000 features since it was concluded that increasing the features did not imply an improvement in the results. The results obtained with the model based on Benford's Law compete with those obtained from the models based on CNN and SVM, generating confidence regarding its application as decision support in a criminal investigation for the identification of manipulated images. © 2023 Elsevier Ltd. All rights reserved.
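As a rough illustration of the image pipeline described in the abstract (FFT-based feature extraction followed by a first-digit comparison against Benford's Law), the following Python sketch assumes the image is already available as a 2-D grayscale NumPy array; the 1000-feature cut-off mirrors the abstract, while the function names and the use of np.corrcoef are choices made for the example only, not the paper's implementation.

import numpy as np

# Expected Benford frequencies of the leading digits 1..9
BENFORD = np.log10(1 + 1 / np.arange(1, 10))

def fft_features(gray_image, n_features=1000):
    # Magnitude spectrum of the image, truncated to the first n_features values.
    spectrum = np.abs(np.fft.fft2(gray_image)).ravel()
    spectrum = spectrum[spectrum > 0]  # leading digits are undefined for zeros
    return spectrum[:n_features]

def first_digits(values):
    # Leading decimal digit of each positive value, via its base-10 exponent;
    # clipping guards against floating-point edge cases near powers of ten.
    exponents = np.floor(np.log10(values))
    return np.clip(np.floor(values / 10.0 ** exponents), 1, 9).astype(int)

def benford_pearson_score(gray_image):
    # Pearson correlation between observed first-digit frequencies and Benford's
    # Law; values close to 1 suggest an authentic image, lower values an anomaly.
    digits = first_digits(fft_features(gray_image))
    observed = np.bincount(digits, minlength=10)[1:10] / len(digits)
    return np.corrcoef(observed, BENFORD)[0, 1]

A decision rule then only needs a threshold on this score (or on a mean-absolute-deviation statistic computed the same way), which is what makes the approach much cheaper than training an SVM or CNN.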

2022

Digital Forensics for the Detection of Deepfake Image Manipulations

Authors
Ferreira, S; Antunes, M; Correia, ME;

Publication
ERCIM NEWS

Abstract
Tampered multimedia content is increasingly being used in a broad range of cybercrime activities. The spread of fake news, misinformation, digital kidnapping, and ransomware-related crimes are among the most recurrent crimes in which manipulated digital photos are being used as an attacking vector. One of the linchpins of accurately detecting manipulated multimedia content is the use of machine learning and deep learning algorithms. This work proposed a dataset of photos and videos suitable for digital forensics, which has been used to benchmark Support Vector Machines (SVM) and Convolution Neural Networks algorithms (CNN). An SVM-based module for the Autopsy digital forensics open-source application has also been developed. This was evaluated as a very capable and useful forensic tool, winning second place on the OSDFCon international Autopsy modules competition.

2022

A Client-Centered Information Security and Cybersecurity Auditing Framework

Authors
Antunes, M; Maximiano, M; Gomes, R;

Publication
APPLIED SCIENCES-BASEL

Abstract
Information security and cybersecurity management play a key role in modern enterprises. There is a plethora of standards, frameworks, and tools, ISO 27000 and the NIST Cybersecurity Framework being two relevant families of international Information Security Management Standards (ISMSs). Globally, these standards are implemented by dedicated tools to collect and further analyze the information security auditing that is carried out in an enterprise. The overall goal of the auditing is to evaluate and mitigate the information security risk. The risk assessment is grounded by auditing processes, which examine and assess a list of predefined controls in a wide variety of subjects regarding cybersecurity and information security. For each control, a checklist of actions is applied and a set of corrective measures is proposed, in order to mitigate the flaws and to increase the level of compliance with the standard being used. The auditing process can apply different ISMSs in the same time frame. However, as these processes are time-consuming, involve on-site interventions, and imply specialized consulting teams, the methodology usually adopted by enterprises consists of applying a single ISMS and its existing tools and frameworks. This strategy brings overall less flexibility and diversity to the auditing process and, consequently, to the assessment results of the audited enterprise. In a broad sense, the auditing needs of Small and Medium-sized Enterprises (SMEs) are different from large companies and do not fit with all the existing ISMSs' frameworks, that is a set of controls of a particular ISMS is not suitable to be applied in an auditing process, in an SME. In this paper, we propose a generic and client-centered web-integrated cybersecurity auditing information system. The proposed system can be widely used in a myriad of auditing processes, as it is flexible and it can load a set of predefined controls' checklist assessment and their corresponding mitigation tasks' list. It was designed to meet both SMEs' and large enterprises' requirements and stores auditing and intervention-related data in a relational database. The information system was tested within an ISO 27001:2013 information security auditing project, in which fifty SMEs participated. The overall architecture and design are depicted and the global results are detailed in this paper.

2022

Benchmarking Deep Learning Methods for Behaviour-Based Network Intrusion Detection

Authors
Antunes, M; Oliveira, L; Seguro, A; Verissimo, J; Salgado, R; Murteira, T;

Publication
INFORMATICS-BASEL

Abstract
Network security encloses a wide set of technologies dealing with intrusions detection. Despite the massive adoption of signature-based network intrusion detection systems (IDSs), they fail in detecting zero-day attacks and previously unseen vulnerabilities exploits. Behaviour-based network IDSs have been seen as a way to overcome signature-based IDS flaws, namely through the implementation of machine-learning-based methods, to tolerate new forms of normal network behaviour, and to identify yet unknown malicious activities. A wide set of machine learning methods has been applied to implement behaviour-based IDSs with promising results on detecting new forms of intrusions and attacks. Innovative machine learning techniques have emerged, namely deep-learning-based techniques, to process unstructured data, speed up the classification process, and improve the overall performance obtained by behaviour-based network intrusion detection systems. The use of realistic datasets of normal and malicious networking activities is crucial to benchmark machine learning models, as they should represent real-world networking scenarios and be based on realistic computers network activity. This paper aims to evaluate CSE-CIC-IDS2018 dataset and benchmark a set of deep-learning-based methods, namely convolutional neural networks (CNN) and long short-term memory (LSTM). Autoencoder and principal component analysis (PCA) methods were also applied to evaluate features reduction in the original dataset and its implications in the overall detection performance. The results revealed the appropriateness of using the CSE-CIC-IDS2018 dataset to benchmark supervised deep learning models. It was also possible to evaluate the robustness of using CNN and LSTM methods to detect unseen normal activity and variations of previously trained attacks. The results reveal that feature reduction methods decreased the processing time without loss of accuracy in the overall detection performance.
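As a loose illustration of the feature-reduction benchmarking described above, the sketch below trains the same simple detector with and without PCA and compares the resulting F1 scores. It assumes scikit-learn and a pre-loaded numeric matrix of flow features with binary labels, and it deliberately substitutes a logistic-regression classifier for the paper's CNN/LSTM models to keep the example short; it is not the authors' pipeline.

from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def benchmark_feature_reduction(X, y, n_components=20):
    # Compare detection F1 with and without PCA feature reduction.
    # X: numeric flow-feature matrix, y: binary labels (0 = benign, 1 = attack).
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=0
    )
    full = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    reduced = make_pipeline(
        StandardScaler(),
        PCA(n_components=n_components),
        LogisticRegression(max_iter=1000),
    )
    scores = {}
    for name, model in (("all features", full), ("pca reduced", reduced)):
        model.fit(X_train, y_train)
        scores[name] = f1_score(y_test, model.predict(X_test))
    return scores

Comparing the two entries of the returned dictionary mirrors the paper's finding of interest: whether reducing the feature space cuts processing time without a loss in detection performance.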

Supervised Theses

2017

An open-source implementation of an IaaS cloud service

Author
João Vitoria Santos

Institution
IPLeiria

2017

Using telemedicine WebRTC tests in hospital environment

Author
Dário Gabriel da Cruz Santos

Institution
IPLeiria