2021
Authors
Guimarães, V; Costa, VS;
Publication
CoRR
Abstract
2021
Authors
Nunes, P; Antunes, M; Silva, C;
Publication
INTERNATIONAL CONFERENCE ON ENTERPRISE INFORMATION SYSTEMS / INTERNATIONAL CONFERENCE ON PROJECT MANAGEMENT / INTERNATIONAL CONFERENCE ON HEALTH AND SOCIAL CARE INFORMATION SYSTEMS AND TECHNOLOGIES 2020 (CENTERIS/PROJMAN/HCIST 2020)
Abstract
The growing digitization of healthcare institutions and their increasing dependence on Internet infrastructure have heightened concerns about data privacy and confidentiality. These institutions face specific challenges, namely the sensitivity of the data they handle, the specificity of their networked equipment, and the heterogeneity of healthcare professionals (nurses, doctors, administrative staff and others) and of their IT skills. In this paper we present the results of a study conducted with healthcare professionals to evaluate their information security awareness, namely by assessing their attitudes and behaviours regarding cybersecurity. The methodology consisted of translating, adjusting and applying two previously validated and published Likert-type response scales in a healthcare institution in Portugal, the "Centro Hospitalar Barreiro Montijo" (CHBM). The scales used were the cybersecurity risky behaviours scale (RScB) and the attitudes towards cybersecurity and cybercrime in business scale (ATC-IB). Although there were no statistically significant differences between sociodemographic factors and the scores obtained on both scales, the results showed a relationship between acquired behaviours and attitudes of work involvement and organizational commitment, establishing a bridge towards the quantification of awareness. (C) 2021 The Authors. Published by Elsevier B.V.
2021
Authors
Antunes, M; Maximiano, M; Gomes, R; Pinto, D;
Publication
J. Cybersecur. Priv.
Abstract
2021
Authors
Carnaz, G; Antunes, M; Nogueira, VB;
Publication
DATA
Abstract
Criminal investigations collect and analyze the facts related to a crime, from which investigators can deduce evidence to be used in court. Criminal investigation is a multidisciplinary and applied science that includes interviews, interrogations, evidence collection, preservation of the chain of custody, and other methods and techniques of investigation. These techniques produce both digital and paper documents that have to be carefully analyzed to identify correlations and interactions among suspects, places, license plates, and other entities mentioned in the investigation. The computerized processing of these documents assists the criminal investigation, as it allows the automatic identification of entities and their relations, some of which are difficult to identify manually. A wide set of dedicated tools exists, but they share a major limitation: they are unable to process criminal reports written in Portuguese, as no annotated corpus exists for that purpose. This paper presents an annotated corpus composed of a collection of anonymized crime-related documents extracted from official and open sources. The dataset was produced as the result of an exploratory initiative to collect crime-related data from websites and conditioned-access police reports. The dataset was evaluated, and a mean precision of 0.808, recall of 0.722, and F1-score of 0.733 were obtained for the classification of the annotated named entities present in the crime-related documents. This corpus can be used to benchmark Machine Learning (ML) and Natural Language Processing (NLP) methods and tools that detect and correlate entities in documents, such as sentence detection, named-entity recognition, and identification of terms related to the criminal domain.
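For illustration, the following is a minimal Python sketch of how such an annotated corpus could be used to benchmark a named-entity recognizer by computing precision, recall, and F1-score; the entity spans, labels, and the exact-match span criterion below are hypothetical assumptions made for demonstration, not the authors' evaluation protocol.

```python
# Minimal sketch: span-level scoring of a named-entity recognizer against gold
# annotations. All spans and labels below are invented for demonstration.
def precision_recall_f1(gold: set, predicted: set) -> tuple:
    true_positives = len(gold & predicted)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Entities encoded as (start offset, end offset, label) over one document.
gold = {(0, 10, "PERSON"), (25, 31, "LOCATION"), (40, 48, "DATE")}
predicted = {(0, 10, "PERSON"), (25, 31, "ORGANIZATION"), (40, 48, "DATE")}

p, r, f1 = precision_recall_f1(gold, predicted)
print(f"precision={p:.3f} recall={r:.3f} f1={f1:.3f}")
```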
2021
Authors
Carnaz, G; Nogueira, VB; Antunes, M;
Publication
INFORMATICS-BASEL
Abstract
Organizations have been challenged by the need to process an increasing amount of data, both structured and unstructured, retrieved from heterogeneous sources. Criminal investigation police are among these organizations, as they have to manually process a vast number of criminal reports, news articles related to crimes, occurrence and evidence reports, and other unstructured documents. Automatic extraction and representation of the data and knowledge in such documents is essential to reduce the manual analysis burden and to automate the discovery of relationships among names and entities that may exist in a case. This paper presents SEMCrime, a framework used to extract and classify named entities and relations in Portuguese criminal reports and documents, and to represent the retrieved data in a graph database. A 5W1H (Who, What, Why, Where, When, and How) information extraction method was applied, and a graph database representation was used to store and visualize the relations extracted from the documents. Promising results were obtained with a prototype developed to evaluate the framework, namely named-entity recognition with an F-measure of 0.73 and 5W1H information extraction with an F-measure of 0.65.
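As a rough illustration of representing extracted entities and relations as a graph, the following Python sketch uses networkx as an in-memory stand-in for the graph database described above; the entity names, relation labels, and triples are invented for demonstration and do not come from SEMCrime.

```python
# Minimal sketch (not the SEMCrime implementation): store extracted
# (subject, object, relation) triples in a graph and query one entity's links.
import networkx as nx

# Hypothetical triples as they might be extracted from a criminal report.
triples = [
    ("João Silva", "Lisbon", "located_in"),      # Where
    ("João Silva", "12-AB-34", "drives"),        # What
    ("João Silva", "Maria Costa", "contacted"),  # Who
    ("Maria Costa", "2021-03-14", "seen_on"),    # When
]

graph = nx.MultiDiGraph()
for subject, obj, relation in triples:
    graph.add_edge(subject, obj, relation=relation)

# List every relation leaving a given entity, mimicking a graph-database query.
for _, target, data in graph.out_edges("João Silva", data=True):
    print(f"João Silva --{data['relation']}--> {target}")
```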
2021
Authors
Ferreira, S; Antunes, M; Correia, ME;
Publication
JOURNAL OF IMAGING
Abstract
Tampered multimedia content is being increasingly used in a broad range of cybercrime activities. The spread of fake news, misinformation, digital kidnapping, and ransomware-related crimes are amongst the most recurrent crimes in which manipulated digital photos and videos are the perpetrating and disseminating medium. Criminal investigation has been challenged to apply machine learning techniques that automatically distinguish between fake and genuine seized photos and videos. Despite the persistent need for manual validation, easy-to-use digital forensics platforms are essential to automate and facilitate the detection of tampered content and to help criminal investigators with their work. This paper presents a machine learning method based on Support Vector Machines (SVM) to distinguish between genuine and fake multimedia files, namely digital photos and videos, which may indicate the presence of deepfake content. The method was implemented in Python and integrated as new modules in the widely used digital forensics application Autopsy. The approach extracts a set of simple features resulting from the application of a Discrete Fourier Transform (DFT) to digital photos and video frames. The model was evaluated with a large dataset of classified multimedia files containing both legitimate and fake photos and frames extracted from videos. For deepfake detection in videos, the Celeb-DFv1 dataset was used, featuring 590 original videos collected from YouTube and covering different subjects. The results obtained with 5-fold cross-validation outperformed SVM-based methods documented in the literature, achieving average F1-scores of 99.53%, 79.55%, and 89.10% for photos, videos, and a mixture of both types of content, respectively. A benchmark against state-of-the-art methods was also conducted, comparing the proposed SVM method with deep learning approaches, namely Convolutional Neural Networks (CNN). Although the CNN outperformed the proposed DFT-SVM compound method, the competitive results attained by DFT-SVM and its substantially lower processing time make it appropriate for embedding into Autopsy modules, predicting the level of fakeness calculated for each analyzed multimedia file.
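The following is a minimal Python sketch of a DFT-plus-SVM pipeline of the kind described above, using scikit-learn, 5-fold cross-validation, and synthetic grayscale arrays in place of real photos; the radially averaged spectrum feature, the RBF kernel, and all data are assumptions made for demonstration and do not reproduce the authors' exact features or results.

```python
# Minimal, illustrative DFT + SVM pipeline for photo classification.
# Features and data are assumptions for demonstration only.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def dft_features(gray_image: np.ndarray, n_bins: int = 64) -> np.ndarray:
    """Radially averaged log-magnitude spectrum of a 2-D grayscale image."""
    spectrum = np.fft.fftshift(np.fft.fft2(gray_image))
    magnitude = np.log1p(np.abs(spectrum))
    h, w = magnitude.shape
    cy, cx = h // 2, w // 2
    y, x = np.indices((h, w))
    radius = np.hypot(y - cy, x - cx)
    bins = np.linspace(0, radius.max() + 1e-9, n_bins + 1)
    which = np.digitize(radius.ravel(), bins) - 1
    profile = np.zeros(n_bins)
    for b in range(n_bins):
        vals = magnitude.ravel()[which == b]
        profile[b] = vals.mean() if vals.size else 0.0
    return profile

# Synthetic stand-ins for genuine vs. fake grayscale photos (random data).
rng = np.random.default_rng(0)
images = [rng.normal(size=(128, 128)) for _ in range(40)]
labels = np.array([0] * 20 + [1] * 20)  # 0 = genuine, 1 = fake (toy labels)

X = np.vstack([dft_features(img) for img in images])
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(model, X, labels, cv=5, scoring="f1")
print("5-fold F1 scores:", scores.round(3))
```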