Publications by Inês Filipa Teixeira

2021

Emotion Identification in Movies through Facial Expression Recognition

Authors
Almeida, J; Vilaca, L; Teixeira, IN; Viana, P;

Publication
APPLIED SCIENCES-BASEL

Abstract
Understanding how acting bridges the emotional bond between spectators and films is essential to depict how humans interact with this rapidly growing digital medium. In recent decades, the research community has made promising progress in developing facial expression recognition (FER) methods. However, no emphasis has been put on cinematographic content, which is complex by nature due to the visual techniques used to convey the desired emotions. Our work represents a step towards emotion identification in cinema through the analysis of facial expressions. We presented a comprehensive overview of the most relevant datasets used for FER, highlighting problems caused by their heterogeneity and by the lack of a universal model of emotions. Built upon this understanding, we evaluated these datasets with standard image classification models to analyze the feasibility of using facial expressions to determine the emotional charge of a film. To cope with the lack of datasets for the scope under analysis, we demonstrated the feasibility of using a generic dataset for the training process and proposed a new way to look at emotions by creating clusters of emotions based on the evidence obtained in the experiments.
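
A minimal sketch, not the authors' code, of the kind of setup the abstract describes: a standard image classification backbone (here a torchvision ResNet-18) fine-tuned on a generic FER dataset arranged as one folder per emotion. The dataset path, preprocessing, and hyperparameters are illustrative assumptions, and the later emotion-clustering step is omitted.

```python
# Illustrative sketch: fine-tune a standard image classifier on a generic
# FER dataset (assumed folder-per-emotion layout; path is hypothetical).
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

train_set = datasets.ImageFolder("fer_generic/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)

# Standard backbone with the classification head resized to the emotion classes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```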

2022

Photo2Video: Semantic-Aware Deep Learning-Based Video Generation from Still Content

Authors
Viana, P; Andrade, MT; Carvalho, P; Vilaca, L; Teixeira, IN; Costa, T; Jonker, P;

Publication
JOURNAL OF IMAGING

Abstract
Applying machine learning (ML), and especially deep learning, to understand visual content is becoming common practice in many application areas. However, little attention has been given to its use within the multimedia creative domain. It is true that ML is already popular for content creation, but the progress achieved so far addresses essentially textual content or the identification and selection of specific types of content. A wealth of possibilities is yet to be explored by bringing the use of ML into the multimedia creative process, allowing the knowledge inferred by ML to automatically influence how new multimedia content is created. The work presented in this article provides contributions in three distinct ways towards this goal: firstly, it proposes a methodology to re-train popular neural network models to identify new thematic concepts in static visual content and attach meaningful annotations to the detected regions of interest; secondly, it presents varied visual digital effects and corresponding tools that can be automatically called upon to apply such effects to a previously analyzed photo; thirdly, it defines a complete automated creative workflow, from the acquisition of a photograph and corresponding contextual data, through the ML region-based annotation, to the automatic application of digital effects and the generation of a semantically aware multimedia story driven by the previously derived situational and visual contextual data. Additionally, it presents a variant of this automated workflow that offers the user the possibility of manipulating the automatic annotations in an assisted manner. The final aim is to transform a static digital photo into a short video clip, taking into account the information acquired. The final result strongly contrasts with current standard approaches of creating random movements, by producing an intelligent, content- and context-aware video.
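
A minimal sketch of the content-aware photo-to-video idea, assuming off-the-shelf tools rather than the project's actual annotation and effects pipeline: a generic pretrained detector supplies a region of interest, and a short clip is rendered by zooming from the full frame toward that region instead of applying a random movement. File names, clip length, and output resolution are assumptions.

```python
# Illustrative sketch: detect a region of interest in a photo and render a
# short clip that zooms toward it (content-aware, rather than random, motion).
import cv2
import numpy as np
import torch
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

image = cv2.imread("photo.jpg")  # hypothetical input photo
h, w = image.shape[:2]

# Region-of-interest detection with a generic pretrained model.
detector = fasterrcnn_resnet50_fpn(
    weights=FasterRCNN_ResNet50_FPN_Weights.DEFAULT
).eval()
tensor = torch.from_numpy(
    cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
).permute(2, 0, 1).float() / 255
with torch.no_grad():
    boxes = detector([tensor])[0]["boxes"]
x0, y0, x1, y1 = boxes[0].int().tolist() if len(boxes) else (0, 0, w, h)

# Render a 3-second clip that interpolates from the full frame to the region.
out = cv2.VideoWriter("clip.mp4", cv2.VideoWriter_fourcc(*"mp4v"), 25, (640, 360))
for t in np.linspace(0.0, 1.0, 75):
    cx0, cy0 = int(t * x0), int(t * y0)
    cx1, cy1 = int(w - t * (w - x1)), int(h - t * (h - y1))
    frame = cv2.resize(image[cy0:cy1, cx0:cx1], (640, 360))
    out.write(frame)
out.release()
```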

2022

Improving word embeddings in Portuguese: increasing accuracy while reducing the size of the corpus

Authors
Pinto, JP; Viana, P; Teixeira, I; Andrade, M;

Publication
PEERJ COMPUTER SCIENCE

Abstract
The subjectiveness of multimedia content description has a strong negative impact on tag-based information retrieval. In our work, we propose enhancing available descriptions by adding semantically related tags. To this end, we use a word embedding technique based on the Word2Vec neural network, parameterized and trained on a new dataset built from online newspapers: a large number of news stories were scraped and pre-processed for this purpose. Our target language is Portuguese, one of the most spoken languages worldwide. The results achieved significantly outperform similar existing solutions developed for different languages, including Portuguese. Contributions also include an online application and an API available for external use. Although the presented work has been designed to enhance multimedia content annotation, it can be used in several other application areas.
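
A minimal sketch, assuming the gensim implementation of Word2Vec rather than the authors' exact setup: a model trained on a file of pre-processed Portuguese news text and then used to expand a set of tags with their nearest neighbours in the embedding space. The corpus file name, parameter values, and example tags are illustrative.

```python
# Illustrative sketch: train Word2Vec on Portuguese news text and use it to
# add semantically related tags to an existing description.
from gensim.models import Word2Vec
from gensim.utils import simple_preprocess

# One pre-processed news story per line in a plain-text corpus file (assumed).
with open("noticias_pt.txt", encoding="utf-8") as f:
    sentences = [simple_preprocess(line) for line in f]

model = Word2Vec(
    sentences,
    vector_size=300,   # embedding dimensionality
    window=5,          # context window
    min_count=5,       # drop very rare words
    sg=1,              # skip-gram variant
    workers=4,
)

def enrich_tags(tags, top_n=3):
    """Expand tags with their nearest neighbours in the embedding space."""
    related = set(tags)
    for tag in tags:
        if tag in model.wv:
            related.update(w for w, _ in model.wv.most_similar(tag, topn=top_n))
    return sorted(related)

print(enrich_tags(["praia", "futebol"]))
```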

2023

Machine learning models based on clinical indices and cardiotocographic features for discriminating asphyxia fetuses-Porto retrospective intrapartum study

Authors
Ribeiro, M; Nunes, I; Castro, L; Costa-Santos, C; Henriques, TS;

Publication
FRONTIERS IN PUBLIC HEALTH

Abstract
Introduction: Perinatal asphyxia is one of the most frequent causes of neonatal mortality, affecting approximately four million newborns worldwide each year and causing the death of one million individuals. One of the main reasons for this high incidence is the lack of consensus on methods for the early diagnosis of this pathology. Estimating risk-appropriate health care for mother and baby is essential for increasing the quality of the health care system. Thus, it is necessary to investigate models that improve the prediction of perinatal asphyxia. Access to cardiotocographic signals (CTGs) in conjunction with various clinical parameters can be crucial for the development of a successful model.
Objectives: This exploratory work aims to develop predictive models of perinatal asphyxia based on clinical parameters and fetal heart rate (fHR) indices.
Methods: Data from single gestations in a retrospective single-center study at Centro Hospitalar e Universitario do Porto de Sao Joao (CHUSJ) between 2010 and 2018 were analyzed. The CTGs were acquired and analyzed by Omniview-SisPorto, which estimates several fHR features. The clinical variables were obtained from the electronic clinical records stored by ObsCare. Entropy and compression characterized the complexity of the fHR time series. The contribution of these variables to the prediction of perinatal asphyxia was assessed with binary logistic regression (BLR) and Naive Bayes (NB) models.
Results: The data consisted of 517 cases, with 15 pathological cases. The asphyxia prediction models showed promising results, with an area under the receiver operating characteristic curve (AUC) >70%. Among the NB approaches, the best models combined clinical and SisPorto features. The best overall model was the univariate BLR with the variable compression ratio scale 2 (CR2), with an AUC of 94.93% [94.55; 95.31%].
Conclusion: Both BLR and Bayesian models have advantages and disadvantages. The model with the best performance in predicting perinatal asphyxia was the univariate BLR with the CR2 variable, demonstrating the importance of non-linear indices in perinatal asphyxia detection. Future studies should explore decision support systems to detect sepsis, including clinical and CTG features (linear and non-linear).
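
A minimal sketch, assuming a scikit-learn setup rather than the study's code or data, of how binary logistic regression and a Naive Bayes classifier can be compared on a merged table of clinical and fHR features using cross-validated AUC. The CSV layout and column names are hypothetical; stratified folds are used because pathological cases are rare in such data.

```python
# Illustrative sketch: compare logistic regression and Naive Bayes on
# clinical + fHR features, scored by cross-validated AUC.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("ctg_clinical.csv")      # hypothetical merged dataset
X = df.drop(columns=["asphyxia"])         # clinical + fHR (e.g. entropy, CR) features
y = df["asphyxia"]                        # 1 = pathological, 0 = normal

# Stratified folds keep the few pathological cases represented in every split.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

models = {
    "logistic_regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "naive_bayes": GaussianNB(),
}
for name, clf in models.items():
    auc = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
    print(f"{name}: AUC = {auc.mean():.3f} +/- {auc.std():.3f}")
```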