
Publications by João Vinagre

2019

Incremental Multi-Dimensional Recommender Systems: Co-Factorization vs Tensors

Authors
Ramalho, MS; Vinagre, J; Jorge, AM; Bastos, R;

Publication
2nd Workshop on Online Recommender Systems and User Modeling, ORSUM@RecSys 2019, 19 September 2019, Copenhagen, Denmark

Abstract
This paper sets a milestone for incremental recommender systems by comparing several state-of-the-art algorithms built on two different mathematical foundations: matrix and tensor factorization. Traditional Pairwise Interaction Tensor Factorization is revisited and converted into a scalable, incremental variant that yields the best predictive power. A novel tensor-inspired approach is also described. Finally, experiments compare contextless vs. context-aware scenarios, the impact of noise on the algorithms, and discrepancies between time complexity and execution times, run on five datasets from three recommendation areas: music, gross retail and garment. Relevant conclusions are drawn to help choose the most appropriate algorithm when faced with a novel recommendation task. © 2019 M.S. Ramalho, J. Vinagre, A.M. Jorge & R. Bastos.
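To make the "scalable and incremental" idea mentioned in the abstract more concrete, the sketch below shows one plausible online-learning formulation: a (user, item, context) triple is scored by pairwise dot products of latent factors, and a single SGD step is applied per incoming positive event. This is an illustrative assumption-based sketch, not the paper's method; the class name, factor layout, target value and hyperparameters are all invented for the example.

```python
import numpy as np

class IncrementalPITF:
    """Illustrative incremental pairwise-interaction model for positive-only
    (user, item, context) streams. NOT the paper's implementation; factor
    layout, update rule and hyperparameters are assumptions."""

    def __init__(self, k=20, lr=0.05, reg=0.01, seed=0):
        self.k, self.lr, self.reg = k, lr, reg
        self.rng = np.random.default_rng(seed)
        self.U, self.I, self.C = {}, {}, {}  # one factor vector per user/item/context

    def _vec(self, table, key):
        # Lazily initialise factors the first time an entity appears in the stream.
        if key not in table:
            table[key] = 0.1 * self.rng.standard_normal(self.k)
        return table[key]

    def score(self, user, item, ctx):
        # Sum of pairwise interactions: user-item + user-context + item-context.
        p, q, t = self._vec(self.U, user), self._vec(self.I, item), self._vec(self.C, ctx)
        return float(p @ q + p @ t + q @ t)

    def update(self, user, item, ctx, target=1.0):
        # One SGD step on the squared error against the positive-only target.
        p, q, t = self._vec(self.U, user), self._vec(self.I, item), self._vec(self.C, ctx)
        err = target - (p @ q + p @ t + q @ t)
        # Compute every gradient before touching any factor vector.
        g_p = err * (q + t) - self.reg * p
        g_q = err * (p + t) - self.reg * q
        g_t = err * (p + q) - self.reg * t
        p += self.lr * g_p
        q += self.lr * g_q
        t += self.lr * g_t
```

In a prequential-style protocol, each event in the stream would first be scored (for evaluation) and then fed to `update()`, so the model adapts continuously as data arrives.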

2019

2nd Workshop on Online Recommender Systems and User Modeling, ORSUM@RecSys 2019, 19 September 2019, Copenhagen, Denmark

Authors
Vinagre, J; Jorge, AM; Bifet, A; Ghossein, MA;

Publication
ORSUM@RecSys

Abstract

2020

ORSUM - Workshop on Online Recommender Systems and User Modeling

Authors
Vinagre, J; Jorge, AM; Ghossein, MA; Bifet, A;

Publication
RecSys 2020: Fourteenth ACM Conference on Recommender Systems, Virtual Event, Brazil, September 22-26, 2020

Abstract
Modern online web-based systems continuously generate data at very fast rates. This continuous flow of data encompasses web content (e.g. posts, news, products, comments), but also user feedback (e.g. ratings, views, reads, clicks, thumbs up), as well as context information (device used, geographic info, social network, current user activity, weather). This is potentially overwhelming for systems and algorithms designed to train in offline batches, given the continuous and potentially fast change of content, context and user preferences. Therefore, it is important to investigate online methods that can transparently adapt to the inherent dynamics of online systems. Incremental models that learn from data streams are gaining attention in the recommender systems community, given their natural ability to deal with data generated in dynamic, complex environments. User modeling and personalization can particularly benefit from algorithms capable of maintaining models incrementally and online, as data is generated. The objective of this workshop is to foster contributions and bring together a growing community of researchers and practitioners interested in online, adaptive approaches to user modeling, recommendation and personalization, as well as other related tasks, such as evaluation, reproducibility, privacy and explainability. © 2020 Owner/Author.

2016

Online Bagging for Recommendation with Incremental Matrix Factorization

Authors
Vinagre, J; Jorge, AM; Gama, J;

Publication
Proceedings of the Workshop on Large-scale Learning from Data Streams in Evolving Environments (STREAMEVOLV 2016) co-located with the 2016 European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML/PKDD 2016), Riva del Garda, Italy, September 23, 2016.

Abstract
Online recommender systems often deal with continuous, potentially fast and unbounded flows of data. Ensemble methods for recommender systems have been used in the past with batch algorithms; however, they have never been studied with incremental algorithms, which are capable of processing those data streams on the fly. We propose online bagging, using an incremental matrix factorization algorithm for positive-only data streams. Using prequential evaluation, we show that bagging is able to improve accuracy by more than 20% over the baseline, with a small computational overhead.
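As a rough illustration of the idea described in the abstract, the sketch below applies Oza-style online bagging, in which each base learner sees every incoming event k ~ Poisson(1) times, on top of any incremental recommender. It is a sketch under assumptions, not the authors' code: the `update()`/`score()` interface, the number of base models and the averaging aggregation are illustrative choices.

```python
import numpy as np

class OnlineBaggingRecommender:
    """Illustrative online bagging ensemble over incremental recommenders,
    using Oza-style Poisson(1) example weighting. A sketch under assumptions,
    not the authors' implementation; base models only need update()/score()."""

    def __init__(self, base_factory, n_models=8, seed=0):
        self.models = [base_factory() for _ in range(n_models)]
        self.rng = np.random.default_rng(seed)

    def update(self, user, item):
        # Each base model sees the incoming positive (user, item) event
        # k ~ Poisson(1) times, emulating bootstrap resampling on a stream.
        for model in self.models:
            for _ in range(self.rng.poisson(1.0)):
                model.update(user, item)

    def score(self, user, item):
        # Aggregate by averaging the base models' scores.
        return float(np.mean([m.score(user, item) for m in self.models]))
```

The Poisson(1) weighting approximates the example multiplicities of bootstrap sampling without ever storing the stream, which is what makes bagging feasible in a purely incremental setting.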

2020

Proceedings of the 3rd Workshop on Online Recommender Systems and User Modeling co-located with the 14th ACM Conference on Recommender Systems (RecSys 2020), Virtual Event, September 25, 2020

Authors
Vinagre, J; Jorge, AM; Ghossein, MA; Bifet, A;

Publication
ORSUM@RecSys

Abstract

2021

Partially Monotonic Learning for Neural Networks

Authors
Trindade, J; Vinagre, J; Fernandes, K; Paiva, N; Jorge, A;

Publication
ADVANCES IN INTELLIGENT DATA ANALYSIS XIX, IDA 2021

Abstract
In the past decade, we have witnessed the widespread adoption of Deep Neural Networks (DNNs) in several Machine Learning tasks. However, in many critical domains, such as healthcare, finance, or law enforcement, transparency is crucial. In particular, the lack of ability to conform with prior knowledge greatly affects the trustworthiness of predictive models. This paper contributes to the trustworthiness of DNNs by promoting monotonicity. We develop a multi-layer learning architecture that handles a subset of features in a dataset that, according to prior knowledge, have a monotonic relation with the response variable. We use two alternative approaches: (i) imposing constraints on the model's parameters, and (ii) applying an additional component to the loss function that penalises non-monotonic gradients. Our method is evaluated on classification and regression tasks using two datasets. Our model is able to conform to known monotonic relations, improving trustworthiness in decision making, while simultaneously maintaining small and controllable degradation in predictive ability.
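Approach (ii), a loss component that penalises non-monotonic gradients, can be sketched as follows. This is an illustrative example only, not the architecture from the paper: it assumes a standard PyTorch feed-forward model and penalises negative partial derivatives of the output with respect to the features declared monotonically increasing; the function name, hinge form and weighting are assumptions.

```python
import torch

def monotonicity_penalty(model, x, monotonic_idx, weight=1.0):
    """Illustrative loss term: penalise negative partial derivatives of the
    output w.r.t. features that should be monotonically increasing.
    A sketch under assumptions, not the paper's implementation."""
    x = x.clone().requires_grad_(True)
    y = model(x).sum()  # summing keeps per-example gradients separable
    # d(output)/d(input) for every feature of every example in the batch.
    grads = torch.autograd.grad(y, x, create_graph=True)[0]
    # Hinge on the monotonic columns: only negative slopes are penalised.
    violation = torch.relu(-grads[:, monotonic_idx])
    return weight * violation.mean()

# Hypothetical usage inside a training step:
#   loss = task_loss(model(x), y_true) + monotonicity_penalty(model, x, [0, 3])
#   loss.backward()
```

Because the penalty is built with `create_graph=True`, it is itself differentiable, so gradient descent can push the network towards the desired monotonic behaviour while still optimising the main task loss.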
