Publications

Publications by Alípio Jorge

2022

ORSUM 2022 - 5th Workshop on Online Recommender Systems and User Modeling

Authors
Vinagre, J; Jorge, AM; Ghossein, MA; Bifet, A;

Publication
RecSys

Abstract
Modern online systems for user modeling and recommendation need to continuously deal with complex data streams generated by users at very fast rates. This can be overwhelming for systems and algorithms designed to train recommendation models in batches, given the continuous and potentially fast change of content, context and user preferences or intents. Therefore, it is important to investigate methods able to transparently and continuously adapt to the inherent dynamics of user interactions, preferably for long periods of time. Online models that continuously learn from such flows of data are gaining attention in the recommender systems community, given their natural ability to deal with data generated in dynamic, complex environments. User modeling and personalization can particularly benefit from algorithms capable of maintaining models incrementally and online. The objective of this workshop is to foster contributions and bring together a growing community of researchers and practitioners interested in online, adaptive approaches to user modeling, recommendation and personalization, and their implications regarding multiple dimensions, such as evaluation, reproducibility, privacy, fairness and transparency.

2022

LMMS reloaded: Transformer-based sense embeddings for disambiguation and beyond

Authors
Loureiro, D; Mário Jorge, A; Camacho Collados, J;

Publication
ARTIFICIAL INTELLIGENCE

Abstract
Distributional semantics based on neural approaches is a cornerstone of Natural Language Processing, with surprising connections to human meaning representation as well. Recent Transformer-based Language Models have proven capable of producing contextual word representations that reliably convey sense-specific information, simply as a product of self-supervision. Prior work has shown that these contextual representations can be used to accurately represent large sense inventories as sense embeddings, to the extent that a distance-based solution to Word Sense Disambiguation (WSD) tasks outperforms models trained specifically for the task. Still, there remains much to understand on how to use these Neural Language Models (NLMs) to produce sense embeddings that can better harness each NLM's meaning representation abilities. In this work we introduce a more principled approach to leverage information from all layers of NLMs, informed by a probing analysis on 14 NLM variants. We also emphasize the versatility of these sense embeddings in contrast to task-specific models, applying them on several sense-related tasks, besides WSD, while demonstrating improved performance using our proposed approach over prior work focused on sense embeddings. Finally, we discuss unexpected findings regarding layer and model performance variations, and potential applications for downstream tasks.
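The distance-based WSD setup described in the abstract can be illustrated with a minimal sketch: each candidate sense has a precomputed sense embedding, and the sense whose embedding is nearest (by cosine similarity) to the target word's contextual embedding is selected. The vectors, sense keys, and function names below are toy assumptions for illustration; in LMMS the embeddings would come from a Transformer language model.

```python
# Distance-based WSD sketch: pick the sense embedding closest
# (by cosine similarity) to the word's contextual embedding.
import numpy as np

def cosine(a, b):
    # Cosine similarity between two vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def disambiguate(context_vec, sense_inventory):
    # Return the sense key whose embedding maximizes cosine similarity
    return max(sense_inventory, key=lambda s: cosine(context_vec, sense_inventory[s]))

# Toy sense inventory for the ambiguous word "bank" (hypothetical keys/vectors)
senses = {
    "bank%financial": np.array([0.9, 0.1, 0.0]),
    "bank%river":     np.array([0.1, 0.9, 0.2]),
}
context = np.array([0.8, 0.2, 0.1])  # contextual embedding of "bank" in a sentence
print(disambiguate(context, senses))  # → bank%financial
```

The appeal of this formulation, as the abstract notes, is that no task-specific classifier is trained: disambiguation reduces to nearest-neighbor lookup over the sense inventory.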

2021

Transformers and Transfer Learning for Improving Portuguese Semantic Role Labeling

Authors
Oliveira, S; Loureiro, D; Jorge, A;

Publication
CoRR

Abstract

2019

Preference rules for label ranking: Mining patterns in multi-target relations

Authors
de Sá, CR; Azevedo, PJ; Soares, C; Jorge, AM; Knobbe, AJ;

Publication
CoRR

Abstract

2017

An Overview of Data Mining Applications in Oil and Gas Exploration: Structural Geology and Reservoir Property-Issues

Authors
Jahromi, HN; Jorge, AM;

Publication
CoRR

Abstract

2017

Mind the Gap: A Well Log Data Analysis

Authors
Lopes, RL; Jorge, A;

Publication
CoRR

Abstract
