
Publications by Sérgio Nunes

2025

Cross-Lingual Information Retrieval in Tetun for Ad-Hoc Search

Authors
Araújo, A; de Jesus, G; Nunes, S;

Publication
Lecture Notes in Computer Science - Progress in Artificial Intelligence

Abstract

2025

User Behavior in Sports Search: Entity-Centric Query and Click Log Analysis

Authors
Damas, J; Nunes, S;

Publication
Lecture Notes in Computer Science - Progress in Artificial Intelligence

Abstract

2025

Evaluating Dense Model-based Approaches for Multimodal Medical Case Retrieval

Authors
Catarina Pires; Sérgio Nunes; Luís Filipe Teixeira;

Publication
Information Retrieval Research

Abstract
Medical case retrieval plays a crucial role in clinical decision-making by enabling healthcare professionals to find relevant cases based on patient records, diagnostic images, and textual descriptions. Given the inherently multimodal nature of medical data, effective retrieval requires models that can bridge the gap between different modalities. Traditional retrieval approaches often rely on unimodal representations, limiting their ability to capture cross-modal relationships. Recent advances in dense model-based techniques have shown promise in overcoming these limitations by encoding multimodal information into a shared latent space, facilitating retrieval based on semantic similarity. This paper investigates the potential of dense models to enhance multimodal search systems. We evaluate various dense model-based approaches to assess which model characteristics have the greatest impact on retrieval effectiveness, using the medical case-based retrieval task from ImageCLEFmed 2013 as a benchmark. Our findings indicate that different dense model approaches substantially impact retrieval effectiveness, and that applying the CombMAX fusion method to combine their output results further improves effectiveness. Extending context length, however, yielded mixed results depending on the input data. Additionally, domain-specific models (those trained on medical data) outperformed general models trained on broad, non-specialized datasets within their respective fields. Furthermore, when text is the dominant information source, text-only models surpassed multimodal models.
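For context, CombMAX is a classic rank-fusion method: each document's fused score is the highest score it receives in any of the individual result lists being combined. The sketch below is a minimal illustration of that idea, not the paper's implementation; the run names, the example scores, and the min-max normalization step are assumptions added for the example.

```python
from collections import defaultdict

def min_max_normalize(run):
    """Scale one run's scores to [0, 1] so runs are comparable before fusion."""
    lo, hi = min(run.values()), max(run.values())
    if hi == lo:
        return {doc: 1.0 for doc in run}
    return {doc: (score - lo) / (hi - lo) for doc, score in run.items()}

def comb_max(runs):
    """CombMAX fusion: a document's fused score is the maximum
    normalized score it obtained across all input runs."""
    fused = defaultdict(float)
    for run in runs:
        for doc, score in min_max_normalize(run).items():
            fused[doc] = max(fused[doc], score)
    # Return a ranking: documents sorted by fused score, best first.
    return sorted(fused.items(), key=lambda item: item[1], reverse=True)

# Hypothetical example: two runs (e.g., a text-only dense model and a
# multimodal dense model) scoring the same collection of medical cases.
run_text = {"case_12": 0.82, "case_07": 0.64, "case_33": 0.41}
run_multimodal = {"case_07": 0.91, "case_12": 0.55, "case_18": 0.48}

print(comb_max([run_text, run_multimodal]))
```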
