About

Ricardo Campos is an assistant professor at the Department of Informatics of the University of Beira Interior (UBI) and an invited professor at Porto Business School. He is a senior researcher at LIAAD-INESC TEC, the Artificial Intelligence and Decision Support Laboratory of the University of Porto, and a collaborator of Ci2.ipt, the Smart Cities Research Center of the Polytechnic Institute of Tomar. He holds a PhD in Computer Science from the University of Porto (U. Porto) and a master's and a bachelor's degree from the University of Beira Interior (UBI). He has more than 10 years of research experience in information retrieval and natural language processing, during which his work has been distinguished with several scientific merit awards at international conferences and scientific competitions. He is the author of the keyword extraction software YAKE! and of the Conta-me Histórias and Arquivo Público projects, among others. He has participated in several research projects funded by FCT. His research focuses on developing methods for extracting narratives from texts, in particular on identifying and relating entities, events, and their temporal aspects. He has co-organized international conferences and workshops in the field of information retrieval and regularly serves on the program committees of several international conferences. He is also a member of the editorial board of the International Journal of Data Science and Analytics (Springer) and of the Information Processing and Management Journal (Elsevier). He is a member of the scientific advisory forum of Portulan Clarin, the Research Infrastructure for the Science and Technology of Language, which is part of the National Roadmap of Research Infrastructures of Strategic Relevance.

Topics of interest
Details

  • Name

    Ricardo Campos
  • Position

    Senior Researcher
  • Since

    01 July 2012
Publications

2025

MedLink: Retrieval and Ranking of Case Reports to Assist Clinical Decision Making

Authors
Cunha, LF; Guimarães, N; Mendes, A; Campos, R; Jorge, A;

Publication
Advances in Information Retrieval - 47th European Conference on Information Retrieval, ECIR 2025, Lucca, Italy, April 6-10, 2025, Proceedings, Part V

Abstract
In healthcare, diagnoses usually rely on physician expertise. However, complex cases may benefit from consulting similar past clinical case reports. In this paper, we present MedLink (http://medlink.inesctec.pt), a tool that, given a free-text medical report, retrieves and ranks relevant clinical case reports published in health conferences and journals, aiming to support clinical decision-making, particularly in challenging or complex diagnoses. To this end, we trained two BERT models on the sentence similarity task: a bi-encoder for retrieval and a cross-encoder for reranking. To evaluate our approach, we used 10 medical reports and asked a physician to rank the top 10 most relevant published case reports for each one. Our results show that MedLink's ranking model achieved an NDCG@10 of 0.747. Our demo also includes the visualization of clinical entities (using a NER model) and the production of a textual explanation (using an LLM) to ease comparing and contrasting reports. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.
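The NDCG@10 figure reported in the abstract can be computed as in the minimal plain-Python sketch below; the graded relevance labels here are hypothetical illustrative values, not the paper's actual physician judgments:

```python
import math

def dcg_at_k(relevances, k=10):
    """Discounted cumulative gain over the top-k graded relevance labels."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k=10):
    """DCG of the system ranking divided by the DCG of the ideal ranking."""
    ideal_dcg = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal_dcg if ideal_dcg > 0 else 0.0

# Hypothetical judgments (0 = irrelevant .. 3 = highly relevant) for the
# 10 case reports returned by a ranking model, in ranked order.
system_ranking = [3, 2, 3, 0, 1, 2, 0, 0, 1, 0]
score = ndcg_at_k(system_ranking, k=10)
```

A perfect ranking (labels already in descending order) scores 1.0; any misordering lowers the score toward 0.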

2025

Preface

Authors
Campos, R; Jorge, M; Jatowt, A; Bhatia, S; Litvak, M;

Publication
CEUR Workshop Proceedings

Abstract
[No abstract available]

2025

The 8th International Workshop on Narrative Extraction from Texts: Text2Story 2025

Authors
Campos, R; Jorge, A; Jatowt, A; Bhatia, S; Litvak, M;

Publication
Advances in Information Retrieval - 47th European Conference on Information Retrieval, ECIR 2025, Lucca, Italy, April 6-10, 2025, Proceedings, Part V

Abstract
For seven years, the Text2Story Workshop series has fostered a vibrant community dedicated to understanding narrative structure in text, resulting in significant contributions to the field and a shared understanding of the challenges in this domain. While traditional methods have yielded valuable insights, the advent of Transformers and LLMs has ignited a new wave of interest in narrative understanding. The previous iteration of the workshop also witnessed a surge in LLM-based approaches, demonstrating the community's growing recognition of their potential. In this eighth edition we propose to go deeper into the role of LLMs in narrative understanding. While LLMs have revolutionized the field of NLP and are the go-to tools for any NLP task, the ability to capture, represent and analyze contextual nuances in longer texts is still an elusive goal, let alone the understanding of consistent fine-grained narrative structures in text. Consequently, this iteration of the workshop will explore the issues involved in using LLMs to unravel narrative structures, while also examining the characteristics of narratives generated by LLMs. By fostering dialogue on these emerging areas, we aim to continue the workshop's tradition of driving innovation in narrative understanding research. Text2Story encompasses sessions covering full research papers, work-in-progress, demos, resources, position and dissemination papers, along with one keynote talk. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.

2025

Leveraging LLMs to Improve Human Annotation Efficiency with INCEpTION

Authors
Cunha, LF; Yu, N; Silvano, P; Campos, R; Jorge, A;

Publication
Advances in Information Retrieval - 47th European Conference on Information Retrieval, ECIR 2025, Lucca, Italy, April 6-10, 2025, Proceedings, Part V

Abstract
Manual text annotation is a complex and time-consuming task. However, recent advancements demonstrate that such a task can be accelerated with automated pre-annotation. In this paper, we present a methodology to improve the efficiency of manual text annotation by leveraging LLMs for text pre-annotation. For this purpose, we train a BERT model for a token classification task and integrate it into the INCEpTION annotation tool to generate span-level suggestions for human annotators. To assess the usefulness of our approach, we conducted an experiment where an experienced linguist annotated plain text both with and without our model’s pre-annotations. Our results show that the model-assisted approach reduces annotation time by nearly 23%. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.
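Span-level suggestions of the kind described in the abstract come down to mapping a token classifier's BIO labels onto spans an annotation tool can render. A minimal plain-Python sketch of that conversion follows, with hypothetical label names (the paper's actual tag set is not reproduced here):

```python
def bio_to_spans(tags):
    """Convert a BIO tag sequence into (start, end, label) token spans,
    with end exclusive — the shape needed to render span suggestions
    for human annotators. Stray I- tags without a matching open span
    are dropped."""
    spans, start, label = [], None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-"):
            if start is not None:        # close any span still open
                spans.append((start, i, label))
            start, label = i, tag[2:]
        elif tag.startswith("I-") and label == tag[2:]:
            continue                     # extend the open span
        else:                            # "O" or a non-matching I- tag
            if start is not None:
                spans.append((start, i, label))
            start, label = None, None
    if start is not None:                # span running to the end
        spans.append((start, len(tags), label))
    return spans

# Hypothetical model output for a six-token clinical sentence.
tags = ["B-DIAG", "I-DIAG", "O", "B-MED", "I-MED", "O"]
spans = bio_to_spans(tags)  # [(0, 2, 'DIAG'), (3, 5, 'MED')]
```

In a setup like the one described, each resulting span would be surfaced as a pre-annotation suggestion that the annotator accepts, corrects, or rejects.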

2025

Human Experts vs. Large Language Models: Evaluating Annotation Scheme and Guidelines Development for Clinical Narratives

Authors
Fernandes, AL; Silvano, P; Guimarães, N; Silva, RR; Munna, TA; Cunha, LF; Leal, A; Campos, R; Jorge, A;

Publication
Proceedings of Text2Story - Eighth Workshop on Narrative Extraction From Texts held in conjunction with the 47th European Conference on Information Retrieval (ECIR 2025), Lucca, Italy, April 10, 2025.

Abstract
Electronic Health Records (EHRs) contain vast amounts of unstructured narrative text, posing challenges for organization, curation, and automated information extraction in clinical and research settings. Developing effective annotation schemes is crucial for training extraction models, yet it remains complex for both human experts and Large Language Models (LLMs). This study compares human- and LLM-generated annotation schemes and guidelines through an experimental framework. In the first phase, both a human expert and an LLM created annotation schemes based on predefined criteria. In the second phase, experienced annotators applied these schemes following the guidelines. In both cases, the results were qualitatively evaluated using Likert scales. The findings indicate that the human-generated scheme is more comprehensive, coherent, and clear compared to those produced by the LLM. These results align with previous research suggesting that while LLMs show promising performance with respect to text annotation, the same does not apply to the development of annotation schemes, and human validation remains essential to ensure accuracy and reliability. © 2025 Copyright for this paper by its authors.