Research Opportunity

Engineering

[Closed]

Work description

Several interpretability methods have been proposed for deep learning models, including saliency maps, natural language descriptions, and rule-based and case-based explanations. Among these, case-based explanations are one of the most intuitive for humans, as learning by example is our natural way of reasoning. Nonetheless, case-based explanations can be precluded by privacy concerns. In applications where a person is exposed in the image, particularly when the images are acquired for sensitive purposes, as is the case with medical images, the use of case-based explanations is entirely inhibited. Therefore, to use intuitive case-based explanations to justify and understand the deep learning model's behavior, one must be able to remove the identity from those cases before presenting them to the consumer of the explanations. In this project, we intend to develop a causal design for the generation of privacy-preserving case-based explanations, starting from the explicit disentanglement of medical and identity features and moving towards a causal model in which interventions are expressed in terms of high-level semantic features.
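The call does not prescribe an architecture, but as a rough illustration of the disentanglement step, the sketch below splits an autoencoder's latent code into a "medical" part and an "identity" part and trains an adversarial identity classifier (via gradient reversal) to push identity information out of the medical code. All module names, dimensions, and the adversarial setup are assumptions for illustration, not the project's actual method.

import torch
import torch.nn as nn

class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; negated, scaled gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class DisentangledAutoencoder(nn.Module):
    """Toy autoencoder whose latent code is split into medical and identity
    factors; an adversary tries to recover identity from the medical code,
    and gradient reversal makes the encoder learn to defeat it."""
    def __init__(self, in_dim=784, med_dim=16, id_dim=16, n_identities=10):
        super().__init__()
        self.enc_med = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, med_dim))
        self.enc_id = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, id_dim))
        self.dec = nn.Sequential(nn.Linear(med_dim + id_dim, 128), nn.ReLU(), nn.Linear(128, in_dim))
        # Adversarial head: predicts patient identity from the medical code.
        self.id_adv = nn.Linear(med_dim, n_identities)

    def forward(self, x, lambd=1.0):
        z_med, z_id = self.enc_med(x), self.enc_id(x)
        recon = self.dec(torch.cat([z_med, z_id], dim=1))
        id_logits = self.id_adv(GradientReversal.apply(z_med, lambd))
        return recon, id_logits, z_med, z_id

# One hypothetical training step; random tensors stand in for medical images.
model = DisentangledAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(8, 784)                 # batch of flattened images
ids = torch.randint(0, 10, (8,))        # patient identity labels
recon, id_logits, _, _ = model(x)
loss = nn.functional.mse_loss(recon, x) + nn.functional.cross_entropy(id_logits, ids)
opt.zero_grad(); loss.backward(); opt.step()

Under a setup like this, one possible route to an anonymized case-based explanation is to decode the query's medical code together with the identity code of a different (or synthetic) subject, so the presented example retains the medical content but not the patient's identity.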

Minimum profile required

Knowledge of Machine Learning or Computer Vision.

Preference factors

Experience in research projects and in writing scientific papers.

Application Period

From 14 Sep 2023 to 27 Sep 2023


Centre

Telecommunications and Multimedia

Scientific Advisor

Luís Filipe Teixeira