New tool will make Artificial Intelligence more explainable, transparent and reliable
8 March 2021
How can doctors rely on a system that tells them the right time to operate on a patient with a rare tumour? How can a retailer be sure that an algorithm did not favour one supplier over its competitors? And what about consumers? Don't they have the right to know how energy consumption forecasting models decide how much they pay? The TRUST-AI project, coordinated by the Institute for Systems and Computer Engineering, Technology and Science (INESC TEC), seeks to explain AI systems, making them more transparent and reliable.
"Artificial Intelligence techniques increasingly influence decisions, yet their reasoning goes through stages that are too abstract and complex for users to understand. We are talking about 'black boxes', which find excellent factual solutions without being able to explain how they reach them. This raises ethical questions as AI influences more and more decisions. Understanding why a certain option was selected builds confidence and helps improve the decision-making process. This project will develop AI solutions that are more transparent, fair and explainable and, therefore, more suitable", said Gonçalo Figueira, INESC TEC researcher and project coordinator.
The approach consists of making AI and humans work together towards better solutions (that is, models that are effective, understandable and generalisable), through the use of symbolic models and explainable learning algorithms developed by the project, as well as through the adoption of a human-centred empirical learning process that integrates cognition, machine learning and human-machine interaction.
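To illustrate what a symbolic model means in practice (a toy sketch only, not the project's actual algorithm, and with data and names that are purely illustrative assumptions): instead of a black box, the learned model is a human-readable expression that explains its own predictions.

```python
# Toy symbolic regression sketch (illustrative, not TRUST-AI's method):
# search a small space of human-readable expressions for the one that
# best fits the data, then inspect the winning formula directly.

# toy data assumed to follow y = 3x + 2
xs = [0, 1, 2, 3, 4]
ys = [3 * x + 2 for x in xs]

# candidate symbolic models: readable strings over x with small constants
candidates = [f"{a}*x + {b}" for a in range(5) for b in range(5)]

def mse(expr):
    # mean squared error of a candidate expression on the toy data
    return sum((eval(expr, {"x": x}) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

best = min(candidates, key=mse)
print(best)  # the model itself is the explanation: "3*x + 2"
```

Unlike a neural network's weights, the selected expression can be read, audited, and challenged by a doctor, retailer, or consumer, which is the kind of transparency the project aims for.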
The result will be a transparent, smart, reliable and unbiased tool, applied to three case studies - in the fields of healthcare (treatment of tumours), online retail (selecting delivery times for orders) and energy (supporting consumption forecasting in buildings). However, the project may also be applicable to other sectors: banking, insurance, industry and public administration.
In addition to INESC TEC (coordinator), the TRUST-AI project (Transparent, Reliable and Unbiased Smart Tool for AI) comprises LTPLabs and five other partners from five different countries: Tartu Ülikool (Estonia), Institut National de Recherche en Informatique et en Automatique (France), Stichting Nederlandse Wetenschappelijk Onderzoek Instituten (Netherlands), Applied Industrial Technologies (Cyprus) and TAZI Bilisim Teknolojileri AS (Turkey).
The project received a €4M budget through the European Union's research and innovation programme, Horizon 2020, under grant agreement no. 952060.
Porto - March 8, 2021
For further inquiries:
Rua Dr Roberto Frias
M +351 934 224 331