Description

In this project we want to contribute to the development and deployment of human-centred Artificial Intelligence systems. Despite tremendous progress in the past few years, the field currently faces some of its most important challenges at the interface between theoretical computer science, applied data science, and sociotechnical studies. Indeed, the most successful machine learning models (deep neural networks) are non-transparent, highly non-intuitive, and therefore difficult to understand. Here, we join forces to tackle this problem on three levels. First, we want to use different techniques to overcome the traditional trade-off between interpretability and learning performance, e.g., through the identification of novel neural network building blocks and through the implementation of causal inference techniques in neural networks. Second, we want to develop tools to interrogate true black-box systems, i.e., systems where only the outcome is observed (and nothing of the internal process). This will be done through test data set optimisation and the training of a meta-neural network: an algorithm that learns to predict the internal structure of a black box. Finally, we want to find efficient ways to valorise the academic insights in society through the development of recommendations and tools that can be used to train and validate trustworthy neural networks in a real-life environment. The integration of these actions should ensure that Artificial Intelligence is applied in a more conscious way, better aligned with our values.
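To make the meta-neural-network idea concrete, the following is a minimal sketch, not a project deliverable: it assumes a toy family of linear black boxes (purely illustrative), probes each one with a shared set of test inputs, and fits a meta-model that maps observed outcomes back to the hidden internal parameters. All names and the linear setup are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_black_box(w):
    # Hypothetical black box: only outputs are observable; w is the hidden structure.
    return lambda x: x @ w

# Simulate many black boxes, each probed with the same fixed test inputs.
n_boxes, n_probes, dim = 200, 8, 4
probes = rng.normal(size=(n_probes, dim))    # shared probe (test) inputs
hidden_ws = rng.normal(size=(n_boxes, dim))  # hidden internal parameters
responses = hidden_ws @ probes.T             # observed outcomes only

# Meta-model: a linear map from observed responses to hidden parameters,
# fitted by least squares over the simulated boxes.
M, *_ = np.linalg.lstsq(responses, hidden_ws, rcond=None)

# Interrogate a new, unseen black box through its outputs alone.
w_true = rng.normal(size=dim)
box = make_black_box(w_true)
w_pred = box(probes) @ M

print(np.allclose(w_pred, w_true, atol=1e-6))
```

In this toy linear setting the hidden parameters are recovered exactly; the project's meta-neural network would replace the least-squares map with a learned network and a far richer family of black boxes.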
Acronym: IRP15
Status: Active
Effective start/end date: 1/11/19 – 31/10/24

    Flemish discipline codes

  • Human-centered design

    Research areas

  • artificial intelligence systems
  • interdisciplinary perspectives
  • human-centred artificial intelligence

ID: 47589098