Description

Many decisions that affect us personally - will we get the job, should we receive treatment, are we guilty - are made by other people. Throughout history, society has trained and trusted people such as managers, doctors and judges to make these important decisions as well as possible. Recently, scientists discovered how to train artificially intelligent computer programs to assist us in this process. But should we also trust these programs? We already knew that humans can be racist, sexist, biased, or simply mistaken. It turns out that intelligent computer programs can be too. Here we propose research into methods that help us better understand, control and correct those programs. We want to devise methods that shed light on the inner workings of so-called artificial neural networks, for example by trying to understand the complex variants of these networks in terms of how they differ from simpler networks that we fully understand. In addition, we propose to conduct research on the different forms of memory these networks can possess, and on how we can manipulate this memory so that specific information, e.g. private personal data, can be erased. Finally, we propose to correct unfair automated decisions before the programs are applied in society, by allowing the programs to ask an expert questions about the data they are learning from. We believe the development of these methods will foster the beneficial use of intelligent programs in society.
Acronym: FWOTM918
Status: Active
Effective start/end date: 1/10/18 - 30/09/20

    Flemish discipline codes

  • Other social sciences not elsewhere classified

    Research areas

  • Algorithms and computational mathematics

ID: 39336289