In an age of ubiquitous digital interconnectivity, we can envisage humans increasingly delegating their social, economic, or data-related transactions to autonomous agents, whether for convenience or to manage complexity. Although the scientific knowledge to create such systems appears to be available, this transformation has not yet become commonplace, with the possible exception of basic digital assistants.
We aim to explore whether this is due to a lack of knowledge about human trust in, and acceptance of, artificial autonomous delegates that make decisions on people's behalf, or about how such delegates should be designed. We study these questions using computational agent models that are validated in a series of behavioural experiments built around the public goods game. We investigate when and how an autonomous agent may evolve from observer, via decision support, to a delegate with full autonomy in decision-making. Using VR and AR technologies, we will investigate whether the representation in which the agent is experienced influences trust. All the technology-oriented research is checked against socio-technical acceptance theories through close collaboration with experts in the social sciences. The results of this fundamental research will allow us to explore important questions related to the intelligence and interface of the envisioned agents, and lay the foundation for new types of online markets that bring autonomous agents into real-world applications.
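For readers unfamiliar with the experimental paradigm, one round of a linear public goods game can be sketched as follows. The parameter names (endowment, multiplication factor) and their values are illustrative defaults commonly used in the literature, not specifics of this project:

```python
def public_goods_round(contributions, endowment=20.0, factor=1.6):
    """Return each player's payoff after one round of a linear public goods game.

    Each player keeps (endowment - contribution); the pooled contributions
    are multiplied by `factor` and shared equally among all players.
    Values here are illustrative, not taken from the project's experiments.
    """
    n = len(contributions)
    pool = sum(contributions) * factor
    share = pool / n
    return [endowment - c + share for c in contributions]

# Example: two full contributors and two free-riders.
# Free-riders end up with higher payoffs than contributors,
# which is the social dilemma the game is designed to capture.
payoffs = public_goods_round([20, 20, 0, 0])
```

The dilemma arises because, with 1 < factor < n, full contribution maximises the group's total payoff while contributing nothing maximises each individual's payoff, which is what makes the game a natural testbed for delegation decisions.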

Effective start/end date: 1/01/19 to 31/12/22

    Flemish discipline codes

  • Other computer engineering, information technology and mathematical engineering not elsewhere classified

    Research areas

  • Conflict theory, Decisions

ID: 44929476