Strategies for increasing the transparency of assistive systems are a key approach to improving their understandability and, thus, their trustworthiness. This project was undertaken to understand how people develop trust towards personalized assistive systems over repeated exposures. It studies the dynamics of trust restoration after system malfunctions and investigates how effectively system transparency and explanations support trust formation and restoration.

To gain an understanding of the processes involved in trust formation, we first defined the key concepts and outlined the connections between them, drawing on the explainable AI and explanation science literature. In particular, we analysed how system explainability and trust-building are interrelated at different stages of an interaction. Furthermore, the role of third parties, such as commercial companies, was investigated as a factor that enables or hinders individuals' development of trust towards assistive systems. Finally, the intrinsic limits and fundamental features of explanations, such as structural qualities and communication strategies, were identified and discussed.

To test the hypotheses derived from our literature review, we designed and conducted an experiment around a use case featuring a personalized learning assistant (PLANT). We aimed to study how people develop trust towards personalized assistive systems over time by simulating an abstracting system that supported participants in learning new texts and taking quizzes. We used a 2 x 2 study design, manipulating the system's malfunction and transparency. A combined qualitative and quantitative methodological approach was used to analyse the data from 184 participants.
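As an illustration of how the quantitative part of such a 2 x 2 design could be analysed, the sketch below runs a two-way ANOVA on trust ratings. The file name, column names (malfunction, transparency, trust), and the choice of ANOVA are assumptions made for this sketch, not a description of the project's actual analysis pipeline.

```python
# Minimal sketch: two-way ANOVA for a 2 x 2 design (malfunction x transparency)
# on self-reported trust. All names below are illustrative assumptions.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical data file: one row per participant, with the two manipulated
# factors and a trust score aggregated from questionnaire items.
df = pd.read_csv("plant_study_responses.csv")

# Linear model with both main effects and their interaction.
model = smf.ols("trust ~ C(malfunction) * C(transparency)", data=df).fit()

# Type II ANOVA table: main effects of malfunction and transparency,
# plus their interaction, on trust.
print(anova_lm(model, typ=2))
```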

Project Partners