The concept of trust in interdisciplinary research on human-AI interaction

Trust is a central concept in human-AI interaction, but its definition varies across disciplines. The working group “Trust and Acceptance” of the interdisciplinary research focus Human and Generative Artificial Intelligence: Trust in Co-Creation has developed a cross-project overview of the concept of trust and proposes a working model that is compatible across disciplines and has customisable elements.


Since the rise of ChatGPT in 2022, generative AI has found its way into numerous areas of application: in a matter of seconds, it can produce text, audio, images and video, and it has the potential to transform a wide range of fields – whether as AI-generated news in journalism or political election campaigns, as a tutor for acquiring programming skills, or as a design aid for 3D models. A global study by Gillespie et al. (2025) shows that despite this ubiquity, there is considerable ambivalence towards the use of AI systems: even though the use of AI has increased since 2022, people’s reported trust in AI has decreased. But how is trust in AI defined in the first place?

Although many disciplines deal with the topic, there is still no shared understanding of trust in AI. Interdisciplinary collaborations therefore face the challenge of bridging different understandings and methodological approaches. The term trust is frequently used in a simplified way: the use or acceptance rate of AI assistants is often simply labelled trust. This equating of trust and acceptance, which is particularly common in disciplines such as software development, can be problematic (Baltes et al. 2025), as users can accept and use AI-generated content even though they do not trust the AI. Moreover, acceptance as a measure of trust does not reveal whether that trust was appropriate to the situation (Baltes et al. 2025).

Psychological definition of trust and trust models

From a psychological perspective, two established models are commonly used for the definition of trust:

The model of interpersonal trust by Mayer et al. (1995) defines trust as the willingness of the trust giver (trustor) to be vulnerable to the actions of a trust recipient (trustee), based on the expectation that the trustee will perform actions important to the trustor – irrespective of the trustor’s ability to monitor or control the trustee. Three facets of perceived trustworthiness determine the trust of the trustor (here: the users of AI) in the trustee (the AI):

  • the perceived competence of the trustee in the relevant domain,
  • the perceived benevolence (positive intentions) of the trustee towards the trustor,
  • the perceived integrity of the trustee (adherence to ethical principles).

The higher people rate the trustworthiness of AI, the more they trust it. Trust then determines actual trust behaviour. It is important to emphasise that these are subjective attributions: The trust facets are applied to AI systems even though AI systems objectively have no intentions or ethical principles of their own.

Lee and See (2004) adapt the model to the automation context and supplement it with additional dimensions that go beyond perceived trustworthiness and describe how trust relates to a system’s actual capabilities:

  • (Trust) calibration involves users adjusting their trust to the actual capabilities of a system; deviations lead to misuse (overtrust) or disuse (undertrust). Trust can be calibrated gradually, e.g. through information about the AI and experience with it.
  • Resolution describes how sensitively (proportionally) trust reacts to changes in system capabilities.
  • Specificity emphasises the context dependency of trust and refers to the fact that trust can relate to particular subsystems or specific situations.

This extension is relevant to human-AI interaction research, as the aim is not to increase users’ trust in AI applications across the board, but to design automated systems in such a way that misuse is reduced and cooperation between humans and systems is improved.

Trust vs. trust behaviour

While psychological research defines trust as an attitude of the user towards the system, trust behaviour describes an action that follows from trust. In the AI context, trust behaviour can be seen, for example, in the acceptance and use of AI-generated content. However, this is not a reliable indicator of genuine trust, as numerous contextual factors can play a role: a high cognitive load due to complex tasks or multitasking, for instance, can tempt users to fall back on automated support more quickly, and the novelty of an AI tool or simple curiosity can motivate users to use it without trusting it. Equating trust with the acceptance and use of AI-generated content therefore overlooks these other influences on trust behaviour.

We illustrate these differences with a brief practical example. Imagine you are asked at short notice to create a promotional video by the next day for a product manufactured by the company you work for. However, you have little expertise in creating promotional videos. To create the video, you use a generative AI tool that converts text into a video in a matter of seconds. The next morning, you upload the generated promotional video to your company’s website under your name.

The example shows: The use of AI does not necessarily reflect trust, but can be motivated by situational factors such as time pressure or a lack of expertise. Ideally, users should evaluate whether they consider the system to be competent, secure and trustworthy before using AI-generated content. Relevant characteristics can be taken into account here – such as the tool’s data protection practices, available explanations of how the content was generated, or options for revising the output.

Concept of trust and interdisciplinarity

Even if a common working definition of trust in AI is relevant for joint interdisciplinary research, trust can take on different roles in specific research projects. A rigid definition of the term may therefore fail to adequately reflect the range of projects. In order to take these differences into account while still allowing comparability, we suggest that research projects on trust in human-AI interaction provide transparent answers to the following questions:

  1. Function: What function should trust in AI fulfil in the study context? For example, should trust contribute to the use of an AI or to the acceptance of the generated output?
  2. Individual level: Which attributes ascribed to the AI at the individual level play a role in the study context? How does the trustor assess the competence, benevolence or integrity of the AI system? These are the perceptions of the users, which are recorded using measurement scales. The specific scales used depend on the function (1) and the specific study context. It should also be specified whether trust relates to the process, the interaction or the end product.
  3. Object level: Which cues at the object level of the AI (the trustee) are considered in evaluating perceived trustworthiness? This refers to observable characteristics of the AI that act as signals influencing perception at the individual level (2) and trust behaviour. They are typically varied in experiments (e.g. the influence of interface design, explanations or interaction options on the subjective perception of the AI).

[Figure: Working model of trust in human-AI interaction]

Conclusion

This article proposes an overarching working concept of trust with customisable elements in order to facilitate interdisciplinary research and to harness the diversity of perspectives. To this end, it is essential for individual projects to reflect on the points within their framework at which trust plays a role. Furthermore, it is important to be aware that acceptance alone is not sufficient for successful co-creation between humans and AI; rather, trust behaviour appropriate to the situation is required for successful collaboration.

Sources

Baltes, S./Speith, T./Chiteri, B./Mohsenimofidi, S./Chakraborty, S./Buschek, D. (2025). Rethinking Trust in AI Assistants for Software Development: A Critical Review. arXiv preprint arXiv:2504.12461.

Gillespie, N./Lockey, S./Ward, T./Macdade, A./Hassed, G. (2025). Trust, attitudes and use of artificial intelligence: A global study 2025. The University of Melbourne and KPMG. DOI: 10.26188/28822919.

Lee, J. D./See, K. A. (2004). Trust in automation: Designing for appropriate reliance. In: Human Factors 46(1), 50-80.

Mayer, R. C./Davis, J. H./Schoorman, F. D. (1995). An integrative model of organizational trust. In: Academy of Management Review 20(3), 709-734.

The blog posts published by bidt reflect the views of the authors; they do not reflect the position of the Institute as a whole.