Project description
Digital tactile sensors are increasingly being coupled with artificial intelligence to support people in their daily and professional activities, e.g. in lane-keeping assistants in cars or in robots that assist with precision operations. Given the importance of artificial assistance systems, the project investigated how they can be included in the training process from the perspectives of cognitive science, computer science, and philosophy, since cooperative learning will increasingly involve hybrid pairs of human and artificial learners.
Using a novel interdisciplinary approach, the project team investigated hybrid learning between humans and AI as a basis for innovative tactile augmentation and assistance. To this end, three distinct but complementary perspectives were integrated:
- The cognitive neuroscience of human biological learning through vision and touch,
- the philosophy of self-confidence and trust in digital tactile assistants,
- the computer science design of machine learning algorithms tailored to tactile learning with AI.
The project provided insights into how AI-assisted learning affects different sensory modalities and how trust and decision-making are shaped in these contexts:
- Learning Performance & Confidence:
Results showed that learning speed and accuracy remained consistent across sensory modalities. However, tactile learning led to faster reaction times, suggesting increased perceptual confidence in touch-based feedback.
- Human-AI Interaction:
The development of the Maze Task, an innovative decision-making experiment, revealed similarities between human learning and AI-based decision models. The findings indicated that human learning could be modeled using Dyna-Q methods with multiple simulation runs.
- Trust & Ethics in AI:
Philosophical analyses demonstrated that labeling AI as “trustworthy” led users to perceive the technology as more reliable and benevolent—even without objective justification. By contrast, the term “reliable” provided a neutral yet effective alternative, fostering engagement without inducing undue trust.
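For readers unfamiliar with the Dyna-Q family of methods mentioned above: Dyna-Q combines direct reinforcement learning from real experience with extra "planning" updates replayed from a learned model of the environment, which is what the multiple simulation runs refer to. The following is a minimal illustrative sketch on a toy grid maze; the grid size, wall positions, rewards, and hyperparameters are assumptions for demonstration only and do not reflect the project's actual Maze Task.

```python
import random

# Toy deterministic grid maze (illustrative, not the project's Maze Task).
SIZE = 4
WALLS = {(1, 1), (2, 1)}                       # assumed wall cells
START, GOAL = (0, 0), (3, 3)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right

def step(state, action):
    """One real environment transition: reward 1 at the goal, else 0."""
    r, c = state[0] + action[0], state[1] + action[1]
    nxt = (r, c)
    if not (0 <= r < SIZE and 0 <= c < SIZE) or nxt in WALLS:
        nxt = state                            # bump into wall/border: stay put
    return nxt, (1.0 if nxt == GOAL else 0.0)

def dyna_q(episodes=50, planning_steps=20, alpha=0.5, gamma=0.95,
           epsilon=0.1, seed=0):
    rng = random.Random(seed)
    Q = {}      # (state, action) -> estimated value
    model = {}  # (state, action) -> (next_state, reward), learned from experience
    for _ in range(episodes):
        s = START
        while s != GOAL:
            # epsilon-greedy action selection
            if rng.random() < epsilon:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: Q.get((s, x), 0.0))
            s2, r = step(s, a)
            # direct RL update from the real transition
            best_next = max(Q.get((s2, x), 0.0) for x in ACTIONS)
            Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (
                r + gamma * best_next - Q.get((s, a), 0.0))
            model[(s, a)] = (s2, r)
            # planning: replay simulated transitions sampled from the model
            for _ in range(planning_steps):
                (ps, pa), (ps2, pr) = rng.choice(list(model.items()))
                pbest = max(Q.get((ps2, x), 0.0) for x in ACTIONS)
                Q[(ps, pa)] = Q.get((ps, pa), 0.0) + alpha * (
                    pr + gamma * pbest - Q.get((ps, pa), 0.0))
            s = s2
    return Q

def greedy_path(Q, max_steps=30):
    """Follow the learned greedy policy from START; return visited states."""
    s, path = START, [START]
    for _ in range(max_steps):
        if s == GOAL:
            break
        a = max(ACTIONS, key=lambda x: Q.get((s, x), 0.0))
        s, _ = step(s, a)
        path.append(s)
    return path
```

The planning loop is the point of the method: each real step is amortized into many simulated updates, so the agent learns a good maze policy from comparatively little real experience, which is one reason such models are plausible candidates for describing fast human learning.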
Contact

Project team

Dr. John Dorsch
Postdoctoral researcher, Faculty of Philosophy, Philosophy of Science and the Study of Religion | Ludwig-Maximilians-Universität in Munich

Dr. Isabelle Ripp
Postdoctoral researcher, Cognition, Values, Behaviour Research Lab; Philosophy of Mind | Ludwig-Maximilians-Universität in Munich

Prof. Dr. Maximilian Moll
Holder of the Endowed Junior Professorship, Operations Research – Prescriptive Analytics at the Faculty of Computer Science | University of the Bundeswehr Munich

Prof. Dr. Merle Fairhurst
Chair, Head of Biological Psychology | University of the Bundeswehr Munich

Prof. Dr. Ophelia Deroy
Chair, Head of Philosophy of Mind | Ludwig-Maximilians-Universität in Munich

Aylin Borrmann
PhD Student, Institute for Computer Science | University of the Bundeswehr Munich