
Augmented reality

Definition and delimitation

In the reality-virtuality continuum [1], augmented reality (AR) and augmented virtuality are part of what is known as mixed reality. In contrast to virtual reality (VR) systems, which create a complete virtual environment, in AR systems the user’s real environment is enriched by virtually superimposed elements (text information, images, videos). According to Azuma [2], AR systems should (1) combine real and virtual objects in a real environment, (2) run interactively and in real time, and (3) register (align) real and virtual objects with each other.

AR uses computer vision (CV) in combination with sensors to help display relevant content correctly. To do this, sensor data is collected, combined and processed using a wide variety of algorithms.
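One simple and widely used form of such sensor fusion in mobile AR tracking is a complementary filter, which blends gyroscope and accelerometer readings into a stable orientation estimate. The following minimal sketch (Python; constants and signal names are illustrative and not taken from any specific AR SDK) shows the idea:

    # Minimal sketch of one common sensor-fusion step in mobile AR tracking:
    # a complementary filter that blends the fast but drifting gyroscope
    # with the noisy but drift-free accelerometer to estimate device tilt.
    # Constants and names are illustrative, not from a specific AR SDK.
    import math

    ALPHA = 0.98          # weight given to the integrated gyroscope estimate
    pitch = 0.0           # current tilt estimate in radians

    def update_pitch(gyro_rate, accel_y, accel_z, dt):
        """Fuse one gyroscope reading (rad/s) with one accelerometer reading."""
        global pitch
        gyro_estimate = pitch + gyro_rate * dt           # integrate angular rate
        accel_estimate = math.atan2(accel_y, accel_z)    # gravity gives an absolute tilt reference
        pitch = ALPHA * gyro_estimate + (1.0 - ALPHA) * accel_estimate
        return pitch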

To be able to insert a virtual object into the scene, the technology must be able to determine the distance and angle of the objects in space. In the process, the 3-D measurements are accumulated in a map of the surroundings. For this purpose, the camera pose (position and orientation) and, in more advanced AR systems, semantic object information are calculated. To provide the user with an immersive and believable AR experience, the data is processed using machine learning and artificial intelligence to recognise familiar objects, surfaces and geometries (e.g. walls or the floor).
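The registration step can be made concrete with the standard pinhole camera model: once tracking has estimated the camera pose, a virtual anchor point in world coordinates can be projected to the pixel at which it must be drawn. The sketch below (Python/NumPy, with made-up intrinsics and an identity pose for brevity) only illustrates this relationship:

    # Registration sketch: project a virtual 3-D anchor point into the image,
    # given the estimated camera pose (R, t) and camera intrinsics K.
    # Values are illustrative, not measurements from a real device.
    import numpy as np

    K = np.array([[800.0,   0.0, 320.0],     # focal lengths and principal point (pixels)
                  [  0.0, 800.0, 240.0],
                  [  0.0,   0.0,   1.0]])
    R = np.eye(3)                             # rotation world -> camera (identity for brevity)
    t = np.zeros(3)                           # translation world -> camera

    def project(point_world):
        """Map a 3-D world point to 2-D pixel coordinates (pinhole model)."""
        p_cam = R @ point_world + t           # transform into the camera frame
        p_img = K @ p_cam                     # apply the intrinsic matrix
        return p_img[:2] / p_img[2]           # perspective division -> pixel coordinates

    # A virtual object anchored 2 m in front of the camera lands at the image centre.
    print(project(np.array([0.0, 0.0, 2.0])))   # -> [320. 240.]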

Finally, the 3-D knowledge extracted by the CV algorithms is used to display the desired virtual content using computer graphics. This requires a display device. Using projectors or data glasses (so-called head-mounted displays, HMDs), holography and telepresence become possible, i.e. the virtual projection of objects or persons into the physical environment. Well-known examples of HMDs are Microsoft’s HoloLens and the Magic Leap. However, most AR platforms use simple cameras and common screens that insert digital content into a real-time video. AR is thus already an integral part of today’s iPhones and Android smartphones.
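On smartphones, this display step amounts to compositing rendered content onto the live camera image. A deliberately simplified sketch of that overlay idea (using OpenCV with an ordinary webcam as a stand-in for an AR-capable camera; the anchor is simply drawn at the image centre rather than at a tracked position) could look as follows:

    # Compositing sketch: superimpose a "virtual" element on the live camera
    # image. The anchor position is fixed at the image centre for simplicity;
    # a real AR system would use the tracked, projected anchor instead.
    import cv2

    cap = cv2.VideoCapture(0)                 # any webcam stands in for the AR camera
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        h, w = frame.shape[:2]
        u, v = w // 2, h // 2                 # placeholder for the projected anchor position
        cv2.circle(frame, (u, v), 20, (0, 255, 0), -1)   # the superimposed virtual element
        cv2.imshow("AR overlay sketch", frame)
        if cv2.waitKey(1) == 27:              # Esc quits
            break
    cap.release()
    cv2.destroyAllWindows()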

History

In the beginning, no distinction was made between augmented reality and virtual reality, and the first concepts were envisioned before 1960. The technology itself, however, had its beginning in 1968, when Ivan Sutherland introduced the first HMD system, which he called The Sword of Damocles. The term augmented reality appeared in 1990, when Boeing researcher Tom Caudell first used it; two years later, Louis Rosenberg developed Virtual Fixtures, the first fully functional AR system.

AR research has come a long way since then, and the list of use cases for AR continues to grow. From sports broadcasts (e.g. overlays in Super Bowl and other American football coverage) and NASA simulations to immersive marketing experiences, AR has arrived in the mass market and is now used, often unnoticed, in a wide variety of fields and markets. At the latest since Google Glass (2014), Microsoft HoloLens (2016) and Pokémon Go (2016), AR has become a household name for many.

Application and examples

Some AR platforms/systems offer their technology as a complete package including hardware (e.g. HoloLens), but many technologies also work on a standard smartphone. Well-known software frameworks for mobile devices include Apple’s ARKit and Google’s ARCore.

AR is now used in a variety of ways and is applied in many areas of everyday life. These include:

  • Entertainment, e.g. games, fashion & lifestyle (such as virtual glasses or make-up)
  • Visualisation, simulation, e.g. placing virtual furniture in one’s own home
  • Remote maintenance, training and further education, e.g. extending traditional learning platforms with AR applications
  • Medicine and health, e.g. surgical guidance
  • Fitness and rehabilitation, e.g. assistance systems
  • Marketing, e.g. embedding links in print media, or live interactions
  • Architecture, e.g. presentation and visualisation
  • Virtual travel and navigation, e.g. projection of navigation instructions in cars
  • Collaboration of distributed teams, e.g. through telepresence in video conferences
  • and much more.

Criticism and problems

Safety: On the one hand, training in an AR environment is seen as a promising way to practise various activities with real objects safely, since potential risks of the real situation are eliminated. On the other hand, the use of AR applications in everyday life also poses significant risks, such as potential distraction in road traffic [3].

Privacy concerns: These arise in connection with the continuous recording and analysis of the user’s environment; see privacy, cybercrime.

Terminology: The definitions of mixed, augmented and virtual reality are handled inconsistently in the literature. The terms “virtual reality” and “augmented reality” are often even used synonymously, which leads to confusion.

Research

The PhD project “Mixed reality as a new rehabilitative approach for disorders of everyday actions after chronic neurological disease” investigates how patients with action disorders, e.g. after stroke or dementia, can be supported in their everyday actions with the help of AR technology. For this purpose, various holographic stimuli are being investigated to determine the most effective cues, as well as the effect of these externally generated stimuli on the sensorimotor system. Further information on the project can be found in the following publications:

  • Rohrbach, N. et al. (2019). An augmented reality approach for ADL support in Alzheimer’s disease: a crossover trial. Journal of NeuroEngineering and Rehabilitation, 16(1), 1-11.
  • Rohrbach, N. et al. (2021). Fooling the size-weight illusion – Using augmented reality to eliminate the effect of size on perceptions of heaviness and sensorimotor prediction. Virtual Reality, 1-10.

The PhD project “Real-time scene understanding on mobile devices” investigates algorithms that can efficiently process visual information about the environment. The focus is on developing methods for the visual understanding of static and dynamic 3-D scenes using mobile sensors. This includes extracting object information such as poses, object categories and the spatial and semantic relationships between objects; a simplified sketch of such a scene representation follows after the publication list. Further information on the project can be found in the following publications:

  • J. Wald, H. Dhamo, N. Navab, F. Tombari, “Learning 3D Semantic Scene Graphs from 3D Indoor Reconstructions”, IEEE Computer Vision and Pattern Recognition 2020.
  • J. Wald, A. Avetisyan, N. Navab, F. Tombari, M. Niessner, “RIO: 3D Object Instance Re-Localization in Changing Indoor Environments”, International Conference on Computer Vision 2019.
  • J. Wald, K. Tateno, J. Sturm, N. Navab, F. Tombari, “Real-Time Fully Incremental Scene Understanding on Mobile Platforms”, IEEE Robotics and Automation Letters, October 2018.
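To make the kind of scene representation described above more tangible, the following minimal sketch models objects with categories and poses plus semantic relations between them. It is purely illustrative and not the data structure used in the cited work:

    # Illustrative 3-D scene representation: object categories, poses and
    # semantic relations, as described for the scene-understanding project.
    # Purely a teaching example, not code from the cited publications.
    from dataclasses import dataclass, field

    @dataclass
    class SceneObject:
        category: str                  # e.g. "chair", "table"
        position: tuple                # (x, y, z); a full pose would also store orientation

    @dataclass
    class SceneGraph:
        objects: dict = field(default_factory=dict)     # object id -> SceneObject
        relations: list = field(default_factory=list)   # (subject id, predicate, object id)

    scene = SceneGraph()
    scene.objects["chair_1"] = SceneObject("chair", (0.4, 0.0, 1.2))
    scene.objects["table_1"] = SceneObject("table", (0.0, 0.0, 1.0))
    scene.relations.append(("chair_1", "standing next to", "table_1"))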

Sources

[1] Milgram, P., & Kishino, F. (1994). A taxonomy of mixed reality visual displays. IEICE Transactions on Information and Systems, 77(12), 1321-1329.

[2] Azuma, R. T. (1997). A survey of augmented reality. Presence: Teleoperators & Virtual Environments, 6(4), 355-385.

[3] Faccio, M., & McConnell, J. J. (2020). Death by Pokémon GO: The economic and human cost of using apps while driving. Journal of Risk and Insurance, 87(3), 815-849.