
Artificial intelligence

Definition and delimitation

A widely accepted definition of artificial intelligence (AI) is: artificial intelligence is concerned with the development of computer algorithms for problems that humans are currently better at solving [1]. This definition deliberately bypasses the term intelligence. In everyday psychological usage, intelligence is commonly associated with abstract, higher cognitive performance. We call a person intelligent if he or she plays chess very well or holds a doctorate in physics. It is less impressive if someone recognises cats without error, builds a tower of building blocks that does not collapse, summarises a newspaper article in one sentence or solves a word problem from a third-grade maths book. Yet the latter achievements are much more difficult to capture in algorithms than the former.

Artificial intelligence is a sub-field of computer science and is traditionally classified as applied computer science. The most important topics of AI include knowledge representation and inference, heuristic search and planning, and machine learning. Major application areas are natural language processing, image and scene analysis, intelligent robotics and games [2].

Within computer science, AI research is related in particular to logic, complexity theory and declarative programming, as well as to neighbouring application areas such as image processing and robotics. Like computer science in general, AI research is both mathematical and engineering in nature. However, AI research has additional epistemological references: it provides formal methods for generative theories of human intelligent behaviour. Just as physical theories are formalised mathematically, theories about human information processing can be made more precise by means of computer simulation [3]. But engineering-oriented AI is also often guided by human intelligent behaviour and thus engages in “psychonics”, analogous to bionics in engineering [4]. Insights into human problem solving are used as inspiration for the development of new algorithms. However, there is no claim that these algorithms function according to principles similar to those of humans.

Most AI research focuses on the development of algorithms and programmes for limited problem areas. This is referred to as weak AI. In contrast, strong AI aims to replicate general intelligent behaviour. The everyday understanding of AI often rests on erroneously interpreting a weak AI system as a strong one. We automatically assume that an AI system that can recognise different kinds of animals in camera images also knows something about these animals, and that it can recognise other things, for example plants or cars. But this is not the case. We inaccurately attribute our kind of intelligence to an AI system, just as we do to our human counterparts [5]. Most researchers assume that intentionality and consciousness are necessary for general intelligence. Whether these ingredients of human intelligence will ever be understood well enough to be formulated as a computer programme is an open question.

In general, one should carefully analyse a problem before applying AI methods. If the problem can be solved with standard algorithms, such as those found in textbooks on algorithms and data structures, one should use them. With these algorithms, it is certain that a correct solution will be found. AI algorithms, on the other hand, usually provide only better or worse approximations to the desired solution. One needs AI algorithms either when a problem is so complex that a standard algorithm could not provide a solution in reasonable time, or when a problem cannot be described completely in formal terms. In the first case, one uses heuristic methods; in the second, machine learning.
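
To illustrate the first case, the following is a minimal sketch of heuristic (A*) search in Python; the 4x4 grid world and the Manhattan-distance heuristic are assumptions made purely for illustration, not taken from the text.

    import heapq

    def a_star(start, goal, neighbours, h):
        # Heuristic search: h estimates the remaining cost to the goal.
        # If h never overestimates, the path found is optimal; with a
        # cruder heuristic one obtains only an approximation, but faster.
        frontier = [(h(start), 0, start, [start])]  # (f, g, node, path)
        best_g = {start: 0}
        while frontier:
            f, g, node, path = heapq.heappop(frontier)
            if node == goal:
                return path
            for nxt, cost in neighbours(node):
                g_new = g + cost
                if g_new < best_g.get(nxt, float("inf")):
                    best_g[nxt] = g_new
                    heapq.heappush(frontier, (g_new + h(nxt), g_new, nxt, path + [nxt]))
        return None  # no path exists

    # Toy 4x4 grid world (invented example): unit-cost moves between cells.
    def neighbours(cell):
        x, y = cell
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < 4 and 0 <= ny < 4:
                yield (nx, ny), 1

    manhattan = lambda c: abs(3 - c[0]) + abs(3 - c[1])
    print(a_star((0, 0), (3, 3), neighbours, manhattan))

The goal-distance estimate steers the search towards promising states instead of exploring the whole state space exhaustively.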

History

The idea that humans themselves are capable of creating artificial, human-like beings goes far back in human history – from the ancient homunculus to Mary Shelley’s novel Frankenstein. Certainly the most important pioneer and mastermind of AI is Alan Turing. In his essay Computing Machinery and Intelligence [6] he formulated the Turing Test (originally: imitation game): a human communicates with two unseen interlocutors, one of them a human and one a machine. If the human judge cannot decide which is the human and which is the machine, the test is considered passed. However, the test only checks the functional equivalence of human and AI system, i.e. the answers, not the underlying information-processing processes. This was criticised in particular in John Searle’s Chinese Room thought experiment [7]. In cognitive science research, by contrast, algorithmic models are compared in detail with empirical data [3].

The beginning of AI research is dated to 1956. The computer science pioneer John McCarthy organised the Dartmouth Conference in the summer of 1956 – a meeting of scientists who were convinced that every aspect of human intelligence could be described so precisely that it could be simulated with a computer programme. Already in the early days of AI, three directions emerged that are still evident in today’s research: AI based on formal logic with strong ties to epistemology, represented by John McCarthy at Stanford; cognitive AI, with the claim of emulating human intelligent behaviour, represented by Turing Award winners Allen Newell and Herbert Simon at Carnegie Mellon University in Pittsburgh; and engineering-oriented AI with a view to innovative applications, represented by Marvin Minsky at MIT (on the early proponents of AI, cf. [8]). One of the first and most influential representatives of AI in Germany is Wolfgang Bibel, who prefers the term Intellektik.

The central goal of early AI research was the algorithmic implementation of typical human performances such as problem solving, learning, image comprehension, language comprehension, translation or chess playing. Computer programmes were quickly created for all of these areas, raising great expectations for further research results (a collection of early work can be found in [9]). However, the report published by James Lighthill for the British Science Research Council in 1973 soberly stated that the existing work suggested AI would never get beyond solving game problems. This triggered the first so-called AI winter: public interest waned and AI projects had little chance of receiving third-party funding. In the 1980s, AI research experienced its second upswing with research on expert systems. Efficient algorithms for drawing conclusions were created; Lisp and Prolog were further developed as special AI programming languages; and special hardware for more efficient processing was developed, notably the Lisp machine – much as special hardware is being developed today in the context of deep learning.

The second AI winter was triggered primarily by the so-called knowledge engineering bottleneck – the realisation that only part of human knowledge is explicitly available and can be represented formally. Large areas of human knowledge, especially perceptual knowledge and highly automated action routines, are implicit and cannot be captured, or can only be captured inadequately, with knowledge acquisition methods. So-called common sense knowledge, which people almost always use automatically when drawing conclusions, can likewise hardly, or only incompletely, be represented in the form of explicit knowledge. For example, people know that picking up a pencil has no influence on the position of a pad lying some distance away on the same desk. The difficulty of explicitly representing all the aspects that are not changed by an action is called the frame problem. Another example is the qualitative knowledge of physical laws already available to children. A child of a certain age will not attempt to build a tower by placing a ball on a pyramid – an AI system, on the other hand, does not have this general knowledge [10]. Knowledge-based, symbolic AI dominated research until the second AI winter. Nevertheless, a major success for AI research came in 1997 with the victory of IBM’s chess computer Deep Blue against the then world chess champion Garry Kasparov. This breakthrough in computer chess was partly due to more powerful hardware, but also to the AI algorithms used for heuristic search.
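
Planning systems in the STRIPS tradition sidestep part of the frame problem by describing an action solely through the facts it adds and deletes; every unmentioned fact persists by default. Below is a minimal sketch using the pencil-and-pad example above; the state representation and predicate names are invented for illustration.

    # State as a set of facts; an action lists only what it adds and deletes.
    state = {("on_desk", "pencil"), ("on_desk", "pad")}

    def apply_action(state, add, delete):
        # All facts not mentioned in add/delete persist by default,
        # so no explicit frame axioms ("the pad stays put") are needed.
        return (state - delete) | add

    # Picking up the pencil changes only facts about the pencil.
    new_state = apply_action(
        state,
        add={("held", "pencil")},
        delete={("on_desk", "pencil")},
    )
    print(new_state)  # ('on_desk', 'pad') is still present – the pad is untouched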

However, since the beginning of AI research, there have also been scientists working on machine learning. From the late 1990s onwards, considerable progress was made, especially in statistical machine learning, and this subfield of AI gained more and more weight relative to symbolic, logic-based work. At the same time, public interest in AI waned sharply, and the so-called winter without end began. During this period, no new chairs were filled under this designation; instead, people preferred to speak of cognitive systems or intelligent systems. In 2011, IBM’s Watson won the quiz show Jeopardy! It combined proven techniques of semantic information processing with information retrieval methods (as used to search for web pages) to answer natural-language questions. Watson was not referred to as an AI system, but as a smart machine that works with methods of cognitive computing [11]. A detailed account of the history of AI up to this point, written by AI researcher Nils Nilsson, can be found in [12].

Triggered by impressive successes of deep neural networks, especially in image classification and language processing, the field of AI has been experiencing a new summer since around 2015. Recent developments in so-called explainable AI are bringing about a renaissance of symbolic AI in combination with data-intensive machine learning.

Application and examples

Every era of AI research has produced methods that are still being used and developed today. Many methods have become part of the standard repertoire of computer science. For example, search methods developed in AI are successfully applied in operations research and logistics. Algorithms for drawing conclusions can be found in today’s systems for automated reasoning. Marvin Minsky’s definition of AI takes this observation into account: AI deals with computer problems that have not yet been solved. IT journalist and internet pioneer Esther Dyson noted in the 1990s that the most successful applications of AI technologies are those in which the AI methods are embedded in standard software like sultanas in a sultana loaf: the sultanas don’t take up much space, but they have great nutritional value. Among the best-known applications of the early years are the chatbot ELIZA by Joseph Weizenbaum (1966), which was based only on simple pattern matching but gave the impression of understanding its interlocutor [13]; the first mobile robot, Shakey, developed at the Stanford Research Institute in the late 1960s [14]; and the expert system MYCIN for diagnosing bacterial infections, implemented in the 1970s [15].
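
To illustrate how little machinery sufficed for that impression, here is a minimal sketch of ELIZA-style keyword matching; the three rules are invented for illustration, and Weizenbaum’s original script was considerably richer.

    import re

    # A response rule pairs a keyword pattern with an answer template.
    RULES = [
        (re.compile(r"\bI need (.+)", re.I), "Why do you need {0}?"),
        (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
        (re.compile(r"\b(?:mother|father)\b", re.I), "Tell me more about your family."),
    ]

    def respond(utterance):
        for pattern, template in RULES:
            match = pattern.search(utterance)
            if match:
                return template.format(*match.groups())
        return "Please go on."  # default answer when nothing matches

    print(respond("I am worried about my exam"))
    # -> "How long have you been worried about my exam?"

The system has no model of meaning at all; the appearance of understanding arises entirely from reflecting fragments of the user’s input back in the reply.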

Later milestones include the robot football championships held since 1997; the use of neural networks for document analysis since the end of the 1990s, for example at Deutsche Post [16]; and the DARPA Grand Challenge, won in 2005 by Stanley, the converted VW Touareg of Sebastian Thrun’s team. AI planning algorithms can be found in industrial production [17] as well as in components of the Mars rovers [18]. Currently, applications of deep learning are attracting the most attention.

Criticism and problems

More than many other research fields, AI research is characterised by an interplay of over-promising by researchers and unrealistic ideas and exaggerated expectations on the part of the public and industry, followed by disillusionment and a collapse of interest outside the discipline itself. The current AI hype followed a long phase in which the topic was not taken seriously by other subfields of computer science and went unnoticed by the public. AI research continued during this time, but mainly as basic research. Never before have AI methods been put into practice as quickly as they are now. AI researchers – much like nuclear physicists in the middle of the last century – are confronted with the fact that their research results are suddenly being put to practical use. AI systems can help relieve the workload of humans and master complex tasks in many areas. However, careful consideration should be given to how AI systems are deployed. For example, an AI system in geriatric care can relieve nursing staff and give them more time for care, or it can lead to further staff reductions and even delegate emotional interaction to robots. Accordingly, interdisciplinary research and a dialogue involving society as a whole are necessary in order to design future AI systems for the benefit of people (digital ethics).

On the one hand, people harbour unrealistic fears about AI systems; on the other, unfounded trust in them. For example, there is the fear that AI systems will become smarter than us humans and could then take over power. However, it is rather unlikely that an autonomous vehicle will suddenly develop a will of its own, become evil and steer the car into a wall. Much more real are dangers to data security, for example that hackers take over Car2Car communication and cause all cars to drive through an intersection on red. With data-driven AI systems, as with other data-driven software, issues of privacy and data security need to be sensibly addressed – so that useful applications can be implemented, but also so that data cannot be used or accessed unlawfully (data sovereignty).

Research

How the methodological focus of AI has developed over the decades can be traced through the most important scientific conferences and journals. The most important conferences open to all areas of AI are the International Joint Conference on Artificial Intelligence (IJCAI, held since 1969 and so far only once in Germany: 1983 in Karlsruhe) and the conference of the Association for the Advancement of Artificial Intelligence (AAAI, since 1980). The AAAI was originally the American Association for Artificial Intelligence and was renamed in 2007, as it had become the international umbrella organisation for scientific AI. The most important journal is Artificial Intelligence.

At bidt, a hybrid AI model is being developed in the internal project “Human-AI Partnerships for Explanations in Complex Sociotechnical Systems” to identify the causes of accidents and malfunctions in technical systems. Logical causal models are adapted step by step in an interactive approach: for a given case, a root cause analysis is derived from an event log and communicated to an expert via an explanation interface. The expert can inspect the model and correct it if necessary. Over time, this results in an increasingly accurate prediction model.
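
Purely as an illustration of such an interactive correction loop – all names below are hypothetical, and the toy lookup table merely stands in for the project’s logical causal models – one might picture:

    # Hypothetical sketch: a trivially simple 'causal model' that an expert
    # refines after inspecting its explanations. Not the project's actual API.
    class CausalModel:
        def __init__(self, causes):
            self.causes = causes  # maps observed events to suspected root causes

        def root_cause(self, event_log):
            for event in event_log:
                if event in self.causes:
                    return self.causes[event]
            return "unknown"

        def correct(self, event, cause):
            self.causes[event] = cause  # expert correction refines the model

    model = CausalModel({"overheating": "blocked fan"})
    print(model.root_cause(["overheating"]))       # blocked fan
    model.correct("voltage drop", "worn contact")  # expert adds a missing link
    print(model.root_cause(["voltage drop"]))      # worn contact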

The project “Responsible Robotics (RRAI)” deals with ethical and social aspects of the use of AI-based robots in healthcare.

The project “Empowerment in Tomorrow’s Production: Rethinking Mixed Skill Factories and Collaborative Robot Systems” is located in the context of Industry 4.0: it investigates novel concepts for collaboration between humans and AI-based robots in factories.

The PhD project “Human-like Perception in AI Systems” aims to explore human-like perception in AI and the improvement of human perception by means of AI.

The junior research group “AI Tools – Continuous Interaction with Computational Intelligence Tools” investigates how artificial intelligence can be designed in such a way that it is comprehensible to users and can be used by them for their own purposes.

Further links and literature

The KI Campus offers introductions to the topic of artificial intelligence, ranging from courses requiring no prior knowledge to specialised technical topics.

A generally understandable presentation of what AI can and cannot do is given by: Gary Marcus, Ernest Davis. Rebooting AI: Building artificial intelligence we can trust. Vintage, 2019.

The most widely used textbook at university level is [2].

A broad overview of topics and methods in AI is: Günther Görz, Ute Schmid, Tanya Braun. Handbuch der Künstlichen Intelligenz. 6th edition. De Gruyter, 2021.

The professional body for scientific AI in the German-speaking world is the Department of AI (FB KI) of the Gesellschaft für Informatik (German Informatics Society). In cooperation with the FB KI, the journal Künstliche Intelligenz (Artificial Intelligence) is published and the annual KI conference, held since 1975, is organised.

Sources

[1] Elaine Rich. Artificial Intelligence. McGraw-Hill, 1983.

[2] Stuart Russell, Peter Norvig. Artificial Intelligence: A Modern Approach. 4th Edition. Pearson, 2020.

[3] Allan Collins, Edward E. Smith. A Perspective on Cognitive Science. In: Allan Collins, Edward E. Smith (eds.). Readings in Cognitive Science: A Perspective from Psychology and Artificial Intelligence. Morgan Kaufmann, 1988.

[4] Ute Schmid. Cognition and AI. Künstliche Intelligenz, 22(1): 5–7, 2008.

[5] Daniel C. Dennett. Intentional Systems. The Journal of Philosophy, 68(4): 87-106, 1971.

[6] Alan M. Turing. Computing Machinery and Intelligence. Mind, LIX(236): 433-460, 1950.

[7] John R. Searle. Minds, Brains, and Programs. Behavioral and Brain Sciences, 3(3): 417-457, 1980.

[8] Bruce G. Buchanan. A (Very) Brief History of Artificial Intelligence. AI Magazine 26(4): 53-60, 2005.

[9] Marvin Minsky (ed.). Semantic Information Processing. MIT Press, 1968.

[10] Kenneth D. Forbus. Qualitative Reasoning about Physical Processes. 7th International Joint Conference on Artificial Intelligence: 326-330, 1981.

[11] John Kelly III and Steve Hamm. Smart Machine. IBM’s Watson and the Era of Cognitive Computing. Columbia University Press, 2013.

[12] Nils J. Nilsson. The quest for artificial intelligence. Cambridge University Press, 2009.

[13] Joseph Weizenbaum. ELIZA – A Computer Program For the Study of Natural Language Communication Between Man And Machine. Communications of the ACM, 9(1): 36-45, 1966.

[14] Benjamin Kuipers, Edward A. Feigenbaum, Peter E. Hart, Nils J. Nilsson. Shakey: From Conception to History. AI Magazine, 38(1): 88-103, 2017.

[15] Edward H. Shortliffe. Computer-Based Medical Consultations: MYCIN. Elsevier, 1976.

[16] Marcus Pfister, Sven Behnke, Raúl Rojas. Recognition of Handwritten ZIP Codes in a Real-World Non-Standard-Letter Sorting System. Applied Intelligence, 12(1-2): 95-114, 2000.

[17] Christoph Legat, Birgit Vogel-Heuser. A configurable partial-order planning approach for field level operation strategies of PLC-based industry 4.0 automated manufacturing systems. Engineering Applications of Artificial Intelligence, 66, 128-144, 2017.

[18] John L. Bresina, Ari K. Jónsson, Paul H. Morris, Kanna Rajan. Activity Planning for the Mars Exploration Rovers. International Conference on Automated Planning and Scheduling (ICAPS): 40-49, 2005.