There is broad agreement among modern Artificial Intelligence professionals that AI falls short of human capabilities in some critical sense, even though AI algorithms have beaten humans in many specific domains such as chess. Some have suggested that as soon as AI researchers figure out how to do something, that capability ceases to be regarded as intelligent: chess was considered the epitome of intelligence until Deep Blue won the world championship from Kasparov. But even these researchers agree that something important is missing from modern AIs.

As this subfield of Artificial Intelligence is only just coalescing, “Artificial General Intelligence” (AGI) is the emerging term of art used to denote “real” AI. As the name implies, the emerging consensus is that the missing characteristic is generality. Current AI algorithms with human-equivalent or superior performance are characterized by a deliberately programmed competence in only a single, restricted domain. Deep Blue became the world champion at chess, but it cannot even play checkers, let alone drive a car or make a scientific discovery. In this respect, modern AI algorithms resemble all biological life with the sole exception of Homo sapiens. A bee exhibits competence at building hives; a beaver exhibits competence at building dams; but a bee doesn’t build dams, and a beaver can’t learn to build a hive. A human, watching, can learn to do both; this ability is unique among biological lifeforms. It is debatable whether human intelligence is truly general (we are certainly better at some cognitive tasks than others), but human intelligence is surely significantly more generally applicable than nonhominid intelligence.

It is usually easy to envisage the sort of safety issues that may result from AI operating only within a specific domain. It is a qualitatively different class of problem to handle an AGI operating across many novel contexts that cannot be predicted in advance.

When human engineers build a nuclear reactor, they envision the specific events that could go on inside it (valves failing, computers failing, cores increasing in temperature) and engineer the reactor to render these events noncatastrophic. Or, on a more mundane level, building a toaster involves envisioning bread and envisioning the reaction of the bread to the toaster’s heating element. The toaster itself does not know that its purpose is to make toast: the purpose of the toaster is represented within the designer’s mind, but is not explicitly represented in computations inside the toaster. And so if you place cloth inside a toaster, it may catch fire, as the design executes in an unenvisioned context with an unenvisioned side effect.

Even task-specific AI algorithms take us outside this paradigm of locally preprogrammed, specifically envisioned behavior. Consider Deep Blue, the chess algorithm that beat Kasparov for the world championship. Were it the case that machines can only do exactly as they are told, the programmers would have had to manually preprogram a database containing a move for every possible chess position that Deep Blue could encounter. But this was not an option. First, the space of possible chess positions is unmanageably large. Second, if the programmers had manually input what they considered a good move in each possible situation, the resulting system would not have been able to make stronger chess moves than its creators. Since the programmers themselves were not world champions, such a system would not have been able to defeat Kasparov.
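The alternative to a preprogrammed move table is to compute good moves at run time by searching the consequences of each candidate move. The toy sketch below illustrates this idea with a negamax search, using the trivially small game of Nim rather than chess; both the game and the algorithm are illustrative choices, far simpler than Deep Blue’s actual specialized alpha-beta engine, but the architectural point is the same: no designer envisioned each position in advance.

```python
# A deliberately tiny sketch of move selection by game-tree search.
# The game is Nim: players alternately take 1-3 stones, and whoever
# takes the last stone wins. No move table is stored anywhere; good
# moves are computed at run time by exploring the game tree.

def negamax(stones):
    """Return (score, best_move) for the player to move.

    score is +1 if the position is a win for the player to move
    (with best play on both sides), -1 if it is a loss.
    """
    if stones == 0:
        # The previous player took the last stone, so the player
        # to move has already lost.
        return -1, None
    best_score, best_move = -2, None  # -2 is worse than any real score
    for take in (1, 2, 3):
        if take <= stones:
            # Our score is the negation of the opponent's score
            # in the position our move produces.
            score = -negamax(stones - take)[0]
            if score > best_score:
                best_score, best_move = score, take
    return best_score, best_move
```

With 5 stones the search finds that taking 1 wins (it leaves the opponent the losing count of 4), while with a multiple of 4 every move loses. The point is not the game but the architecture: the program’s competence comes from searching over consequences of moves its designers never individually considered.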

The discipline of AI ethics, especially as applied to AGI, is likely to differ fundamentally from the ethical discipline of noncognitive technologies, in that:

  • The local, specific behavior of the AI may not be predictable apart from its safety, even if the programmers do everything right;
  • Verifying the safety of the system becomes a greater challenge because we must verify what the system is trying to do, rather than being able to verify the system’s safe behavior in all operating contexts;
  • Ethical cognition itself must be taken as a subject matter of engineering.