The Present State of AI
When you hear the term “Artificial Intelligence” (AI), it really refers to narrow AI—a system that may have superhuman “mental” abilities, but only in a narrow area of expertise. AIs now play many games at expert level and can find correlations and patterns in data that no human mind could ever uncover. But even with these achievements, nothing approaching human common sense has emerged.
What will it take for AI to evolve into truly intelligent, thinking machines, or Artificial General Intelligence (AGI)? To take the next step on the road to genuine intelligence, consider how a child learns by playing with blocks. Children learn through multiple senses and through interaction with objects over time.
A child, of course, has the advantage over AI in that he or she learns everything in the context of everything else. Today’s AI has none of this context. Images of blocks are just different arrangements of pixels. Neither an AI that is specifically image-based nor an AI that is primarily word-based will have the context of a “thing” that exists in reality, is more-or-less permanent, and is subject to basic laws of physics.
Building an Artificial Three-Year-Old
How to store the diverse information needed to evolve from AI to AGI is part of the challenge, and how your brain does this is not fully understood. The Universal Knowledge Store (UKS), however, represents one possible explanation. The UKS is an example of what mathematics and computer science call a “graph”: a collection of “nodes” connected by “edges” or “links.”
The UKS is loosely analogous to the brain’s neurons connected by synapses, but there are significant differences. Nodes in the UKS are completely abstract and have meaning only in relation to other nodes, just as an individual neuron has no meaning on its own. The UKS represents information collectively in the links connecting the nodes. These links are directional and may have a “weight.”
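The structure described above can be sketched in a few lines of code. This is only an illustration of the idea, not the UKS implementation itself; the names `Node` and `link` are hypothetical, and the detail that links are recorded in both directions anticipates the reverse traversal described below.

```python
class Node:
    """An abstract UKS-style node: it carries no content of its own,
    only a label (for readability) and links to other nodes."""
    def __init__(self, label):
        self.label = label
        self.links = []       # outgoing (target, weight) pairs
        self.back_links = []  # incoming (source, weight) pairs

def link(source, target, weight=1.0):
    """Create a directional, weighted link from source to target,
    also recording it on the target so it can be followed in reverse."""
    source.links.append((target, weight))
    target.back_links.append((source, weight))
```

Because the meaning lives entirely in the links, two nodes with identical code are distinguished only by what they connect to.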
Consider that a child (or an AGI) sees a blue square. Visual processing identifies the properties of the object, one of which is its “blueness” and another of which is its “squareness,” and each property is a link to another abstract node. There is no technical limit to the number of properties an object might have.
In the UKS, all blue objects link to the blue node, and the weight of the link indicates the importance of blueness in identifying the object. In the UKS, links can also be followed in reverse so seeing one blue thing can “bring to mind” many other blue things.
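This reverse traversal can be sketched with a simple table of property links. The objects and weights here are invented for illustration; the point is only that following links backward from the blue node recovers every blue thing.

```python
# Hypothetical property links: object -> {property: weight},
# where the weight reflects how important the property is
# in identifying the object.
objects = {
    "obj1": {"blue": 0.9, "square": 0.8},
    "obj2": {"blue": 0.7, "round": 0.9},
    "obj3": {"red": 0.9, "square": 0.6},
}

def things_with(prop):
    """Follow links in reverse: every object linked to this property."""
    return sorted(o for o, props in objects.items() if prop in props)

# Seeing one blue thing can "bring to mind" the others:
# things_with("blue") returns obj1 and obj2.
```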
Adding language to the mix adds some complexity. The abstract “blue” node, for example, must be distinct from the node for the word “blue.” In fact, any node might have multiple associated words, including synonyms and words in multiple languages. We hear speech as a continuous stream of phonemes (or syllables), and over time, repeatedly hearing the phoneme sequence blu (in the International Phonetic Alphabet) while the abstract blue node is activated by seeing blue strengthens the link between the word and the abstract concept. Eventually, hearing the word blu activates the abstract blue node much as seeing a blue object does.
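The strengthening-by-repetition described above can be sketched as a simple Hebbian-style update. This is an assumption about the mechanism, not the article's stated learning rule: the function name, the learning rate, and the saturating update are all invented for illustration.

```python
# Hypothetical link weights between word nodes and concept nodes.
weights = {}  # (word, concept) -> weight in [0, 1)

def co_activate(word, concept, rate=0.1):
    """Each time the word and the concept are active together,
    nudge the link weight toward 1.0 (a saturating update)."""
    w = weights.get((word, concept), 0.0)
    weights[(word, concept)] = w + rate * (1.0 - w)

# Hearing /blu/ while seeing something blue, many times over:
for _ in range(30):
    co_activate("blu", "BLUE")

# After enough repetition, the link is strong: the word alone
# can now activate the abstract concept.
```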
On learning to read, a parallel system of nodes builds up. Instead of hearing phonemes, one sees the shapes of characters. Recognized sequences of characters can eventually also activate the word nodes, so seeing a blue object, reading about a blue object, and hearing the word blue all evoke a similar reaction.
The structure of the UKS can store any kind of data. Instead of sequences of syllables, sequences of more generalized sounds can represent music and sequences of words can represent poetry. When the poem is retrieved from the UKS, all the contexts of the multiple nuances of the words and the images they evoke can likewise be retrieved. Further, the nodes can represent behaviors or landmarks, and be used to navigate mazes or learn complex behaviors to achieve goals.
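Sequence storage of this kind can be sketched as a chain of nodes joined by “next” links. The representation below is a deliberate simplification (a real store would need to handle repeated items and branching); the names are hypothetical.

```python
# A sequence stored as "next" links between nodes; the same chain
# structure could hold phonemes, musical notes, or the words of a poem.
next_link = {}  # node -> following node

def store(items):
    """Record each adjacent pair of the sequence as a link."""
    for a, b in zip(items, items[1:]):
        next_link[a] = b

def replay(start):
    """Retrieve the sequence by following the links from a starting node."""
    out = [start]
    while out[-1] in next_link:
        out.append(next_link[out[-1]])
    return out

store(["roses", "are", "red"])
# replay("roses") walks the chain and returns the whole line.
```

Because each word in the chain is itself a node in the wider graph, replaying the sequence also makes its contexts and images available, as the paragraph above describes.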
It is likely that the brain contains some analogous structure, because people can answer the same kinds of questions the UKS can. But the UKS model is built around emulating capabilities common to humans, and the brain may achieve similar results with different structures. Further, there is no specific brain area equivalent to the UKS; any equivalent structure is distributed among many brain areas and activities.
That being said, simulation of the UKS showed that a single UKS node might require as many as a hundred biological neurons. Given that the neocortex’s 16 billion neurons are doing many things in addition to UKS-like information processing, the human brain may be limited to knowing only on the order of 100 million things. While this is a lot of information, it is well within the processing power of computers in the next decade or two. Further, we don’t know what portion of those 100 million things is necessary to emulate general intelligence, which may make AGI even closer.
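The arithmetic behind that estimate can be made explicit. The figures below come from the article itself (16 billion neocortical neurons, roughly 100 neurons per node); the result is an upper bound, since not all of those neurons are doing UKS-like work, which is why the article's figure of 100 million is below it.

```python
# Capacity estimate from the article's own figures.
neocortex_neurons = 16_000_000_000
neurons_per_node = 100  # "as many as a hundred" per UKS node

upper_bound = neocortex_neurons // neurons_per_node
# upper_bound is 160 million nodes; with much of the cortex busy
# on other tasks, the usable figure is on the order of 100 million.
```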