Research suggests that there are three levels of human consciousness, which could be taken as a roadmap toward “conscious AI”.
Could machines ever exhibit a similar grasp of consciousness to humans? To answer this question, we have no alternative but to approach what we know of our own consciousness through our knowledge of the human brain.
As we study how the human brain works, we can apply what we learn to the systems we build to resemble it. For example, deep learning neural networks draw on our understanding of how neurons communicate across synapses.
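To make the analogy concrete, here is a toy sketch of a single artificial neuron, the building block of the deep learning networks mentioned above. It loosely mirrors a biological neuron summing the signals arriving at its synapses; all names and numbers here are illustrative, not taken from any specific framework.

```python
def neuron(inputs, weights, bias):
    """A toy artificial neuron: compute the weighted sum of the
    incoming 'synaptic' signals, then apply a threshold activation."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0  # the neuron fires, or stays silent

# Example: weights and bias chosen so the neuron fires only
# when both inputs are active (a logical AND).
print(neuron([1, 1], [0.5, 0.5], -0.7))  # fires: 1
print(neuron([1, 0], [0.5, 0.5], -0.7))  # silent: 0
```

Deep networks stack millions of such units in layers, which is where the "we model AI after our own neural networks" idea comes from.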
Machines: “Cogitant Ergo Sunt”*
*They think, therefore they are.
We cannot consider the nature of “consciousness” without philosophy heavily involved.
Etymologically, “consciousness” comes from the Latin “con scire” (to know together), and here we already find a certain synthesis. Consciousness is the immediate inner knowledge that a human possesses of their thoughts, feelings, and deeds.
It was with René Descartes’ “Cartesian doubt” that the notion of consciousness received a technical description. Consciousness is treated as an entity apart from the external world, endowed with a special ability to access “the self”. If there’s a “thinking” process, there must be an entity doing it.
Could Descartes’ famous adage “cogito ergo sum” (I think, therefore I am), apply to AI? Is the “cogito” enough for consciousness to be there?
There is quite a debate about whether the "singularity" could, or should, ever arrive. Some, like Ray Kurzweil, believe that machines will follow the same evolutionary path as humans until they become superintelligent and conscious. Others think that the idea of "superhuman" AI is nothing but, well, a myth.
AI could become sophisticated and powerful, but in ways dissimilar to human intelligence. As Wired magazine's Kevin Kelly puts it, "The results of accelerating technology will most likely not be super-human, but extra-human."
In other words, if AI is better than us at some things, it will probably be perceived the same way as when one person is markedly better than another at, say, programming.
While reasoning can be traced back to chemical and electrochemical reactions, the human "mind", both material and immaterial, is more than the sum of the brain's neurological networks.
Three Levels of Human (and AI) Consciousness
Hakwan Lau is an associate professor of cognitive psychology and behavioral neuroscience who has been testing philosophical theories about consciousness empirically. Lau and his teammates tried to determine whether AI could ever develop consciousness; to answer that question, they explored how human consciousness comes to exist in the first place.
“Human consciousness is not just about recognizing patterns and crunching numbers quickly,” Lau told Live Science. “Figuring out how to bridge the gap between human and artificial intelligence would be the holy grail.”
The researchers defined three levels of human consciousness that computers would have to pass through before reaching full consciousness.
By the way, some experts have noted that current AI systems haven't been designed to attain all three levels of consciousness. You can read more about that here.
Here’s the proverbial ladder that researchers think “conscious AI” will have to climb.
1. C0: Autopilot
This is level zero of human consciousness: the unconscious calculations and operations that we don't even pay attention to, such as face and speech recognition.
According to the researchers, most AI systems currently available function at this level, able to carry out C0 computations such as image recognition, including facial recognition.
2. C1: Trains of Thought and Pools of Info
The second level of consciousness involves decision-making informed by deliberation and the processing of external information. This ability can be seen in infants and animals.
The researchers cited the example of a thirsty elephant that can locate a waterhole dozens of miles away and head straight toward it.
Lau said that by analyzing neural circuits in the prefrontal cortex of the brain (the region responsible for information processing), scientists could derive the underlying principles of C1 operations "and code them into computers".
3. C2: Metacognition
The third level, C2, calls to mind the famous aphorism "Know thyself". This "metacognition", or self-awareness, is the ability to think about one's own thoughts.
Reflecting on one’s own processes of thought or creation pushes us to correct ourselves, to adapt and enrich our experience.
While some AI systems can monitor their progress, they do so unconsciously.
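One small flavor of what "monitoring" can mean in software is a predictor that reports a confidence score alongside its answer. This is only a loose analogy to C2 metacognition, not the researchers' proposal; the function and numbers below are invented for illustration.

```python
def predict_with_confidence(scores):
    """Pick the highest-scoring label and report how far it leads the
    runner-up, as a crude proxy for the system's 'confidence' in itself."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    best_label, best_score = ranked[0]
    # The margin over the second-best candidate: a small margin means
    # the system "knows" its own answer is shaky.
    margin = best_score - ranked[1][1] if len(ranked) > 1 else best_score
    return best_label, margin

# Example: a classifier's (made-up) scores for an image.
label, confidence = predict_with_confidence({"cat": 0.7, "dog": 0.2, "bird": 0.1})
print(label)  # cat
```

Even here, though, the "self-assessment" is just another computation the system runs; nothing in it resembles conscious reflection, which is precisely the researchers' point.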
“… despite their recent successes, current machines are still mostly implementing computations that reflect unconscious processing (C0) in the human brain,” said the researchers in the paper published in Science magazine. “We review the psychological and neural science of unconscious (C0) and conscious computations (C1 and C2) and outline how they may inspire novel machine architectures.”
For now, machines don’t “think” about their own “thoughts” the way we do. Some say that until that changes, we will never have true AI; others say we should never strive to create it.