Several companies are working to develop Artificial General Intelligence (AGI), a theoretical artificial intelligence (AI) model with human-like intelligence and reasoning that can learn and improve itself. However, since there is no single definition of AGI, it may be difficult to tell when (or if) we eventually build one.
While most believe that it will take humanity much longer to reach AGI, ASI Alliance founder Ben Goertzel says the alpha version of OpenCog Hyperon is already “self-aware” – albeit only to a limited extent. Here’s everything we know about the system and whether AGI could become a reality soon.
Ben Goertzel Says His System Is Becoming ‘Self-Aware’
For context, the ASI Alliance, or Artificial Superintelligence Alliance, was formed earlier this year and brings together Goertzel’s SingularityNET project, Humayun Sheikh’s FetchAI, and Ocean Protocol.
According to the ASI Alliance, “As the largest open-sourced, independent entity in AI research and development, this alliance aims to accelerate the advancement of decentralized Artificial General Intelligence and, ultimately, Artificial Superintelligence.”
Ben Goertzel, founder of @ASI_Alliance , has revealed that the Alpha version of OpenCog Hyperon is now self-aware!
This AGI system goes beyond chatbots like GPT-4 — it’s an autonomous agent with its own goals and awareness of its environment. And big news: 96% of voters… pic.twitter.com/4gPHSkuyzM
— SophiaVerse (@SophiaVerse_AI) September 27, 2024
Speaking with Cointelegraph, Goertzel said that while the Alpha version of OpenCog is currently quite slow, the team is “massively speeding it up.” He added, “I think that should be completed this fall. And so then, which means next year, we’ll be setting about trying to build toward AGI on the new Hyperon infrastructure.”
Goertzel stressed, “A Hyperon system is not just a chatbot. It’s architected as a sort of autonomous agent which has its own goals and its own self-awareness and tries to know who it is and who you are, what it’s trying to accomplish in the given situation. So, it’s very much an autonomous, self-aware agent rather than just a question-answering system.”
What Is a Self-Aware LLM System?
A truly self-aware Large Language Model (LLM) would be aware of its existence and surroundings and have feelings. As some may recall, in 2022, Google engineer Blake Lemoine claimed that the company’s LaMDA 2 LLM showed signs of sentience and consciousness. However, Google and droves of tech critics ridiculed his findings, and Lemoine was eventually fired from the company.
This discussion between a Google engineer and their conversational AI model helped cause the engineer to believe the AI is becoming sentient, kick up an internal shitstorm and get suspended from his job. And it is absolutely insane. https://t.co/hGdwXMzQpX pic.twitter.com/6WXo0Tpvwp
— Tom Gara (@tomgara) June 11, 2022
Earlier this year, Anthropic’s Claude 3 Opus model also showed apparent signs of self-awareness when it caught a trick question from researchers. However, it’s impossible to tell whether the model was merely imitating self-awareness.
As for Goertzel’s comments that his system – which he says takes a different approach from LLMs like ChatGPT – is becoming self-aware, more research and substantiation is warranted. Given the rapid pace of advancements in the field, we cannot rule out or ridicule the possibility, as happened in Lemoine’s case. However, his claims should be taken with a grain of salt, since genuine machine self-awareness would be a millennium-defining achievement.
LLM systems becoming self-aware or sentient could open up new risks as AI regulations are still a work in progress.
How’s AGI Different from LLMs?
Here we also need to distinguish between AGI and LLMs. LLMs are data-fed pattern-recognition systems that basically just predict the next word over and over again, kind of like the typing suggestions you get on your phone. LLMs have many deficiencies, and they are only as good as the limited (even if quite vast) data they are trained on.
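To make the “predict the next word over and over” idea concrete, here is a minimal, hypothetical sketch of that loop in Python. It uses a toy bigram model (simple word-pair counts) rather than a neural network, and none of the names or data below come from OpenCog Hyperon or any real LLM; it only illustrates the mechanism.

```python
from collections import Counter, defaultdict

# Toy training corpus -- real LLMs are trained on trillions of tokens.
corpus = "the cat sat on the mat the cat chased the mouse".split()

# Count how often each word follows each other word (a bigram model).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def generate(prompt_word, length=5):
    """Greedily append the statistically most likely next word, over and over."""
    output = [prompt_word]
    for _ in range(length):
        candidates = next_word_counts.get(output[-1])
        if not candidates:
            break  # nothing ever followed this word in the training data
        output.append(candidates.most_common(1)[0][0])
    return output

print(" ".join(generate("the")))  # prints: "the cat sat on the cat"
```

The point of the sketch is the mechanism, not the output quality: nowhere in that loop is there any model of self, goals, or environment, which is the distinction Goertzel draws between a question-answering system and Hyperon’s agent architecture.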
It just doesn’t make sense for an LLM to be genuinely self-aware, as it is essentially an algorithm drawing on a massive dataset to suggest the most likely answer. AGI, on the other hand, can teach itself and has human-like (or better) intelligence. Then again, we still don’t know the exact mechanisms of sentience in humans, so it’s possible that some form of an LLM could become somewhat self-aware.
Andrew Ng Believes It Would Take Decades to Develop AGI
Andrew Ng, a luminary in the field of AI and machine learning who has been an active voice in the AI regulation debate, believes that AGI can do “any intellectual tasks that a human can.” This would include flying a plane or writing a thesis.
Goertzel believes that “baby AGI” is possible by 2025. “I think we can call it a fetal AGI if you want to pursue that metaphor,” said the former chief scientist at Hanson Robotics, the company behind the “Sophia” humanoid robot.
However, Ng, who is the founder of Coursera and Deeplearning.ai, believes that it would take decades to develop AGI, if it ever becomes a reality. “Some companies are using very non-standard definitions of AGI, and if you redefine AGI to be a lower bar, then of course we could get there in 1 to 2 years,” he said in an interview (featured below).
Notably, developing AGI would require billions of dollars for any company, and OpenAI CEO Sam Altman believes the company would need to raise $100 billion as it strives to achieve AGI.
Incidentally, OpenAI is reportedly considering restructuring itself into a for-profit company to make itself more attractive to potential investors, as it needs billions of dollars – or perhaps trillions, if Altman also decides to invest in chipmaking facilities.