
AI’s New Frontier: How Connectionist-Symbolic Hybrid Model Can Stop Hallucinations and Bad Math


Based on an evaluation of current AI models, the next generation of large language models (LLMs) such as GPT-5, expected to be a hybrid of connectionist models and symbolic reasoning, may be able to solve two problems plaguing today’s LLMs: poor performance on math and hallucinations.

Hybrid AI models

Models such as GPT-4 and earlier LLMs all share the same weaknesses: a tendency to make things up, difficulty explaining how they arrive at their answers, and trouble solving mathematical problems.

These are all known shortcomings of “connectionist” neural networks, which are loosely modeled on how the brain is believed to function.

As a result, OpenAI, the creator of ChatGPT, has been rolling out add-ons and extensions that supply specific new features, playing to the model’s strengths while mitigating its drawbacks.

“Symbolic” AI offers a potential solution: because it is a fact-based reasoning system, it does not share these flaws. It is therefore possible that OpenAI is combining connectionist LLMs with symbolic reasoning in a hybrid AI model, initially through plug-ins.

As of now, at least one of the new ChatGPT plug-ins incorporates symbolic reasoning: the Wolfram|Alpha plug-in connects ChatGPT to a knowledge engine known for its precision and reliability, which can answer a wide variety of questions.

Combining these two AI methods successfully would therefore yield a more reliable system, one that reduces ChatGPT’s hallucinations and, more importantly, can give a clearer account of how it reaches its answers.
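The hybrid pattern described above can be sketched in miniature: a router sends math-looking queries to a deterministic symbolic evaluator (standing in for an engine like Wolfram|Alpha) and everything else to the neural model. This is a hypothetical illustration, not OpenAI’s actual plug-in mechanism; the function names and the routing heuristic are invented for the example.

```python
import ast
import operator

# Operations the toy symbolic engine understands, mapped to exact arithmetic.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def symbolic_eval(expr: str) -> float:
    """Evaluate an arithmetic expression exactly by walking its syntax tree.

    Unlike a neural model, this cannot 'hallucinate' an answer: it either
    computes the correct result or raises an error.
    """
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval"))

def route(query: str) -> str:
    """Toy router: arithmetic goes to the symbolic path, prose to the LLM.

    A real hybrid system would use the model itself to decide when to call
    the tool; this character check is a deliberately crude stand-in.
    """
    if query and all(c in "0123456789+-*/().^ " for c in query):
        return f"symbolic: {symbolic_eval(query.replace('^', '**'))}"
    return "llm: (free-text answer generated by the neural model)"

print(route("3 * (17 + 5)"))        # exact arithmetic from the symbolic path
print(route("Who wrote Hamlet?"))   # falls through to the neural model
```

The key design point is that the symbolic path is deterministic and auditable: every answer it returns can be traced to a rule applied to the input, which is exactly the explainability that pure connectionist models lack.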

While it might seem like a novel concept, integrating different AI models is already in use. For instance, AlphaGo, the DeepMind deep learning system built to outperform the best Go players, uses a neural network to learn the game of Go while relying on symbolic AI to encode its rules.

Although successfully merging these methodologies comes with a new set of challenges, further integrating them could be a step towards AI that is more comprehensible, more powerful, and more accurate.

This technique could not only improve the capabilities of GPT-4 as it stands, but also tackle some of the more urgent issues facing the current generation of LLMs.

AI Development Pause

The rate of AI advancement, especially since the launch of ChatGPT in November last year, has raised safety concerns, with many demanding that the technology be regulated.

Leading this appeal is the Future of Life Institute, which published an open letter, signed by hundreds of AI experts and other concerned individuals, asking that developments in AI be regulated.

Noteworthy signatories include Elon Musk, the CEO of SpaceX, Tesla, and Twitter; historian Yuval Noah Harari; Apple co-founder Steve Wozniak; Stability AI founder Emad Mostaque; and deep learning pioneer Yoshua Bengio.

The letter demanded a six-month pause on the creation of anything more powerful than GPT-4, citing “an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control.”

According to the letter:

Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This additional time would therefore allow ethical, regulatory, and safety concerns to be taken into consideration.

However, not everyone agrees with the open letter; many in the AI industry deem the pause unnecessary and ineffective.

For instance, Pedro Domingos, a professor at the University of Washington and author of the seminal AI book The Master Algorithm, stated that he thinks the letter overstates the urgency and fear about existential risk by giving these systems far more power than they actually possess.

However, following the industry debate, OpenAI CEO Sam Altman stated that the firm is not currently training GPT-5. Altman further said that the age of enormous AI models has already passed and that the Transformer architecture underlying GPT-4 and the current ChatGPT may be reaching its limits.
