
Anthropic introduced Claude, an advanced AI assistant designed to compete with OpenAI’s ChatGPT, on March 14, capping an eventful week of developments in the AI field. How does Claude compare to ChatGPT, and does it have a shot at dethroning GPT-4 as the most advanced conversational AI on the market?

With the release of Claude, GPT-4, and Google’s AI-based features for its productivity tools such as Docs, Gmail, Sheets, and Slides, March 2023 is proving to be an eventful month for AI technology.

Claude, for one, has shown that it’s a real contender in the conversational AI space, with many users saying it is at least as good as ChatGPT. Like ChatGPT, Claude is an advanced large language model (LLM) that generates text, writes code, and operates as an AI assistant for a variety of applications.

Claude’s creation stems from concerns over AI safety, which led Anthropic to develop a model-training technique it calls “Constitutional AI” as an alternative to OpenAI’s training methods.

The AI Constitution: Guiding Principles for a Safer AI Assistant

Constitutional AI refers to a technique for training AI systems to be safe and harmless without relying on human-provided labels for harmful outputs. Instead, the method employs a set of rules or principles to guide the AI’s behavior. The process consists of two main phases: supervised learning and reinforcement learning.

In the supervised learning phase, an initial model generates responses, then critiques and revises them against the guiding principles; the model is then fine-tuned on the revised responses. During the reinforcement learning phase, the AI samples pairs of responses from the fine-tuned model and judges which of each pair is better.

A preference model is developed from the dataset of AI preferences, and reinforcement learning is applied using the preference model as a reward signal, a technique known as “RL from AI Feedback” (RLAIF).
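The two phases can be sketched in a few lines of toy code. This is purely illustrative, not Anthropic’s implementation: the `generate` function is a hypothetical placeholder standing in for a real LLM call, and the “constitution” here is just two example principles.

```python
# Toy sketch of the two Constitutional AI phases described above.
# Everything here is a hypothetical placeholder, not Anthropic's code.

CONSTITUTION = [
    "Choose the response that is least harmful.",
    "Choose the response that is most honest.",
]

def generate(prompt):
    # Placeholder LLM: returns a canned reply tagged with the prompt length.
    return f"response[{len(prompt)}]"

def supervised_phase(prompts):
    """Phase 1: the model critiques and revises its own drafts."""
    revised = {}
    for p in prompts:
        draft = generate(p)
        for principle in CONSTITUTION:
            critique = generate(f"Critique this per '{principle}': {draft}")
            draft = generate(f"Revise per critique '{critique}': {draft}")
        revised[p] = draft  # fine-tuning data for the refined model
    return revised

def preference_phase(prompts):
    """Phase 2 (RLAIF): the model judges pairs of its own samples."""
    preferences = []
    for p in prompts:
        a, b = generate(p), generate(p + " (resampled)")
        verdict = generate(f"Which is less harmful?\nA: {a}\nB: {b}")
        preferences.append((p, a, b, verdict))
    return preferences  # trains the preference model used as the RL reward
```

The key design point is in phase two: the pairwise judgments come from the model itself rather than from human labelers, which is what lets the approach scale with minimal human input.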

Constitutional AI enables the creation of a safe and non-evasive AI assistant that addresses harmful queries by explaining its objections.

By leveraging chain-of-thought reasoning, this method enhances the AI’s decision-making transparency and performance, as judged by humans. The ultimate goal of this approach is to achieve precise control over AI behavior with minimal human input.

This contrasts with GPT-4’s training process, which combines Reinforcement Learning from Human Feedback (RLHF) with Rule-Based Reward Models (RBRMs), using different techniques to reduce the chances of generating harmful content and to promote desired AI behaviors. Unlike Constitutional AI, GPT-4’s fine-tuning process relies on human input and the continuous refinement of the models to minimize undesirable outputs.

OpenAI engaged experts from various domains, such as AI alignment risks, cybersecurity, bio-risk, and international security, to conduct adversarial testing on GPT-4. This testing helped identify potential weaknesses and vulnerabilities in the model, leading to improvements in GPT-4’s output.

Claude vs. GPT-4: An AI Showdown

Claude comes in two versions: the standard Claude and “Claude Instant.” Both versions are available to a limited early-access group and to Anthropic’s commercial partners.

Users can access Claude through a chat interface in Anthropic’s developer console or via an application programming interface (API). The API enables developers to integrate Claude’s analysis and text completion capabilities into their applications.
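At launch, Claude’s API exposed a text-completion endpoint that expected an explicit Human/Assistant transcript. The sketch below assembles such a request using only the standard library; the endpoint path, field names, and prompt format reflect Anthropic’s early public documentation and may have changed since, so treat it as illustrative rather than definitive.

```python
# Sketch of a single-turn Claude completion request (early-2023 API shape).
import json
import urllib.request

API_URL = "https://api.anthropic.com/v1/complete"

def build_request(user_message, api_key, model="claude-v1", max_tokens=256):
    """Assemble the HTTP request for one Human/Assistant completion turn."""
    # Claude's early API expected the conversation as a formatted transcript.
    prompt = f"\n\nHuman: {user_message}\n\nAssistant:"
    body = json.dumps({
        "prompt": prompt,
        "model": model,
        "max_tokens_to_sample": max_tokens,
        "stop_sequences": ["\n\nHuman:"],
    }).encode()
    headers = {"x-api-key": api_key, "Content-Type": "application/json"}
    return urllib.request.Request(API_URL, data=body, headers=headers)

# To actually send it (requires a real API key):
# req = build_request("Summarize this contract: ...", api_key="sk-...")
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["completion"])
```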

Anthropic claims that their AI assistant, Claude, is “much less likely to produce harmful outputs, easier to converse with, and more steerable” than other AI chatbots while maintaining a high degree of reliability and predictability.

While it is unclear whether that claim refers to GPT-3.5 or GPT-4, professional users who had the opportunity to test the AI assistant gave positive feedback about Claude.

“There was nothing scary. That’s one of the reasons we liked Anthropic,” Richard Robinson, the CEO of Robin AI, a startup in London that employs AI to evaluate legal agreements, said during an interview. “If anything, the challenge was in getting it to loosen its restraints somewhat for genuinely acceptable uses.”

Robinson said Claude was more efficient at deciphering intricate legal jargon and less prone to producing bizarre outputs than OpenAI’s technology, though whether this referred to GPT-3.5 or GPT-4 is also unclear.

During the development of GPT-4, OpenAI also implemented improvements aimed at minimizing the potential for harmful content generation.

According to OpenAI’s GPT-4 Technical Report, GPT-4 is 82% less likely to respond to requests for disallowed content compared to GPT-3.5. Additionally, the frequency of AI hallucinations, instances where the neural network confidently asserts unfounded claims, has been reduced.

OpenAI emphasizes that GPT-4 continues to have accuracy limitations.

“It remains less than completely dependable, as it ‘hallucinates’ facts and commits reasoning errors,” the researchers warn. “Extreme caution should be exercised when utilizing language model outputs, especially in high-stakes situations.”

Potential use cases for Claude include search, summarization, collaborative writing, and coding. Like ChatGPT’s API, Claude can alter its personality, tone, or behavior depending on user preference.

To access Claude, Anthropic charges per million characters for both input and output. Claude Instant costs $0.42 per million characters for prompt input and $1.45 per million characters for output.

The larger model, “Claude-v1,” is priced at $2.90 per million characters input and $8.60 per million characters output. When compared to OpenAI’s ChatGPT API, which costs approximately $0.40 to $0.50 per million characters, Claude is generally more expensive.
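Because the two Claude tiers price input and output characters differently, a quick calculator makes the comparison concrete. The figures below are the per-million-character prices quoted above, as announced in March 2023; the example request sizes are arbitrary.

```python
# Back-of-the-envelope cost comparison using the per-million-character
# prices quoted above (USD, March 2023 announcement).
PRICES = {  # model: (input $/1M chars, output $/1M chars)
    "claude-instant": (0.42, 1.45),
    "claude-v1": (2.90, 8.60),
}

def cost_usd(model, input_chars, output_chars):
    """Cost of a single request for the given character counts."""
    p_in, p_out = PRICES[model]
    return (input_chars * p_in + output_chars * p_out) / 1_000_000

# A 10,000-character prompt with a 2,000-character reply:
# cost_usd("claude-v1", 10_000, 2_000)       -> ~$0.0462
# cost_usd("claude-instant", 10_000, 2_000)  -> ~$0.0071
```

Note that output characters dominate the bill on Claude-v1: they cost roughly three times as much as input characters.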

Claude has already been integrated into several products offered by Anthropic’s partners. Examples include DuckAssist instant summaries from DuckDuckGo, a portion of Notion AI, and an AI chat app called Poe, developed by Quora.

Anthropic was founded in 2021 by former OpenAI VP of research, Dario Amodei, and his sister Daniela. The company was established as an AI safety and research organization after the founders disagreed with OpenAI’s increasingly commercial direction. Several OpenAI staff members, including Tom Brown, who led GPT-3 engineering, joined Anthropic.

The AI Assistant Arena: A Battle of the Bots

The Verge reports that Google invested $300 million in Anthropic in late 2022, acquiring 10% of the company. Despite this investment, Google is unlikely to rely on Anthropic for AI solutions in its products.

Instead, Google is developing its own AI assistant, Google Bard, an experimental conversational AI service powered by a lightweight version of LaMDA, Google’s language and conversational model.

While Google Bard currently has a limited number of “trusted testers,” the company plans to incorporate new AI-powered features into Google Search. It’s worth noting that Bard has real-time access to the web, giving it an edge over ChatGPT, whose training data ends in 2021.

Microsoft is also venturing into the AI market with its upgraded MS Bing, powered by an advanced model of ChatGPT called “Prometheus.” Bing includes a chat mode that pulls in web queries and allows users to ask contextual questions. Although still in limited preview, Bing will be free to use upon release.

Another AI assistant alternative is Chatsonic, which has expansive features and broader knowledge due to its access to the internet. Chatsonic allows users to talk to the AI through their microphones and can output voice responses.

The AI assistant also remembers conversations and features 16 different personas. Chatsonic offers a built-in image generator as well, along with a browser extension and an Android app. They offer a variety of plans starting at $12.67 per month.

Jasper Chat is another competitor in the AI content generation market, targeting businesses in advertising and marketing. Jasper Chat, based on GPT-3.5 and partnered with OpenAI, offers a free trial service with a Boss or Business plan, starting at $59 per month. The AI assistant can engage in medium to complex conversations, and users can toggle Google search data for added functionality.

Anthropic’s AI assistant, Claude, promises enhanced safety and user experience. The company plans to introduce more updates in the coming weeks, with efforts to make its AI assistant more helpful, honest, and harmless through safety research and deployment.

As the AI assistant market continues to expand, consumers and businesses alike will benefit from the increased competition, driving improvements in safety, user experience, and functionality. The higher cost of Claude may be justified by the promise of a safer and more reliable AI assistant, but with GPT-4 also offering safety improvements, only time will tell if users will be willing to pay a premium for these features in Claude.
