Google-backed AI company Anthropic is opening up access to the application programming interface (API) of its Claude AI model after several months of testing the tool with a handful of commercial enterprises.
According to the San Francisco-based tech company, two versions of the AI model will be available. The first, Claude, is the more powerful of the two; the second, Claude Instant, is designed to be a “lighter, less expensive, and much faster” alternative.
Anthropic has designed its AI model with safety, trust, and reliability in mind, following its ‘HHH’ principle – Helpful, Honest, and Harmless. Developers can tweak the tone of the AI’s responses from friendly to direct, and they can easily integrate the tool into existing products and applications via the API.
The practical uses of Claude are quite extensive. It can be integrated into e-commerce websites as a helpful shopping assistant, turned into a powerful tool for sales departments to summarize relevant information from a meeting or email, or used as a general question-answering tool.
Claude reportedly supports multiple languages, although it has been trained mostly to respond to prompts in English. In addition, the model can be fine-tuned by developers as needed.
That said, Anthropic believes Claude’s default training already equips it to respond to most queries in the best way possible.
Developers can also add extra context to the prompts sent by the end user to refine the model’s tone, personality, responses, and other aspects of its delivery.
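To illustrate this kind of prompt shaping, here is a minimal sketch. The helper function, context strings, and tone parameter are hypothetical, not part of Anthropic’s API; only the “Human:”/“Assistant:” conversational markers reflect the prompt format Claude used at launch.

```python
# Hypothetical helper: prepend developer-supplied context to the end
# user's message before it is sent to the model.
def build_prompt(user_message: str, context: str = "", tone: str = "friendly") -> str:
    # Combine the developer's context with a tone instruction.
    instructions = f"{context}\nPlease answer in a {tone}, concise tone.".strip()
    # Wrap everything in the conversational markers Claude expects.
    return f"\n\nHuman: {instructions}\n\n{user_message}\n\nAssistant:"

prompt = build_prompt(
    "What is your return policy?",
    context="You are a shopping assistant for an online shoe store.",
    tone="direct",
)
```

The end user only types the question; the surrounding context and tone instruction are injected by the developer’s application.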
Here’s How Much It Will Cost to Use the Claude API
For now, the maximum context that can be input to the AI model is 5,000 words. This is lower than the 8,000- and 32,000-token context windows of the new GPT-4 model released by OpenAI this week.
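Because the limit is expressed in words, a client-side guard is straightforward. The sketch below is illustrative only; the 5,000-word figure comes from this article, and the function name is hypothetical.

```python
MAX_CONTEXT_WORDS = 5_000  # Claude's reported context limit at launch

def truncate_to_limit(text: str, limit: int = MAX_CONTEXT_WORDS) -> str:
    """Keep only the most recent `limit` words so the prompt fits the context."""
    words = text.split()
    if len(words) <= limit:
        return text
    # Drop the oldest words first, keeping the most recent context.
    return " ".join(words[-limit:])
```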
Claude does not have access to the internet, as it has been designed to be self-contained, although information from the web can be fed into it through prompts.
The cost of using Claude is considerably higher than that of OpenAI’s newly released GPT-4 model, which charges $0.03 per 1,000 tokens for prompts and $0.06 per 1,000 tokens for completions.
Claude-v1 costs $2.90 for prompts and $8.60 for output per 9,000 tokens, which works out to roughly 32 cents per 1,000 tokens for prompts and 96 cents per 1,000 tokens for output.
Meanwhile, Claude Instant costs $0.43 for prompts and $1.45 for output per 9,000 tokens, or about $0.05 for prompts and $0.16 for output per 1,000 tokens.
The competing product that could deliver similar results to Claude – ChatGPT – costs only $0.002 per 1,000 tokens for both input and output.
Are Google and Anthropic Struggling Not to Be Left Behind in the AI Race?
Both Anthropic and its backer – Alphabet (GOOG) – appear to have rushed to roll out their AI-powered products as Microsoft and its protégé – OpenAI – have been flooding the market with new solutions and models.
Just yesterday, Google released new AI tools for its Workspace solution – however, these were only made available to a select group of “Trusted Testers”.
The significant computing power required to run these AI-powered applications may be one of the hurdles preventing Alphabet from making these features available to the general public.
In the case of Microsoft, the company recently published a blog post explaining the large investments it has been making since 2019. It invested in OpenAI to build large clusters of graphics processing units (GPUs) with the capacity to process the heavy workloads that these models require to function properly.
Anthropic reportedly offers dedicated capacity for large enterprise customers, and the company says the product is ready to scale as needed.