OpenAI, the tech company behind the popular artificial intelligence tools ChatGPT and Whisper, has officially launched commercial application programming interfaces (APIs) for the two models, allowing developers to integrate these powerful tools into their own products.

Even though some companies have already announced ChatGPT-powered products, those partnerships were approved by OpenAI on a case-by-case basis. Now, any developer can access the two models by paying a usage-based fee tied to how much text the artificial intelligence has to process.

According to the official blog post published today by OpenAI, the firm has managed to reduce the cost of operating ChatGPT by as much as 90% and is passing those savings on to API users.

Computing costs have been among the company's most pressing concerns when it comes to scaling: artificial intelligence models demand significant computing power to process the text and speech prompts sent by users, query their knowledge base, and generate a corresponding answer.

More Details About the ChatGPT API – Price & Technical Specifications

In the case of ChatGPT, developers will pay a flat fee of $0.002 per 1,000 tokens. A token is roughly 4 characters, or about three-quarters of a word in English, meaning that $0.002 buys the processing of roughly 750 words.
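The pricing arithmetic above can be sketched in a few lines of Python. The helper names and the words-per-token heuristic are illustrative, not part of OpenAI's API:

```python
PRICE_PER_1K_TOKENS = 0.002  # USD per 1,000 ChatGPT tokens, per the announcement

def estimate_cost(tokens: int) -> float:
    """Estimate the API cost in USD for a given number of tokens."""
    return tokens / 1000 * PRICE_PER_1K_TOKENS

def words_to_tokens(words: int) -> int:
    """Rough heuristic: ~750 English words correspond to ~1,000 tokens."""
    return round(words * 1000 / 750)
```

For example, a 750-word prompt-plus-response maps to roughly 1,000 tokens and therefore about $0.002.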

The API runs on gpt-3.5-turbo, an improved version of the GPT-3.5 model that powers ChatGPT. Among the most interesting improvements, developers can now accompany user prompts with additional instructions that keep the AI model from drifting off course or hallucinating.

For example, the developer can provide the necessary context to the model that the user may omit. Moreover, the developer can add style and tone instructions that the artificial intelligence can use to make sure it comes up with appropriately phrased answers.
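In practice, those developer-supplied instructions travel alongside the user's prompt as separate messages in the request. A minimal sketch of such a payload, assuming the chat-style message format OpenAI uses for gpt-3.5-turbo (the function name is illustrative; a real request would also need the openai client library and an API key):

```python
def build_chat_request(system_instructions: str, user_prompt: str) -> dict:
    """Assemble a chat-style request body for the gpt-3.5-turbo model.

    The "system" message carries the developer's context, style, and tone
    instructions; the "user" message carries the end user's actual prompt.
    """
    return {
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "system", "content": system_instructions},
            {"role": "user", "content": user_prompt},
        ],
    }

payload = build_chat_request(
    "You are a concise, polite customer-support assistant for an online store.",
    "Where is my order?",
)
```

Because the system message is controlled by the developer rather than the end user, it can supply context the user omits and pin down the tone of the response.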

OpenAI also announced that it will now offer dedicated instances to developers who opt to pay for that service. In practice, this means an enterprise can run its ChatGPT requests on reserved infrastructure within the Microsoft (MSFT) Azure cloud platform to get faster, more predictable response times.

By default, developers who don't pay for a dedicated instance will have their requests pooled together with those of other clients.

“Dedicated instances can make economic sense for developers running beyond ~450M tokens per day. Additionally, it enables directly optimizing a developer’s workload against hardware performance, which can dramatically reduce costs relative to shared infrastructure”, OpenAI emphasized.
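OpenAI's quoted break-even point can be sanity-checked with a quick calculation of what ~450M tokens per day would cost on the shared, pay-per-token tier:

```python
DAILY_TOKENS = 450_000_000          # OpenAI's quoted break-even volume
PRICE_PER_1K_TOKENS = 0.002         # USD per 1,000 tokens on the shared tier

# Daily spend on shared infrastructure at that volume
daily_cost = DAILY_TOKENS / 1000 * PRICE_PER_1K_TOKENS  # ~900 USD/day
```

At roughly $900 per day (about $27,000 per month) on the shared tier, reserving dedicated capacity can plausibly come out cheaper for heavy workloads.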

The ChatGPT API has been one of OpenAI's most anticipated product releases, as the software's popularity has enticed thousands of software companies to incorporate the technology into their products and services.

However, thus far, only a handful of software-as-a-service (SaaS) firms and social media platforms have been able to experiment with ChatGPT. Snapchat (SNAP) recently launched a tool called “My AI”, and Quora's Poe artificial intelligence hub has included ChatGPT among the models available for users to try out.

Whisper is Also Getting its Own API

Whisper is OpenAI's speech-to-text counterpart to ChatGPT. The model, launched in September 2022, is automatic speech recognition (ASR) software trained on over 680,000 hours of multilingual and multitask data collected from the web.

The cost of using the Whisper API will be $0.006 per minute of audio input. The software typically processes audio in 30-second chunks, encoding each chunk and then decoding it into the corresponding text transcript.
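Whisper's per-minute pricing can be sketched the same way as ChatGPT's. The function name is illustrative, and the exact rounding OpenAI applies to partial minutes is an assumption not stated in the announcement:

```python
WHISPER_PRICE_PER_MIN = 0.006  # USD per minute of audio input

def whisper_cost(audio_seconds: float) -> float:
    """Approximate transcription cost in USD, billed by the minute of input.

    Assumes simple pro-rata billing; OpenAI's actual rounding of partial
    minutes may differ.
    """
    return audio_seconds / 60 * WHISPER_PRICE_PER_MIN
```

For instance, a 30-minute podcast episode (1,800 seconds) would cost roughly $0.18 to transcribe.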
