Startup OpenAI has unveiled GPT-4, the latest incarnation of its artificial intelligence software, which can now understand both text and images.
ChatGPT Plus subscribers can already access the new model, and developers can join a waiting list to get their hands on the API.
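For developers who clear the waiting list, the call itself is simple. Below is a minimal sketch using the openai Python client’s chat-completions interface that GPT-4 shipped with; the API key placeholder and the prompt are illustrative.

```python
# Minimal sketch of a GPT-4 call via OpenAI's Python client
# (the chat-completions interface current at launch); API access
# required joining the waiting list at the time.
import openai

openai.api_key = "sk-..."  # illustrative placeholder; use your own key

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "user", "content": "Summarize what GPT-4 adds over GPT-3.5."},
    ],
)

print(response["choices"][0]["message"]["content"])
```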
The Future Was Already Here
It turns out that GPT-4 has been hiding in plain sight. Today, Microsoft confirmed that Bing Chat, the chatbot technology it co-developed with OpenAI, is powered by GPT-4.
GPT-4 performs at a “human level” on a variety of professional and academic benchmarks, can generate text, and accepts both text and image inputs, an upgrade over GPT-3.5, which only accepted text. For instance, GPT-4 passed a simulated bar exam with a score in the top 10% of test takers, while GPT-3.5 scored in the bottom 10%.
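Image input was not broadly exposed to developers at launch, so the following is only a hypothetical sketch of how a text-plus-image request might be structured; the content-array message shape, the example image URL, and the assumption of a vision-enabled model are illustrative, not the launch-day API.

```python
# Hypothetical sketch of a text-plus-image request. Image input was
# not broadly available at GPT-4's launch; the content-array message
# shape below is an assumption about how such a request could look.
import openai

response = openai.ChatCompletion.create(
    model="gpt-4",  # assumes a vision-enabled variant of the model
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is unusual about this picture?"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        },
    ],
)

print(response["choices"][0]["message"]["content"])
```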
OpenAI says it spent six months “iteratively aligning” GPT-4 using lessons from ChatGPT and an internal adversarial testing program, a process that produced its “best-ever results” on factuality, steerability, and staying within guardrails. Like earlier GPT models, GPT-4 was trained on publicly available data as well as data licensed by OpenAI, on a “supercomputer” that OpenAI and Microsoft built from the ground up on the Azure cloud.
In a blog post introducing GPT-4, OpenAI stated that the differences between GPT-3.5 and GPT-4 “may be modest in casual conversation”. “When the task’s complexity reaches a certain threshold, the difference emerges — GPT-4 is more dependable, inventive, and able to handle considerably more sophisticated instructions than GPT-3.5.”
Not Quite Perfect Yet
Even with system messages and the other changes, OpenAI admits that GPT-4 is far from ideal. It still “hallucinates” facts and makes reasoning errors, sometimes with great confidence. In one example OpenAI provided, GPT-4 described Elvis Presley as the “son of an actor”, a clear mistake.
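The system messages mentioned above are the steerability mechanism OpenAI introduced alongside GPT-4: a developer-written instruction that frames the entire conversation. A minimal sketch, assuming the same chat-completions interface as earlier; the tutor persona is purely illustrative.

```python
# Sketch of steering GPT-4 with a system message; the persona text
# is illustrative, not taken from OpenAI's examples.
import openai

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are a Socratic tutor. Never state answers outright; "
                    "guide the student with questions instead."},
        {"role": "user", "content": "How do I solve 2x + 3 = 11?"},
    ],
)

print(response["choices"][0]["message"]["content"])
```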
Because the vast bulk of its training data cuts off in September 2021, GPT-4 “generally lacks knowledge of events that have happened” since then “and does not learn from its experience,” OpenAI noted. It can exhibit simple reasoning errors that seem inconsistent with its proficiency in so many other areas, and it can be unduly trusting, accepting blatantly false claims from a user. It can also fail at hard problems the same way people do, such as introducing security flaws into the code it generates.
Still, OpenAI does point to progress in some areas. For instance, GPT-4 is now more likely to refuse requests for instructions on how to create hazardous substances. According to the company, GPT-4 is 82% less likely than GPT-3.5 to respond to requests for “disallowed” content overall, and 29% more likely to respond to sensitive requests, such as those for medical advice or information about self-harm, in accordance with OpenAI’s policies.
There is clearly still a lot to learn about GPT-4. Nonetheless, OpenAI is moving forward at full speed, confident in the improvements it has made.