While AI is expected to ramp up efficiency for businesses, they will also need to figure out how to rebalance their workloads to get the most out of the technology.
To begin with, there is widespread consensus that generative AI could be the biggest disruptor across multiple business functions.
AI has been the buzzword in the US corporate world, and according to FactSet, 110 S&P 500 companies mentioned “AI” on their Q1 2023 earnings calls – more than three times the ten-year average and the highest level since 2010.
Expect an increasing number of S&P 500 companies to mention AI on their earnings calls as we enter the second-quarter earnings season.
Incidentally, Goldman Sachs believes that AI could lift productivity by 1.5% annually through the end of this decade and boost S&P 500 earnings by 30% or more.
Analysts foresee massive market potential for AI: Next Move Strategy Consulting expects the industry to reach $2 trillion in size by 2030. To put that in perspective, the industry was worth less than $100 billion in 2021.
A Forbes report labels generative AI as both an opportunity and a challenge for IT leaders, noting, “As these workloads become more embedded throughout enterprises, IT leaders will have to grapple with not only where to run workloads but how to do so for maximum efficiency.”
IT Leaders Will Need to Manage Workloads as Generative AI Gains Traction
One challenge businesses will face is putting protective barriers in place to stop the exposure of proprietary data as their employees use these tools.
Notably, cybersecurity is a key concern for companies, and the recent Verizon Data Breach Investigations Report (DBIR), released last month, offers some interesting insights.
It found that 83% of breaches involved external actors, while internal actors were responsible for 19% of cybersecurity incidents.
While most generative AI tools currently run in the public cloud, companies might in the future consider running some applications in-house to protect sensitive and proprietary data.
Companies might have varying motives for running the tools internally, including fear of IP leakage along with other compliance and security concerns.
The Forbes report adds that “by running apps internally, IT staffers have more fine-grained control over how often they update and tune their secret sauce—the large language models (LLMs) that fuel the predictions.”
Since these models are crucial to generative AI, how companies run and store them could become a differentiator.
Artificial Intelligence is Gaining Traction
AI is gaining traction, and earlier this week Anthropic launched Claude 2, the latest version of its chatbot, which is currently available for free only in the US and UK.
Claude 2 has shown meaningful improvement over its predecessor, scoring 76.5% on the Bar exam’s multiple-choice section compared to 73% for the previous version.
On the Python coding test, Claude 2 scored 71% – up 15 percentage points from the earlier version. Similarly, the model improved its middle-school math quiz grade to 88% from 85.2%.
While the previous version could analyze a prompt of only about 75,000 words, the new model can handle twice that.
As generative AI makes further strides and more companies integrate the technology into their business functions, managing workloads will become all the more important, especially as companies try to prevent the misuse of data.