Microsoft’s intention to become a leading force in the fast-growing field of artificial intelligence has been clear from the start, beginning with the $1 billion it poured into OpenAI in 2019.
However, the stakes are perhaps higher than most have noticed, as the company founded by Bill Gates and headed by Satya Nadella has also spent hundreds of millions of dollars building the infrastructure required to support this technology.
NVIDIA Supplies the GPUs that Microsoft Needs for Its AI Infrastructure
In a blog post shared by John Roach, Microsoft’s Chief Technology Officer (CTO) for the firm’s Digital Advisory Services unit, the company provided details about the investments it has made to create the supercomputers needed to “build AI systems that would forever change how people interact with computers”.
Scaling up its cloud infrastructure was the first priority for Microsoft (MSFT), according to Roach, as the company had to make sure customers would have access to the significant computing resources needed to fully exploit the potential of its upcoming AI tools.
Microsoft identified this need early on, as the workloads that OpenAI was sending its way were quickly exhausting its capacity at the time to process the vast number of complex operations required for AI models to work properly.
The two companies understood that to create powerful models they needed to train them on extensive amounts of data. The larger the data sets, the more accurate the model would be. Hence, they worked together to create supercomputers built around graphics processing units (GPUs) from NVIDIA (NVDA).
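Neither Microsoft nor OpenAI has published the training code behind these systems, but the basic idea of spreading one training job across many GPUs can be sketched with PyTorch’s DistributedDataParallel. The model, data set, and hyperparameters below are purely illustrative placeholders, not anything the companies have disclosed.

```python
# Illustrative only: a minimal data-parallel training loop with PyTorch DDP.
# The model, dataset and hyperparameters are placeholders.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

def main():
    # torchrun sets RANK, LOCAL_RANK and WORLD_SIZE for each process/GPU
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Toy stand-in for a real dataset and model
    data = TensorDataset(torch.randn(10_000, 512), torch.randint(0, 10, (10_000,)))
    sampler = DistributedSampler(data)           # each GPU sees a distinct shard
    loader = DataLoader(data, batch_size=64, sampler=sampler)

    model = torch.nn.Sequential(
        torch.nn.Linear(512, 1024), torch.nn.ReLU(), torch.nn.Linear(1024, 10)
    ).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])  # gradients sync across all GPUs

    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    loss_fn = torch.nn.CrossEntropyLoss()

    for epoch in range(3):
        sampler.set_epoch(epoch)                 # reshuffle shards every epoch
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()      # gradient all-reduce happens here
            optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()  # launch with: torchrun --nproc_per_node=<num_gpus> train.py
```

The larger the model and data set, the more GPUs a job like this has to span, which is what pushed Microsoft toward clusters far beyond what a single data center rack can hold.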
Microsoft Explored Uncharted Territory as Its GPU Clusters Just Kept Growing
Microsoft described the infrastructure scale needed for OpenAI’s models to function as “unprecedented”, as the clusters of GPUs it had to put together were “exponentially larger” than anything the industry had built before.
In fact, not even the manufacturers of the GPUs had tested their products at the scale the Redmond-based company was building. This made it quite a challenging endeavor in terms of design and maintenance, as “nobody knew for sure if the hardware could be pushed that far without breaking”, explained Nidhi Chappell, the company’s head of product for Azure high-performance computing.
Enlarging its infrastructure has allowed the company to launch new AI-powered products that are rapidly being made available to millions of customers, including the new AI-powered version of the Bing search engine and a chatbot for its Viva Sales application.
In addition, Microsoft has been progressively making OpenAI’s models available within its Azure cloud service, which simplifies the task of integrating these solutions into customers’ existing back-end architecture.
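Microsoft’s post does not include integration code, but as a rough illustration of what calling one of these Azure-hosted models from an existing back end can look like, here is a minimal sketch using the AzureOpenAI client from the openai Python SDK. The endpoint, API version, and deployment name are hypothetical placeholders that each customer configures in their own Azure resource.

```python
# Illustrative sketch of calling an Azure-hosted OpenAI model from a back end.
# Endpoint, API version and deployment name are hypothetical placeholders.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<resource>.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",                            # assumed API version string
)

response = client.chat.completions.create(
    model="my-gpt-deployment",  # the deployment name chosen in Azure, not a public model id
    messages=[
        {"role": "system", "content": "You are a support assistant."},
        {"role": "user", "content": "Summarize the customer's last three orders."},
    ],
)

print(response.choices[0].message.content)
```

Because the model is served from the customer’s own Azure resource, authentication, networking, and billing follow the Azure setup the back end already uses, which is the integration convenience the company is pointing to.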
“That shift from large-scale research happening in labs to the industrialization of AI allowed us to get the results we’re starting to see today”, commented Phil Waymouth, the tech company’s Senior Director of Strategic Partnerships and the person in charge of the OpenAI negotiations.
Aside from the supercomputers themselves, Microsoft also had to ensure it had the networking capacity to move data as fast as needed, both between its own devices and to external destinations.
Meanwhile, the company has not stopped pushing its data centers to their limits, adding more GPU clusters to its infrastructure along with everything needed to keep the systems up and running, including cooling systems, power supplies, and backup generators.
This lengthy blog post dispels some of the concerns raised by IT professionals and by companies that are tapping into AI and incorporating the technology into their products and services.
It seems that Microsoft has been planning this all along, executing a strategy that first made ChatGPT and similar solutions a mainstream phenomenon and then offered developers access to those solutions on an already robust infrastructure.