At its Ignite conference, Microsoft announced that it has developed two new custom-designed chips to power AI workloads for its Azure cloud platform. The reveal of the in-house Azure Maia AI Accelerator and Azure Cobalt CPU marks Microsoft’s first foray into designing its own data center silicon.

The chips are part of Microsoft’s push to optimize its Azure infrastructure from the bottom up while reducing its reliance on outside silicon giants like Nvidia. By tailoring hardware to its software, Microsoft aims to improve performance, lower costs, and lessen its dependence on GPUs for AI processing.

Maia Chip Targets Efficient AI Inferencing

The star of Microsoft’s unveiling is the Azure Maia 100 AI Accelerator. Maia is specifically engineered to handle AI inferencing for tasks like natural language processing.

Inferencing refers to taking a trained AI model and efficiently running real-world predictions on new data. This enables applications like chatbots, search engines, and customized recommendations.

Microsoft said today that Maia will function as the backbone of its largest internal AI workloads. That includes services like Bing, Microsoft 365, and Azure OpenAI.

The 5nm chip contains over 100 billion transistors optimized for AI work. Though full details are still scarce, Microsoft says that Maia is designed to maximize throughput and minimize latency when running massive AI models.

Much of Microsoft’s AI portfolio relies on systems co-developed with OpenAI, its AI research partner. OpenAI confirmed that it has been collaborating with Microsoft on refining Maia to handle its compute-intensive natural language models.

By tailoring the hardware to run OpenAI’s models efficiently, Microsoft (MSFT) is likely aiming to reduce its operating costs for Azure OpenAI services. More efficient inferencing directly translates to improved gross margins for Azure’s managed OpenAI offerings.

Transitioning more inferencing to Maia could also help Microsoft weather the current GPU shortage. AI-optimized chips like Maia reduce the load on scarce GPUs and AI accelerators such as those manufactured by Nvidia and other hardware giants. It should also allow Microsoft to serve more OpenAI customers with its existing capacity.

Arm-Based Cobalt CPU Brings Customization

Alongside Maia, Microsoft unveiled the Azure Cobalt 100 CPU based on an Arm Neoverse architecture. Cobalt is designed to handle general-purpose computing across Microsoft’s data centers.

Adopting an Arm-based design brings potential efficiency gains thanks to Arm’s reduced-instruction-set architecture. Microsoft says that Cobalt delivers up to 40% better performance than the current generation of Arm-based chips deployed in Azure.

Microsoft highlighted Cobalt’s voltage optimization as one of its top advantages. Fine-grained controls allow precise tuning to match the needs of each application. Cobalt also supports virtualization to securely share resources across workloads.

By transitioning internal services like Teams and Azure SQL to Cobalt, Microsoft can cut costs and achieve performance improvements. The company also plans to offer Cobalt-powered virtual machines for public cloud customers next year.

Bringing custom silicon in-house gives Microsoft more control over hardware innovation cycles. Instead of relying on third-party server CPUs, Microsoft can upgrade Cobalt on its own timeline.

Microsoft Expands Infrastructure Choice for Customers

Microsoft emphasized that these new chips are designed to complement, not replace, existing partnerships with companies like AMD, Intel, and Nvidia. The goal is to provide more infrastructure options for customers.

Microsoft will continue deploying GPUs from Nvidia and others to support compute-heavy training of AI models. The company also recently announced an expansion of its relationship with Nvidia, including adopting the latest H100 and upcoming H200 GPUs.

But by adding its own custom silicon to the mix, Microsoft is reducing its reliance on any single vendor. Moreover, the new chips may optimize performance and costs for core Microsoft workloads in a way third-party hardware cannot match.

Bringing more of the hardware stack in-house may also give Microsoft pricing leverage with partners, helping it negotiate better deals on supplemental components. Microsoft’s scale provides some bargaining power that smaller cloud competitors lack.

In-House Hardware: A Growing Trend in the Cloud Industry


With its foray into data center chips, Microsoft joins other cloud titans in exerting more direct control over the hardware side of its services.

Amazon (AMZN) has been designing its own Arm-based Graviton server CPUs since 2018. It has since expanded into building custom inferencing and training chips for AI as well.

Google also uses homegrown tensor processing units (TPUs) to accelerate AI workloads, and it has designed other proprietary chips to handle tasks like networking and data compression.

Chips tailored for a specific company’s services can outperform general-purpose components. However, designing chips from scratch requires enormous investments that only the largest cloud providers can afford.

By following competitors in producing its own silicon, Microsoft aims to match or exceed its rivals’ capacity to handle massive customer AI workloads. With cloud growth fueling record profits, Microsoft clearly believes the substantial cost of chip design will be justified by higher margins and a stronger position in this growing market.

Future Generations on the Horizon

Microsoft indicated that Maia and Cobalt represent the first of an ongoing family of custom chips. The company is already developing second-generation versions of each.

This implies that Microsoft is focusing on the long-term strategic benefit of having greater control over the foundational hardware elements of its Azure infrastructure. The company appears to be committed to making in-house chips a permanent addition to its cloud infrastructure mix.

Microsoft has not yet shared details about performance, power efficiency, or yields for its inaugural chips. It also remains unclear when exactly the new processors will go into production at scale.

However, by revealing Maia and Cobalt, Microsoft puts silicon vendors on notice that it is serious about exerting more influence and control over its supply chain. With cloud growth projected to remain strong, Microsoft’s production of proprietary chips could have industry-wide ripple effects.