Nvidia co-founder and CEO Jensen Huang unveiled a host of new products and initiatives the company is pursuing, including AI supercomputers, chips, and partnerships, during his keynote speech at Computex in Taipei.

During the two-hour speech, his first public address in four years, Huang said he aims to bring generative AI to every data center, according to a report by TechCrunch.

The tech mogul revealed that Nvidia’s GeForce RTX 4080 Ti GPU, which is aimed at gamers, is now in full production and being manufactured in “large quantities” with partners in Taiwan.

He also debuted the Nvidia Avatar Cloud Engine (ACE) for Games, a customizable AI model foundry service that provides pre-trained models for game developers. ACE improves AI-powered language interactions for non-playable characters (NPCs).

Huang went on to reveal that the CUDA computing model is now used by four million developers and more than 3,000 applications worldwide.

He also touted the HGX H100 GPU server, billed as the world’s first computer with a Transformer Engine, which has entered full-volume production with several manufacturers in Taiwan.

New Grace Hopper Supercomputers Show That Mellanox Acquisition Has Paid Off

Discussing one of the “greatest strategic decisions” the company has made, Huang said Nvidia’s $6.9 billion acquisition of high-performance networking company Mellanox in 2019 has paid off, as evidenced by the release of the GH200 Grace Hopper Superchip.

Grace Hopper boasts a whopping 4 petaFLOPS of Transformer Engine performance, 72 Arm CPU cores connected by Nvidia’s chip-to-chip NVLink interconnect, 96 GB of HBM3, and 576 GB of total memory. Huang heralded it as a “computer, not a chip” and said it was designed for high-resilience data center applications.

The DGX GH200 provides a solution when additional memory capacity is required.

It connects eight Grace Hopper chips through three NVLink switches, then ties 32 of those units together via an additional switch layer, enabling up to 256 Grace Hopper chips to be linked.

Combining 256 of these chips may seem like extreme overkill, but the largest tech companies (like Google, Meta, and Amazon) will likely buy them in droves to ramp up their AI and server performance.

The outcome is an exaFLOPS-class Transformer Engine with a mammoth 144 TB of GPU memory that functions as one colossal GPU, which Google Cloud, Meta, and Microsoft will use for artificial intelligence research.

Nvidia Partners with SoftBank to Add Grace Hopper Supercomputers into New Data Centers

Nvidia and SoftBank have also partnered to bring the Grace Hopper Superchip into SoftBank’s new distributed data centers in Japan.

These data centers will be able to host generative AI and wireless applications on a multi-tenant common server platform, reducing costs and energy consumption.

The chipmaker also announced its Spectrum-X accelerated networking platform, which pairs the Spectrum-4 switch with the BlueField-3 SmartNIC to speed up Ethernet-based cloud services.

The Spectrum-4 switch offers 128 ports of 400 Gb/s Ethernet, for 51.2 Tb/s of aggregate switching bandwidth, while the SmartNIC handles congestion control.

WPP, the world’s largest advertising agency, has teamed up with Nvidia to build a content engine based on Nvidia Omniverse for producing photo and video content for advertising.

Moreover, Huang announced that the Nvidia Isaac AMR full-stack robotics platform is now available for robot builders. Isaac AMR starts with Nova Orin, a reference compute architecture, and is the first full reference stack for robotics.

Nvidia’s stock price has increased considerably since the start of the year due to its importance in AI computing, giving the company a market valuation of around $960 billion.
