Nvidia has unveiled a new server design platform called MGX. Announced at the Computex conference in Taiwan, it offers what the company describes as a GPU-first architecture that system engineers can use to build a variety of servers geared towards AI and high-performance computing. It is the latest in a series of AI-focused moves by the company, which have helped push its market value past $1trn.
The rapid rise of artificial intelligence throughout the economy, driven in part by the success of ChatGPT from OpenAI, has led to ever-growing demand for GPU-based compute power.
To meet that demand, Nvidia says a new architecture is required. MGX supports more than 100 server variations, and early adopters include ASUS, Gigabyte, QCT and Supermicro. Nvidia promises MGX will cut the development time of a new system by two-thirds, to just under six months, and reduce costs by up to three-quarters compared with other platforms.
“Enterprises are seeking more accelerated computing options when architecting data centres that meet their specific business and application needs,” said Kaustubh Sanghani, vice president of GPU products at Nvidia. “We created MGX to help organisations bootstrap enterprise AI, while saving them significant amounts of time and money.”
The platform starts with a system architecture that has been optimised for accelerated computing. Engineers can then select the processing units that best fit their needs. It has also been built to work across data centres and in cloud platforms, Nvidia explained.
Move to GPU-centred compute
Nvidia has cashed in on the AI revolution, with the vast majority of the most popular models trained using Nvidia hardware. The company’s A100 GPU, and its recently launched successor the H100, are being snapped up in their thousands by AI labs around the world.
Last week, Nvidia reported record quarterly results and a revenue forecast $4bn higher than expected for the current period. The news sent the company’s share price sharply higher, and today its market cap surpassed $1trn for the first time when markets opened after the holiday weekend.
Nvidia CEO Jensen Huang said during his keynote at Computex that existing CPU-centred servers aren’t up to the task of housing multiple GPUs and NICs. He told delegates a new design was necessary because existing ones aren’t built to cope with the amount of heat produced by Nvidia accelerators.
The MGX architecture supports air or water cooling and a range of form factors, making it more sustainable and customisable. It is available in 1U, 2U and 4U chassis options and works with any Nvidia GPU, the company’s new Grace Hopper superchip, or any CPU using Intel’s x86 architecture.
Huang said the era of the CPU was coming to an end. He claimed that performance improvements in CPUs have plateaued and that we are now moving into an era dominated by GPUs and accelerator-assisted compute. Huang added that the effort required to train a large language model would fall sharply under the new architecture.
He cited a hypothetical 960-server system today that costs $10m and consumes 11GWh of electricity to train an LLM. By comparison, he said, two GPU-packed Nvidia MGX servers costing $400,000 could do the same job while consuming just 0.13GWh. He added that a $34m Nvidia setup with 172 servers could train 150 large language models while using the same power as the 960-server CPU-first system of today.
This is all driven by the growing demand for AI, which Huang described as a leveller and a way to “end the digital divide”. He was referring to its ability to create code, and explained: “There’s no question we’re in a new computing era. Every single computing era you could do different things that weren’t possible before, and artificial intelligence certainly qualifies.”
Read more: Why Google’s AI supercomputing breakthrough won’t worry Nvidia