
HPE is developing a high-performance AI supercomputer with the world’s largest chip


A wafer-sized processor with 850,000 cores will power the next AI supercomputer.

(Image credit: Network World)

Hewlett Packard Enterprise (HPE) has revealed that it is working on a new AI supercomputer alongside Cerebras Systems, the company behind the world’s largest chip.

The new system will combine HPE Superdome Flex servers with Cerebras CS-2 accelerators, the latter built around the Wafer-Scale Engine 2 (WSE-2) processor.


The unnamed supercomputer is due to become operational at the Leibniz Supercomputing Centre (LRZ) in Bavaria later this summer, giving researchers a new resource to help accelerate projects ranging from medical imaging to aeronautical engineering.

An AI supercomputer

Cerebras announced the WSE-2 in April of last year with the goal of speeding up AI training and inference workloads. The device, which spans 46,225 mm² of silicon and packs 2.6 trillion transistors and 850,000 AI cores, is said to deliver the AI performance of hundreds of GPUs.

The wafer-sized chip also has 40GB of on-chip memory and 20PB/s of memory bandwidth, allowing all the parameters of large-scale AI models to be held on-chip at once, which cuts computation time.
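To put the 40GB figure in context, here is a rough back-of-envelope sketch of how many model parameters such a memory could hold. The bytes-per-parameter values (FP32 and FP16) and the decimal definition of a gigabyte are illustrative assumptions for the sake of the example, not Cerebras specifications:

```python
# Back-of-envelope estimate of how many model parameters could be held in
# 40 GB of on-chip memory. The bytes-per-parameter figures are illustrative
# assumptions, not Cerebras specifications.

ON_CHIP_MEMORY_BYTES = 40 * 10**9  # 40 GB of on-chip memory

for precision, bytes_per_param in [("FP32", 4), ("FP16", 2)]:
    max_params = ON_CHIP_MEMORY_BYTES // bytes_per_param
    print(f"{precision}: roughly {max_params / 1e9:.0f} billion parameters on-chip")
```

Under these assumptions, the chip could hold on the order of 10 billion parameters at FP32 or 20 billion at FP16 without leaving the wafer, which is why keeping a whole model on-chip is feasible for many large models.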


The WSE-2 chip will be used for the first time in a European supercomputer when the new system is launched in Germany.

Andrew Feldman, CEO and co-founder of Cerebras Systems, stated: “We developed Cerebras to reinvent compute. We’re excited to collaborate with LRZ and HPE to provide Bavaria’s researchers with lightning-fast AI, allowing them to test novel hypotheses, train large language models, and promote scientific discovery.”

Researchers at the LRZ have also welcomed the new system, saying it will greatly speed up crucial AI and general-purpose HPC workloads.


“At the moment, we see our users’ AI compute demand increasing every three to four months,” stated LRZ Director Prof. Dr. Dieter Kranzlmüller.

“Cerebras delivers excellent performance and speed by tightly integrating compute, memory, and on-board networks on a single chip. This promises a huge increase in data processing efficiency and, as a result, faster scientific breakthroughs.”


