Meta Has a Giant New AI Supercomputer to Shape The Metaverse

Meta, the tech juggernaut once known as Facebook, announced on Monday that it had created one of the world’s fastest supercomputers, the Research SuperCluster, or RSC.

According to Chief Executive Officer Mark Zuckerberg, it is the fastest system built for AI work, with 6,080 graphics processing units packed into 760 Nvidia DGX A100 systems.

That computing power is comparable to the Perlmutter supercomputer, which uses more than 6,000 of the same Nvidia GPUs and currently ranks as the world’s fifth fastest supercomputer.

Meta plans to grow RSC to 16,000 GPUs in a second phase later this year, which it says will increase its speed by a factor of roughly 2.5.
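
As a rough back-of-the-envelope check (using only the figures quoted in this article, and noting that raw GPU count is not the same thing as delivered training speed), the planned expansion works out roughly like this:

```python
# Rough arithmetic on the publicly quoted GPU counts -- illustrative only.
phase1_gpus = 6_080    # the 760 Nvidia A100 systems mentioned above
phase2_gpus = 16_000   # the planned second-phase total

scale_up = phase2_gpus / phase1_gpus
print(f"GPU count grows by a factor of {scale_up:.2f}")  # ~2.63x, roughly in line with the claimed ~2.5x speed-up
```
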
RSC will be used by Meta for a variety of research projects requiring next-generation performance, such as “multimodal” AI, which draws conclusions from a combination of sound, imagery, and actions rather than only one sort of input data.

This could be useful in dealing with the nuances of one of Facebook’s major issues: identifying dangerous content.

Meta, a leading AI researcher, believes the investment will pay off by allowing RSC to assist in developing the company’s latest priority: the metaverse, a shared virtual environment.

RSC may be powerful enough to translate speech for a large group of people speaking multiple languages at the same time.

In a statement, Meta CEO Mark Zuckerberg said, “The experiences we’re designing for the metaverse demand immense compute capacity. With RSC, new AI models will be able to learn from billions of samples, grasp hundreds of languages, and much more.”

Meta and other artificial intelligence proponents have shown that training AI models on larger data sets produces better results. It takes far more computing power to train an AI model than to run it, which is why an iPhone can unlock when it recognizes its owner’s face without connecting to a data center full of servers.
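
To make the training-versus-inference distinction concrete, here is a minimal PyTorch sketch (a toy model invented for illustration, not anything Meta runs): a training step needs a forward pass, a backward pass, and a weight update, while inference needs only the forward pass.

```python
import torch
import torch.nn as nn

# Toy model purely for illustration; not related to any Meta model.
model = nn.Linear(128, 10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 128)          # a batch of fake input data
y = torch.randint(0, 10, (32,))   # fake labels

# Training step: forward pass + backward pass + weight update (the expensive part).
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()

# Inference: forward pass only, no gradients -- far cheaper, which is why it
# can run on a phone without a data center behind it.
with torch.no_grad():
    predictions = model(x).argmax(dim=1)
```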

Supercomputer designers build their machines by balancing memory, CPU performance, GPU performance, and the internal data paths that connect them.

The GPU, a type of processor originally designed for accelerated graphics but now used for many other computing tasks, is often the star of the show in today’s AI.

Nvidia’s cutting-edge A100 chips are designed for AI and other heavy-duty data center workloads. Big firms like Google, along with many startups, are developing their own AI processors, some of which rank among the world’s largest semiconductors.

Meta chose the relatively flexible A100 GPU as its foundation because, combined with the company’s own PyTorch AI framework, it offers the most productive environment for developers.
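
For readers less familiar with that stack, the short sketch below (a generic toy example, not Meta’s code) shows the basic PyTorch pattern of placing a model and its data on Nvidia GPUs; production clusters like RSC layer far more elaborate distributed training on top of this.

```python
import torch
import torch.nn as nn

# Pick a GPU if one is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Hypothetical toy model; the point is only the device handling, not the architecture.
model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10)).to(device)

# If several GPUs are visible, nn.DataParallel splits each batch across them;
# large clusters use more sophisticated distributed training instead.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)

x = torch.randn(64, 512).to(device)   # batch of fake inputs moved to the GPU
out = model(x)
print(out.shape)  # torch.Size([64, 10])
```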
