Atos will start working on an EFLOPS system with Nvidia hardware

On November 16, The Next Platform published an interview with Nvidia CEO Jen-Hsun Huang, in which the editor did not spare him allusions to AMD's Instinct MI250X accelerators, to the contracts for EFLOPS systems that AMD has won as a supplier of both processors and accelerators, and, in short, to Nvidia's competitive position. At the time of the interview, there was no indication that Nvidia had won a contract for an EFLOPS system. Huang responded by trivializing the phrase “Nvidia killer” (which the editor had not actually used) and by stating that competition has always been there, that some “Nvidia killer” comes along every year, and so on. Nevertheless, he admitted that the competition is now seriously intense.

We can only speculate what effect this interview and its relatively unpleasant questions had on what came next, but events unfolded as follows:

On November 18, Atos issued a press release announcing that it will prepare an EFLOPS system with Nvidia. This system is not the result of a won contract (like AMD's Frontier, Intel's Aurora, AMD's El Capitan, etc.), but purely a decision by Nvidia and Atos to build a supercomputer that will be part of an initiative called the Excellence AI Lab (EXAIL). It aims to bring together European scientists and researchers focused primarily on climate and health, who will in some way have access to the computing power of this supercomputer.

However, we do not know what performance this system will actually achieve, which accelerators it will be built on, or when it is planned to be completed. It is supposed to be a BullSequana X-class system: the processors will be based on Nvidia's Grace architecture (ARM), the accelerators will come from an unspecified future Nvidia generation, and the individual elements will be connected by the Atos BXI Exascale Interconnect in combination with Nvidia Quantum-2 InfiniBand.

The press release also states that “Atos will develop…”, so it is possible that we are asking about parameters that have not yet been set, and that at the moment this is nothing more than a decision by both companies to prepare an EFLOPS system. If we want at least an indicative completion date, we can infer from the mention of a future generation of accelerators that it will be the Hopper generation. Nvidia is expected to announce it in the middle of next year, but it does not seem likely that an EFLOPS system built on it could appear within next year. So: 2023.

| Supercomputer | Completion | FP64 performance | Power consumption | CPU | (GP)GPU |
|---|---|---|---|---|---|
| Summit | 2018 | 0.2 EFLOPS | 13 MW | IBM | Nvidia |
| Sierra | 2018 | 0.125 EFLOPS | 11 MW | IBM | Nvidia |
| Perlmutter | 2020 | 0.1 EFLOPS | 21.5 MW | AMD | Nvidia |
| HPC Mega-Project | ? | 0.275 EFLOPS | ? | AMD | AMD |
| Fugaku | 2021 | 0.415 EFLOPS | 18 MW | Fujitsu | – |
| Frontier | 2021 | >1.5 EFLOPS | 27 MW | AMD | AMD |
| Oceanlite | 2021 | 1.3 EFLOPS | 35 MW | SW26010 | – |
| Tianhe-3 | 2021 | 1.3 EFLOPS | ? | FeiTeng | – |
| Aurora | 2022? | ~2.4 EFLOPS | 60 MW | Intel | Intel |
| El Capitan | 2022/23 | >2 EFLOPS | 33 MW | AMD | AMD |
| ? (for EXAIL) | 2023? | ? EFLOPS | ? | Nvidia | Nvidia |

In addition, the so-called JUWELS Booster system was announced, built on the Atos BullSequana XH2000 “with almost 2.5 EFLOPS of AI performance”, with 3744 Nvidia A100 Tensor Core GPUs and Nvidia Quantum InfiniBand.

Translated from marketing language into supercomputer language, this is a system with an FP64 performance of less than 39 PFLOPS. The devil lies in the detail, i.e. in the qualifier “AI”. It suggests that the 2.5 EFLOPS is neither general-purpose computing power nor performance at the traditionally quoted FP64 precision. From the number of accelerators (3744) it is easy to calculate that the 2.5 EFLOPS refers to tensor operations at Int8 precision.
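The back-calculation can be checked against Nvidia's published per-GPU peak figures for the A100 (assumed here: 9.7 TFLOPS FP64 without tensor cores, 624 TOPS for dense Int8 tensor operations); a minimal sketch:

```python
# Sanity check of the JUWELS Booster arithmetic, assuming the
# A100 datasheet peaks: 9.7 TFLOPS FP64 (non-tensor) and
# 624 TOPS Int8 on the tensor cores (dense).
N_GPUS = 3744
FP64_TFLOPS = 9.7    # per-GPU peak FP64
INT8_TOPS = 624.0    # per-GPU peak Int8 (tensor cores, dense)

fp64_pflops = N_GPUS * FP64_TFLOPS / 1_000     # TFLOPS -> PFLOPS
int8_eops = N_GPUS * INT8_TOPS / 1_000_000     # TOPS -> EOPS

print(f"FP64: {fp64_pflops:.1f} PFLOPS")  # ~36 PFLOPS, i.e. under 39 PFLOPS
print(f"Int8: {int8_eops:.2f} EOPS")      # ~2.34 EOPS, close to the marketed 2.5 "EFLOPS AI"
```

The Int8 total lands near the marketed 2.5 EFLOPS figure, while the FP64 total is three orders of magnitude lower, which is the article's point.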


Source: Diit.cz
