REVIEWS: Intel® Server System M50CYP2UR208 – Two 10nm Intel Ice Lake Xeons – Server Introduction

For me, this year has been marked by servers, typically with AMD EPYC processors, and I have occasionally managed to test some of them and write a few articles. I tried to get hold of some machines with Intel processors, but more or less all vendors replied that servers with Intel processors were unavailable, or rather that the processors themselves were generally hard to come by, which also showed in the fact that I often had to change the configurations of some servers. Coincidentally, one reader mentioned that he had a rented server from Intel with the new Ice Lake Xeons, so I burned some diesel for Greta, tested the server and wrote this article.

To refresh your memory about the Ice Lake Xeon processors, I recommend one of our older articles:

Technically, the Ice Lake Xeons built on the 10nm process were supposed to arrive sometime between 2018 and 2019, but that did not happen and they did not reach the market until this year. Unlike the previous Cascade Lake Xeons, Intel finally brings fourth-generation PCI Express support to servers. One processor can provide up to 64 PCIe lanes, an upgrade from the 48 Gen3 lanes of the older generation; however, even with these Xeons, Intel has retained a chipset that provides some USB and SATA connectivity, so it is not always possible to use all 64 lanes. UPI links with a throughput of 11.2 GT/s are used for communication with the second processor in two-socket systems, and each processor has three of these links for the fastest possible communication. Intel has also improved RAM support, so some processors can allegedly address up to 6TB of RAM, but at a maximum of DDR4-2933, and in a 2DPC configuration expect a slowdown to 2667 MT/s; the memory controller is eight-channel. Newly, these processors can have up to 40 physical cores, which is again a jump from the 28 cores of the previous generation (here I mean primarily the standard LGA4189 socket models and not the 56-core BGA models).
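To put that 2DPC slowdown into numbers, here is a quick back-of-the-envelope calculation of my own (an illustration, not a figure from Intel's datasheets): the theoretical peak bandwidth of an eight-channel DDR4 controller at 2933 versus 2667 MT/s.

```python
# Back-of-the-envelope peak memory bandwidth for an 8-channel DDR4 controller.
# Illustrative only; real-world throughput is lower due to protocol overhead.

CHANNELS = 8
BUS_WIDTH_BYTES = 8  # 64-bit DDR4 channel, ECC bits excluded

def peak_bandwidth_gbs(mts):
    """Peak bandwidth in GB/s for a given transfer rate in MT/s."""
    return mts * 1e6 * BUS_WIDTH_BYTES * CHANNELS / 1e9

for mts in (2933, 2667):
    print(f"DDR4-{mts}: {peak_bandwidth_gbs(mts):.1f} GB/s per socket")

# DDR4-2933: ~187.7 GB/s per socket
# DDR4-2667: ~170.7 GB/s per socket, i.e. roughly a 9 % penalty for 2DPC
```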

However, some features are segmented: not all Ice Lake Xeons can work with Intel Optane memory, and the maximum size of the SGX security enclave differs between models. Intel has of course kept the Bronze / Silver / Gold / Platinum branding, which still seems a bit strange to me even today, but why not.

The tested server is officially called Intel® Server System M50CYP2UR208 and it really is made by Intel, which is quite new to me, as I have never worked with servers directly from Intel, typically with various DELL, HPE, Supermicro, SUN or IBM machines. So I was curious how Intel approached the usual server conventions. I will mention in advance that the server is prepared for GPUs but does not yet have the GPU kit installed, so it has fewer PCIe slots. Again, I have to remind you that servers are typically tailored to customers; this piece served as a traveling demo rather than a machine that will eventually end up with a customer (if it has not been retired already).

I brought the server in its original box; inside, apart from the server itself, we find KingSlide rails, which is a classic. I also had these rails on the recently tested Gigabyte R282-Z92 A00 server.

As the name of the server suggests, this is a 2U chassis. At the very front we find three separate bays ("baskets") for 2.5″ drives, which of course require a backplane and PCIe / SAS / SATA connectivity to a controller or HBA. The tested configuration has only one drive cage and one backplane populated, fitted with two Kingston DC1000M 960GB SSDs. These are U.2 PCIe drives for server use; their rating is 1 DWPD over five years, or 1.6 DWPD over three years. If you have no idea what DWPD is, it is short for "Drive Writes Per Day", i.e. how many times we can completely rewrite the SSD per day during the warranty period before the NAND is worn out, so we can rewrite these drives in full once a day for five years.
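For illustration, translating a DWPD rating into total write endurance is simple arithmetic (my own sketch; Kingston's official TBW figure may differ slightly due to rounding and how the capacity is counted):

```python
# Convert a DWPD rating into total terabytes written over the warranty period.
# Rough illustration only; the vendor's official TBW number may differ slightly.

def endurance_tb(capacity_tb, dwpd, warranty_years):
    """Total terabytes that may be written over the warranty period."""
    return capacity_tb * dwpd * warranty_years * 365

# Kingston DC1000M 960GB, using the ratings quoted above
print(endurance_tb(0.96, 1.0, 5))   # ~1752 TB over five years at 1 DWPD
print(endurance_tb(0.96, 1.6, 3))   # ~1682 TB over three years at 1.6 DWPD
```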

Intel used somewhat unusual carriers here: the drives themselves are not mounted in any tray and simply slide into the server. I was a little surprised by the extremely dim LEDs; they are only visible at a certain angle and otherwise almost invisible, which I have seen for the first time. Typically it is desirable for drive activity to be reasonably visible, so I hope that in case of a failure the LED lights up orange or red at a higher brightness.

In the front part of the server we find the status LEDs and the power button, which I forgot to photograph, oh well. At the top of the server there is a service diagram describing the positions of the various slots and components, a fairly standard affair, but I was quite amused that Intel labels the 15-pin D-Sub connector as "Video Port (RJ45)"; someone probably made a mistake.

So let's look at the back of the server, because the port selection directly on the board is surprisingly sparse: we find only three USB-A 5Gb/s ports, VGA, an RJ45 for remote management (handled by an ASPEED AST2500) and an RJ45 providing a serial port.

Beyond that, there are only various PCIe slots and one OCP 3.0 slot, in which a new Intel E810 network card with two SFP28 ports was installed. Unfortunately, I did not have another machine or switch with SFP28 on hand to try it out; maybe next time.

The server has positions for two power supplies, but only one was installed, which was enough for my tests without any problems. The power supply is somewhat brutally oversized, delivering up to 2,100 watts on the 12V rail.

The rest of the back is occupied by PCIe card slots. Theoretically the server offers up to eight slots, of which two are low-profile; however, it depends on the configuration of the PCIe risers, because the tested configuration does not provide all eight slots, only five x16 Gen4.

Because I needed a network other than optics via SFP28 / SFP+, I installed an old quad-port gigabit card with Broadcom chips in the server; I suspect it comes from some DELL PowerEdge R620. I also installed a graphics card so as not to rely only on the 2D output of the ASPEED AST2500, namely an economical Sapphire Radeon RX 550 2GB with a 50W TDP.

As for the PCIe slots themselves on the risers, it is interesting that the two low-profile slots can take two PCIe x16 Gen4 cards; here I would rather have expected two x8 slots.

The middle PCIe riser again offers two PCIe x16 Gen4 slots; I installed the Radeon RX 550 2GB in one of them.

The last PCIe riser offers a single PCIe x16 Gen4 slot, but it also has two connectors for attaching PCIe SSDs, most likely routed to the front backplane. Because PCIe is sensitive to signal quality and trace length, there is also a smart Astera Labs PT4161L retimer, for which the manufacturer boasts low latencies.

Now to the inside of the server. The layout of the components is more or less standard: behind the drive bays sit a total of six Foxconn PIA060K12W fans, each of which can theoretically draw up to 78 watts, although that will probably never happen. Of course, the fans can be replaced without tools. Behind the fans we find a plastic shroud that directs the airflow across the processor heatsinks, RAM and VRM. I was surprised that all the power stages have relatively tall passive heatsinks, but that is actually quite positive.

The processor heatsinks are quite massive and would not look out of place on a GPU, but I do not understand why some of their screws are made of plastic.

Under each heatsink sits an eight-core Intel Xeon Gold 5315Y processor. The base frequency is 3.2GHz and the maximum Turbo Boost reaches 3.6GHz; with all cores loaded, the processor is willing to boost to 3.4GHz, which is not bad. Now the worse news: each of these processors has a TDP of 140 watts and sticks to that value. The processors are thus relatively high-frequency, but let's not forget that they also provide PCIe lanes and other functions, so the power consumption does not fall purely on the processor cores. In terms of efficiency, however, the 16-core AMD EPYC 7282 with its 120W TDP will look like a king. In the comparisons I recommend looking mainly at the sixteen-core results, or at the two octa-core setups; the comparison against the older Ivy Bridge Xeons, which I also tested as two octa-cores, will be interesting. Power limits can be configured in the BIOS, but more on that in the next chapter.
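Just to put the efficiency remark into rough numbers, here is a naive per-core TDP comparison of my own (it deliberately ignores the uncore, I/O and actual measured consumption, so treat it only as an illustration):

```python
# Naive per-core TDP comparison; TDP covers the whole package (cores, uncore,
# PCIe, memory controller), so this is purely a rough illustration.

configs = {
    "2x Xeon Gold 5315Y (2x 8C, 140W each)": (2 * 140, 2 * 8),
    "1x EPYC 7282 (16C, 120W)":              (120, 16),
}

for name, (tdp_w, cores) in configs.items():
    print(f"{name}: {tdp_w / cores:.1f} W of TDP per core")

# ~17.5 W/core for the dual Xeon setup vs ~7.5 W/core for the EPYC 7282
```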

Around the processors we find a total of 32 slots for DDR4 Registered memory, so it is optimal to give each processor at least eight memory modules, which the tested configuration meets: each CPU has eight 8GB DDR4-2933 modules. These modules support speeds up to 3200 MT/s, but the processors can handle a maximum of 2933 MT/s.

Directly on the board we find the rest of the "sauce": the ASPEED AST2500 BMC, the Intel VROC Premium module (basically a hardware license key that enables software RAID over SATA / SAS / NVMe SSDs) and the OCP 3.0 card, which bears the Intel E810 designation, a network card with two 25Gb/s SFP28 ports. The card can be removed without having to open the server.
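For context on what the VROC key actually unlocks: under Linux, VROC NVMe volumes are managed with the standard mdadm tool using Intel's IMSM metadata. Below is a minimal sketch of assembling a RAID0 that way; the device paths are my own assumption and this is not the exact procedure used on the tested server (the real RAID setup is covered in the next chapter).

```python
# Minimal sketch of assembling a VROC NVMe RAID0 under Linux: VROC volumes use
# mdadm with Intel's IMSM metadata (a container plus a volume inside it).
# Device paths are illustrative assumptions, not taken from the tested server.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1) create an IMSM container over the two U.2 NVMe SSDs
run(["mdadm", "--create", "/dev/md/imsm0", "--metadata=imsm",
     "--raid-devices=2", "/dev/nvme0n1", "/dev/nvme1n1"])

# 2) create a RAID0 volume inside that container
run(["mdadm", "--create", "/dev/md/r0", "--level=0",
     "--raid-devices=2", "/dev/md/imsm0"])
```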

As for testing, I did most of it on Windows Server 2019 and a bit on Ubuntu 21.04.

The server greeted me with a bunch of error messages related to NvmExpress devices; these messages also appeared in the background of the boot menu. I also had a problem booting the Windows Server 2019 installer, as the system immediately fell into a "DRIVER VERIFIER DMA VIOLATION" BSOD. I solved this by creating a SW RAID0 over both 960GB NVMe U.2 SSDs and turning off virtualization and Secure Boot. I am not 100% sure what Windows did not like, but it worked out. In Windows, Device Manager looked quite amusing because a few drivers were missing; all of them can be downloaded from the Intel website.

In the next chapter we will look in more detail at the BIOS, IPMI and VROC RAID settings.


Source: Diit.cz
