Mini Review: Gigabyte R282-Z92 A00 Server – Two AMD EPYC Milan Processors in Action – Introducing the Server and IPMI Management

After a recent batch of dual-socket DELL servers, we have a more modern Gigabyte server with two AMD SP3 sockets, which is also ready for the new AMD EPYC Milan processors based on the Zen3 architecture and TSMC's 7nm process. This server also became a bit of a victim of time: specifically, it suffered from a lack of my time and from the need to run some computations in another physical location. That is why some tests are missing, though I managed to do most of them. At the same time, I have to repeat that testing servers is quite tricky; they are usually built for a specific deployment, so configurations vary greatly and may not always be optimal in terms of maximizing performance. This is also the case with today's piece, which was put together for a specific use that I can't talk about, but that is not important for the mini-review itself.

The Gigabyte R282-Z92 A00 is a solid 2U server with a pile of 2.5″ hot-swap bays, PCIe slots, and so on. The manufacturer states the dimensions as 438 x 87 x 730 mm, while the weight is somewhere in the range of 18-25 kg depending on configuration. The R282-Z92 A00 is a refresh of the older R282-Z92 100 server, which is essentially identical but does not support the latest AMD EPYC Milan processors; only the first-generation EPYC Naples and the second-generation EPYC Rome run in it. The A00, for a change, does not support the first generation of AMD EPYC processors; only the EPYC Rome (Zen2) and Milan (Zen3) generations work here.

The box is robust enough and, traditionally, we find the server itself in plastic packaging with foam padding so that it is not damaged along the way. In the box we also find KingSlide rails for rack mounting, followed by trifles such as a manual, GPU power cables, or whatever else the supplier attaches to the server.

On the front of the server we can see several LEDs (power, UID, activity of the two onboard NICs, disk activity, error LED) and a power button. There are also two USB-A 5Gb/s ports.

The entire front of the server is occupied by twenty-four hot-swap bays for 2.5″ drives. The server has a backplane with a stack of U.2 connectors and can easily serve 24 PCIe NVMe U.2 2.5″ drives. The board has a few U.2 ports directly on it, but if you want to connect all the NVMe drives to the CPUs, you need a stack of Gigabyte NVMe riser cards that physically convert a standard PCIe slot to SAS connectors. With the necessary SAS cables you then connect the backplane to the risers and you're done.
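
Once the risers and backplane are cabled up, it is worth checking that the operating system actually sees all of the drives. Below is a minimal Python sketch of my own (not a Gigabyte tool) that counts the NVMe controllers via Linux sysfs; the expected count of 24 is just an assumption matching a fully populated front backplane.

```python
#!/usr/bin/env python3
"""Sanity check that all front-bay NVMe drives enumerated under Linux.

A minimal sketch: it only reads the kernel's sysfs tree, so no extra tools
are needed. EXPECTED_DRIVES is an assumption for a fully cabled backplane.
"""
from pathlib import Path

EXPECTED_DRIVES = 24  # 24 front U.2 bays; adjust to your actual cabling

controllers = sorted(Path("/sys/class/nvme").glob("nvme*"))
print(f"NVMe controllers visible to the kernel: {len(controllers)}")

for ctrl in controllers:
    model = (ctrl / "model").read_text().strip()
    serial = (ctrl / "serial").read_text().strip()
    print(f"  {ctrl.name}: {model} (SN {serial})")

if len(controllers) < EXPECTED_DRIVES:
    print(f"WARNING: only {len(controllers)} of {EXPECTED_DRIVES} drives found, "
          f"check the riser cards and backplane cabling")
```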

However, if you plan to use a hardware RAID controller, you can simply connect the backplane directly to it using miniSAS cables.

At the top of the server we find a long sticker that describes some basic elements of the server, including how to remove the power supplies and disks.

At the back of the server we can see two 2.5″ SATA/SAS hot-swap bays. I have seen this solution on many 2U Supermicro servers, and it is a useful one: two SAS SSDs in RAID1 can hold the system, while the front bays then serve as some other storage.

At the bottom are two power supplies, fully redundant of course. Each power supply can deliver up to 1600 Watts and carries the 80PLUS Platinum rating; as has long been customary for servers, the highest possible efficiency is extremely important here. The power supplies have a single 12V 133A rail and a 12Vsb 2.5A standby rail. All lower voltages such as 5V and 3.3V are then provided by DC-DC converters on the motherboard.
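
Just to illustrate how single-rail these units are, here is a quick back-of-the-envelope check of the label values (my own arithmetic, not a measurement):

```python
# Sanity check of the PSU label: practically all of the rated power sits on
# the single 12 V rail, the standby rail only feeds the BMC and friends.
main_rail_w = 12.0 * 133     # 12 V x 133 A
standby_w = 12.0 * 2.5       # 12 Vsb x 2.5 A
print(f"12 V rail:   {main_rail_w:.0f} W")   # ~1596 W, i.e. the 1600 W rating
print(f"12 Vsb rail: {standby_w:.0f} W")     # ~30 W of standby power
```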

In addition to the power supplies, at the very bottom we can see the connectivity provided by the motherboard itself. We find a classic 15-pin D-Sub connector, or VGA if you like. It is of course connected to the 2D graphics core of the ASPEED AST2500, which is part of the BMC. Next to the VGA there is an OCP 2.0 slot for additional network cards, which offers up to eight PCIe Gen3 lanes and is typically used for all kinds of NICs. In our case there is an OCP card from ASUS carrying a Mellanox ConnectX-4 2x SFP28 25Gb/s chipset. The advantage of these OCP slots is that network cards do not "eat" the large PCIe slots this way. The server originally included a PCIe 2x SFP+ card from Intel, but it got an upgrade to 2x SFP28, which is a pretty nice improvement.

In addition to the OCP 2.0 slot, there are two USB-A 5Gb/s ports, two Gigabit RJ45 Ethernet ports provided by an Intel i350 chipset, and a third RJ45 connected directly to the BMC for remote management. Next to this RJ45 we find the ID button, which when pressed lights up blue LEDs on the front and back of the server and is used for easy identification in the rack, useful if you have several identical servers. The equipment ends with an OCP 3.0 slot offering sixteen PCIe Gen3 lanes, which surprised me a bit, as the DELL R6525, for example, has an OCP slot with sixteen Gen4 lanes. Either Gigabyte has an error in its materials, or the slot really is just Gen3.
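
Incidentally, the same ID LED can be toggled remotely over IPMI, which is handy when nobody is standing next to the rack. A minimal Python sketch using the generic ipmitool CLI follows; the hostname and credentials are placeholders, not values from the review.

```python
"""Light the UID/ID LED remotely instead of pressing the button."""
import subprocess

# Placeholder BMC address and credentials - replace with your own.
BMC = ["ipmitool", "-I", "lanplus", "-H", "bmc.example.lan",
       "-U", "admin", "-P", "changeme"]

# Light the blue ID LEDs on the front and back for 60 seconds so the right
# box can be found in a rack full of identical servers.
subprocess.run(BMC + ["chassis", "identify", "60"], check=True)
```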

Of course, I must not forget the total of eight PCIe slots, which is something I am very happy about with this server, because there are never enough PCIe slots, especially if you want to add a few cards. For enthusiasts, I will add the wiring diagram of all the buses from the Gigabyte website, and now for a tour of the server's insides!

After opening the top cover of the server, we find that it actually doubles as a simplified manual: on the underside of the cover there is a pile of diagrams describing the components in the server.

At the front of the server, just behind the NVMe backplane for the front disks, we find a set of four 80mm fans next to each other. These are Delta PFM0812HE-01 fans that can spin infernally fast, up to 16300 RPM at 84 Watts. The fans achieve an air flow of up to 129 CFM at a noise level of 77 dBA.

Behind the fans we find a plastic shroud that directs the airflow through the processor heatsinks and the memory. The server has a total of 32 slots for DDR4 Registered ECC memory, so each processor gets sixteen slots and we can use an eight-channel memory configuration.

A specialty of this server is that it can run the RAM at 3200 MT/s even in a 2DPC (two DIMMs per channel) configuration, but this must be explicitly enabled in the BIOS and I expect it probably increases overall consumption. If we install 256GB modules, the total capacity climbs to 8 TB of RAM. However, the tested configuration is not optimized for as much memory as possible, quite the opposite: each processor has only four single-rank 8GB DDR4-3200 Registered ECC modules. It is enough for what the server does, though in terms of performance it would of course be better to install eight 16GB dual-rank modules per CPU, but that simply did not happen. For this reason, performance is lower in some tests.
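
For context, a quick calculation of what the memory layout means for capacity and theoretical bandwidth; the figures are textbook DDR4 math, not measured values:

```python
# Back-of-the-envelope numbers for the memory subsystem of this 2P board.
CPUS = 2
SLOTS_PER_CPU = 16
CHANNELS_PER_CPU = 8

# Maximum capacity: every slot filled with a 256 GB module.
max_capacity_gb = CPUS * SLOTS_PER_CPU * 256
print(f"Max capacity: {max_capacity_gb} GB = {max_capacity_gb / 1024:.0f} TB")  # 8 TB

# Theoretical bandwidth at DDR4-3200: a 64-bit channel moves 8 bytes per transfer.
bw_per_cpu_gbs = CHANNELS_PER_CPU * 3200e6 * 8 / 1e9
print(f"Per CPU:  {bw_per_cpu_gbs:.1f} GB/s")           # ~204.8 GB/s
print(f"System:   {bw_per_cpu_gbs * CPUS:.1f} GB/s")    # ~409.6 GB/s

# The tested config populates only 4 of the 8 channels per CPU, so roughly
# half of the theoretical bandwidth is left on the table.
print(f"Tested:   {bw_per_cpu_gbs * CPUS / 2:.1f} GB/s (4 channels per CPU)")
```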

Under the massive coolers live AMD EPYC 7413 processors, 24C/48T Zen3 parts with a base clock of 2.65 GHz and a turbo boost of up to 3.6 GHz. The processors talk to each other over the Infinity Fabric bus running on top of 64 PCIe Gen4 lanes, while the platform as a whole offers 128 PCIe Gen4 lanes, 64 from each CPU. The processors have a configurable TDP with a choice of 165 or 200 Watts; in the tested server they run in the 200W configuration. The server itself allegedly supports CPUs with up to a 280W TDP. Interestingly, the processors maintain a frequency of around 3.4 GHz with all cores loaded, which is not bad at all.
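
If you want to verify the all-core clocks yourself, a minimal sketch like the one below can be run on Linux while a benchmark loads all 96 threads; it simply reads the cpufreq sysfs nodes, so it assumes a kernel with cpufreq support (e.g. acpi-cpufreq or amd-pstate):

```python
"""Observe the current per-thread clocks of the two EPYC 7413s under load."""
from pathlib import Path
from statistics import mean

# scaling_cur_freq reports the current frequency in kHz for every logical CPU.
freqs_khz = [int(p.read_text())
             for p in Path("/sys/devices/system/cpu").glob(
                 "cpu[0-9]*/cpufreq/scaling_cur_freq")]

print(f"threads reporting: {len(freqs_khz)}")
print(f"average clock:     {mean(freqs_khz) / 1e6:.2f} GHz")
print(f"min / max clock:   {min(freqs_khz) / 1e6:.2f} / {max(freqs_khz) / 1e6:.2f} GHz")
```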

I was a little surprised that the power delivery circuitry on the board lacks passive heatsinks; however, these are the first components in the airflow from the fans, so the breeze in an air-conditioned room is probably sufficient, and the VRM does not get brutally hot despite the absence of heatsinks.

Now a little about the PCIe slots: there are a total of eight in the server and they are provided by three PCIe risers. All of these standard slots are fourth-generation PCIe. Closest to the cage with the two rear SAS/SATA 2.5″ bays we find two PCIe Gen4 x8 low-profile slots, which suits various network cards and controllers. Later I moved the MegaRAID 9361-8i RAID card and its BBU into these slots.

Furthermore, there are six full-height slots, three per riser in an x16/x8/x8 configuration, so we can only fit two PCIe x16 cards; the other slots are x8, but all are PCIe Gen4. As you can see from the photos, the server is equipped with two single-slot PCIe x16 graphics cards, two PNY NVIDIA Quadro RTX 4000 8GB units, which require additional power. This is solved with a standard PCIe eight-pin connector, but the other end of the cable plugs into a proprietary Gigabyte eight-pin header on the motherboard.

On the board itself we find the two OCP slots, which are accessible from outside the server, one M.2 22110 slot for a PCIe NVMe SSD including a heatsink, and several power connectors for GPUs or a cage with 2.5″ disks.

In terms of storage, the server was equipped with six Samsung PM883 1.92TB SATA 6Gb/s SSDs, connected to the MegaRAID 9361-8i card and configured in RAID5. The operating system was Windows Server 2019 Essentials with current updates, and later Ubuntu 21.04. I always used drivers from the manufacturers: for the chipset, the ASPEED AST2500 and the onboard network I used drivers from the Gigabyte website, for the OCP Mellanox card drivers from the Mellanox website, and for the NVIDIA RTX 4000 the latest Quadro drivers for Windows Server.
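
For completeness, the usable capacity of such an array is simple to work out, since RAID5 sacrifices one drive's worth of space to parity:

```python
# Usable capacity of the tested array: six 1.92 TB SSDs in RAID5 on the
# MegaRAID 9361-8i; one drive's worth of capacity goes to parity.
drives = 6
drive_tb = 1.92
print(f"Raw:    {drives * drive_tb:.2f} TB")        # 11.52 TB
print(f"Usable: {(drives - 1) * drive_tb:.2f} TB")  # 9.60 TB before formatting overhead
```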

GIGABYTE Management Console – BMC – IPMI

Some form of remote management must not be missing on a server, and the ASPEED AST2500 is a very common and well-known chip. Most manufacturers modify their BMC/IPMI interface and add extra functions to it. Gigabyte, for example, boasts explicit support for Broadcom MegaRAID adapters within the interface, and there are a few other features that are not exactly common with other server manufacturers. Gigabyte also offers the Gigabyte Server Management (GSM) feature and service, which can be useful for bulk management of multiple servers, including a connection from a phone. This can be interesting for some admins, or even scary.

I primarily dealt only with the BMC/IPMI interface of the server itself; I can't say how well the bulk administration works or does not work, because I did not test it. The web interface looks more modern than older RedFish-era versions, and here we find standard things like monitoring of various sensors, network settings, sending alerts and so on. It is interesting that all sensors log their status in some detail and for many of them we can manually change the warning or critical thresholds.
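
The same sensors can also be read from the command line over standard IPMI, which is useful for scripting. Below is a minimal sketch wrapping the generic ipmitool CLI; the host, credentials and the sensor name in the threshold example are placeholders, and you should check which sensors on this BMC actually accept threshold writes before relying on it.

```python
"""Read BMC sensors and adjust a threshold over IPMI (generic ipmitool calls)."""
import subprocess

# Placeholder BMC address and credentials - replace with your own.
BMC = ["ipmitool", "-I", "lanplus", "-H", "bmc.example.lan",
       "-U", "admin", "-P", "changeme"]

# Dump all sensors with their current readings and thresholds.
subprocess.run(BMC + ["sensor", "list"], check=True)

# Example only: raise the upper thresholds of a hypothetical sensor called
# "MB_TEMP1" to 70 C (non-critical), 75 C (critical), 80 C (non-recoverable).
subprocess.run(BMC + ["sensor", "thresh", "MB_TEMP1", "upper", "70", "75", "80"],
               check=True)
```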

IPMI will also show us which processors, memory modules and PCIe cards are installed, which is again a standard feature.

Also interesting is the ACD function, which is a novelty; it should be able to monitor the state of the running OS and log what happened when the OS crashed. Theoretically, we can easily find out why Windows fell into a BSOD. I did not try to invoke a BSOD, though; I only noticed this option near the end of testing.

There are many other standard options in the settings, including HTML5 console settings, services, users, security, and so on.

The fan speed control profiles are interesting: we can adjust here how the fans are controlled and how they should behave, and we can even configure each individual fan in quite some detail. There can be several temperature sources, and the factory setting drives each pair of fans from the processor temperature. Gigabyte has even prepared a bunch of profiles for different graphics cards, which can be useful if you plan to rely mainly on GPUs, because with the factory settings the fans stay idle when only the GPUs are loaded, and that gives the graphics cards room to overheat.
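
If you stick with the default CPU-based profile anyway, it is worth keeping an eye on the GPU temperatures yourself. The sketch below assumes the NVIDIA driver's nvidia-smi tool is available; it simply polls the GPUs and warns above an arbitrary limit, so you notice when the fans are not ramping up for a GPU-only load:

```python
"""Poll GPU temperatures and warn when the fan profile is not keeping up."""
import subprocess
import time

LIMIT_C = 83  # arbitrary warning threshold chosen for this sketch

while True:
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=index,temperature.gpu",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True).stdout
    for line in out.strip().splitlines():
        idx, temp = (field.strip() for field in line.split(","))
        if int(temp) >= LIMIT_C:
            print(f"WARNING: GPU {idx} at {temp} C - check the fan profile")
    time.sleep(10)
```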

The HTML5 console is standard; I appreciate the ability to easily mount an ISO and to change the image quality, as not all HTML5 KVMs can do this.


Source: Diit.cz
