The Sunnyvale giant has presented the new EPYC 8004, a family of next-generation processors for servers and data centers that represents a clear commitment to specialization in order to handle different workloads more effectively.
To better understand where the EPYC 8004 is positioned, and what objective it has, it is necessary to review the different versions of new generation EPYC processors that AMD currently has in its catalog:
- AMD EPYC Genoa, based on the Zen 4 architecture and configured with up to 96 cores and 192 threads per socket. Designed for general-purpose workloads.
- AMD EPYC Bergamo, based on the Zen 4c architecture and configured with up to 128 cores and 256 threads per socket. Designed for cloud computing.
- AMD EPYC Genoa-X, based on the Zen 4 architecture but with additional 3D-stacked L3 cache, and configured with up to 96 cores and 192 threads per socket. Ideal for more technical tasks capable of taking advantage of a larger cache.
- AMD EPYC Siena, which are the new 8004 that AMD has just presented, and which are aimed at intelligent edge computing.
AMD EPYC 8004: value proposition
The new AMD EPYC Siena processors are designed to offer outstanding energy efficiency, balanced performance and a very high performance-per-watt value, which translates into a lower total cost of ownership. Their thermal optimizations also allow the creation of more flexible designs.
In the attached image you can see the differences that exist between the AMD EPYC 9004 processors, which include Genoa, Bergamo and Genoa-X, and the AMD EPYC 8004, which includes the new Siena. This new generation:
- Will be available in configurations from 8 cores and 16 threads up to 64 cores and 128 threads.
- Will only be available in single-socket configurations.
- Uses the Zen 4c architecture, in which each chiplet consists of 8 cores and 16 threads, with 1 MB of L2 cache per core and 16 MB of L3 cache shared by all of its cores.
- Will have a TDP of 70 to 225 watts.
- Will support DDR5 memory in up to 6 channels at 4,800 MHz.
- Will support PCIe Gen5 with a maximum of 96 lanes.
- Will support the CXL 1.1+ standard with 48 lanes.
- Will support RDIMM modules.
As I have told you on previous occasions, the Zen 4c architecture is a slightly tweaked version of Zen 4 that, in essence, keeps its most important elements, including the ISA, the L1 and L2 caches and the IPC. The only significant difference is the L3 cache, which drops from 32 MB per chiplet to 16 MB per chiplet.
In the attached image we can see that Siena offers a clearly defined value proposition compared to Genoa, focused on improving total cost of ownership and energy efficiency. This is what makes it a specialized solution, and what gives it an important differential value compared to the rest of the options in AMD's catalog.
To make identification easier, AMD has shared a slide that breaks down the most important keys of its nomenclature. The first digit (8) indicates the family or range, the second digit indicates the core-count tier within the series, the third digit refers to the performance level (the higher, the faster) and the fourth digit refers to the generation. The final letter or letters indicate particularities, such as whether it is a single-socket-only model.
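The naming scheme just described can be sketched as a small decoder. Note that this is an illustrative helper based only on the field meanings above (it is not an official AMD tool), and `decode_epyc_name` is a hypothetical function name:

```python
import re

def decode_epyc_name(model: str) -> dict:
    """Decode an AMD EPYC 8004-series model number into its fields.

    Sketch based on the naming scheme described above; the field
    meanings are AMD's, but this decoder itself is illustrative.
    """
    m = re.fullmatch(r"(\d)(\d)(\d)(\d)([A-Z]*)", model)
    if not m:
        raise ValueError(f"unrecognized model string: {model!r}")
    family, core_tier, perf, gen, suffix = m.groups()
    return {
        "family": family,           # 8 -> EPYC 8004 (Siena) range
        "core_count_tier": core_tier,  # higher digit -> more cores in the series
        "performance_level": perf,  # higher -> faster within the tier
        "generation": gen,          # 4 -> Zen 4 / Zen 4c generation
        "suffix": suffix,           # e.g. "P" -> single-socket-only model
    }

# Example: the EPYC 8534P mentioned later in this article.
print(decode_epyc_name("8534P"))
```

Running it on "8534P" splits the name into family 8, core tier 5, performance level 3, generation 4 and the single-socket "P" suffix.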
A more in-depth look and available models
Look closely at the image below these lines. In it we can see a complete breakdown of all the components that make up an EPYC 8004 with 64 cores and 128 threads: a total of 8 chiplets, each with 8 cores and 16 threads and 16 MB of L3 cache, which leaves us with a total of 128 MB of L3 cache. It is important to remember that each chiplet can only access its own L3 cache.
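The totals above follow directly from the per-chiplet figures given earlier in the article; a minimal sketch of the arithmetic:

```python
# Top EPYC 8004 (Siena) configuration, per the figures in the article.
CHIPLETS = 8
CORES_PER_CHIPLET = 8
THREADS_PER_CORE = 2        # SMT: two threads per core
L2_PER_CORE_MB = 1
L3_PER_CHIPLET_MB = 16      # shared only within its own chiplet

cores = CHIPLETS * CORES_PER_CHIPLET        # 64 cores
threads = cores * THREADS_PER_CORE          # 128 threads
l2_total_mb = cores * L2_PER_CORE_MB        # 64 MB of L2 in total
l3_total_mb = CHIPLETS * L3_PER_CHIPLET_MB  # 128 MB of L3 in total

print(cores, threads, l2_total_mb, l3_total_mb)  # -> 64 128 64 128
```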
The CPU chiplets are manufactured on TSMC's 5 nm node, and the I/O chiplet is manufactured on TSMC's 6 nm node. The entire connectivity subsystem is integrated into the latter, including the DDR5 memory controllers, the PCIe Gen5 interface and the third-generation Infinity Fabric. As I said, it supports 6-channel DDR5 configurations at 4,800 MHz with a maximum capacity of 1.152 TB, offering up to 96 PCIe Gen5 lanes and 48 CXL 1.1+ lanes.
The entire AMD EPYC 8004 family supports the AVX-512 instruction set, features SMT (two threads per core), has a turbo mode that adjusts working frequencies dynamically, integrates a state-of-the-art Infinity Fabric interconnect and includes AMD Infinity Guard hardware-level security protection.
These processors use a new, smaller socket, the SP6. In the last image you can see a list of all the models that AMD will launch within the EPYC 8004 series. The most basic configuration will have 8 cores and 16 threads, and from there it scales in steps of 8 cores and 16 threads up to a maximum of 64 cores and 128 threads.
Performance data provided by AMD
To better demonstrate what its new EPYC 8004 processors are capable of, the Sunnyvale company has shown us some performance data, in which we can see that an EPYC 8534P processor is capable of almost doubling the performance per watt of a 60-core Intel Xeon Platinum 8490H.
Obviously, the results depend a lot on the test used and the task performed. In transcoding, performance per core is 16% higher on an EPYC 8324P than on an Intel Xeon Gold 6421N. There is no doubt that the EPYC 8004 processors stand out mainly for their value in terms of power consumption and cost of ownership, thanks to their excellent performance per watt, but as we have seen they are not far behind in raw power either.
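The performance-per-watt metric behind these comparisons is simply a benchmark score divided by power draw. A minimal sketch of the calculation, using purely illustrative numbers (not AMD's or Intel's published figures):

```python
def perf_per_watt(score: float, watts: float) -> float:
    """Performance-per-watt: benchmark score divided by average power draw."""
    return score / watts

def relative_efficiency(a_score: float, a_watts: float,
                        b_score: float, b_watts: float) -> float:
    """How many times more efficient system A is than system B."""
    return perf_per_watt(a_score, a_watts) / perf_per_watt(b_score, b_watts)

# Hypothetical numbers for illustration only: a chip scoring 1000 points
# at 200 W versus one scoring 600 points at 230 W.
print(round(relative_efficiency(1000, 200, 600, 230), 2))
```

This is why a lower-TDP part can "almost double" a rival's efficiency even without doubling its raw score: the denominator matters as much as the numerator.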
AMD has confirmed that Dell, Lenovo and Supermicro will be among the first OEMs to offer platforms configured with these new processors, and has shown in a single image the impressive ecosystem of partners it is building around its commitment to the intelligent edge, where we can see names as important as VMware, Fujitsu, HPE, Cisco and Oracle, among others.