AMD Reaffirms EPYC Bergamo CPUs In 1H 2023, Instinct MI300 APUs In 2H 2023

Hassan Mujtaba

AMD has reaffirmed the launch plans for its next-generation EPYC Bergamo CPUs and Instinct MI300 APUs, both of which launch this year.

AMD EPYC Bergamo CPUs & Instinct MI300 APUs To Power Next-Gen Data Centers This Year

AMD already has a lead over Intel with its EPYC Genoa CPUs, which launched months ahead of the Xeon Sapphire Rapids CPUs. Fast forward to 2023, and AMD is planning to launch four brand-new data-center products: Genoa-X, Bergamo, Siena, and Instinct MI300. During its recent Q4 2022 earnings call, AMD once again confirmed that its EPYC Bergamo CPUs will launch in 1H 2023, followed by Instinct MI300 APUs in 2H 2023.

AMD Instinct MI300 In 2H 2023 - Powering 2+ Exaflops El Capitan Supercomputer

The AMD Instinct MI300 will be a multi-chip and multi-IP Instinct accelerator that not only features the next-gen CDNA 3 GPU cores but is also equipped with the next-generation Zen 4 CPU cores.

[Images: AMD Instinct MI300 exascale APU with Zen 4 CPU and CDNA 3 GPU cores]

The latest specifications unveiled for the AMD Instinct MI300 accelerator confirm that this exascale APU is going to be a monster of a chiplet design. The chip will encompass several 5nm 3D chiplet packages, combining to house an insane 146 billion transistors across various core IPs, memory interfaces, interconnects, and much more. The CDNA 3 architecture is the fundamental DNA of the Instinct MI300, but the APU also packs a total of 24 Zen 4 data-center CPU cores and 128 GB of next-generation HBM3 memory running across a truly mind-blowing 8192-bit wide bus.
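For context, that 8192-bit bus translates directly into peak memory bandwidth. A quick back-of-the-envelope sketch, assuming a hypothetical 5.2 Gbps per-pin HBM3 data rate (AMD has not confirmed the MI300's actual pin speed):

```python
# Peak bandwidth = bus width (bits) x per-pin data rate (Gbps) / 8 bits-per-byte.
BUS_WIDTH_BITS = 8192   # confirmed: 8192-bit wide HBM3 bus
PIN_RATE_GBPS = 5.2     # assumption: hypothetical per-pin rate, not confirmed by AMD

peak_bandwidth_gbs = BUS_WIDTH_BITS * PIN_RATE_GBPS / 8
print(f"Peak bandwidth: ~{peak_bandwidth_gbs / 1000:.1f} TB/s")  # ~5.3 TB/s at 5.2 Gbps
```

A higher per-pin rate would push that figure further still, so treat this as a floor-of-plausibility estimate rather than an official number.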

AMD will be utilizing both 5nm and 6nm process nodes for its Instinct MI300 'CDNA 3' APUs. The chip will be outfitted with the next generation of Infinity Cache and feature the 4th Gen Infinity architecture, which enables CXL 3.0 ecosystem support. The Instinct MI300 accelerator will rock a unified memory APU architecture (UMAA) and new math formats, allowing for a massive 5x performance-per-watt uplift over CDNA 2. AMD is also projecting over 8x the AI performance versus the CDNA 2-based Instinct MI250X accelerator. The UMAA will connect the CPU and GPU to a unified HBM memory package, eliminating redundant memory copies while delivering a lower TCO.
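To put the projected 8x AI uplift in perspective, a straight multiplication against the MI250X's peak FP16 throughput (a rough sketch; the 8x figure likely leans on the new math formats such as lower-precision modes rather than FP16 alone):

```python
MI250X_FP16_TFLOPS = 383   # AMD's peak FP16 spec for the Instinct MI250X
AI_UPLIFT = 8              # AMD's projected ">8x" AI performance claim

mi300_estimate_tflops = MI250X_FP16_TFLOPS * AI_UPLIFT
print(f"Implied MI300 AI throughput: ~{mi300_estimate_tflops / 1000:.1f} PFLOPs")
```

That back-of-the-envelope figure lands in petaflop territory per accelerator, which is consistent with the chip's exascale ambitions in El Capitan.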

In terms of when - we've talked before about sort of our Data Center GPU ambitions and the opportunity there. We see it as a large opportunity. As we go into the second half of the year and launch MI300, sort of the first user of MI300 will be the supercomputers or El Capitan, but we're working closely with some large cloud vendors as well to qualify MI300 in AI workloads. And we should expect that to be more of a meaningful contributor in 2024. So lots of focus on just a huge opportunity, lots of investments in software as well to bring the ecosystem with us.

AMD CEO, Lisa Su (Q4 2022 Earnings Call)

AMD EPYC Bergamo In 1H 2023 - Topping Up The Core Count To 128 With Zen 4C

The AMD EPYC Bergamo chips will feature up to 128 cores and will take aim at Intel's HBM-powered Xeon chips along with the higher-core-count, ARM-based server chips from the likes of Amazon and Google. Both Genoa and Bergamo utilize the same SP5 socket; the main difference is that Genoa is optimized for higher clocks while Bergamo is optimized for higher-throughput workloads.

Bergamo will launch in the first half of the year. We are on track for the Bergamo launch, and you'll see that become a larger contributor in the second half. So as we think about the Zen 4 ramp and the crossover to our Zen 3 ramp, it should be towards the end of the year, sort of in the fourth quarter, that you would see a crossover of sort of Zen 4 versus Zen 3, if that helps you.

AMD CEO, Lisa Su (Q4 2022 Earnings Call)

AMD states that its EPYC Bergamo CPUs will arrive in the first half of 2023 and will run the same code as Genoa (the Zen 4C core is ISA-compatible with Zen 4) while occupying roughly half the area per core. The CPUs are specifically positioned against the likes of AWS's Graviton CPUs and other ARM-based solutions, targeting workloads where peak frequency isn't a requirement but aggregate throughput across many cores is. One workload example for Bergamo would be Java, where the extra cores can definitely come in handy. Following Bergamo will be the TCO-optimized Siena lineup for the SP6 platform, which will play a crucial role in expanding AMD's TAM in the server segment.
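The frequency-versus-throughput trade-off is easy to sketch: for a fully parallel workload, aggregate throughput scales with cores times per-core speed, so many modest cores can out-run fewer fast ones. The clock figures below are hypothetical, purely for illustration, not AMD-confirmed specs:

```python
def aggregate_throughput(cores: int, per_core_ghz: float) -> float:
    """Idealized aggregate throughput (in core-GHz) for a fully parallel workload."""
    return cores * per_core_ghz

# Hypothetical clocks for illustration only
genoa_like = aggregate_throughput(cores=96, per_core_ghz=3.7)     # frequency-optimized
bergamo_like = aggregate_throughput(cores=128, per_core_ghz=3.1)  # throughput-optimized

print(genoa_like, bergamo_like)  # 355.2 vs 396.8: more cores win on throughput
```

This is exactly why Graviton-style cloud workloads, which scale across cores rather than chasing single-thread speed, are Bergamo's target.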

AMD's EPYC & Instinct chips are expected to push the company's server market share to 30%, and possibly beyond, by the end of this year. The company has a strong roadmap laid out for the server market segment, and we can't wait to see how things evolve in the coming quarters.

AMD EPYC CPU Families:

| Family Name | AMD EPYC Venice | AMD EPYC Turin-Dense | AMD EPYC Turin-X | AMD EPYC Turin | AMD EPYC Siena | AMD EPYC Bergamo | AMD EPYC Genoa-X | AMD EPYC Genoa | AMD EPYC Milan-X | AMD EPYC Milan | AMD EPYC Rome | AMD EPYC Naples |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Family Branding | EPYC 11K? | EPYC 9005 | EPYC 9005 | EPYC 9005 | EPYC 8004 | EPYC 9004 | EPYC 9004 | EPYC 9004 | EPYC 7004 | EPYC 7003 | EPYC 7002 | EPYC 7001 |
| Family Launch | 2025+ | 2025? | 2025? | 2024 | 2023 | 2023 | 2023 | 2022 | 2022 | 2021 | 2019 | 2017 |
| CPU Architecture | Zen 6? | Zen 5C | Zen 5 | Zen 5 | Zen 4 | Zen 4C | Zen 4 V-Cache | Zen 4 | Zen 3 | Zen 3 | Zen 2 | Zen 1 |
| Process Node | TBD | 3nm TSMC? | 4nm TSMC | 4nm TSMC | 5nm TSMC | 4nm TSMC | 5nm TSMC | 5nm TSMC | 7nm TSMC | 7nm TSMC | 7nm TSMC | 14nm GloFo |
| Platform Name | TBD | SP5 | SP5 | SP5 | SP6 | SP5 | SP5 | SP5 | SP3 | SP3 | SP3 | SP3 |
| Socket | TBD | LGA 6096 (SP5) | LGA 6096 (SP5) | LGA 6096 | LGA 4844 | LGA 6096 | LGA 6096 | LGA 6096 | LGA 4094 | LGA 4094 | LGA 4094 | LGA 4094 |
| Max Core Count | 384? | 192 | 128 | 128 | 64 | 128 | 96 | 96 | 64 | 64 | 64 | 32 |
| Max Thread Count | 768? | 384 | 256 | 256 | 128 | 256 | 192 | 192 | 128 | 128 | 128 | 64 |
| Max L3 Cache | TBD | 384 MB | 1536 MB | 384 MB | 256 MB | 256 MB | 1152 MB | 384 MB | 768 MB | 256 MB | 256 MB | 64 MB |
| Chiplet Design | TBD | 12 CCDs (1 CCX per CCD) + 1 IOD | 16 CCDs (1 CCX per CCD) + 1 IOD | 16 CCDs (1 CCX per CCD) + 1 IOD | 8 CCDs (1 CCX per CCD) + 1 IOD | 12 CCDs (1 CCX per CCD) + 1 IOD | 12 CCDs (1 CCX per CCD) + 1 IOD | 12 CCDs (1 CCX per CCD) + 1 IOD | 8 CCDs (1 CCX per CCD) + 1 IOD | 8 CCDs (1 CCX per CCD) + 1 IOD | 8 CCDs (2 CCXs per CCD) + 1 IOD | 4 CCDs (2 CCXs per CCD) |
| Memory Support | TBD | DDR5-6000? | DDR5-6000? | DDR5-6000? | DDR5-5200 | DDR5-5600 | DDR5-4800 | DDR5-4800 | DDR4-3200 | DDR4-3200 | DDR4-3200 | DDR4-2666 |
| Memory Channels | TBD | 12-Channel (SP5) | 12-Channel (SP5) | 12-Channel | 6-Channel | 12-Channel | 12-Channel | 12-Channel | 8-Channel | 8-Channel | 8-Channel | 8-Channel |
| PCIe Gen Support | TBD | TBD | TBD | TBD | 96 Gen 5 | 128 Gen 5 | 128 Gen 5 | 128 Gen 5 | 128 Gen 4 | 128 Gen 4 | 128 Gen 4 | 64 Gen 3 |
| TDP (Max) | TBD | 500W (cTDP 600W) | 500W (cTDP 600W) | 500W (cTDP 600W) | 70-225W | 320W (cTDP 400W) | 400W | 400W | 280W | 280W | 280W | 200W |