128-Core AMD Epyc Rome Server Tear-Down

erek ([H]F Junkie)
"This AMD Epyc server is a dual-socket configuration in a Gigabyte chassis with capacity for up to 4TB of memory, 24 NVMe drives, and plenty more. The server has numerous world records and is semi-overclockable, and in our video, we'll take it apart (as much as we're allowed) to look closer at the base components of this monster server. These cost tens of thousands of dollars to configure, mostly depending on the expansion devices chosen. It's probably the highest-end system we'll have in our video set in some time."

 
When his hair goes sideways, wew lad.
Ghost of Jensen is fucking hilarious too. Worth a watch if you get a boner for world-record-holding server gear.
 
I remember having those ~8k rpm Delta fans on my Athlon MP back in the day.
 
Yum, database monster.


Depending on the DB platform, this is a bad use case: license costs based on cores and sockets would bankrupt most orgs. Virtualization monster, though. Even then, you'd have to NUMA-rule the hell out of your resources to dial in consistent performance. Still a badass piece of hardware to get hands-on experience with.
 
Depending on the DB platform, this is a bad use case: license costs based on cores and sockets would bankrupt most orgs. Virtualization monster, though. Even then, you'd have to NUMA-rule the hell out of your resources to dial in consistent performance. Still a badass piece of hardware to get hands-on experience with.

Per core rather than per socket*
 
Depending on the DB platform, this is a bad use case: license costs based on cores and sockets would bankrupt most orgs. Virtualization monster, though. Even then, you'd have to NUMA-rule the hell out of your resources to dial in consistent performance. Still a badass piece of hardware to get hands-on experience with.

I don't know about licensing; I was more referring to raw speed. With all tablespaces on PCIe 4.0 and all cores blazing... well, I would just love to test something like that for a week. :)
 
There are database options that do not charge licensing fees. It's not like the only option is MSSQL.
 
Those fans, though rated up to 7 A, only run at ~16k rpm max, moving ~129.5 CFM at ~77 dBA. The lowest rpm they can run is ~3k. Each fan at 100% uses ~51.6 W.

Spec sheet on the fans:
https://www.delta-fan.com/Download/Spec/PFM0812HE-01BFY.pdf
Holy crap, 51 watts!
That's a lot of heat generated by a fan.
With several of those in a server, and then multiple servers, I can see why you need to pipe AC into server rooms or the ambient temp will just keep rising.
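
Quick back-of-the-envelope in Python, with made-up fan and server counts; the only number taken from the spec sheet discussion above is the ~51.6 W draw:

```python
# Back-of-the-envelope fan heat. FAN_WATTS comes from the spec sheet
# discussion above; the fan and server counts are made-up assumptions.
FAN_WATTS = 51.6        # measured max draw per fan, watts
FANS_PER_SERVER = 4     # assumption: a typical dense chassis
SERVERS_PER_RACK = 20   # assumption

# Essentially all fan power ends up as heat in the room.
heat_per_server = FAN_WATTS * FANS_PER_SERVER
heat_per_rack = heat_per_server * SERVERS_PER_RACK

print(f"Fan heat per server: {heat_per_server:.0f} W")        # ~206 W
print(f"Fan heat per rack:   {heat_per_rack / 1000:.2f} kW")  # ~4.13 kW
```

That's roughly 4 kW per rack from the fans alone, before the CPUs draw a single watt.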
 
These are the gems I come here for. This forum is full of bullshit artists, but shit like this is gold.

Meh, I only ended up looking it up when Wendell screwed up and said they ran at 3k rpm when Steve asked him. I was like, yeah, there's no damn way Delta fans are running at 3k rpm, so I ended up looking up the spec sheet while watching the video. I definitely wasn't expecting 16k rpm though.
 
Depending on the DB platform, this is a bad use case: license costs based on cores and sockets would bankrupt most orgs. Virtualization monster, though. Even then, you'd have to NUMA-rule the hell out of your resources to dial in consistent performance. Still a badass piece of hardware to get hands-on experience with.
You only pay per “core” assigned to the VM running the DB; you wouldn't install the DB on the physical host. Hell, for new builds it's not even recommended to run anything directly on the physical host, and most licenses, MS included, let you run that one server license as both the physical host and one virtual instance.
 
In regards to DB workloads, I was speaking as if it were a single bare-metal system running Oracle or MSSQL, in terms of licensing. Neither Oracle's nor MSSQL's license scheme is friendly to high core and socket counts if you run on bare metal, and they stick it to you even if you run a hypervisor and VM the hell out of those platforms. I did a poor job articulating that this hardware would be better off running a hypervisor of sorts for multiple VMs doing non-DB type things. Can you run VM-backed DBs? Hell yea. Is it the right direction? Totally personal preference, dictated by the infrastructure seat you're currently sitting in.

Sure, there are free DB platforms out there that are absolutely solid and run some incredibly amazing applications; I see that stuff in my day job all the time. But very few people will run this hardware as a single system with one DB stack on it. The trend for the last 10 years has been to replace these kinds of monster systems with smaller redundant nodes and spread the workload around at a cheaper cost. People ran away from Sun 15Ks and monster AIX Power servers for a reason. This is an absolutely amazing piece of hardware, but it's got a very, very specific use case.
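
To put rough numbers on why bare metal hurts, here's a sketch with a placeholder pack price; real quotes vary, and real schemes also have per-server and per-VM core minimums that I'm omitting:

```python
# Illustration of per-core licensing pain on a 128-core box.
# PACK_PRICE is a placeholder assumption, not a real quote.
PACK_PRICE = 14_000      # USD per 2-core license pack (assumed)
CORES_PER_PACK = 2

def license_cost(cores: int) -> int:
    """Cost to license `cores` cores, rounded up to whole packs."""
    packs = -(-cores // CORES_PER_PACK)  # ceiling division
    return packs * PACK_PRICE

print(f"Bare metal, 128 cores: ${license_cost(128):,}")  # $896,000
print(f"One 16-vCPU VM:        ${license_cost(16):,}")   # $112,000
```

Same hardware, but licensing only the vCPUs the DB VM actually sees costs an eighth as much.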
 
Meh, I only ended up looking it up when Wendell screwed up and said they ran at 3k rpm when Steve asked him. I was like, yeah, there's no damn way Delta fans are running at 3k rpm, so I ended up looking up the spec sheet while watching the video. I definitely wasn't expecting 16k rpm though.

Yeah, I was thinking more like 8k. 16k was a cool surprise. #nerd
 
I don't know about licensing; I was more referring to raw speed. With all tablespaces on PCIe 4.0 and all cores blazing... well, I would just love to test something like that for a week. :)

I think you might actually be let down. Well.. wait.. you'd be blown away, but the numbers might be lower than you expect. I know it's not apples to apples, but I'm doing a lot of UCS work with full-width blades. We had a couple of monster servers with 40 cores per socket running physical MSSQL DBs for a monster consolidation project. Due to the amount of RAM (3 TB) and cores, we actually went down in performance. A lot of it had to do with NUMA rules and how each CPU accesses its banks of RAM. On paper, it looked like it would scream. In practice? Not so much. We ended up hypervisoring the hell out of those monster servers, VM'd the DBs, and pinned NUMA rules and banks of RAM. None of it's perfect, and there are better ways to do it, but that way worked in my shop.
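
For the curious, here's roughly what the same idea looks like without a hypervisor: a Linux-only Python sketch (not our actual vSphere config) that reads the kernel's NUMA topology and pins a process to one node's CPUs:

```python
# Pin a process to the CPUs of a single NUMA node so its threads never
# land on the far socket. Linux-only; topology comes from sysfs.
# Note: this pins CPU placement only; memory policy would additionally
# need numactl or libnuma.
import glob
import os

def node_cpus(node: int) -> set[int]:
    """Parse /sys/devices/system/node/nodeN/cpulist, e.g. '0-63,128-191'."""
    with open(f"/sys/devices/system/node/node{node}/cpulist") as f:
        text = f.read().strip()
    cpus: set[int] = set()
    for part in text.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        else:
            cpus.add(int(part))
    return cpus

# Enumerate the NUMA nodes the kernel sees.
nodes = sorted(int(p.rsplit("node", 1)[1])
               for p in glob.glob("/sys/devices/system/node/node[0-9]*"))
print(f"NUMA nodes: {nodes}")

# Pin this process (and any children it forks) to node 0.
os.sched_setaffinity(0, node_cpus(0))
print(f"Pinned to CPUs: {sorted(os.sched_getaffinity(0))}")
```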
 
I think you might actually be let down. Well.. wait.. you'd be blown away, but the numbers might be lower than you expect. I know it's not apples to apples, but I'm doing a lot of UCS work with full-width blades. We had a couple of monster servers with 40 cores per socket running physical MSSQL DBs for a monster consolidation project. Due to the amount of RAM (3 TB) and cores, we actually went down in performance. A lot of it had to do with NUMA rules and how each CPU accesses its banks of RAM. On paper, it looked like it would scream. In practice? Not so much. We ended up hypervisoring the hell out of those monster servers, VM'd the DBs, and pinned NUMA rules and banks of RAM. None of it's perfect, and there are better ways to do it, but that way worked in my shop.

Very interesting. I have been out of the sysadmin game for a long time. Would love to shadow someone setting this shit up.
 
Very interesting. I have been out of the sysadmin game for a long time. Would love to shadow someone setting this shit up.

Been in the sysadmin game nonstop for roughly 20 years, with a super heavy focus on the large-scale EMC storage platforms for the last 10. Die-hard infrastructure guy; it's the only work I ever wanna do. One snippet I forgot in my post: we are in the process of pulling those 40-core CPUs for lower core counts and higher frequency. Sounds insane, but we will actually save an insane amount in licensing because of the fewer cores, and we get a performance boost in the process. Parallel processing is king, but some shit just does better on fewer cores with higher clocks.
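
Rough math on that swap, reusing the placeholder pack price from earlier and made-up example SKUs:

```python
# Fewer, faster cores: license cost and per-thread clock comparison.
# Both SKUs are made-up examples; pricing is the same placeholder as above.
PACK_PRICE, CORES_PER_PACK = 14_000, 2

old = {"cores": 40, "ghz": 2.4}   # assumed high-core-count part
new = {"cores": 16, "ghz": 3.9}   # assumed frequency-optimized part

for name, cpu in (("40c @ 2.4 GHz", old), ("16c @ 3.9 GHz", new)):
    cost = (cpu["cores"] // CORES_PER_PACK) * PACK_PRICE
    print(f"{name}: license ${cost:,}/socket, {cpu['ghz']} GHz per thread")
```

Per-socket licensing drops from $280,000 to $112,000, and anything that can't scale past 16 threads runs ~60% faster per thread.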
 
"Ladies and gentlemen, please take your seats and fasten your seatbelts for take-off...."
Oh, let's slap some wings on that server and see how far it'll fly!
 
In that video he mentions he was able to overclock it. Any ideas how? I thought it wasn't possible to overclock the Rome Epyc CPUs.
 
In that video he mentions he was able to overclock it. Any ideas how? I thought it wasn't possible to overclock the Rome Epyc CPUs.

Probably by using the BCLK and taking advantage of the way cold scaling works with Zen 2.
 
Depending on the DB platform, this is a bad use case: license costs based on cores and sockets would bankrupt most orgs. Virtualization monster, though. Even then, you'd have to NUMA-rule the hell out of your resources to dial in consistent performance. Still a badass piece of hardware to get hands-on experience with.

The NUMA node rules for the Epyc 7002 lineup are far easier than for the 7001: only one NUMA node per socket. Yep, all 512 bits of the memory interface width are available as one single node on the socket. Dialing in rules for per-socket migration is far easier than with the previous four nodes per socket, especially in a dual-socket config.
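
Quick sanity check on that width claim; Rome has 8 DDR4 channels of 64 bits per socket, and the DIMM speed here is an assumption:

```python
# Memory interface width and peak bandwidth per Rome socket.
# DDR4-3200 is an assumed DIMM speed, not confirmed from the video.
CHANNELS = 8
BITS_PER_CHANNEL = 64
TRANSFERS_PER_SEC = 3200e6   # DDR4-3200

width_bits = CHANNELS * BITS_PER_CHANNEL
peak_bytes = CHANNELS * TRANSFERS_PER_SEC * (BITS_PER_CHANNEL // 8)

print(f"Interface width: {width_bits} bits")           # 512
print(f"Peak bandwidth:  {peak_bytes / 1e9:.1f} GB/s") # 204.8
```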

What has me excited isn't necessarily the 64-core-per-socket chips (which are genuinely badass due to their brute-force performance) but the low core count, high cache, high clock models. There should be a small but noticeable increase in IPC due to the wider memory interface; combine that with high clocks and you get possibly the highest single-threaded performance AMD can offer. The ideal would be a 16-core chip with 256 MB of L3 cache (one core enabled per CCX), a >4.2 GHz turbo, and a 225 W default TDP with 240 W and 280 W configurable options: a single-threaded beast. The licensing side wouldn't be so bad there either, and it could potentially creep into the performance space of the 24-core chips on clock speed alone if it can run continually at its >4.2 GHz turbo.

The single NUMA node is also why I'm excited for TR3. The new 32-core model should run circles around TR2's 32-core chip based on that single architectural change, and the advances of Zen 2 should only widen the gap. Drastic performance gains are in store for applications that prefer a single NUMA node.
 