Threadripper 3990X 64C HEDT Outperforms Dual Xeon Platinum 8280 CPUs Costing $20K

erek

[H]F Junkie
Joined
Dec 19, 2005
Messages
10,785
Really hoping for some Epic Review Kits for the 3990X. The 2990WX was awesome!

"Compared to AMD 3990X, that's a 6.25x higher price which just goes off to show how much of a beating AMD is giving to Intel in the HEDT and server landscape. This also paints us a good picture of what's happening with Intel's server segment where EPYC, that is based on the same foundation of their Zen 2 lineup, is offering stellar price to performance value, outperforming Intel's Xeon lineup in every workload.

Rumors indicate that Intel has delayed Cooper Lake and Ice Lake Xeon processors by a few months (3 months shift in the schedule) and there are also reports that they would be announcing major price cuts and repositioning of CPUs in the second half of 2020 which might indicate more aggressive pricing with their next Xeon parts. But for now, the real enthusiasts and workstation builders should definitely be looking forward to the 3990X and its humongous compute power."


https://wccftech.com/amd-ryzen-threadripper-3990x-versus-xeon-platinum-8280-cpu-benchmarks/
 
Feels like a price drop in the 2nd half is way too late; Intel would have needed it since the beginning of the year.
 
Maybe I am stupid, but that's a chart for dual-socket systems, making it $14,000 and 128 cores vs. $20,000 and 56 cores, not $7,000 and 64 cores. Intel's not doing *that* badly per core.
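
Taking those numbers at face value, the quick per-core math in Python:

Code:
# Per-core pricing using the dual-socket figures quoted above.
amd_price, amd_cores = 14_000, 128
intel_price, intel_cores = 20_000, 56

amd_per_core = amd_price / amd_cores        # ~$109 per core
intel_per_core = intel_price / intel_cores  # ~$357 per core
print(f"AMD:   ${amd_per_core:,.0f}/core")
print(f"Intel: ${intel_per_core:,.0f}/core ({intel_per_core / amd_per_core:.1f}x more per core)")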
 
FWLIW, I'm looking at switching to these for my next build, and that is probably going to be great. Not being lane-limited and RAM-limited will be quite nice, in addition to the significantly increased compute.

But I'm not sure just how large an impact this is going to have on Intel's sales. I think these might have come out just a little too late for most customers to jump on them this cycle. Intel's recently released sales figures seem to indicate that they captured almost all of their enterprise customers on the most recent upgrade cycle. I expect the next enterprise upgrade cycle to be the one where these hit Intel hard, provided that Intel hasn't released anything comparable by then. But to really make a dent in enterprise sales, you basically need to offer a dominant product two cycles in a row. If AMD can do that, then Intel will be hurting and will find itself in the position AMD is in now: needing to offer dominant products for two cycles straight.
 
FWLIW, I'm looking at switching to these for my next build, and that is probably going to be great. Not being lane-limited and RAM-limited will be quite nice, in addition to the significantly increased compute.

But I'm not sure just how large an impact this is going to have on Intel's sales. I think these might have come out just a little too late for most customers to jump on them this cycle. Intel's recently released sales figures seem to indicate that they captured almost all of their enterprise customers on the most recent upgrade cycle. I expect the next enterprise upgrade cycle to be the one where these hit Intel hard, provided that Intel hasn't released anything comparable by then. But to really make a dent in enterprise sales, you basically need to offer a dominant product two cycles in a row. If AMD can do that, then Intel will be hurting and will find itself in the position AMD is in now: needing to offer dominant products for two cycles straight.

The thing about enterprise sales is that Intel already has contracts in place with HPE, Cisco, and Dell to purchase X amount of processors. At least Cisco has already committed to offering their UCS blades with EPYC processors.

Then come the platform issues. In a VMware environment you can't simply mix and match Intel and AMD hosts in the same cluster, since vMotion/EVC can't move running VMs between CPU vendors. Most companies are running Intel-based hosts, so when it comes time to expand they have to buy Intel, and when they're replacing hosts piecemeal they have to replace them with Intel. It's hard to offload an entire cluster and replace it all at once. This is why Intel is still dominating enterprise sales: people can't get off the Intel platform due to virtualization constraints.
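
If you want to see where that bites, here's a minimal Python sketch that flags clusters mixing CPU vendors, using a hypothetical host inventory of the sort you could export from vCenter (cluster and host names here are made up):

Code:
# Hypothetical inventory: cluster -> list of (host, CPU vendor string).
# Names and data are illustrative, not from any real environment.
inventory = {
    "prod-cluster-01": [
        ("esx01", "GenuineIntel"),
        ("esx02", "GenuineIntel"),
        ("esx03", "AuthenticAMD"),  # mixing this in breaks live migration within the cluster
    ],
    "prod-cluster-02": [
        ("esx04", "GenuineIntel"),
        ("esx05", "GenuineIntel"),
    ],
}

def mixed_vendor_clusters(inv):
    """Return clusters containing more than one CPU vendor, since vMotion/EVC
    can't move running VMs between Intel and AMD hosts."""
    flagged = {}
    for cluster, hosts in inv.items():
        vendors = {vendor for _, vendor in hosts}
        if len(vendors) > 1:
            flagged[cluster] = sorted(vendors)
    return flagged

print(mixed_vendor_clusters(inventory))  # {'prod-cluster-01': ['AuthenticAMD', 'GenuineIntel']}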
 
Maybe I am stupid, but that's a chart for dual-socket systems, making it $14,000 and 128 cores vs. $20,000 and 56 cores, not $7,000 and 64 cores. Intel's not doing *that* badly per core.

As stated above, this is a single CPU versus a dual-CPU system.
 
The benchmark linked is a pair of EPYC 7742s.

Are you daft? You're referring to the EPYC benches included in the "linked" article, which only brought up the EPYC line because it's the same family as the 3990X. o_O
 
The benchmark linked is a pair of EPYC 7742s.
Where does it say EPYC 7742?

AMD Ryzen Threadripper 3990X Sisoftware Database Benchmarks Leak Out
 
Yeah... no, I still don't get the warm fuzzies over using AMD, especially in an enterprise space. AMD are too willing to push out platforms with bugs.
 
This makes me want to toss all 5 Intel servers into the recycling pile, which still doesn't add up to 128 cores
 
Yeah... no, I still don't get the warm fuzzies over using AMD, especially in an enterprise space. AMD are too willing to push out platforms with bugs.
ME: Looks at the number of vulnerabilities found in Intel CPUs, then looks at this post, confused.

If anything, I'd not want Intel CPUs anywhere near my critical enterprise gear now.
 
Yeah... no, I still don't get the warm fuzzies over using AMD, especially in an enterprise space. AMD are too willing to push out platforms with bugs.
Not trying to defend AMD here, but what bugs are you talking about?
The last bug I remember hearing anything about was when Zen 1 was originally released back in 2017; there was a bug that caused segfaults under heavy compilation workloads on certain processors, and AMD replaced all of them for free.
Aside from firmware updates, I haven't seen many, if any, "bugs" on Zen+ or Zen 2, and if there are any, please point them out.

Intel has had how many CPU hardware exploits and flaws in the last 2 years?
What are they up to now, around 40+ exploits?

Each of which, I might add, has meant a minor-to-major performance hit in various areas, and Hyper-Threading (SMT) on their CPUs basically can't be left enabled at this point without becoming a major security risk, not to mention that disabling it is only a partial mitigation.
I absolutely would not trust any of their CPUs or platforms in enterprise at this point in time.
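
If you want to see exactly which of those flaws any given box is exposed to (and what SMT is doing), recent Linux kernels report it under sysfs. A minimal Python sketch, assuming a reasonably current kernel:

Code:
# Dump the kernel's view of CPU vulnerability mitigations and the SMT state.
# Assumes a reasonably recent Linux kernel that exposes these sysfs entries.
from pathlib import Path

vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
smt_control = Path("/sys/devices/system/cpu/smt/control")

if vuln_dir.is_dir():
    for entry in sorted(vuln_dir.iterdir()):
        print(f"{entry.name:25s} {entry.read_text().strip()}")

if smt_control.exists():
    print(f"{'smt/control':25s} {smt_control.read_text().strip()}")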
 
Yeah... no I still don’t get the warm fuzzies over using AMD especially in an enterprise space. AMD are too willing to push out platforms with bugs.
I can attest to that; I had to roll back more than one BIOS "fix" because the bugs it introduced were worse than the ones it fixed. It took 6 months to get my EPYC 7551P systems stable, and 4 months of that was attributable to BIOS issues. I am giving the new EPYCs a good lead time before I jump in, not that they are actually available yet.
 
Not trying to defend AMD here, but what bugs are you talking about?
The last bug I remember hearing anything about was when Zen 1 was originally released back in 2017; there was a bug that caused segfaults under heavy compilation workloads on certain processors, and AMD replaced all of them for free.
Aside from firmware updates, I haven't seen many, if any, "bugs" on Zen+ or Zen 2, and if there are any, please point them out.

Intel has had how many CPU hardware exploits and flaws in the last 2 years?
What are they up to now, around 40+ exploits?

Each of which, I might add, has meant a minor-to-major performance hit in various areas, and Hyper-Threading (SMT) on their CPUs basically can't be left enabled at this point without becoming a major security risk, not to mention that disabling it is only a partial mitigation.
I absolutely would not trust any of their CPUs or platforms in enterprise at this point in time.
My EPYC servers had bugs with power states, RAM stability, I/O, and temperature reporting. There were also incompatibility issues with Hyper-V for a brief while where the virtual systems would just soft-lock. All were resolved over a few months with BIOS and driver updates.
 
My EPYC servers had bugs with power states, RAM stability, I/O, and temperature reporting. There were also incompatibility issues with Hyper-V for a brief while where the virtual systems would just soft-lock. All were resolved over a few months with BIOS and driver updates.

Let's also be fair here. BIOS and drivers are also a responsibility of the manufacturer, be it Dell, HP, Supermicro, IBM or otherwise. I've had more than enough BIOS issues on HP servers and Intel chipsets that our Dells have not had.
 
Let's also be fair here. BIOS and drivers are also a responsibility of the manufacturer, be it Dell, HP, Supermicro, IBM or otherwise. I've had more than enough BIOS issues on HP servers and Intel chipsets that our Dells have not had.
But AMD still supplies the core of those BIOS updates, and the manufacturer has little more to do than wrap it to match their UI.
And when I was updating the drivers, even though I was getting them from the Dell site, they were the AMD installers matching the versions listed on the AMD site.
 
But AMD still supplies the core of those BIOS updates, and the manufacturer has little more to do than wrap it to match their UI.
And when I was updating the drivers, even though I was getting them from the Dell site, they were the AMD installers matching the versions listed on the AMD site.

Oh yeah, I agree. No matter the industry, the end user is the tester. An apology will always be cheaper than QA.
 
Oh yeah, I agree. No matter the industry, the end user is the tester. An apology will always be cheaper than QA.
Still gonna pull the trigger on another one though, just waiting to get some numbers back. The EPYC 7502P should be a nice little unit for an accounting server upgrade :)
 
Yep. I can't tell you how many times I've had to update HP firmware for SAS controllers and other things to correct various issues on Intel based servers.
At this point I'd take firmware updates over replacing LSI cards because their heatsinks get too hot and melt the spacers, so the heatsinks fall off and cause RAID corruption. After the 4th time we pulled the servers out of the rack and threw them in the garbage. Never use Intel-branded servers, ever. lol New ones are Dell/HP and have been fine, though they are all on the Xeon Silver platform; haven't used any EPYC yet since clients are slow to want to adopt new platforms.
 
At this point I'd take firmware updates over replacing LSI cards because their heatsinks get too hot and melt the spacers, so the heatsinks fall off and cause RAID corruption. After the 4th time we pulled the servers out of the rack and threw them in the garbage. Never use Intel-branded servers, ever. lol New ones are Dell/HP and have been fine, though they are all on the Xeon Silver platform; haven't used any EPYC yet since clients are slow to want to adopt new platforms.

HP's P8xx and P4xx series controllers are HP branded LSI cards that do the same thing. I have had to have HP replace dozens of them.
 
HP's P8xx and P4xx series controllers are HP branded LSI cards that do the same thing. I have had to have HP replace dozens of them.
That sucks, I hate those stupid shitty clips they use on the things. Thanks for the heads-up about them. I wonder if the new Dells still use the thermal adhesive on the heatsinks.
 
Just got my servers; the 7702P running ESXi with a Windows VM spanked my water-cooled, auto-overclocked 3960X in Cinebench R15. Just waiting on 16x Micron 9300 MAX 6.4TB drives. Hope I spank LTT's benchmarks. :rolleyes:
 
Comparing this to an Intel/ESXi combo...

Unless you're on the latest ESXi with vSphere 6.7 Update 3 using the better-optimized performance mitigation kernel option, guess what happens when an Intel CPU is over 70% usage: you lose performance... 30% if you max the CPU out, or roughly 1% for each percentage point over 70. Amazing stuff for SQL Server. Really great investment going with Intel servers.
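
Just to spell out the arithmetic of that claim (these are the numbers from this post, not anything out of VMware's documentation):

Code:
# Rough model of the penalty described above: ~1% throughput loss per point
# of CPU utilization over 70%, capping out around 30% at full load.
# These figures restate the claim in this post, not vendor-published data.
def claimed_penalty(cpu_util_pct: float) -> float:
    return max(0.0, min(cpu_util_pct - 70.0, 30.0))

for util in (60, 70, 80, 90, 100):
    print(f"{util:3d}% CPU utilization -> ~{claimed_penalty(util):.0f}% performance loss")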
 
HP's P8xx and P4xx series controllers are HP branded LSI cards that do the same thing. I have had to have HP replace dozens of them.

Same issues with the P4xx controllers in my Gen7s and Gen8s. We've also had a sudden rash of cache module and cache battery failures. PITA.
 
That sucks, I hate those stupid shitty clips they use on the things. Thanks for the heads-up about them. I wonder if the new Dells still use the thermal adhesive on the heatsinks.

Don't know, but I haven't seen the issue with the Dells nearly as much.

Same issues with the P4xx controllers in my Gen7s and Gen8s. We've also had a sudden rash of cache module and cache battery failures. PITA.

Yeah, those problems are things I have seen all too often.
 
One of our programs insists on using HP servers, for which I do not see the appeal regardless of the CPU sourced.
Their RAID controller firmware is cumbersome, and the documentation is the bare minimum.
Dell wherever possible, please.
 
At this point I'd take firmware updates over replacing LSI cards because their heatsinks get too hot and melt the spacers, so the heatsinks fall off and cause RAID corruption. After the 4th time we pulled the servers out of the rack and threw them in the garbage. Never use Intel-branded servers, ever. lol New ones are Dell/HP and have been fine, though they are all on the Xeon Silver platform; haven't used any EPYC yet since clients are slow to want to adopt new platforms.
LSI is the Devil
 
One of our programs insists on using HP servers, for which I do not see the appeal regardless of the CPU sourced.
Their RAID controller firmware is cumbersome, and the documentation is the bare minimum.
Dell wherever possible, please.
Yeah, the documentation on their PERC cards is great.
 