Threadripper 3990X 64C HEDT Outperforms Dual Xeon Platinum 8280 CPUs Costing $20K

erek

Supreme [H]ardness
Joined
Dec 19, 2005
Messages
5,922
Really hoping for some Epic Review Kits for the 3990X. The 2990WX was awesome!

"Compared to the AMD 3990X, that's a 6.25x higher price, which just goes to show how much of a beating AMD is giving Intel in the HEDT and server landscape. This also paints a good picture of what's happening in Intel's server segment, where EPYC, which is built on the same Zen 2 foundation, is offering stellar price-to-performance value, outperforming Intel's Xeon lineup in every workload.

Rumors indicate that Intel has delayed Cooper Lake and Ice Lake Xeon processors by a few months (a 3-month shift in the schedule), and there are also reports that they will be announcing major price cuts and a repositioning of CPUs in the second half of 2020, which might indicate more aggressive pricing with their next Xeon parts. But for now, real enthusiasts and workstation builders should definitely be looking forward to the 3990X and its humongous compute power."


https://wccftech.com/amd-ryzen-threadripper-3990x-versus-xeon-platinum-8280-cpu-benchmarks/
 

RPGWiZaRD

[H]ard|Gawd
Joined
Jan 24, 2009
Messages
1,099
Feels like a price drop in the 2nd half is way too late; Intel would have needed it since the beginning of the year.
 

idiomatic

Limp Gawd
Joined
Jan 12, 2018
Messages
142
Maybe I am stupid, but that's a chart for dual socket systems, making it $14,000 and 128 cores vs $20,000 and 56 cores, not $7,000 and 64 cores. Intel's not doing *that* badly per core.
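For reference, the per-core math works out roughly like this (a quick sketch; the $14K/$20K figures are the approximate dual-socket totals quoted above, and the ~$3,990 Threadripper price is an assumed MSRP, not a confirmed list price):

```python
# Rough cost-per-core comparison using the figures quoted in this thread.
systems = {
    "2x EPYC 7742":          (14_000, 128),
    "2x Xeon Platinum 8280": (20_000, 56),
    "1x Threadripper 3990X": (3_990, 64),   # assumed ~$3,990 MSRP
}

for name, (price, cores) in systems.items():
    print(f"{name}: ${price / cores:,.0f} per core")
```

So even granting the dual-socket framing, the Intel system is still a bit over 3x the per-core price of the dual-EPYC box.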
 
Joined
Oct 23, 2018
Messages
599
FWLIW, I'm looking at switching to these for my next build and that is probably going to be great. Not being lane-limited and RAM limited will be quite nice, in addition to the significantly increased compute.

But I'm not sure just how large of an impact this is going to have on Intel's sales. I think these might have come out just a little too late for most customers to jump on them this cycle. Intel's recently released sales figures seem to indicate that they captured almost all of their enterprise customers on the most recent upgrade cycle. I expect the next enterprise upgrade cycle to be the one where these hit Intel hard, provided that Intel hasn't released anything comparable by then. But to really make a dent in enterprise sales, you basically need to offer a dominant product two cycles in a row. If AMD can do that, then Intel will be hurting and will be in AMD's now-current/then-former position of having to offer dominant products for two cycles.
 

Riccochet

Fully [H]
Joined
Apr 11, 2007
Messages
23,791
FWLIW, I'm looking at switching to these for my next build and that is probably going to be great. Not being lane-limited and RAM limited will be quite nice, in addition to the significantly increased compute.

But I'm not sure just how large of an impact this is going to have on Intel's sales. I think these might have come out just a little too late for most customers to jump on them this cycle. Intel's recently released sales figures seem to indicate that they captured almost all of their enterprise customers on the most recent upgrade cycle. I expect the next enterprise upgrade cycle to be the one where these hit Intel hard, provided that Intel hasn't released anything comparable by then. But to really make a dent in enterprise sales, you basically need to offer a dominant product two cycles in a row. If AMD can do that, then Intel will be hurting and will be in AMD's now-current/then-former position of having to offer dominant products for two cycles.
The thing about enterprise sales is that Intel already has contracts in place with HPE, Cisco, and Dell to purchase X amount of processors. At least Cisco has already committed to offering their UCS blades with EPYC processors.

Then come the platform issues. In a VMware environment you can't simply mix and match Intel and AMD hosts in clusters. Most companies are running Intel-based hosts, so when it comes time to expand they have to buy Intel, or when they're replacing hosts piecemeal they have to replace with Intel. It's hard to offload an entire cluster to replace it all at once. This is why Intel is still dominating enterprise sales: people can't get off the Intel platform due to virtualization constraints.
 

KazeoHin

Supreme [H]ardness
Joined
Sep 7, 2011
Messages
8,085
Maybe I am stupid, but that's a chart for dual socket systems, making it $14,000 and 128 cores vs $20,000 and 56 cores, not $7,000 and 64 cores. Intel's not doing *that* badly per core.
As stated above, this is a single CPU versus a dual-CPU system.
 

thesmokingman

Supreme [H]ardness
Joined
Nov 22, 2008
Messages
5,987
The benchmark linked is a pair of EPYC 7742s.
Are you daft? You're referring to the EPYC benches included in the "linked" article, which only brought up the EPYC line in that they are the same family. o_O
 

aokman

[H]ard|Gawd
Joined
Jan 3, 2012
Messages
1,043
Yeah... no I still don’t get the warm fuzzies over using AMD especially in an enterprise space. AMD are too willing to push out platforms with bugs.
 

blandead

Limp Gawd
Joined
Nov 6, 2010
Messages
280
This makes me want to toss all 5 Intel servers into the recycling pile, which still doesn't add up to 128 cores
 

M76

[H]F Junkie
Joined
Jun 12, 2012
Messages
11,168
Yeah... no I still don’t get the warm fuzzies over using AMD especially in an enterprise space. AMD are too willing to push out platforms with bugs.
ME: Looks at the number of vulnerabilities found in Intel CPUs, then looks at this post, confused.

If anything, I'd not want Intel CPUs anywhere near my critical enterprise gear now.
 

Red Falcon

[H]F Junkie
Joined
May 7, 2007
Messages
10,555
Yeah... no I still don’t get the warm fuzzies over using AMD especially in an enterprise space. AMD are too willing to push out platforms with bugs.
Not trying to defend AMD here, but what bugs are you talking about?
The last bug I remember hearing anything about was when ZEN 1 was originally released back in 2017, there was a compiling error that occurred on certain processors, and AMD replaced all of them for free.
Aside from firmware updates, I haven't seen many, if any, "bugs" on ZEN+ or ZEN 2, and if there are any, please point them out.

Intel has had how many CPU hardware exploits and flaws in the last 2 years?
What are they up to now, around 40+ exploits?

Each of which, I might add, has meant a minor to major performance hit in various areas. Hyper-Threading (SMT) on their CPUs is basically dead at this point without becoming a major security risk, and disabling it is only a partial mitigation anyway.
I absolutely would not trust any of their CPUs or platforms in enterprise at this point in time.
 

Lakados

2[H]4U
Joined
Feb 3, 2014
Messages
2,423
Yeah... no I still don’t get the warm fuzzies over using AMD especially in an enterprise space. AMD are too willing to push out platforms with bugs.
I can attest to that; I've had to roll back more than one BIOS “fix” because the bugs it introduced were worse than the ones it fixed. It took 6 months to get my EPYC 7551P systems stable, and 4 months of that was attributable to BIOS issues. I'm giving the new EPYCs a good lead time before I jump in, not that they are actually available yet.
 

Lakados

2[H]4U
Joined
Feb 3, 2014
Messages
2,423
Not trying to defend AMD here, but what bugs are you talking about?
The last bug I remember hearing anything about was when ZEN 1 was originally released back in 2017, there was a compiling error that occurred on certain processors, and AMD replaced all of them for free.
Aside from firmware updates, I haven't seen many, if any, "bugs" on ZEN+ or ZEN 2, and if there are any, please point them out.

Intel has had how many CPU hardware exploits and flaws in the last 2 years?
What are they up to now, around 40+ exploits?

Each of which, I might add, has meant a minor to major performance hit in various areas. Hyper-Threading (SMT) on their CPUs is basically dead at this point without becoming a major security risk, and disabling it is only a partial mitigation anyway.
I absolutely would not trust any of their CPUs or platforms in enterprise at this point in time.
My EPYC servers had bugs with power states, RAM stability, I/O, and temperature reporting. There were also incompatibility issues with Hyper-V for a brief while where the virtual machines would just soft-lock. All were resolved over a few months with BIOS and driver updates.
 
Joined
Apr 29, 2002
Messages
2,437
My EPYC servers had bugs with power states, RAM stability, I/O, and temperature reporting. There were also incompatibility issues with Hyper-V for a brief while where the virtual machines would just soft-lock. All were resolved over a few months with BIOS and driver updates.
Let's also be fair here: BIOS and drivers are also the responsibility of the manufacturer, be it Dell, HP, Supermicro, IBM, or otherwise. I've had more than enough BIOS issues on HP servers and Intel chipsets that our Dells have not.
 

Lakados

2[H]4U
Joined
Feb 3, 2014
Messages
2,423
Let's also be fair here: BIOS and drivers are also the responsibility of the manufacturer, be it Dell, HP, Supermicro, IBM, or otherwise. I've had more than enough BIOS issues on HP servers and Intel chipsets that our Dells have not.
But AMD still supplies the core of those BIOS updates, and the manufacturer has little more to do than add the pieces needed to match their UI. And when I was updating the drivers from the Dell site, they were the same AMD installers, matching the versions listed on the AMD site.
 
Joined
Apr 29, 2002
Messages
2,437
But AMD still supplies the core of those BIOS updates, and the manufacturer has little more to do than add the pieces needed to match their UI. And when I was updating the drivers from the Dell site, they were the same AMD installers, matching the versions listed on the AMD site.
Oh yeah, I agree. No matter the industry, the end user is the tester. An apology will always be cheaper than QA.
 

Lakados

2[H]4U
Joined
Feb 3, 2014
Messages
2,423
Oh yeah, I agree. No matter the industry, the end user is the tester. An apology will always be cheaper than QA.
Still gonna pull the trigger on another one though, just waiting to get some numbers back. The EPYC 7502P should be a nice little unit for an accounting server upgrade :)
 

D-EJ915

[H]ard|Gawd
Joined
Jan 31, 2003
Messages
1,182
Yep. I can't tell you how many times I've had to update HP firmware for SAS controllers and other things to correct various issues on Intel based servers.
At this point I'd take firmware updates over replacing LSI cards because their heatsinks get too hot, melt the spacers, and fall off, causing RAID corruption. After the 4th time we pulled the servers out of the rack and threw them in the garbage. Never use Intel-branded servers, ever. lol The new ones are Dell/HP and have been fine, though they are all on the Xeon Silver platform. Haven't used any EPYC yet since clients are slow to adopt new platforms.
 

Dan_D

Extremely [H]
Joined
Feb 9, 2002
Messages
56,450
At this point I'd take firmware updates over replacing LSI cards because their heatsinks get too hot, melt the spacers, and fall off, causing RAID corruption. After the 4th time we pulled the servers out of the rack and threw them in the garbage. Never use Intel-branded servers, ever. lol The new ones are Dell/HP and have been fine, though they are all on the Xeon Silver platform. Haven't used any EPYC yet since clients are slow to adopt new platforms.
HP's P8xx and P4xx series controllers are HP branded LSI cards that do the same thing. I have had to have HP replace dozens of them.
 

D-EJ915

[H]ard|Gawd
Joined
Jan 31, 2003
Messages
1,182
HP's P8xx and P4xx series controllers are HP branded LSI cards that do the same thing. I have had to have HP replace dozens of them.
That sucks; I hate those stupid shitty clips they use on those things. Thanks for the heads-up about them. I wonder if the new Dells still use the thermal adhesive on the heatsinks.
 

vxspiritxv

[H]ard|Gawd
Joined
Feb 10, 2001
Messages
1,526
Just got my servers: a 7702P running ESXi with a Windows VM spanked my water-cooled, auto-overclocked 3960X in Cinebench R15. Just waiting on 16x Micron 9300 MAX 6.4TB drives. Hope I spank LTT's benchmarks. :rolleyes:
 

blandead

Limp Gawd
Joined
Nov 6, 2010
Messages
280
Comparing this to an Intel/ESXi combo...

Unless you're on the latest ESXi with vSphere 6.7 Update 3, using the better-optimized performance mitigation kernel option, guess what happens when an Intel CPU goes over 70% usage: you lose performance... 30% if you max the CPU out, or 1% for each percentage point over 70. Amazing stuff for SQL Server. Really great investment going with Intel servers.
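As a rough sketch of the penalty being described (assuming, from the figures in the post, a linear ~1% loss per percentage point of CPU usage above 70%, capping at ~30% when the CPU is maxed out):

```python
def mitigation_penalty(cpu_usage_pct: float) -> float:
    """Estimated performance loss (%) on the older mitigation scheduler,
    per the rule of thumb above: ~1% per point of usage over 70%."""
    return max(0.0, min(cpu_usage_pct, 100.0) - 70.0)

# Below 70% usage there's no penalty; it ramps linearly to 30% at full load.
for usage in (50, 70, 85, 100):
    print(f"{usage}% CPU -> ~{mitigation_penalty(usage):.0f}% performance loss")
```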
 
Joined
Apr 29, 2002
Messages
2,437
HP's P8xx and P4xx series controllers are HP branded LSI cards that do the same thing. I have had to have HP replace dozens of them.
Same issues with the P4xx controllers in my Gen 7s and 8s. We've also had a sudden rash of cache module and cache battery failures. PITA.
 

Dan_D

Extremely [H]
Joined
Feb 9, 2002
Messages
56,450
That sucks, I hate those stupid shitty clips they use on the things. Thanks for the heads up about them. I wonder if the new dells still use the thermal adhesive on the heatsinks.
Don't know, but I haven't seen the issue with the Dells nearly as much.

Same issues with the P4xx controllers in my Gen 7s and 8s. We've also had a sudden rash of cache module and cache battery failures. PITA.
Yeah, those problems are things I have seen all too often.
 

IdiotInCharge

[H]F Junkie
Joined
Jun 13, 2003
Messages
14,533
One of our programs insists on using HP servers, for which I do not see the appeal regardless of the CPU sourced.
Their RAID controller firmware is cumbersome, and the documentation is bare-minimum.
Dell wherever possible, please.
 

Lakados

2[H]4U
Joined
Feb 3, 2014
Messages
2,423
At this point I'd take firmware updates over replacing LSI cards because their heatsinks get too hot, melt the spacers, and fall off, causing RAID corruption. After the 4th time we pulled the servers out of the rack and threw them in the garbage. Never use Intel-branded servers, ever. lol The new ones are Dell/HP and have been fine, though they are all on the Xeon Silver platform. Haven't used any EPYC yet since clients are slow to adopt new platforms.
LSI is the Devil
 

Lakados

2[H]4U
Joined
Feb 3, 2014
Messages
2,423
One of our programs insists on using HP servers, for which I do not see the appeal regardless of the CPU sourced.
Their RAID controller firmware is cumbersome, and the documentation is bare-minimum.
Dell wherever possible, please.
Yeah, the documentation on their PERC cards is great.
 