AMD Ryzen Threadripper Spec Leaks

Wow, I'm really interested. It looks similar to the NCASE M1, which I love. This might be the case for me.



I just started looking 5 minutes ago. I already have an idea of what I want, but I don't think I'll reach $5k on it. Probably around $4k though; it's mostly going to be a workstation to crunch simulations.
They also market a smaller ATX unit: the Cerberus X. It is just shy of 20 liters and is probably the smallest you can practically get for ATX. Wanted to share in case you needed the extra GPU slots, DIMM slots, etc.
 
These prices are going to leave the door open for Intel HEDT, with its better-performing IPC and substantially higher clocks. I was really hoping to see the 32-core come in at under $1,500.
Ahh but they have scope to drop the price when Intel HEDT comes out.

They don't know what price Intel will release at, and equally they have supply-chain dominance at the moment since Intel isn't there with their product. Nothing like beating the competition to market with a price and then slapping them with a reduction when they appear.
 
Guess Kyle did not time travel into the future and come back with the answer to what the "W" in the model name means. :cry:

Surely it means Workstation.

According to AMD the WX series are designed for “creators and innovators”, whereas the X series are for “enthusiasts and gamers”.
 
Crazy thought, but maybe AMD didn't release an 8/16 TR4 CPU because the 12/24 CPU replaced it in the TR4 lineup.

What I mean is, maybe there really is a 10-12-core AM4 2800X part out there to combat Intel's Z390 8-core part.

I'm pretty sure it's been debated here many times why that would not be a possibility, but it does make me wonder! AMD certainly wouldn't want an 8-core TR4 2000-series CPU if they knew they had a 12-core AM4 CPU in the pipeline.

Not gonna happen. There are only 8 cores in the die.
 
My initial list was
CPU - AMD 2990X - $1800-ish
Corsair Vengeance LPX 128GB DDR4 DRAM 3600MHz C18 Kit, Black - $1800
ASUS ROG ZENITH EXTREME AMD Ryzen Threadripper TR4 DDR4 M.2 U.2 X399 E-ATX HEDT Motherboard - $428.50
CORSAIR AXi Series, AX1200i - $314.56
(8) CableMod ModFlex Right Angle SATA 3 Sleeved Cables (Red) 30cm - $63.20
CableMod PSU Cable kit $99.99
(1) Samsung 960EVO 500GB M.2
(1) Asus Strix 1080Ti OC - Already have
(3) HGST 8TB HDDs - Already have

but now that I saw it, I want the EK-FB ASUS X399 GAMING RGB Monoblock to cool the VRMs as well as the CPU, which will require purchasing all the crap to do a custom loop. If I go that route I may as well get a block for the 1080Ti as well so now I need to either buy a second loop for the card or get some ungodly-large radiator which means I need an even larger case. (I won't be using my spare TT Core P200)

I want this case: https://www.amazon.com/Phanteks-Ent...&sr=8-2&keywords=phanteks+enthoo+series+prime
 
There's absolutely a market for very high clock speed multi-core CPUs. There's a reason the Platinum Xeons are so expensive.

They'd be silly if they didn't do something with these clocks on a server board. Even for the (many) niche use cases out there, the cloud vendors would LOVE them.

I need 32 cores of the fastest CPU I can get (the current ones are too slow), and getting the extra DIMM slots of the dual-CPU setup is great. Any more cores and I just hit the limit of my software.

The cloud is most certainly NOT the area for ultra high performance chips. Efficiency drops as speed increases. And it's all about efficiency.

https://storageservers.wordpress.com/2016/07/06/what-is-white-space-and-gray-space-in-data-centers/

That said, even supercomputers don't use the super-fast stuff, as most of the processing power is reserved for MIMD matrix ops to solve big simulations, which take the form of massive linear equations and PID feedback recursion loops.
 
Curious to see how Intel will respond.

2920x = 7920x at 1/2 launch price.
2950x = 7960x at 1/2 launch price.

A 2900x at $450 would still be nice for those looking for a long upgrade path.
 
The cloud is most certainly NOT the area for ultra high performance chips. Efficiency drops as speed increases. And it's all about efficiency.

https://storageservers.wordpress.com/2016/07/06/what-is-white-space-and-gray-space-in-data-centers/

That said, even supercomputers don't use the super-fast stuff, as most of the processing power is reserved for MIMD matrix ops to solve big simulations, which take the form of massive linear equations and PID feedback recursion loops.

People pay a lot for the faster machines in a cloud context, and it exceeds the threading tax that you have to charge. An F2 in Azure, for example, has a premium that far exceeds the baseload margins.

It's never going to be a core volume driver, but there is not-insignificant demand. More importantly, and to my point, cost-effective high GHz is something AMD can potentially play in that they couldn't before.

Thanks for the lesson though
 
Strange, the 2920X is a lot lower priced than the 1900X yet has the same number of cores and base clocks and a higher boost clock.
The 2950X vs 1950X also. The 2950X is cheaper, with a faster base clock and an even faster boost clock.
 
Interesting that AMD actually beat Intel to the punch with HEDT CPUs, while Intel has nothing yet to compete against AMD's top-dog CPU, not counting that 28-core Xeon CPU they showed off, which was laughable at best.

If Intel were to release a 32-core CPU right now, it would most likely cost twice the price. $1,799 is not exactly cheap either, but it's better than Intel's pricing record.
 
Also, why is everyone making comments about the 250 W TDP? Just curious. It is 32 cores; I feel like that's very reasonable. The 32-core EPYC is a 180 W TDP part at lower frequencies.

Well, modern CPUs typically run at around 1.2-1.4 V... To generate 250 W of what is basically waste heat is a pretty hefty problem for your system to deal with.

Consider this, one of those awesome heat lamps that gets put in expensive bathrooms to warm you as you get in/out of the shower are 250 watt bulbs and their heat output is obviously less than 250 watts. Basically, a system designed to heat an entire room and a 200lb body is putting off less heat than this cpu under normal loads. That is pretty fricken crazy.
 
Since AMD is fitting the top tier of Threadripper 2 with 32 cores on 4 dies (where only 2 dies handle accessing the RAM), does anyone know if the 16-and-fewer-core variants are dropping down to only 2 dies for 16 cores?

I am hopeful that AMD will make the 16-core variant use only 2 dies with 8 cores each, so that the processor doesn't have half of its cores dealing with high latency whenever the cores without memory channels need access to RAM.
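The latency worry above can be sketched with a toy model. The nanosecond figures and the even spread of accesses are illustrative assumptions, not AMD's actual numbers:

```python
# Toy model of the cross-die memory latency concern: some dies have their own
# memory channels ("local" access), others must hop through a neighbour die.
# LOCAL_NS and REMOTE_NS are made-up illustrative figures, not AMD specs.

LOCAL_NS = 80
REMOTE_NS = 140

def average_latency(dies_with_memory: int, total_dies: int) -> float:
    """Average RAM latency if work is spread evenly across all dies."""
    dies_without = total_dies - dies_with_memory
    return (dies_with_memory * LOCAL_NS + dies_without * REMOTE_NS) / total_dies

# 16 cores on 2 dies, both wired to RAM: every access is local.
print(average_latency(dies_with_memory=2, total_dies=2))   # 80.0

# 32 cores on 4 dies where only 2 have memory channels: half the
# cores pay the cross-die penalty on every access.
print(average_latency(dies_with_memory=2, total_dies=4))   # 110.0
```

Even in this crude model, the 4-die layout's average latency climbs noticeably, which is exactly why a 2-die 16-core part would be preferable.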
 
I justified spending extra on an x99 system and 6 cores thinking it was for futureproofing. Shit.
AMD guys will tell you otherwise, but what do you need more than 6/12 for right now? Are you a prosumer? Scientist? Otherwise there is basically no software that will take advantage of 12 threads. Unless of course you were trying to grow your e-peen, then... well, yeah, you have a tiny e-peen.
 
Well, modern CPUs typically run at around 1.2-1.4 V... To generate 250 W of what is basically waste heat is a pretty hefty problem for your system to deal with.

Consider this, one of those awesome heat lamps that gets put in expensive bathrooms to warm you as you get in/out of the shower are 250 watt bulbs and their heat output is obviously less than 250 watts. Basically, a system designed to heat an entire room and a 200lb body is putting off less heat than this cpu under normal loads. That is pretty fricken crazy.


Not quite as crazy as Intel's 5 GHz stunt on a 28-core that needed a 1 kW chiller to run.
 
Interesting that AMD actually beat Intel to the punch with HEDT CPUs, while Intel has nothing yet to compete against AMD's top-dog CPU, not counting that 28-core Xeon CPU they showed off, which was laughable at best.

If Intel were to release a 32-core CPU right now, it would most likely cost twice the price. $1,799 is not exactly cheap either, but it's better than Intel's pricing record.
It's because Intel is making smart business decisions. Better IPC on a smaller core count works just fine in virtualization (which is where I'm sure AMD's salespeople are going with this; otherwise, why the fuck would you need that for anything other than shits and giggles?). Since Intel saturated the market during their supremacy, and orgs typically drag their feet for generations of CPUs unlike guys like us, it's gonna be tough for AMD to really break through using the "more cores" strategy. Unless of course someone starts writing software that efficiently utilizes all the cores to show the value of the upgrade... then these will be selling like hotcakes.
 
Since AMD is fitting the top tier of Threadripper 2 with 32 cores on 4 dies (where only 2 dies handle accessing the RAM), does anyone know if the 16-and-fewer-core variants are dropping down to only 2 dies for 16 cores?

I am hopeful that AMD will make the 16-core variant use only 2 dies with 8 cores each, so that the processor doesn't have half of its cores dealing with high latency whenever the cores without memory channels need access to RAM.
Well, that's what the 1950X does: two dies for 16 cores and 2 dummy dies to even out the heatspreader.

So I'm assuming it will be the same? The 2990WX will just be like EPYC, with 4 active dies and no dummy dies?
 
Since AMD is fitting the top tier of Threadripper 2 with 32 cores on 4 dies (where only 2 dies handle accessing the RAM), does anyone know if the 16-and-fewer-core variants are dropping down to only 2 dies for 16 cores?

I am hopeful that AMD will make the 16-core variant use only 2 dies with 8 cores each, so that the processor doesn't have half of its cores dealing with high latency whenever the cores without memory channels need access to RAM.

My guess is it'll be 2 cores per CCX, 4 cores per die, given the higher boost clock, but we'll find out for sure soon.
 
People pay a lot for the faster machines in a cloud context, and it exceeds the threading tax that you have to charge. An F2 in Azure, for example, has a premium that far exceeds the baseload margins.

It's never going to be a core volume driver, but there is not-insignificant demand. More importantly, and to my point, cost-effective high GHz is something AMD can potentially play in that they couldn't before.

Thanks for the lesson though

Again, no. It's about total cost of ownership (TCO). If you get lower TCO out of a slower, more efficient machine, then you buy that machine. The exact configurations of these machines are a closely guarded secret by vendors like Amazon, Google, and Microsoft. Power through a chip is (V*V)/R. As you increase voltage to get better speeds, the power goes up quadratically. If you get 20% better power efficiency with a 10% performance loss, you still win in the end.

Those super-high-end Xeons are used in professional workstations: extra-large spreadsheets, local simulations, video production, and CAD-type applications. These kinds of tasks don't work well in the cloud.
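The (V*V)/R point can be made concrete with a quick sketch. Dynamic CMOS power scales roughly as C·V²·f, and higher clocks usually require higher voltage, so power grows much faster than performance. All numbers below are illustrative, not measurements of any real chip:

```python
def dynamic_power(voltage: float, freq_ghz: float, c: float = 10.0) -> float:
    """Rough CMOS dynamic power model: P ~ C * V^2 * f (arbitrary units)."""
    return c * voltage ** 2 * freq_ghz

base = dynamic_power(voltage=1.0, freq_ghz=3.0)

# Assume a +10% clock bump also needs +10% voltage:
fast = dynamic_power(voltage=1.1, freq_ghz=3.3)

print(round(fast / base, 3))  # 1.331: ~33% more power for a 10% speedup
```

That asymmetry is the whole TCO argument: a 20% efficiency gain can easily be worth a 10% performance loss across a fleet.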
 
It's because Intel is making smart business decisions. Better IPC on a smaller core count works just fine in virtualization (which is where I'm sure AMD's salespeople are going with this; otherwise, why the fuck would you need that for anything other than shits and giggles?). Since Intel saturated the market during their supremacy, and orgs typically drag their feet for generations of CPUs unlike guys like us, it's gonna be tough for AMD to really break through using the "more cores" strategy. Unless of course someone starts writing software that efficiently utilizes all the cores to show the value of the upgrade... then these will be selling like hotcakes.
1. Intel was brilliant with that 28-core reveal.
2. It is for shits and giggles, really. They're selling them for servers; why not offer them to enthusiasts?
3. Xeon has been losing market share (very little, but still) for a reason. Just refer to Intel's comments on holding onto market share in the segment.
4. Someone has to build it for them (software designers) to come.
If AMD doing crazy things means Intel will be pushed, then it's good for all of us.
 
Well, modern CPUs typically run at around 1.2-1.4 V... To generate 250 W of what is basically waste heat is a pretty hefty problem for your system to deal with.

Consider this, one of those awesome heat lamps that gets put in expensive bathrooms to warm you as you get in/out of the shower are 250 watt bulbs and their heat output is obviously less than 250 watts. Basically, a system designed to heat an entire room and a 200lb body is putting off less heat than this cpu under normal loads. That is pretty fricken crazy.
The closest Intel procs I could find (the 8180 and 8176) were 165-205 W TDP, with 12.5% fewer cores and a 9.5% lower max single-core boost of 3.8 GHz. If you multiply the 205 W TDP by 120% you get about 250 W, and realistically you could expect more than that. Of course, they aren't on the same process technology, but even at 10 nm it wouldn't be much better.
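The back-of-the-envelope scaling above works out as follows (treating TDP as roughly linear in core count and clock, which is a rough approximation, not a real power model):

```python
xeon_tdp = 205  # W, the top of the 165-205 W range quoted above

# ~12.5% more cores and ~9.5% more boost clock folded into a single
# "multiply by 120%" fudge factor, as in the post:
estimate = xeon_tdp * 1.2

print(round(estimate))  # 246, i.e. "about 250 W"
```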
 
If you're buying an $1,800 32-core CPU, I think you'll know what it takes to dissipate 250 W, right?
 
Since AMD is fitting the top tier of Threadripper 2 with 32 cores on 4 dies (where only 2 dies handle accessing the RAM), does anyone know if the 16-and-fewer-core variants are dropping down to only 2 dies for 16 cores?

I am hopeful that AMD will make the 16-core variant use only 2 dies with 8 cores each, so that the processor doesn't have half of its cores dealing with high latency whenever the cores without memory channels need access to RAM.

My guess is the WX models have four enabled dies, whereas the X models have two dies plus two dummy dies (as in the first gen).
 
Looks good to me, though I'll be waiting for Zen 2 for personal use. Almost a killer proposition, really.
 
Again, no. It's about total cost of ownership (TCO). If you get lower TCO out of a slower, more efficient machine, then you buy that machine. The exact configurations of these machines are a closely guarded secret by vendors like Amazon, Google, and Microsoft. Power through a chip is (V*V)/R. As you increase voltage to get better speeds, the power goes up quadratically. If you get 20% better power efficiency with a 10% performance loss, you still win in the end.

Those super-high-end Xeons are used in professional workstations: extra-large spreadsheets, local simulations, video production, and CAD-type applications. These kinds of tasks don't work well in the cloud.

To get lower TCO as a customer, you finish your job as soon as possible and shut the workload down, or ideally don't use IaaS at all. As a provider, it's about the utilisation ratio. As a customer, even reserved IaaS instances cost more than buying a server in almost all scenarios, especially with large machines. As you are implicitly calling out, there is a tax you pay for large VMs, which is a necessary part of the economic model because of what it does to your platform (it destroys the consolidation ratio because of the CPU scheduling).


They're not super secret. They literally tell you what processors they run for the IaaS stuff. The energy-efficiency stuff at a hardware level isn't particularly secret either; we have these meetings and we all talk about it. I used to go. You get to a certain level and scale where the people with your problems are vanishingly few, so you talk. The white-box OEMs are the same people and are building to basically the same spec. The secret sauce is the management, and a bit of behavioural economics. They don't talk about that, don't talk about numbers, don't even talk about the size of the teams, but they do talk about hardware. Before I did what I do now I was heavily involved in web-scale stuff, so I know all this. I also know a fair few of the major actors at AWS, Facebook, Google, and GS.

My point is there is a gap for a high-thread-speed box, because some workloads need it: some things need to be done quickly and can't be distributed. You describe the generic best-case, most-efficient scenario, but there are others; that's why I can get TensorFlow ASICs, 4 TB of RAM, 8 V100s, all sorts of niches. Of which AMD is in none. It would be amazing if we could run the world on OCP Xeon D boxes, but you can't.

You can't get a high-GHz, high-core-count instance. So I have to buy $30,000 workstations, even though they are only used intermittently (but with a job that takes over a day). I'd happily spend the money I do for the GPU ones, which means a nice healthy profit for my vendor.

You don't need the didacticism, and no one is arguing.
 
Interesting that AMD actually beat Intel to the punch with HEDT CPUs, while Intel has nothing yet to compete against AMD's top-dog CPU, not counting that 28-core Xeon CPU they showed off, which was laughable at best.

If Intel were to release a 32-core CPU right now, it would most likely cost twice the price. $1,799 is not exactly cheap either, but it's better than Intel's pricing record.

If you think about it, it's a pretty brilliant plan.

Home users become the qualification testers for AMD's server chips. If big businesses see TR taking off in the home space with little to no issue, it would give them a green light to use AMD.
 
To get lower TCO as a customer, you finish your job as soon as possible and shut the workload down, or ideally don't use IaaS at all. As a provider, it's about the utilisation ratio. As a customer, even reserved IaaS instances cost more than buying a server in almost all scenarios, especially with large machines. As you are implicitly calling out, there is a tax you pay for large VMs, which is a necessary part of the economic model because of what it does to your platform (it destroys the consolidation ratio because of the CPU scheduling).


They're not super secret. They literally tell you what processors they run for the IaaS stuff. The energy-efficiency stuff at a hardware level isn't particularly secret either; we have these meetings and we all talk about it. I used to go. You get to a certain level and scale where the people with your problems are vanishingly few, so you talk. The white-box OEMs are the same people and are building to basically the same spec. The secret sauce is the management, and a bit of behavioural economics. They don't talk about that, don't talk about numbers, don't even talk about the size of the teams, but they do talk about hardware. Before I did what I do now I was heavily involved in web-scale stuff, so I know all this. I also know a fair few of the major actors at AWS, Facebook, Google, and GS.

My point is there is a gap for a high-thread-speed box, because some workloads need it: some things need to be done quickly and can't be distributed. You describe the generic best-case, most-efficient scenario, but there are others; that's why I can get TensorFlow ASICs, 4 TB of RAM, 8 V100s, all sorts of niches. Of which AMD is in none. It would be amazing if we could run the world on OCP Xeon D boxes, but you can't.

You can't get a high-GHz, high-core-count instance. So I have to buy $30,000 workstations, even though they are only used intermittently (but with a job that takes over a day). I'd happily spend the money I do for the GPU ones, which means a nice healthy profit for my vendor.

You don't need the didacticism, and no one is arguing.

V100s and ASICs are not Xeons. I was specifically talking Xeons and how inefficiency increases with clock speed.

The only area where I agree with you is that, for a given task, you want the lowest total energy consumed possible. That's the mark of efficiency.

There are applications for high speed, but they are in the minority. Google doesn't mind if you wait a few ms more for a result if it costs them less money in electricity.

When I talk TCO, I mean upfront cost plus operating electricity cost plus cooling cost plus maintenance cost. And I don't know what white and grey space you've been in, but all the access passes I have witnessed come with non-disclosure agreements. We can't even talk about the power requirements outside the job.

Intel even puts in special chip functions just for individual cloud companies, and they aren't documented, as they are trade secrets.

But if it isn't such a big secret please post all the internals of a Google web query server for us.
 
Well, modern CPUs typically run at around 1.2-1.4 V... To generate 250 W of what is basically waste heat is a pretty hefty problem for your system to deal with.

Consider this, one of those awesome heat lamps that gets put in expensive bathrooms to warm you as you get in/out of the shower are 250 watt bulbs and their heat output is obviously less than 250 watts. Basically, a system designed to heat an entire room and a 200lb body is putting off less heat than this cpu under normal loads. That is pretty fricken crazy.
Incandescent lights that aren't halogen are about 98-99% inefficient, so yes, it's basically the wattage on the box out as heat.
Peak TDP isn't always achieved. At least they meet their TDP, unlike the competing Xeons that exceed it by 25% in some workloads with less performance than EPYC.

https://www.anandtech.com/show/11544/intel-skylake-ep-vs-amd-epyc-7000-cpu-battle-of-the-decade/22
You'll notice they have a 180 W TDP EPYC with lower clocks, probably binned for max efficiency, i.e. well below the second critical point.
So you can have your cake and eat it too, if power is such a [H]uge issue for you.
P.S. you should check out the VRMs for the 5 GHz water-chiller stunt. That thing was probably pulling 500 W plus as a low-ball estimate, lol.
 
But if it isn't such a big secret please post all the internals of a Google web query server for us.

That's their app servers; I would imagine they are completely different from their GCP servers, given those (assuming it's the same style it was 6 years ago) would be absolute bollocks for IaaS. I know they had a desire to utilise their generalised capacity, but that's the whole PaaS argument and a different ball game. That said, Google are the one outfit I've never worked with, and I don't know any of them. Most of the AWS secret sauce is what they had to do to re-cut the network when they implemented Nitro and moved away from Cisco. The EC2 nodes were, to the point I was across it, still largely the same efficient 2-socket design, not that different from OCP, which after all came about because Facebook had the same problem Google did: how to own and operate hundreds of thousands, and then millions, of servers more cheaply. MS have less secret sauce. I did MS Foundation DCs earlier in the decade, and from what the people there will answer over a beer, I don't think they've really moved on that much.

I generally avoid going into the DCs, but I've built them, commissioned them, and used to work for Mike Manos. I'm not a neophyte in where the costs are. It's literally what I used to do.
 
I would make sure that monoblock uses the updated coldplate: Link

Otherwise you'd be taking a significant hit in performance. HardOCP's initial review

I'm already aware of the issues surrounding the TR CPU Block issues but thanks for the post. :)

Supposedly they held off on the release of the X399 monoblocks (which cover the CPU -and- the VRMs) until after they implemented the fixes for the issues that plagued the TR block.
 
250 W seems like a lot for even a prosumer socket. You would need at least a 240 mm radiator to keep it cool at stock.

My Phis run at right around 250 W and have no issue staying cool with a 1U heatsink and just a bit of airflow ;)
 