Forbes - "AMD Killing Off Threadripper Processors Suddenly Makes Perfect Sense" - ??!?!?

I read that article earlier. It does make some good points. However, I still think there is room for a workstation-oriented product even if AMD doesn't put any priority on one. Between the Ryzen 3000 series having 16 cores and increased Epyc production, it makes sense to refocus on those markets. HEDT and Threadripper are niche products at best.
 
At the very least, a model using Zen 2 chiplets for TR4 will be offered later this year or early in 2020, most likely a highly clocked 32-core part. My 5 cents.
 
With TR, anything over 16 cores is now performing work on CPU cores that do not have a local memory controller. This can still work for a variety of applications, but these are rarely consumer or even professional applications, so volume and cost are not in TR's favor.

Further, while the platform with all its PCIe lanes and memory bandwidth is certainly enticing, the real-world uses for those are even less common than uses for all of the cores. Generally speaking, consumer desktop platforms have more than enough bandwidth to keep their CPUs fed, and if more is needed for, say, storage, then either a discrete storage solution would be more ideal, or an actual Epyc system on the enterprise platform would be both a better fit for the workload and less expensive, supply-chain-wise, for AMD to provide.


As AMD grows volume and marketshare with Ryzen, instead of borging an enterprise solution down for workstation / HEDT use, perhaps they can introduce a separate line as Intel does for lower-end single-socket workstation and server use.
 
I have to agree with the article: anything Threadripper could do can now be covered either by the new Ryzens with PCIe 4.0, or by a lower series of EPYC when it launches at a later date. The only difference between the SP3 and TR4 sockets was an identifier pin that prevented one from working in the other; AMD could simply disable that check via a BIOS update, maintain a degree of compatibility, and make it so some EPYCs work in the TR4 socket to give current Threadripper users an upgrade path.
 
As AMD grows volume and marketshare with Ryzen, instead of borging an enterprise solution down for workstation / HEDT use, perhaps they can introduce a separate line as Intel does for lower-end single-socket workstation and server use.

As much as I'd like to see the Threadripper model continue, this was the same thought I had. Just seems to make sense if 16 core Ryzen on AM4 is really going to happen.
 
As much as I'd like to see the Threadripper model continue, this was the same thought I had. Just seems to make sense if 16 core Ryzen on AM4 is really going to happen.

Along with PCIe 4.0 (and DDR5), neither of which exists in the consumer space yet, just upping the number of available PCIe lanes on the platform is going to cover so many more 'power user' workloads as to almost entirely negate Threadripper's real advantage. Even with the same number of cores (16), Threadripper would still have a significant advantage on paper, and certainly one that can be revealed through benchmarks, but in terms of the heaviest day-to-day usage that said 'power users' would throw at the system, the results would simply be too close.

For an enthusiast example, even if you throw in multiple NVMe drives, multiple GPUs, 10GbE networking, and a SATA HBA for fun, all at PCIe 3.0 at most, you're still not really running out of bandwidth. Threadripper will have more, but workloads that stress a sixteen-core system won't really be bottlenecked by Ryzen 3000 on AM4 with X470 and DDR4, let alone the same with DDR5 and PCIe 4.0 peripherals.
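To put rough numbers on that, here's a back-of-the-envelope sketch in Python. The lane counts are illustrative assumptions for a "kitchen sink" build, not any specific board, and the per-lane figures are the usual post-encoding PCIe 3.0/4.0 values.

Code:
# Rough lane/bandwidth tally for a hypothetical enthusiast loadout.
# Per-lane throughput: PCIe 3.0 ~0.985 GB/s, PCIe 4.0 ~1.969 GB/s (after encoding).

PCIE3_GBPS_PER_LANE = 0.985
PCIE4_GBPS_PER_LANE = 1.969

# Illustrative lane assignments, not a specific motherboard layout
devices = {
    "GPU #1": 16,
    "GPU #2": 8,
    "NVMe SSD #1": 4,
    "NVMe SSD #2": 4,
    "10GbE NIC": 4,
    "SATA HBA": 8,
}

total_lanes = sum(devices.values())
print(f"Lanes requested: {total_lanes}")
print(f"Aggregate at PCIe 3.0: {total_lanes * PCIE3_GBPS_PER_LANE:.1f} GB/s")
print(f"Aggregate at PCIe 4.0: {total_lanes * PCIE4_GBPS_PER_LANE:.1f} GB/s")

# Few of these devices are busy at the same instant, which is why a mainstream
# platform with far fewer CPU lanes rarely feels bandwidth-starved in practice,
# and why PCIe 4.0 (double per-lane throughput) narrows the gap further.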
 
I personally think we'll see "Ryzen Threadripper" die off and be replaced with "Epyc Threadripper" in its place.
I think they can drop Threadripper altogether and just bring it in with the EPYC P-series chips; they are single-socket only already.
 
Perhaps EPYC WX platform for workstations and limited black edition unlocked single socket chips.
 
Wonder if AMD can do an X580 board with more PCIe lanes on the chipset and eight DIMM slots to serve the HEDT market. Of course, the lack of quad channel will still be a problem, but the lower cost may be worth the tradeoff.
 
I don't think people need to be afraid for 7nm Zen 2; it will come to TR, just much later (it should be there by the time AMD does Zen 2+). If they released a Zen 2 TR with 32 and/or 64 cores now, it would cannibalize their current stock of Zen+ TR. Be patient and wait.
 
Wonder if AMD can do an X580 board with more PCIe lanes on the chipset and eight DIMM slots to serve the HEDT market. Of course, the lack of quad channel will still be a problem, but the lower cost may be worth the tradeoff.

I can only imagine the headaches trying to run 4 DIMMs per channel since 2 DIMMs per channel can be a pain.
 
I can only imagine the headaches trying to run 4 DIMMs per channel since 2 DIMMs per channel can be a pain.

I'm currently running 64GB (4x16GB) of DDR4 3200 @ 3200 in my recently built rig for VM work.

Setting XMP/DOCP alone wasn't fully stable (it wouldn't boot properly after a few power on/off cycles), but either backing off the speed to 3133 or 3066 OR adding 0.01 V to the memory seems to have done the trick.

With a year's time to improve the IMC, I'm guessing it won't be impossible to do 4 DIMMs per channel at that speed, but it would take quite a bit of work.
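For a rough sense of what those fallback speeds actually give up, here's a minimal Python sketch of theoretical dual-channel bandwidth and absolute CAS latency at each speed. CL16 is an assumed timing for illustration, not necessarily this kit's.

Code:
# Theoretical cost of backing a DDR4 kit off from 3200 MT/s.
# Assumes a 64-bit (8-byte) channel, two channels, and CL16 for illustration.

def dual_channel_bandwidth_gbs(mt_s: int) -> float:
    return mt_s * 8 * 2 / 1000       # MT/s * 8 bytes/transfer * 2 channels

def cas_latency_ns(mt_s: int, cl: int) -> float:
    return cl * 2000 / mt_s          # one memory clock = 2000 / MT/s nanoseconds

for speed in (3200, 3133, 3066):
    print(f"DDR4-{speed}: {dual_channel_bandwidth_gbs(speed):.1f} GB/s peak, "
          f"CL16 = {cas_latency_ns(speed, 16):.2f} ns")

# Dropping from 3200 to 3066 costs roughly 4% of peak bandwidth, which is why
# backing off a bin or two is a cheap way to stabilize a fully populated board.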
 
I read that article earlier. It does make some good points. However, I still think there is room for a workstation-oriented product even if AMD doesn't put any priority on one. Between the Ryzen 3000 series having 16 cores and increased Epyc production, it makes sense to refocus on those markets. HEDT and Threadripper are niche products at best.

That is true. While AMD puts its priority on desktop and server, that does not mean it is all over for Threadripper.
People are just too impatient these days; instant gratification.
 
I'm just happy they can put out something that represents more than the 1-10% gains in CPU performance we've seen with recent generations, at a price I can afford. I don't care what gets cancelled as long as the train keeps rolling. There's little doubt that my next build will be centered around an AMD CPU, and thanks to [H]ard forums I'll know more than just speed specs to pick it by. If it wasn't for threads like this and a few others I wouldn't have a clue about the memory controller or multi-core limitations and strengths. At the same time, if they do the usual die shrink and refresh I'll be happy for the power reduction and cooling optimizations. It'll be more in line with what I was used to with older Intels.

I admit at first I wanted to think someone should do an article called "Forbes should consider killing itself," but then I read the article and it did make some valid points. Thanks for the post.
 
I'm currently running 64GB (4x16GB) of DDR4 3200 @ 3200 in my recently built rig for VM work.

Setting XMP/DOCP alone wasn't fully stable (it wouldn't boot properly after a few power on/off cycles), but either backing off the speed to 3133 or 3066 OR adding 0.01 V to the memory seems to have done the trick.

With a year's time to improve the IMC, I'm guessing it won't be impossible to do 4 DIMMs per channel at that speed, but it would take quite a bit of work.

The fact that no one has done 3+ DIMMs per channel with conventional RAM since RAM moved from SDR to DDR1 suggests it's probably impossible. The faster you run the data bus, the harder it is to keep clean electrical signals as you connect more devices to it, and every RAM chip on a conventional* DIMM is connected to the bus. It's not a problem that's getting any easier to solve; I haven't seen anything recent either way, but a few years ago there was a fair amount of speculation that DDR5 would end up being single DIMM per channel only.

* Servers with huge amounts of memory do it by using a buffer chip on each DIMM, so that only one chip is connected to the memory bus instead of 9 or 18. Consumer RAM doesn't do this because the buffer increases cost and adds a significant amount of latency.
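A toy illustration of that loading argument in Python. The 16-chips-per-DIMM figure assumes a dual-rank non-ECC module and is only meant to show the scaling, not exact electrical behavior.

Code:
# Count the devices hanging off a channel's shared lines as DIMMs are added:
# unbuffered modules expose every DRAM chip, while buffered/registered modules
# present roughly one bus-facing load per DIMM.

CHIPS_PER_UNBUFFERED_DIMM = 16   # dual-rank non-ECC assumption; ECC adds more
LOADS_PER_BUFFERED_DIMM = 1      # the buffer/register chip fronts for the rest

for dimms_per_channel in (1, 2, 3, 4):
    unbuffered = dimms_per_channel * CHIPS_PER_UNBUFFERED_DIMM
    buffered = dimms_per_channel * LOADS_PER_BUFFERED_DIMM
    print(f"{dimms_per_channel} DIMM(s)/channel: "
          f"{unbuffered:2d} loads unbuffered vs {buffered} buffered")

# Every extra load degrades signal integrity at a given speed, which is why
# servers lean on buffered DIMMs and consumer boards stop at two per channel.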
 
I don't think people need to be afraid for 7nm Zen 2; it will come to TR, just much later (it should be there by the time AMD does Zen 2+). If they released a Zen 2 TR with 32 and/or 64 cores now, it would cannibalize their current stock of Zen+ TR. Be patient and wait.
It sounds like they have something they plan to release at some point; I just hope it's the same socket when they do.
 
With TR, anything over 16 cores is now performing work on CPU cores that do not have a local memory controller. This can still work for a variety of applications, but these are rarely consumer or even professional applications, so volume and cost are not in TR's favor.

Further, while the platform with all its PCIe lanes and memory bandwidth is certainly enticing, the real-world uses for those are even less common than uses for all of the cores. Generally speaking, consumer desktop platforms have more than enough bandwidth to keep their CPUs fed, and if more is needed for, say, storage, then either a discrete storage solution would be more ideal, or an actual Epyc system on the enterprise platform would be both a better fit for the workload and less expensive, supply-chain-wise, for AMD to provide.


As AMD grows volume and marketshare with Ryzen, instead of borging an enterprise solution down for workstation / HEDT use, perhaps they can introduce a separate line as Intel does for lower-end single-socket workstation and server use.
Oh brother. How long did you work on that one? Sigh..
 
The fact that no one has done 3+ DIMMs per channel with conventional RAM since RAM moved from SDR to DDR1 suggests it's probably impossible. The faster you run the data bus, the harder it is to keep clean electrical signals as you connect more devices to it, and every RAM chip on a conventional* DIMM is connected to the bus. It's not a problem that's getting any easier to solve; I haven't seen anything recent either way, but a few years ago there was a fair amount of speculation that DDR5 would end up being single DIMM per channel only.

* Servers with huge amounts of memory do it by using a buffer chip on each DIMM, so that only one chip is connected to the memory bus instead of 9 or 18. Consumer RAM doesn't do this because the buffer increases cost and adds a significant amount of latency.

Intel's Nehalem (1st gen Core i-series) was triple-channel on DDR3 unless I am completely misunderstanding you...
 
Intel's Nehalem (1st gen Core i-series) was triple-channel on DDR3 unless I am completely misunderstanding you...

Three channels, but still only 2 slots/DIMMs per channel on plain-DDR3-using X58.

Edit to add/clarify: each memory channel has its own path between the DIMMs and the controller, so there's no limit to the number of channels from a signal integrity perspective; but multiple DIMMs on the same channel can interfere with each other because they're wired together, which limits the number of DIMMs per channel.
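The arithmetic behind "channels scale, extra DIMMs per channel don't", sketched in Python; the speed grades below are just representative examples for comparison.

Code:
# Peak theoretical bandwidth = channels * transfer rate * 8 bytes per transfer.
# A second DIMM on a channel adds capacity, not bandwidth.

def peak_gbs(channels: int, mt_s: int) -> float:
    return channels * mt_s * 8 / 1000

print(f"Dual-channel DDR4-3200:   {peak_gbs(2, 3200):.1f} GB/s")  # 51.2
print(f"Triple-channel DDR3-1333: {peak_gbs(3, 1333):.1f} GB/s")  # 32.0 (X58-era)
print(f"Quad-channel DDR4-3200:   {peak_gbs(4, 3200):.1f} GB/s")  # 102.4 (TR4-class)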
 
the latest leak detailed over at Wccftech claims that a 12-core 3rd Gen Ryzen CPU is quicker than the first gen 12-core Threadripper 1920X.

Wow! Who could have imagined that twelve Zen2 cores would be faster than twelve Zen/Zen+ cores?

And why isn't Forbes mentioning the main reason? Weak sales
 
Three channels, but still only 2 slots/DIMMs per channel on plain-DDR3-using X58.

Edit to add/clarify: each memory channel has its own path between the DIMMs and the controller, so there's no limit to the number of channels from a signal integrity perspective; but multiple DIMMs on the same channel can interfere with each other because they're wired together, which limits the number of DIMMs per channel.
So how is that different than quad channel? Or dual, for that matter?
 
So how is that different than quad channel? Or dual, for that matter?


Triple channel is halfway between dual and quad channel.

Adding more channels requires more CPU pins, and thus a larger/more expensive socket, plus more traces in the motherboard, making it more expensive as well. Adding more DIMMs to a channel degrades the signal quality and requires less aggressive timings than one DIMM per channel; no generation of DDR has been able to support more than 2 DIMMs per channel.
 
And why isn't Forbes mentioning the main reason? Weak sales
I definitely agree with this to an extent. I'm sure Threadripper sales are a small slice of the CPU sales pie for AMD, like single-digit percentages. R5 and R7 is where it's at for AMD. Given that Threadripper was supposedly an internal enthusiast pursuit that AMD never really planned from the beginning... are TR's sales, weak as they are, enough to legitimize a TR2 while pulling chips away from Epyc and R5/R7/R9?
Maybe yields are good enough that AMD can throw ~8% of their chips at a TR2.
I think my point is, if TR1 sold enough to make money and take market share from Intel, why not do it again?
But who knows.
 
So how is that different than quad channel? Or dual, for that matter?

There are two parts: channels and banks. In Nehalem, you had triple channel, dual bank (two DIMMs per channel, three total channels, up to 6 total DIMMs). Most of what we do today in the enthusiast market is dual channel, dual bank (4 total slots). Most enterprise systems use registered memory (which has, as mentioned before, buffer chips) to let you have more than 2 banks per channel (generally 3) for large RAM quantities. Large DIMMs (128 GB+) are also load-reduced, to help with the same issue (and if you're using them, you should ONLY use them; no mixing sizes then).

Then you get into rank, which is how the DIMM itself is designed (single rank vs dual rank - chips on one side or both, in general, based on how they're wired into the actual silicon).

Memory is fun!
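To make the channel/bank/rank terminology concrete, here's a small toy model in Python; the Nehalem-like numbers are illustrative, with assumed 4 GB dual-rank sticks.

Code:
# Toy model of a memory topology: channels, DIMMs per channel ("banks" in the
# post's usage above), and ranks per DIMM.

from dataclasses import dataclass

@dataclass
class MemoryTopology:
    channels: int
    dimms_per_channel: int
    ranks_per_dimm: int          # 1 = single-rank, 2 = dual-rank
    gb_per_dimm: int

    def total_capacity_gb(self) -> int:
        return self.channels * self.dimms_per_channel * self.gb_per_dimm

    def ranks_per_channel(self) -> int:
        # the controller cares about total ranks on a channel, not just DIMM count
        return self.dimms_per_channel * self.ranks_per_dimm

# Nehalem-like: triple channel, two DIMMs per channel, dual-rank 4 GB sticks
nehalem_like = MemoryTopology(channels=3, dimms_per_channel=2,
                              ranks_per_dimm=2, gb_per_dimm=4)
print(nehalem_like.total_capacity_gb())   # 24 (GB across 6 DIMMs)
print(nehalem_like.ranks_per_channel())   # 4 (ranks loading each channel)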
 
If I were AMD, given the goodwill Threadripper has garnered over its lifetime, I would rebrand the top-end Ryzen processors as Threadrippers.
 
If I were AMD, given the goodwill Threadripper has garnered over its lifetime, I would rebrand the top-end Ryzen processors as Threadrippers.
I would just give them a series of EPYCs so it's not only a physical upgrade but a branding one too.
 
If I were AMD, given the goodwill Threadripper has garnered over its lifetime, I would rebrand the top-end Ryzen processors as Threadrippers.

Instead of 12- and 16-core Ryzens being "Ryzen 9" (or whatever), calling them "Ryzen Threadripper" might well be a decent marketing stunt.
 
so, any official news or just an opinion piece?

It's opinion.

Though not an unreasonable one. Threadripper looked great when it was the cheapest way to get more than 8 cores.

But with 16 core Ryzen coming, Threadripper is likely dead below 24 cores, which might make it too niche to bother with.
 
Even as someone who was planning to buy a Zen 2-based Threadripper for my next "main" rig, I may be able to understand this IF AMD does it right. Looking at the Threadripper 2000 series today, its assets come down to quad-channel RAM, more PCIe lanes, a few tertiary/specialty features, and the cores of the TR4 chips themselves. Of course, there's a pretty big split between the TR4 "X" chips, having fewer cores and higher clocks, and the "WX" chips, having maximum cores but somewhat lower clocks.

Given I'd be building HEDT as my primary rig for gaming/encoding/general use, if I were going with the TR 2000 series, I'd likely get the fastest X series, as having 16 cores/32 threads with higher clocks is more meaningful for gaming and many other workloads than having 32 cores/64 threads at lower speed. At least at this point in development, I think those "WX" chips were well branded, as those core counts with lower frequencies are pretty much only viable for workstation/server-specific tasks. Now, to be honest, I don't know where things are going in the next few years. For instance, will AMD's kick in the pants encourage more software of ALL kinds to be developed for parallel workloads as much as possible, given that after years "stuck" on 4 cores/8 threads, most users now have more? Or will we hit a natural wall on desktop/general applications that can't be parallelized well and depend on frequency + IPC?

In any event, if AMD is confident enough in Zen 2 Ryzen to dispense with TR, I think they could do it, if it's done right, by granting formerly TR-level features to mainstream Ryzen... at least on high-end boards. For instance, if the rumor is true that X580 will run on the PCIe 4.0 (or even 5.0?) spec with more lanes (more than earlier Ryzen but fewer than TR, plus lanes on the chipset?), that will handle both bandwidth and lane count. What I don't know is whether they can port over the option of quad-channel memory (and possibly full ECC support for those who want it). Ideally it would be neat if they enabled quad channel for full power but also offered something equating to a dual-channel implementation as well. If the highest Ryzen has 16 cores/32 threads and high clocks (made higher by OCing), with the aforementioned upgrades, then maybe they really don't need a separate HEDT platform?

This may be a great move against Intel as well: by offering "HEDT features on a mainstream setup," AMD could have both a feature advantage and perhaps a MAJOR price advantage (even bigger than they already have with AMD TR vs Intel HEDT offerings). HEDT as a platform has always been more expensive all the way down (I know that as a guy who bought Nehalem X58 and currently enjoys Haswell-E X99), so though the highest-end Ryzen chips and boards would be expensive, they may not be AS expensive as dedicated HEDT with similar features.

Maybe this is a win for AMD, but I really hope they don't shy away from this for a while and then pop out later with a surprise that Threadripper 3 (or some equivalent Epyc Blaq branding, etc.) is suddenly scheduled for launch, after everyone figured the era had ended and purchased other hardware.
 
I am not too sure about AMD killing off the Threadripper line. I think this has more to do with EPYC demand for the upcoming 7nm EPYC Rome series. It just makes absolutely no sense to push Threadripper out on the new tech when companies are lining up to get the next EPYC processor, along with all the supercomputer deals they are winning.

They are going about it right: deliver the desktop side and the server side, hold off on Threadripper, and rethink that position. This really shows that they are expecting major demand for both and want to deliver on it. There is no need to take chips away from the EPYC side and repurpose them for Threadripper when they can sell those for much more.

AMD has the next winner on their hands, and they can finally make some damn money, kick Intel in the balls, and build on their server and desktop market share. It will all result in better GPUs for everyone who complains about the GPU side. lol.
 