Anyone else eagerly awaiting Threadripper 3960X results?

Seems like whenever a new memory standard first hits an actual motherboard, RAM prices for it are way off the charts. I'd wait a generation or two before moving to DDR5 just to avoid wasting money. Plus, when this happens, the speed difference from one generation to the next just isn't that significant initially; it grows over time, but not at first. So if history repeats, first-gen DDR5 will be super expensive and not much faster. Memory issues also abounded with both Intel and AMD on their first go-around with DDR4, and each took a year-plus to resolve.
 
Seems like whenever a new memory standard first hits an actual motherboard, RAM prices for it are way off the charts. I'd wait a generation or two before moving to DDR5 just to avoid wasting money. Plus, when this happens, the speed difference from one generation to the next just isn't that significant initially; it grows over time, but not at first. So if history repeats, first-gen DDR5 will be super expensive and not much faster. Memory issues also abounded with both Intel and AMD on their first go-around with DDR4, and each took a year-plus to resolve.

Exactly. Reading between the data lines, it seems like DDR5 is more of a cost-down, profit-up development for memory makers - with a small benefit in density, speed, and bandwidth for users - to compensate for the higher latency and much, much higher cost.
 
DDR5 will probably become very good over time, but my take is it won't be worth it the first go-around.
 
What would be great for enthusiasts would be the ability to disable alternate CPU cores so the heat load is spread evenly, for games that don't need so many threads.
Then the cores left running will overclock better and have more cache per core.
Even better if this can be controlled by software.

This could remedy the lower core clocks on chips with a high core count.
 
I use a processor affinity mask to load Steam games on my even cores:

C:\WINDOWS\System32\cmd.exe /c start /affinity 555 F:\Games\Steam\Steam.exe -no-browser +open steam://open/minigameslist
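
For anyone curious how that flag works: /affinity takes a hex bitmask where bit N enables logical processor N, so 555 hex (binary 010101010101) selects processors 0, 2, 4, 6, 8, and 10 on a 12-thread chip. A quick Python sketch of the same idea, illustrative only:

# Build an /affinity-style mask selecting the even logical processors,
# and decode an existing mask back into processor numbers.
def even_core_mask(num_logical):
    mask = 0
    for n in range(0, num_logical, 2):  # set bit N for every even processor
        mask |= 1 << n
    return mask

def decode_mask(mask):
    return [n for n in range(mask.bit_length()) if mask & (1 << n)]

print(hex(even_core_mask(12)))  # 0x555
print(decode_mask(0x555))       # [0, 2, 4, 6, 8, 10]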
 
Yes, I'm awaiting it, but I already have a REALLY strong idea of how badass these are going to be.

Blur Studio, who made the CGI for Terminator: Dark Fate, exclusively used third-gen Threadrippers to render all the scenery in the movie.

I posted a link in this same forum.

These chips are going to be awesome for creators and video workers. I do a lot of video work and would love to have one. Eyeballing the 32-core right now. Just posted my 3900X and board for sale on the forums. Anticipating a big performance uplift in this next gen. Here's to waiting for the 25th.
 
I use a processor affinity mask to load Steam games on my even cores:

C:\WINDOWS\System32\cmd.exe /c start /affinity 555 F:\Games\Steam\Steam.exe -no-browser +open steam://open/minigameslist
A good method.
I'm looking for a way to boost the overclock, although your method goes part way there due to less downclocking.

By software method I mean changing the BIOS's active-core settings easily from Windows, with a quick reboot to activate; hopefully no need to enter the BIOS setup itself.
Disabling cores at low level should reduce power draw a little more.
Use mobo tools to set the overclock.
AMD could allow profiles to do both in tandem.
 
People don't buy these to save power. All this discussion about power savings comes from folks who will never buy these chips. It better be ripping threads or else you wasted your hard-earned money. I owned two generations of Threadrippers. At idle they sip power. Gaming, they don't use much. Fire up some workloads and they use some mad wattage, but 90% of the time we're talking sips.
 
People don't buy these to save power. All this discussion about power savings comes from folks who will never buy these chips. It better be ripping threads or else you wasted your hard-earned money. I owned two generations of Threadrippers. At idle they sip power. Gaming, they don't use much. Fire up some workloads and they use some mad wattage, but 90% of the time we're talking sips.
The point is higher clocks for gaming, because why not have the best chip for all uses, or as near as dammit, if possible.
I didn't mention power savings for another reason.
 
The point is higher clocks for gaming, because why not have the best chip for all uses, or as near as dammit, if possible.
I didn't mention power savings for another reason.

Ok cool beans. That makes sense.
 
I need more PCIe lanes than the 3950X will provide. It's a real shame there isn't something between AM4 and TR4 on this front. I don't need 64 lanes, that is vast overkill, but I do want two x16 video cards (I plan to run VFIO/Project Looking Glass, so I need two x16 cards, not two x8 cards) and two NVMe drives.
What cards are you looking at? If it's a 5700 XT, doesn't it have PCIe 4.0? That would make 4.0 x8 just as fast as PCIe 3.0 x16 :). That's a lot of bandwidth though. Do you know if they have NVMe 4.0 x2 available? That would be just as fast as 3.0 x4 in theory. I need to read up more on current offerings, but my budget normally keeps me out of the bleeding-edge category, so I don't tend to stay as current since it's not normally my target.
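
The rough bandwidth math behind that, as a sketch using approximate per-lane figures after encoding overhead (PCIe 4.0 doubles 3.0's per-lane rate):

# Approximate one-direction PCIe bandwidth in GB/s per lane, post-encoding.
PER_LANE_GBPS = {"3.0": 0.985, "4.0": 1.969}

def link_bandwidth(gen, lanes):
    return PER_LANE_GBPS[gen] * lanes

print(link_bandwidth("4.0", 8))   # ~15.8 GB/s
print(link_bandwidth("3.0", 16))  # ~15.8 GB/s, i.e. 4.0 x8 matches 3.0 x16
print(link_bandwidth("4.0", 2))   # ~3.9 GB/s, matching 3.0 x4 for the NVMe case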
 
What cards are you looking at? If it's a 5700 XT, doesn't it have PCIe 4.0? That would make 4.0 x8 just as fast as PCIe 3.0 x16 :). That's a lot of bandwidth though. Do you know if they have NVMe 4.0 x2 available? That would be just as fast as 3.0 x4 in theory. I need to read up more on current offerings, but my budget normally keeps me out of the bleeding-edge category, so I don't tend to stay as current since it's not normally my target.

Why doesn't he/she buy X399? It's cheap as hell right now.
 
My point was to buy the platform for the future. Hell, if you could throw in a 1900X and wait, it would be worth it.
 
Why doesn't he/she buy X399? It's cheap as hell right now.

Just a / missing or something.

It isn't dead as far as support, just future chip releases. But I mean, a 2950X is no slouch; it would easily meet that person's needs for years to come.

X399 is a dead end. Not only that, but while powerful, 2nd generation Threadrippers are kind of crappy at gaming. If you have a solid X399 board now and want to upgrade, I'd wait for deals on 2970WX and 2990WX CPU's. No doubt they'll be deeply discounted once the 3960X and 3970X hit shelves. Assuming they have any availability.
 
  • Like
Reactions: mikeo
like this
So, no backward compatibility; that was my point. That's the issue for the incremental upgrader.
Edit: This issue hasn't been addressed, am I correct?
 
X399 is a dead end. Not only that, but while powerful, 2nd generation Threadrippers are kind of crappy at gaming. If you have a solid X399 board now and want to upgrade, I'd wait for deals on 2970WX and 2990WX CPU's. No doubt they'll be deeply discounted once the 3960X and 3970X hit shelves. Assuming they have any availability.

Dunno, if properly tweaked, they should perform similar to a 2700X. My 1950X performs about the same as a 1700X as long as I restrict games to running on only 8 of the cores, to avoid the performance penalties associated with NUMA.

You don't necessarily have to run in Game Mode to do that, either; there are apps like Process Lasso to restrict which cores you are running on. So games can stay on one NUMA node while you run everything else on the rest.
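
You can even script that kind of pinning yourself; a minimal sketch using the psutil package (the process name and core numbering here are assumptions, check your own topology first):

# Pin a running game to the first NUMA node's logical CPUs via psutil.
import psutil

GAME_EXE = "game.exe"         # hypothetical process name
NODE0_CPUS = list(range(16))  # assuming NUMA node 0 owns logical CPUs 0-15

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] == GAME_EXE:
        proc.cpu_affinity(NODE0_CPUS)  # restrict the game to node 0's cores
        print(f"Pinned PID {proc.pid} to CPUs {NODE0_CPUS}")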

With TR3 you won't have to deal with NUMA, which is a big plus, but I think the older TR chips will be an amazing deal once they go on sale. At least for me, 2700X-level performance in games is still very good.
 
Dunno, if properly tweaked, they should perform similar to a 2700X. My 1950X performs about the same as a 1700X as long as I restrict games to running on only 8 of the cores, to avoid the performance penalties associated with NUMA.

You don't necessarily have to run in Game Mode to do that, either; there are apps like Process Lasso to restrict which cores you are running on. So games can stay on one NUMA node while you run everything else on the rest.

With TR3 you won't have to deal with NUMA, which is a big plus, but I think the older TR chips will be an amazing deal once they go on sale. At least for me, 2700X-level performance in games is still very good.

They perform slightly worse than a 2700X. I've talked about this at length in various articles. You touched on part of the reason for this when you mentioned that restricting which cores it runs on helps. 1st and 2nd generation Threadripper CPU's have massive latency penalties for crossing CCX's. It's not just an issue of NUMA access, although that's a big part of it as well. Applications have to be NUMA-aware, and games aren't made that way. They are also sometimes more sensitive to memory latency, not just bandwidth. In other words, Threadripper's complexity hurts it where gaming is concerned.

I would know. I had a 2920X that I used for a while. I've done plenty of processor benches on the 2700X, TR 2920X, 9900K, and so on. That data is in my review of the Ryzen 9 3900X, which can be found here. In games, the Threadripper 2920X was dead last in the lineup in every situation. I did a more in-depth dive into Destiny 2 performance when AMD updated the AGESA code so Ryzen 3000 series CPU's could run it. Long story short, if you look at the frame rates, Threadripper is extremely erratic. The lows are lower, the highs are higher, and the average appears similar to that of better gaming chips like the 9900K. However, those erratic frame rates make for a less-than-stellar gaming experience, as I found out the hard way. It isn't capable of delivering a smooth gaming experience in Destiny 2 at 4K, but the 9900K can.

[Attached graph D3.png: Destiny 2 frame rates, 9900K vs. Ryzen 9 3900X]


The two processors here are the 9900K (Maximus XI APEX) and the AMD Ryzen 9 3900X (MEG X570 GODLIKE). You can see lower lows, higher highs and almost the same averages. The Threadripper was actually worse, but I don't have the graph for that. The low was 26FPS manually overclocked and 36FPS with PBO on. It nearly hit 200FPS as I recall for the high.

These problems are an outlying example, but they illustrate the point. I've seen similar behavior in other games, but it's less pronounced elsewhere than in Destiny 2. Essentially, Destiny doesn't like AMD CPU's very much. It's a similar story with the 3900X vs. the 9900K. The basic frame rates tell you that Destiny is as fast or faster on the 3900X, but the frame times, minimums, and maximum frame rates indicate the same erratic behavior. However, it's not even noticeable compared to 2nd generation Threadripper CPU's. I will say this is something I found to be a larger issue at 4K. At 3440x1440, I didn't see as many problems, but I was also using G-Sync at the time, which made for a smoother experience anyway.
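
To make "erratic" concrete: averages hide this, which is why minimums and percentile lows matter. A toy illustration with made-up numbers (not my benchmark data):

# Two runs with the same average FPS can feel completely different
# once you look at the minimum and the ~1% low.
smooth  = [60] * 100                      # steady 60 FPS
erratic = [26, 36, 118] * 25 + [60] * 25  # wild swings, same 60 FPS average

def summarize(fps):
    worst = sorted(fps)
    return (f"avg={sum(fps)/len(fps):.0f} min={min(fps)} "
            f"max={max(fps)} ~1%low={worst[len(worst)//100]}")

print("smooth: ", summarize(smooth))   # avg=60 min=60 max=60 ~1%low=60
print("erratic:", summarize(erratic))  # avg=60 min=26 max=118 ~1%low=26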

The Ryzen 7 2700X can provide a reasonably good gaming experience at times. However, there are also some specific situations where I'm going to say that it absolutely cannot deliver. At 4K, that's definitely the case; at least in Destiny 2. Frankly, that's the game I spend the most time playing as of late. My 2700X is actually in my girlfriend's machine, but she's only at 2560x1600. I'd have put a 9600K in her machine, but I don't have a mini-ITX motherboard for LGA 1151 CPU's at the moment.

Basically, if Threadripper was acceptable for gaming, I'd still be running one. I had one in my machine and switched it out because the gaming experience it provided wasn't good enough. I actually bought a different processor because it was so bad. If you're not gaming at 4K, or your priorities are geared towards multi-threaded workloads, then yes: going with an older X399 motherboard and a 1st or 2nd generation TR at a discount is probably the way to go.
 
TR3 is gonna be just as fast at gaming as the 3900X, mark my words.

It very well could be; there aren't any real reasons to doubt it now that the 3700X is rocking as hard as it is, so the differences should just come down to RAM speed and core speed.

The biggest issues that Dan_D brings up above should also be addressed at this point, assuming that the dance between AMD and Microsoft has resulted in them coming to a complementary solution.
 
TR3 is gonna be just as fast at gaming as the 3900X, mark my words.

I wasn't talking about 3rd generation Threadripper CPU's. It might be, but that's going to be way
It very well could be; there aren't any real reasons to doubt it now that the 3700X is rocking as hard as it is, so the differences should just come down to RAM speed and core speed.

The biggest issues that Dan_D brings up above should also be addressed at this point, assuming that the dance between AMD and Microsoft has resulted in them coming to a complementary solution.

I don't know that this is actually the case though. The 3900X is sometimes slower than the 3700X and 3800X's in some games. I think the issue comes from crossing chiplets. CCX to CCX latency within a chiplet doesn't seem to be a problem, but crossing chiplets seems to incur a small hit. We aren't talking about a massive hit to performance or anything, but it is measurable. The Windows scheduler should always utilize one chiplet completely before crossing into the other one, but this may not always happen. With a higher core count Threadripper, this issue will be magnified. On a 3970X, you will have four chiplets instead of one or two. Third generation Threadrippers might be as fast as a 3900X at gaming, but that will depend on whether or not the scheduler behaves and localizes everything to a single chiplet.

It's also important to note, the 3950X may be faster than the 3900X. It will boost slightly higher and it has four cores per CCX and two CCX's per chiplet. That's 8c/16t per chiplet like the 3700X and 3800X. The 3900X on the other hand has 3 cores per CCX and two chiplets. This creates a scenario where crossing chiplets is more common than it would be for a 3950X.
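
A toy model of the layouts described above (core counts from this post; the numbering itself is just for illustration):

# Same two-chiplet package, but the 3900X's 3-core CCX's leave only
# 6 cores per chiplet, so an 8-thread game must cross chiplets.
def topology(chiplets, ccx_per_chiplet, cores_per_ccx):
    layout, core = {}, 0
    for chip in range(chiplets):
        for ccx in range(ccx_per_chiplet):
            for _ in range(cores_per_ccx):
                layout[core] = (chip, ccx)  # map core -> (chiplet, CCX)
                core += 1
    return layout

r9_3900x = topology(2, 2, 3)  # 12 cores, 6 per chiplet
r9_3950x = topology(2, 2, 4)  # 16 cores, 8 per chiplet
tr_3970x = topology(4, 2, 4)  # 32 cores across four chiplets

cores_on_chiplet0 = lambda t: sum(1 for chip, _ in t.values() if chip == 0)
print(cores_on_chiplet0(r9_3950x))  # 8 -> an 8-thread game fits on one chiplet
print(cores_on_chiplet0(r9_3900x))  # 6 -> the same game spills across chiplets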
 
Third generation Threadrippers might be as fast as a 3900X at gaming, but that will depend on whether or not the scheduler behaves and localizes everything to a single chiplet.

Agreed; one would hope that the software side has been more or less figured out.

It's also important to note, the 3950X may be faster than the 3900X. It will boost slightly higher and it has four cores per CCX and two CCX's per chiplet. That's 8c/16t per chiplet like the 3700X and 3800X. The 3900X on the other hand has 3 cores per CCX and two chiplets. This creates a scenario where crossing chiplets is more common than it would be for a 3950X.

I know that I've contemplated this before, and expect that the 3960X has a good chance to rise above the issues that have plagued the platform on and off with respect to more 'consumer' workloads.
 
Agreed; one would hope that the software side has been more or less figured out.



I know that I've contemplated this before, and expect that the 3960X has a good chance to rise above the issues that have plagued the platform on and off with respect to more 'consumer' workloads.

I have no idea. Sadly, I haven't gotten samples from AMD at this point. Obviously, I'd be under NDA at the moment if I had, but right now we don't have the CPU's in hand, so I can only speculate. I suspect we will see the 3960X and 3970X fall short of the 3900X and 3950X in some consumer workloads. Both the 3900X and 3950X have higher boost clocks than the 3960X and 3970X do, so that will factor in. Memory is also not likely to clock quite as well on TRX40 as it does on X570, so there is that problem as well. Again, this is all speculative, as I don't have my hands on any of the hardware yet.
 
Exactly- best case is that there won't be any 'gotchas' as seen in the past, so that it's just a solid choice between peak core performance and number of cores.

Which isn't a bad place to be at all.
 
I have no idea. Sadly, I haven't gotten samples from AMD at this point. Obviously, I'd be under NDA at the moment if I had, but right now we don't have the CPU's in hand, so I can only speculate. I suspect we will see the 3960X and 3970X fall short of the 3900X and 3950X in some consumer workloads. Both the 3900X and 3950X have higher boost clocks than the 3960X and 3970X do, so that will factor in. Memory is also not likely to clock quite as well on TRX40 as it does on X570, so there is that problem as well. Again, this is all speculative, as I don't have my hands on any of the hardware yet.

Ahh, but Dan, don't sell the 140MB of cache short. It might end up running circles around the 3900X and 3950X. Remember, with Infinity Fabric all of the cache is available to one core or the whole shebang of cores. You might be right in very, very limited scenarios like... Tomb Raider, lmao, oh... or Adobe junk.
 
What cards are you looking at? If it's a 5700 XT, doesn't it have PCIe 4.0? That would make 4.0 x8 just as fast as PCIe 3.0 x16 :). That's a lot of bandwidth though. Do you know if they have NVMe 4.0 x2 available? That would be just as fast as 3.0 x4 in theory. I need to read up more on current offerings, but my budget normally keeps me out of the bleeding-edge category, so I don't tend to stay as current since it's not normally my target.

A 1080 Ti, for one. I don't plan on buying newer cards until next year, when HBM Navi cards become available. NVMe x2?!? I want RAID 0 speeds, not cut-up half-assing.

"When I build a something, I either give it the whole complete Ass, Or no Ass at all!" -- me
 
Impressive!



I was very surprised that gaming performance surpasses the Intel i9-9900K in a number of games, and overall it is outstanding. Big change over previous Threadrippers. In fact, the gaming performance seems to be better than the Ryzen 3000 desktop parts (depending upon the game as well). The performance is heaps above previous AMD TR chips and makes Intel's 18-core look absolutely stupid in a number of ways. Now I want to build one, but I have to justify a use case for it.
 
Sadly, I didn't have the new Threadripper parts for review, but that confirms what I suspected the result would be. Basically, looking at the specs we knew about ahead of time, Intel was going to get crushed. This is why Intel priced the Core i9 10980XE (stupid name) the way it did. It sits in between the Ryzen 3950X and the third generation Threadripper CPU's.
Wish they'd released a 16-core part from Cascade Lake; that way we could determine IPC a little better. It looks like the 16-core Ryzen would beat a 16-core Cascade Lake part, but without the actual chip it's just a guess, though it does look like it possibly would.
 
Sadly, I didn't have the new Threadripper parts for review, but that confirms what I suspected the result would be. Basically, looking at the specs we knew about ahead of time, Intel was going to get crushed. This is why Intel priced the Core i9 10980XE (stupid name) the way it did. It sits in between the Ryzen 3950X and the third generation Threadripper CPU's.
The 10980XE does not look faster than the 3950X in many applications, and it's definitely slower in gaming; well, from the limited reviews I've seen. It wins in some applications, but overall it does not look that good. From a performance perspective (depending on the application), it looks like the 10980XE sits below the 3950X.
 
Impressive!



I was very surprised that gaming performance surpasses the Intel i9-9900K in a number of games, and overall it is outstanding. Big change over previous Threadrippers. In fact, the gaming performance seems to be better than the Ryzen 3000 desktop parts (depending upon the game as well). The performance is heaps above previous AMD TR chips and makes Intel's 18-core look absolutely stupid in a number of ways. Now I want to build one, but I have to justify a use case for it.

Wow.... The gaming benches!!! Together with the cache benches, it pretty much solidifies what I have been saying about why IPC in gaming was previously so much higher on Intel systems: it's the cache performance.

But somehow AMD increased the cache performance for Threadripper specifically, which would explain the delay. The fact that they took the time to do this is amazing.

The Threadripper parts seem actually good for gaming this time around. So anyone that gets one will get the boost in productivity apps and in gaming, as opposed to what it was the previous generation.
 
Wish they'd released a 16-core part from Cascade Lake; that way we could determine IPC a little better. It looks like the 16-core Ryzen would beat a 16-core Cascade Lake part, but without the actual chip it's just a guess, though it does look like it possibly would.

The 10980XE does not look faster than the 3950X in many applications, and it's definitely slower in gaming; well, from the limited reviews I've seen. It wins in some applications, but overall it does not look that good.

There are no mysteries with Cascade Lake-X and IPC. It's Skylake outside of AVX-512 workloads. The relatively low clock speeds Intel went with to keep the TDP down hurt the CPU in its stock form. Overclocked, it's a different animal. It's a challenge to keep cool, though; I won't kid you there. It pulls tons of power and heats up the room. Unfortunately, I do not have a 3950X to test with. However, given the massive gains achieved when overclocking the 10980XE, it may very well compete when OC'ed. At stock speeds, they'll probably trade blows in some applications.
 
Wow.... The gaming benches!!! Together with the cache benches, it pretty much solidifies what I have been saying about why IPC in gaming was previously so much higher on Intel systems: it's the cache performance.

But somehow AMD increased the cache performance for Threadripper specifically, which would explain the delay. The fact that they took the time to do this is amazing.

The Threadripper parts seem actually good for gaming this time around. So anyone that gets one will get the boost in productivity apps and in gaming, as opposed to what it was the previous generation.

Yeah, I was planning on upgrading to TR last gen, but the NUMA issues and weak single-core performance changed my mind. Eagerly awaiting a 3970X to replace my old X79 Xeon.
 